
Lucas Beyer

No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models

May 22, 2024

LocCa: Visual Pretraining with Location-aware Captioners

Mar 28, 2024

PaLI-3 Vision Language Models: Smaller, Faster, Stronger

Oct 17, 2023

Image Captioners Are Scalable Vision Learners Too

Jun 13, 2023

PaLI-X: On Scaling up a Multilingual Vision and Language Model

May 29, 2023

Three Towers: Flexible Contrastive Learning with Pretrained Image Models

May 29, 2023

Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design

May 22, 2023

Sigmoid Loss for Language Image Pre-Training

Mar 30, 2023

A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision

Mar 30, 2023

Tuning computer vision models with task rewards

Feb 16, 2023