Dian Chen

SusFL: Energy-Aware Federated Learning-based Monitoring for Sustainable Smart Farms

Feb 15, 2024
Dian Chen, Paul Yang, Ing-Ray Chen, Dong Sam Ha, Jin-Hee Cho

pix2gestalt: Amodal Segmentation by Synthesizing Wholes

Jan 25, 2024
Ege Ozguroglu, Ruoshi Liu, Dídac Surís, Dian Chen, Achal Dave, Pavel Tokmakov, Carl Vondrick

FSD: Fast Self-Supervised Single RGB-D to Categorical 3D Objects

Oct 19, 2023
Mayank Lunayach, Sergey Zakharov, Dian Chen, Rares Ambrus, Zsolt Kira, Muhammad Zubair Irshad

MotionLM: Multi-Agent Motion Forecasting as Language Modeling

Sep 28, 2023
Ari Seff, Brian Cera, Dian Chen, Mason Ng, Aurick Zhou, Nigamaa Nayakanti, Khaled S. Refaat, Rami Al-Rfou, Benjamin Sapp

Towards Zero-Shot Scale-Aware Monocular Depth Estimation

Jun 29, 2023
Vitor Guizilini, Igor Vasiljevic, Dian Chen, Rares Ambrus, Adrien Gaidon

Viewpoint Equivariance for Multi-View 3D Object Detection

Apr 07, 2023
Dian Chen, Jie Li, Vitor Guizilini, Rares Ambrus, Adrien Gaidon

Standing Between Past and Future: Spatio-Temporal Modeling for Multi-Camera 3D Multi-Object Tracking

Feb 07, 2023
Ziqi Pang, Jie Li, Pavel Tokmakov, Dian Chen, Sergey Zagoruyko, Yu-Xiong Wang

Depth Is All You Need for Monocular 3D Detection

Oct 05, 2022
Dennis Park, Jie Li, Dian Chen, Vitor Guizilini, Adrien Gaidon

COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles

May 04, 2022
Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, Yuke Zhu

Contrastive Test-Time Adaptation

Apr 21, 2022
Dian Chen, Dequan Wang, Trevor Darrell, Sayna Ebrahimi
