
Qingyi Si

Think out Loud: Emotion Deducing Explanation in Dialogues

Jun 07, 2024

Are Large Language Models Table-based Fact-Checkers?

Feb 04, 2024

Towards Unified Interactive Visual Grounding in The Wild

Jan 30, 2024

An Empirical Study of Instruction-tuning Large Language Models in Chinese

Oct 20, 2023

Combo of Thinking and Observing for Outside-Knowledge VQA

May 10, 2023

Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering

Oct 26, 2022

Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning

Oct 10, 2022

Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA

Oct 10, 2022

Spot the Difference: A Cooperative Object-Referring Game in Non-Perfectly Co-Observable Scene

Mar 16, 2022

Check It Again: Progressive Visual Question Answering via Visual Entailment

Jun 08, 2021