Atsuyuki Miyai

Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models

Mar 29, 2024
Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Qing Yu, Go Irie, Yixuan Li, Hai Li, Ziwei Liu, Kiyoharu Aizawa

Can Pre-trained Networks Detect Familiar Out-of-Distribution Data?

Oct 12, 2023
Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning

Jun 10, 2023
Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models

Apr 10, 2023
Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive Positive or Negative Data Augmentation

Oct 23, 2022
Atsuyuki Miyai, Qing Yu, Daiki Ikami, Go Irie, Kiyoharu Aizawa
