Yixin Wan

White Men Lead, Black Women Help: Uncovering Gender, Racial, and Intersectional Bias in Language Agency

Apr 16, 2024
Yixin Wan, Kai-Wei Chang

Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation

Apr 02, 2024
Yixin Wan, Arjun Subramonian, Anaelia Ovalle, Zongyu Lin, Ashima Suvarna, Christina Chance, Hritik Bansal, Rebecca Pattichis, Kai-Wei Chang

The Male CEO and the Female Assistant: Probing Gender Biases in Text-To-Image Models Through Paired Stereotype Test

Feb 16, 2024
Yixin Wan, Kai-Wei Chang

"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters

Oct 28, 2023
Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng

Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation

Oct 28, 2023
Yixin Wan, Fanyou Wu, Weijie Xu, Srinivasan H. Sengamedu

Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems

Oct 23, 2023
Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang

ABC-KD: Attention-Based-Compression Knowledge Distillation for Deep Learning-Based Noise Suppression

May 26, 2023
Yixin Wan, Yuan Zhou, Xiulian Peng, Kai-Wei Chang, Yan Lu

PIP: Parse-Instructed Prefix for Syntactically Controlled Paraphrase Generation

May 26, 2023
Yixin Wan, Kuan-Hao Huang, Kai-Wei Chang

TheoremQA: A Theorem-driven Question Answering dataset

May 23, 2023
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, Tony Xia

Improving the Adversarial Robustness of NLP Models by Information Bottleneck

Jun 11, 2022
Cenyuan Zhang, Xiang Zhou, Yixin Wan, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh
