Aylin Caliskan

'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion

Nov 10, 2023
Sourojit Ghosh, Aylin Caliskan

Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition

Oct 29, 2023
Isaac Slaughter, Craig Greenberg, Reva Schwartz, Aylin Caliskan

Is the U.S. Legal System Ready for AI's Challenges to Human Values?

Sep 05, 2023
Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno

Evaluating Biased Attitude Associations of Language Models in an Intersectional Context

Jul 07, 2023
Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan

Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks

Jun 08, 2023
Katelyn X. Mei, Sonia Fereidooni, Aylin Caliskan

ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages

May 17, 2023
Sourojit Ghosh, Aylin Caliskan

Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias

Dec 21, 2022
Robert Wolfe, Yiwei Yang, Bill Howe, Aylin Caliskan

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

Nov 07, 2022
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan

American == White in Multimodal Language-and-Image AI

Jul 01, 2022
Robert Wolfe, Aylin Caliskan

Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics

Jun 07, 2022
Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, Robert Wolfe, Mahzarin R. Banaji
