Placeholder canvas

ChatGPT, Copilot, more likely to sentence African-American defendants to death, finds Cornell study

A recent study from Cornell University suggests that large language models (LLMs) are more likely to exhibit bias against users who speak African American English. The research indicates that the dialect of the language spoken can influence how artificial intelligence (AI) algorithms perceive individuals, affecting judgments about their character, employability, and potential criminality.

This study focused on large language models like OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA2, and French Mistral 7B. These LLMs are deep learning algorithms designed to generate human-like text.

Researchers conducted “matched guise probing,” presenting prompts in African American and Standardized American English to the LLMs. They then analyzed how the models identified various characteristics of individuals based on the language used.

According to Valentin Hofmann, a researcher from the Allen Institute for AI, the results of the study indicate that GPT-4 technology is more inclined to issue death sentences to defendants who use English commonly associated with African Americans without any indication of their race being disclosed.

Hofmann highlighted these concerns in a post on the social media platform X (formerly Twitter), emphasizing the urgent need for attention to the biases present in AI systems utilizing large language models (LLMs), especially in domains such as business and jurisdiction where such systems are increasingly used.

The study also revealed that LLMs tend to assume that speakers of African American English hold less prestigious jobs compared to those who speak Standard English, despite not being informed about the speakers’ racial identities.

Interestingly, the research found that the larger the LLM, the better its understanding of African American English, and it would be more inclined to avoid explicitly racist language. However, the size of the LLM did not affect its underlying covert biases.

Hofmann cautioned against interpreting the decrease in overt racism in LLMs as a sign that racial bias has been resolved. Instead, he stressed that the study demonstrates a shift in the manifestation of racial bias in LLMs.

The study indicates that the traditional method of teaching large language models (LLMs) by providing human feedback does not effectively address covert racial bias.

Rather than mitigating bias, this approach can lead LLMs to learn how to “superficially conceal” their underlying racial biases while still maintaining them at a deeper level.

Share your love
Facebook
Twitter
LinkedIn
WhatsApp

Newsletter

Follow Us

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.

error: Unauthorized Content Copy Is Not Allowed