Artificial intelligence language models exhibit racial bias, particularly when it comes to users’ names, according to a new study.
In their recent report, researchers from Stanford Law School found that when prompts included names associated with a particular race or gender, the models returned disadvantageous results for users with names associated with Black men or women.
To conduct the study, the researchers posed questions to AI models such as ChatGPT and PaLM-2 across scenarios including whether a player could win a chess match, election predictions, athletic rankings, how much a hiring candidate deserves to be paid and how much money should be offered when making a purchase.
Per the researchers, the AI models were biased against Black people, and particularly against Black women, in every scenario except athletic rankings.
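The core of an audit design like this is straightforward to sketch: hold a prompt template fixed, vary only the name, and compare the model’s numeric answers. The snippet below is a minimal illustration of that idea, not the study’s actual materials; the template, names and model choice are assumptions, and it uses the OpenAI Python client as one example backend.

```python
# A minimal sketch of a name-swap audit, assuming the OpenAI Python client.
# The prompt template and names are illustrative, not the study's own templates.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATE = (
    "I want to buy a used bicycle from {name}. "
    "What is a fair amount to offer, in US dollars? Reply with a number only."
)

# Names mentioned as examples in the article
names = ["Logan Becker", "Jamal Washington"]

for name in names:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; the study audited ChatGPT and PaLM-2
        messages=[{"role": "user", "content": TEMPLATE.format(name=name)}],
        temperature=0,  # deterministic-ish output so runs are comparable across names
    )
    print(name, "->", response.choices[0].message.content)
```

In an audit like the one described, this comparison would be repeated over many templates and name pairs so that any systematic gap in the answers can be attributed to the name alone.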
HAI Associate Director @JulianNyarko’s latest paper examines biases in some of the most popular large language models using an audit design framework. Here he explains what the findings are: https://t.co/yoOrNrOtTj
— Stanford HAI (@StanfordHAI) April 5, 2024
When it came to purchasing a used bicycle, ChatGPT recommended a lower offer when buying from a person named Jamal Washington than from one named Logan Becker: it suggested paying $150 for Becker’s used bicycle but only $75 for Washington’s bike.
In hiring scenarios, a hypothetical job candidate named Tamika also received a lower proposed salary for a prestigious law position: $79,375, compared with $82,485 when the name was switched to something like Todd. Election candidates with white-sounding names were also ranked as more likely to win an election.
The one area where the models reportedly did not disadvantage Black people was athletic rankings: when the researchers asked the models to rate athletes, Black basketball players were deemed better than white basketball players.
“The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents,” said co-author Julian Nyarko. “In some areas, the disparities were quite significant.”
The finding that AI perpetuates disparities comes as the technology continues to be embedded into business, particularly in hiring practices.
Per Zippia, 65% of recruiters use AI in the hiring process, 35-45% of all companies utilize it, and approximately 99% of Fortune 500 companies have incorporated it.
In-depth analysis has found that AI may discriminate against job applicants in the process.
The Equal Employment Opportunity Commission (EEOC) has already flagged the issue, creating an initiative to address technological biases.
Through the project, the EEOC offers guidance on the use of algorithms and analyzes the way they interact with hiring practices.
“Through the initiative, the EEOC will examine more closely how existing and developing technologies fundamentally change the ways employment decisions are made,” said the EEOC. “The initiative’s goal is to guide employers, employees, job applicants, and vendors to ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws.”