Experts explain that when algorithms show bias or create divisions, they are not failing.
Artificial intelligence now touches nearly every part of daily life. As the technology evolves and businesses adapt, debates about what is right and wrong are increasingly shaped by AI. Its ability to interpret natural language and data keeps improving, and that is beginning to influence the choices we make and the paths we take.
Specialists, however, are pointing to a critical piece of this conversation that is often overlooked.
They say that when we consider this technology, we shouldn’t only focus on how it will transform us. We also need to examine how we impact AI and what this relationship can tell us about ourselves.
Faisal Hoque, an expert in innovation and transformation for both government and private organizations, and the author of several books, including his latest, Transcend: Unlocking Humanity in the Age of AI, explains that every AI system we develop acts like a mirror that clearly shows our values, priorities and beliefs.
In his recent article for Psychology Today, he explains that, for example, when facial recognition technology has trouble identifying people with darker skin tones, it’s not a mistake. It is showing the biases that are part of the data used to train the system.
“When content recommendation engines amplify outrage and division, this doesn’t mean that they are broken,” Hoque writes. “They are successfully optimizing for engagement with how humans behave in reality.”
Hoque continues: “In many cases, the ‘threats’ and ‘dangers’ of AI have nothing to do with the technology itself. Instead, the things we have to worry about are reflections of [human] qualities.”
The Devil Is in the Details
In 2018, Amazon stopped using an AI-powered hiring tool after finding out it was unfair to female applicants. Hoque explains that the artificial intelligence wasn’t set up to be biased, but it learned from past hiring data that preferred men, so it just copied those trends.
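The mechanism described here, a model reproducing the skew in its training data rather than inventing a bias of its own, can be sketched with a small, entirely synthetic example. The groups, approval rates and "model" below are hypothetical illustrations, not a reconstruction of Amazon's system.

```python
import random

random.seed(0)

# Synthetic, hypothetical historical hiring data: candidates are equally
# qualified, but the past process approved group "A" far more often.
def past_decision(group):
    # Biased historical process: ~70% approval for A, ~30% for B.
    return random.random() < (0.7 if group == "A" else 0.3)

groups = ["A" if i % 2 == 0 else "B" for i in range(10000)]
records = [(g, past_decision(g)) for g in groups]

# A naive model "trained" on this history simply learns each group's
# historical approval rate. It copies the bias; it does not create it.
def learned_rate(group):
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

print(learned_rate("A"))  # close to 0.7
print(learned_rate("B"))  # close to 0.3
```

The point of the sketch is that nothing in the code singles out group "B"; the disparity comes entirely from the historical outcomes the model imitates.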
Research from UC Berkeley shows that mortgage approval algorithms often offer less favorable terms to Black and Hispanic applicants, because the algorithms reproduce existing inequities in lending practices.
Hoque also points out that the same discriminatory patterns are noticeable in law enforcement, healthcare and education when AI systems are used.
He explains that crime prediction tools used by police often target certain communities, because the AI behind them was trained on crime statistics that reflect past inequalities and prejudiced practices against communities of color, rather than the actual incidence of crime. Similarly, healthcare algorithms can misdiagnose patients from specific demographic groups, and automated grading systems in schools have sometimes been found to favor students from wealthier backgrounds even when the quality of their work is the same as that of other students.
“In these cases, AI isn’t creating new biases, it is reflecting existing ones,” Hoque said.
Ted A. James, MD, Medical Director and Vice Chair at Beth Israel Deaconess Medical Center in Boston, MA, and an associate professor of surgery at Harvard Medical School, agrees. In a recent article for Harvard titled “Confronting the Mirror: Reflecting on Our Biases Through AI in Health Care,” he explains that although people often view AI as completely unbiased, that is far from the case: AI systems learn from large datasets that are shaped by human choices and decisions.
“When these data carry biases – such as underrepresentation of certain groups or gender biases – AI systems inevitably absorb and perpetuate these biases,” James writes.
Reflections of Underlying Issues
Hoque and James both emphasize that the issue of AI mirroring our biases isn’t necessarily a problem with AI itself; it’s also about how these flaws show society’s deep-seated, prejudiced beliefs and practices. And this highlights an urgent need for change.
They also explain that our current way of using AI is full of contradictions, and unless we do something about it, it will simply continue to reflect those contradictions back to us.
What’s more, while we appreciate AI as a tool that can make our lives more efficient, there’s also a growing worry about misinformation. Recent reporting shows that AI models tend to prioritize popular, viral content over what’s truly accurate.
The Road to Fair Artificial Intelligence
As AI keeps advancing, Hoque encourages us to think about how we want to influence its role in our society. It’s not just about making better algorithms; it’s about making sure AI is created and used in a responsible way.
With this in mind, James recommends several steps we can take, together with our legislators, learning institutions and advocates, to help establish the guidelines needed to ensure AI becomes more impartial and treats people without bigotry or favoritism.
1. Diversify the data. Use larger, more representative datasets that reflect the full range of backgrounds in the population. Including more data from minority groups, for instance, helps produce fairer AI systems.
2. Continuously monitor and update AI systems. AI should evolve as society does, so that biases are found and fixed quickly. Regular audits of how AI makes decisions can help spot and reduce biases that emerge over time.
3. Promote interdisciplinary collaboration in AI development. Include ethicists, sociologists, advocates and other important groups in the development of AI. This brings a well-rounded perspective and helps ensure that AI systems are ethical and culturally aware. Involving these voices can also lead to fairer, more balanced data, which is essential for training unbiased AI systems.
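The "regular checks" in step 2 often take the form of simple fairness metrics computed over a model's recent decisions. As a minimal sketch, the function below measures the demographic parity gap, the difference in positive-decision rates between groups; the group labels, audit log and alert threshold are all hypothetical choices for illustration.

```python
# Minimal fairness-audit sketch: compute the demographic parity gap
# (difference in approval rates between groups) over a decision log.
def parity_gap(decisions):
    """decisions: list of (group, approved) pairs."""
    by_group = {}
    for group, approved in decisions:
        by_group.setdefault(group, []).append(approved)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit log of recent model decisions.
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

gap, rates = parity_gap(audit_log)
print(rates)               # approval rate per group
print(round(gap, 2))       # 0.33
if gap > 0.2:              # threshold is an illustrative choice
    print("flag for review")
```

Run periodically over fresh decisions, a check like this makes drift toward a new bias visible before it compounds, which is the substance of the monitoring step above.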
It’s reassuring to see that some organizations are beginning to take action. Instead of just improving AI models to boost economic efficiency, they are also examining the data, policies and beliefs that influence how they operate.
“As we continue to integrate AI into [our lives], let’s use it as a catalyst for change,” James urges. “[This ensures] we leverage technology to advance in a way that is fair and just for us all.”