In a new release, the U.K.’s Information Commissioner’s Office pledged to investigate whether artificial intelligence systems used in the hiring process are racially biased.
The pledge came as part of ICO25, the three-year plan detailing the office’s priorities. In it, the Information Commissioner’s Office, known more simply as the ICO, announced that they will look into the way AI systems sort through candidate profiles and decide which to discard based on speech patterns and writing.
Throughout the investigation, they’ll look closely at the treatment of marginalized people, such as neurodivergent and BIPOC job seekers, who are often underrepresented in the data used to test AI software, to determine patterns of discrimination and racial bias. After concluding their research, the ICO is expected to issue new guidelines for AI developers to combat the issue, alongside other commitments in the plan, such as tackling Freedom of Information practices that allow public information to remain hidden.
“My office will focus our resources where we see data protection issues that are disproportionately affecting already vulnerable or disadvantaged groups,” said John Edwards, the U.K. Information Commissioner, at the launch of the three-year plan. “The impact that we can have on people’s lives is the measure of our success. This is what modern data protection looks like, and it is what modern regulation looks like.”
The new steps against AI discrimination come amid fears that biased systems are causing qualified applicants to lose out on jobs and to be wrongfully denied loans and benefits.
The problem is not a new one, and the ICO has spoken out about it before; in 2019, they released an official statement acknowledging that machine learning systems can reproduce past biases reflected in their training data.
They reported that where discrimination had previously been a problem, with BIPOC or women candidates rejected at higher rates, an AI system trained on that hiring history would replicate the pattern and automatically reject candidates from those same groups. A lack of representation in training data, stemming from lower application rates in the past, may also lead AI systems to treat certain groups as less important, according to the ICO.
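To make that first failure mode concrete, here is a minimal, hypothetical sketch in Python. It is not the ICO’s analysis or any real vendor’s system: all data is synthetic and the features are invented for illustration. A classifier is trained on fabricated historical hiring decisions that penalized one group, and it learns to replicate that penalty even for equally qualified candidates.

```python
# Illustrative sketch only: a model trained on biased historical hiring
# decisions learns to replicate those decisions. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: a qualification score and a group membership flag
# (1 = historically disadvantaged group, 0 = majority group).
score = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Biased historical labels: past recruiters weighed merit but penalized the
# disadvantaged group, so its members were rejected at higher rates.
hired = (score - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# Two equally qualified candidates (identical score), one from each group.
candidates = np.array([[0.0, 0], [0.0, 1]])
probs = model.predict_proba(candidates)[:, 1]
print(f"P(hired | majority group):      {probs[0]:.2f}")
print(f"P(hired | disadvantaged group): {probs[1]:.2f}")
# The second probability comes out markedly lower: the model has absorbed
# the historical bias from the labels, not from the candidates' merit.
```

Nothing in the model is told to discriminate; the bias arrives entirely through the training labels, which is why the ICO’s focus on training data, rather than just the algorithms themselves, matters.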
The issue could become an even bigger problem as more companies around the globe adopt AI systems as part of their hiring practices. According to a recent report by Harvard Business School, 99% of Fortune 500 companies already use machine learning systems in hiring, and 55% of human resources leaders in the U.S. use such systems specifically to make hiring decisions.
“The proposals I set out today involve trying different approaches. Some may work well, some may not work, some may need tweaking,” said Edwards at the plan’s launch. “But it is absolutely clear to me that in a world of increasing demand, and shrinking resources, we simply cannot keep doing what we’ve been doing and expect the system to improve.”