Artificial Intelligence (AI) is the replication of human thinking in machines and computer systems. Built through computer science, these systems perform human abilities such as problem-solving and decision-making. AI allows computers to comb through vast amounts of public data to continuously refine their own models. Generative AI systems have been trending lately with the emergence of applications like ChatGPT and Lensa AI. Now art, images, text, people and more can be created through artificial intelligence.
While many aspects of the rise of artificial intelligence sound promising and creative, it is still important to recognize the faults in these systems in order to avoid greater issues in the future that could impact our communities. It is also important to recognize that the exploitation of Black people is deeply rooted in Western history. While diversity may be preached on the surface, it's important to remain vigilant about what is going on behind the scenes.
Here are a few examples of why.
Shudu Gram, the AI Supermodel
Shudu Gram is one of the world's first AI-generated supermodels, and she's an African woman. According to the New Yorker, her image was inspired by the Ndebele people of South Africa, and since her creation in 2018, she has gained 238K followers. Shudu has done campaigns with fashion giants such as Vogue, Balmain and more. What's notable about the model, however, is that she was created by a white man.
Although Shudu represents Black women in her campaigns and is recognized for her beauty as a Black woman, the person who created this image and profits from her is white photographer Cameron-James Wilson.
Kenyan Workers and ChatGPT
OpenAI is the multibillion-dollar company behind ChatGPT. In order for its latest model, GPT-4, to come into existence, OpenAI needed to filter out the toxic, heinous and offensive content that could reach the system. But OpenAI did not do this work itself.
An investigation published by TIME in January revealed that OpenAI had sent thousands of text passages to a firm in Kenya and paid Kenyan workers less than two dollars an hour to sort through deeply offensive content. Multiple Kenyan workers reported in the investigation that they were seriously traumatized by the texts they had to read, which included stories of bestiality, incest, rape and more.
“You will read a number of statements like that all through the week,” said a worker in an interview with TIME. “By the time it gets to Friday, you are disturbed from thinking through that picture.”
TIME also reported that these workers were expected to sort through 150 to 250 texts per shift and were given only limited access to group therapy sessions, which many could not attend because of the high productivity demands.
Timnit Gebru and Google
Timnit Gebru is a renowned computer scientist and a Black woman who was ousted from her leadership position at Google over her criticism of ethics practices in artificial intelligence. In 2020, Gebru and her colleagues wrote a paper highlighting flaws in AI that could disproportionately hurt people of color. It spoke of the possibility of AI language systems absorbing oppressive biases such as sexism and racism. Her supervisors at Google did not approve the paper, wanted Gebru to remove her name from it, eventually fired her and tried to cover this up by saying she had resigned. This was because Google is a major contributor to AI and favored praise of AI models over criticism, even when that criticism is critical for the protection of marginalized communities.
Still, AI is not solely operated and researched by Big Tech and white executives, and it is visionaries like Gebru whom we should turn to and support to ensure our own protection when it comes to the AI models of the future.
After Gebru was fired, she started her own AI research organization, the Distributed AI Research Institute (DAIR), which has already received millions in funding. DAIR's mission is to operate at a slower but more careful and inclusive pace to ensure diversity and equity in AI systems.
“We need to let people who are harmed by technology imagine the future that they want,” Gebru said in an interview with TIME.