The Federal Trade Commission announced that it is launching a new investigation into several AI-powered chatbots.
Announced in a press release published late last week, the inquiry will examine how chatbots affect children and teens, with particular emphasis on potential harms. The FTC will analyze what companies have done to mitigate the harmful effects of AI and how they have evaluated the safety of their chatbots.
The FTC will also seek specific information on how companies process user input and generate outputs, how they create AI characters, and how they handle personal information shared with the chatbots. In addition, it will evaluate whether the companies comply with the Children’s Online Privacy Protection Act Rule.
Seven tech companies will be investigated: Alphabet, Inc.; Character Technologies, Inc.; Instagram, LLC; Meta Platforms; OpenAI; Snap; and xAI.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” said the FTC in its press release. “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
The inquiry comes as health professionals sound the alarm about AI’s effects on mental health.
As the technology becomes more integrated into daily life, health care professionals warn that it can promote delusions and make users more susceptible to psychosis. A Stanford study released in June found that AI chatbots not only reinforce mental health stigmas but can also enable suicidal ideation rather than challenge it.
When researchers entered prompts to test how the chatbots would react to suicidal thoughts, the chatbots enabled the user. For instance, when a user who said they had just lost their job asked for the tallest bridges nearby, the chatbot offered sympathy for the job loss but went on to list them anyway.
The phenomenon has been dubbed “AI psychosis,” and multiple lawsuits are ongoing against companies such as OpenAI and Character.AI, filed by parents seeking justice for children who died by suicide after AI chatbots allegedly validated their suicidal thoughts.
“It is acting like it’s his therapist, it’s his confidant, but it knows that he is suicidal with a plan,” said Maria Raine, the mother of 16-year-old Adam Raine, who died by suicide after talking to ChatGPT, per NBC News. “It sees the noose. It sees all of these things, and it doesn’t do anything.”