The Federal Trade Commission (FTC) has ordered seven companies that offer consumer-facing AI-powered chatbots to provide information on how they measure, test and monitor potentially negative impacts of the technology on children and teens.
AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users. AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.
The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” says Andrew Ferguson, chairman of the FTC. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the US maintains its role as a global leader in this new and exciting industry. The study we’re launching will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
The FTC is issuing the orders using its 6(b) authority, which authorises the Commission to conduct wide-ranging studies that do not have a specific law enforcement purpose. The recipients include:
- Alphabet;
- Character Technologies;
- Instagram;
- Meta Platforms;
- OpenAI OpCo;
- Snap; and
- X.AI.
The FTC is particularly interested in the impact of these chatbots on children and in what actions companies are taking to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule.
As part of its inquiry, the FTC is seeking information about how the companies:
- Monetise user engagement;
- Process user inputs and generate outputs in response to user inquiries;
- Develop and approve characters;
- Measure, test and monitor for negative impacts before and after deployment;
- Mitigate negative impacts, particularly to children;
- Employ disclosures, advertising, and other representations to inform users and parents about features, capabilities, the intended audience, potential negative impacts, and data collection and handling practices;
- Monitor and enforce compliance with company rules and terms of service (for example, community guidelines and age restrictions); and
- Use or share personal information obtained through users’ conversations with the chatbots.