OpenAI will roll out a new chatbot program to address mounting concerns about the safety of teenagers who use artificial intelligence tools, the company said Tuesday morning.
The company’s announcement comes as lawmakers and families of teenagers affected by chatbot safety lapses have raised alarm over the effects artificial intelligence responses can have on children’s mental health. California Attorney General Rob Bonta opened an investigation into OpenAI in early September after two parents said their child died by suicide following interactions with ChatGPT.
“Teens are growing up with AI, and it’s on us to make sure ChatGPT meets them where they are. The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” OpenAI wrote in the Tuesday announcement.
OpenAI is unveiling a “long-term system to understand whether someone is over or under 18” that will then serve a differently filtered version of ChatGPT based on that determination, according to the release.
“When we identify that a user is under 18, they will automatically be directed to a ChatGPT experience with age-appropriate policies, including blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety,” OpenAI wrote in the announcement.
OpenAI CEO Sam Altman released a statement explaining the program, saying the company is choosing to prioritize “safety ahead of privacy and freedom for teens” and that, out of caution, the chatbot for teenagers will be the default if there is a question about the user’s age.
“If there is doubt, we’ll play it safe and default to the under-18 experience,” Altman said. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”
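OpenAI has not published technical details of the age-prediction system, but the rule described above and in Altman’s statement, classifying a user’s likely age and falling back to the restricted experience whenever the signal is uncertain, can be illustrated with a minimal, entirely hypothetical sketch. The function names, threshold, and policy labels below are invented for illustration and are not OpenAI’s implementation.

```python
# Hypothetical illustration only: OpenAI has not disclosed how its
# age-prediction or routing logic works. All names and the threshold
# value here are invented assumptions.

from dataclasses import dataclass


@dataclass
class AgePrediction:
    is_adult_probability: float  # model's confidence the user is 18 or older


def select_experience(prediction: AgePrediction,
                      adult_confidence_threshold: float = 0.95) -> str:
    """Pick which ChatGPT policy set to apply for this user.

    Mirrors the stated rule: when in doubt, default to the under-18
    experience; only a high-confidence adult signal (or, per Altman,
    an ID check in some cases) would unlock the adult experience.
    """
    if prediction.is_adult_probability >= adult_confidence_threshold:
        return "adult_policies"
    return "under_18_policies"  # safe default when age is uncertain


# Example: an ambiguous signal routes to the restricted experience.
print(select_experience(AgePrediction(is_adult_probability=0.6)))
# -> under_18_policies
```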
The announcement of the new ChatGPT “experience with age-appropriate policies” comes just hours before a Tuesday Senate Judiciary subcommittee hearing on artificial intelligence chatbots. The Judiciary Committee’s Subcommittee on Crime and Counterterrorism will hold the hearing, “Examining the Harm of AI Chatbots,” at 2:30 p.m.
Subcommittee Chairman Sen. Josh Hawley (R-MO) has been outspoken about the harm AI poses to minors. Hawley called for an investigation into Meta after an internal company document showed its AI chatbot was permitted to have “romantic or sensual” conversations with children. He also raised AI safety for children in his speech at the National Conservatism Conference in early September.
“We absolutely must require and enforce rigorous technical standards to bar inappropriate or harmful interactions with minors. And we should think seriously about age verification for chatbots and agents,” Hawley said during his 2025 NatCon speech.
Though OpenAI did not say exactly when the new age-appropriate “ChatGPT experience” would be available, an OpenAI spokesperson told the Washington Examiner that the company would share updated timing on the program soon. In the meantime, the spokesperson said, parental control features will be available at the end of September.
In OpenAI’s parental control model, parents can link their accounts to their teenager’s account to gain access to certain controls, including the ability to set hours during which the teenager cannot use ChatGPT.