I use ChatGPT every single day, and my children know it. When we were abroad recently, I held my phone up to street signs and menus and translated them in real time. We used artificial intelligence to help build our complicated, multi-country itinerary. At home, I use it in our homeschool to optimize schedules, summarize long documents, and double-check Latin declensions and math problems when I struggle with fifth-grade equations. My children have watched this technology prove its usefulness over and over again.
It was inevitable that they would adopt it themselves, in the distinctly childish way that offers a window into their inner workings. One afternoon, they asked ChatGPT to generate personality tests based on Greek gods. Is there anything more homeschool than that? It was harmless, creative, and fun. But it also raised a quiet concern I haven’t been able to shake: At what point does a tool start to feel like a companion? They weren’t just asking a question; they were in dialogue with AI.
That question matters more now than ever. According to new data from the Pew Research Center, nearly 70% of American teenagers have used an AI chatbot. About one-third say they use one daily. Of those daily users, 16% report using AI “several times a day” or “almost constantly.” That is not occasional homework help. That is ambient presence.
Theresa Payton, a cybersecurity expert and mother of three Generation Z children, has seen this shift up close. Payton made history as the first female White House chief information officer under President George W. Bush and is now the CEO of Fortalice Solutions. Her warning is blunt: These chatbots operate with essentially no meaningful guardrails. And their tone — endlessly agreeable, affirming, and accommodating — makes children uniquely vulnerable to persuasion and manipulation. So much so that some families have filed lawsuits accusing the technology of encouraging the suicides of their (adult) children.
Parents cannot afford to sit this out. AI chatbots are not neutral search engines. They are designed to engage, respond warmly, and keep users talking. For an adult, that can feel efficient. For a child or teenager, it can feel like validation, companionship, or even intimacy. As Payton puts it, “Parents, teachers, and counselors need to face the fact that [generative AI] usage is now mainstream teen behavior. And it’s getting beyond schoolwork, encroaching into areas that should remain human only, such as companionship and romance.”
That encroachment is not theoretical.
“This might be the largest uncontrolled experiment on children ever,” Payton warns. “We have no long-term data on the developmental impacts, but there is already evidence of real-world tragedies associated with chatbot usage. Multiple adolescent suicides have been linked to AI chatbots.”
Those cases are rare, but rarity is not reassurance when the scale is this large and the exposure this constant. Parents already understand this instinctively when it comes to social media. We monitor screen time. We restrict apps. We talk, sometimes endlessly, about online behavior and mental health. AI deserves the same seriousness, if not more.
AI is a powerful tool, and pretending it isn’t will only push children to use it in secret. The answer is supervision and education, not fear. Payton argues, “We need to proactively teach digital citizenship and critical thinking skills across K-12 and beyond, equipping young people to recognize manipulative AI patterns, separate genuine human connection from simulated intimacy, and use GenAI as a reflective tool rather than a replacement for relationships.”
That framing matters. AI is not a teacher. It is not a babysitter. And it is certainly not a friend. It can assist learning, but it cannot model character. It can answer questions, but it cannot replace the formative experience of struggling through a problem, or the emotional growth that comes from genuine relationships with peers and adults.
There is also a growing concern that parents cannot ignore: ideological influence. Large language models reflect the values of the institutions that build and train them, and many parents worry that widely used tools such as OpenAI’s ChatGPT lean left politically. When AI becomes a constant presence in a child’s intellectual life, that bias matters.
That concern is fueling interest in alternatives. Last month, xAI announced a partnership with El Salvador to bring its chatbot, Grok, into the tutoring space. Reacting to the news, political strategist Katie Miller wrote, “If we are serious about restoring education to math, science and English — why would we allow left-leaning liberal AI [to teach] our kids? This unlocks non-woke educational tools for our kids.”
Whether or not Grok is the answer, the question is legitimate. If AI is going to be embedded in education, formally or informally, parents deserve transparency, choice, and control.
In my own home, that means ongoing conversations. We talk about what AI is good at and what it is not. We talk about why it feels friendly, and why that doesn’t make it a friend. We talk about checking answers, questioning assumptions, and coming to a human when something feels confusing or emotional.
AI is here to stay. Used wisely, it can expand knowledge and lighten burdens for families already stretched thin. Used passively or obsessively, it risks shaping children in ways we don’t yet understand.
When we emphasize digital citizenship and critical thinking, Payton says, “We create a generation-defining opportunity to raise the most emotionally literate, self-aware, and resilient young people in history.” That opportunity is real, but only if parents are willing to engage now, before the tool quietly becomes the guide.
Bethany Mandel (@bethanyshondark) is a homeschooling mother of six and a writer. She is the bestselling co-author of Stolen Youth: How Radicals Are Erasing Innocence and Indoctrinating a Generation.
