In his recent artificial intelligence framework, President Donald Trump emphasized something that has been missing from much of the policy conversation: empowering parents. That’s the right starting point for how to help families adjust to and thrive in the age of AI. At the same time, recent jury decisions involving Meta Platforms and YouTube reflect a growing concern that major platforms have not always been transparent or thoughtful about how their products are experienced by younger users.
Those concerns are worth taking seriously. But they should not lead us to the wrong policy conclusion.
The answer is not to replace parents with policymakers. Nor is the answer to compel platforms to offer “safe” tools that may hinder the ability of all Americans to express their values and access information. The path forward is to empower parents to make informed, durable decisions instead.
Our Constitution and our culture have long recognized that parents, not bureaucrats, are responsible for raising children and making fundamental decisions about their development. Yet across the country, state legislatures are moving in the opposite direction. From Washington to Florida, lawmakers are advancing proposals that would shift decisions about how, when, and whether children use AI tools away from families and toward regulators, judges, and compliance systems.
That is a mistake.
Many of the policies gaining traction, from mandatory age verification to sweeping content restrictions to expansive compliance mandates, rest on a flawed premise. They all operate from the assumption that child safety in the age of AI requires centralized control: if we can verify everyone’s identity, filter enough outputs, and constrain enough systems, we can engineer safety from above.
In practice, that approach does something far more troubling. It crowds parents out.
It invites the government into the homes, living rooms, and kitchen tables where these technologies are actually used and replaces family-level judgment with one-size-fits-all rules. It assumes that policymakers are better positioned than parents to decide what a 10-year-old, a 13-year-old, or a 16-year-old should be able to explore. They most certainly are not. Every child has unique needs and interests. Every family has its own values and priorities. No one tool can align with all those contexts.
The goal should not be to eliminate every risk or standardize childhood. The goal should be to support parents in making better, more informed decisions for their own children in a competitive, transparent marketplace of AI tools. Pursuit of this policy route requires a different approach, and we propose three necessary pieces that should lay the groundwork for any policy aimed at protecting children in the internet age.
First, equip parents with usable, practical information about AI tools. Most parents are navigating this landscape with a limited understanding of how these systems work or what trade-offs they present. The solution is not for the state to fill that gap with mandated answers. Again, we should not squeeze all AI companies into an arbitrary and shifting mold of what some people think is best for children. The government should instead take the steps necessary for parents to easily determine if an AI tool is right for their children. This amounts to a sort of nutrition label for AI tools — what went into the tool, what tends to come out, where it was made and by whom, and whether there are any special attributes that people should be aware of.
Second, prioritize providing parents with substantive, not invasive, transparency into how children use AI. Parents do not need full transcripts of every interaction, nor do they want a surveillance regime inside their home. What they need are signals of proper and improper use based on their family values and norms: how often tools are used, in what general ways, and whether usage patterns suggest a problem. That kind of visibility supports parenting without undermining trust or hindering the right of children to seek out information and convey their ideas.
Third, require regular aggregate usage reports so that parents and policymakers alike can have a broader understanding of how young people are using AI. As it stands, data on how and when children use AI is shared by labs in an ad hoc, inconsistent manner. More granular, frequent analysis of what children are actually using AI for can inform policy conversations and help families recalibrate their own norms.
This is where competition among AI labs can do what hard legal mandates cannot. These regular usage reports will provide the evidence needed to determine whether AI tools are serving young users as their developers, and parents, intend. The summaries will show which companies have succeeded in designing tools and parental controls that shape how children use them.
Children do not become capable adults by living inside perfectly controlled systems. They develop judgment through guided exposure, conversation, and trust. AI, if used well, can support that process. The question is who should guide it.
We can move toward a future where these tools are governed primarily by distant rules and rigid mandates. Or we can build one where parents are given better tools, better information, and real authority to shape how their children engage with powerful new technologies. One path replaces parents while the other strengthens them.
We should be clear about which one we’re choosing.
Christopher Koopman is the CEO at the Abundance Institute, where Kevin Frazier is a senior fellow.
