My grandfather taught me to read at the racetrack.
Not books. Racing forms. He put a pencil in my hand at Belmont and walked me through every column. What a “favorite” meant. What a “longshot” cost. How to spot risk in a row of numbers. He wasn’t making me a gambler. He was using an adult hobby to teach his grandson how to pay attention.
And when I wanted to place a bet, I held his hand, walked to the window, and told the teller what I wanted. The teller checked that the horse hadn’t been scratched, the race was still open, and the bet was valid before taking my money. If you didn’t understand the form, you couldn’t even place the bet. Two adults stood between what I learned and what I could do with it.
That layer is gone.
In a recent NBC News poll, 57% of voters said the risks of artificial intelligence outweigh its benefits. Yet more than half have used an AI tool in just the past few months. No one walked them through it. No one is standing next to them. There is no teller.
On March 4, two lawsuits were filed.
In one, an insurance company alleges that ChatGPT functioned like an unlicensed attorney. After a disability claim was settled and dismissed with prejudice, the claimant uploaded her lawyer’s letter and asked the chatbot if she was being gaslit. The system responded by generating legal arguments, drafting filings, and citing cases that did not exist. Dozens of meritless motions followed, costing the insurer hundreds of thousands of dollars. These are allegations, not findings. But they reflect something new: a system that does not just retrieve information but also performs judgment without accountability.
The same day, a Florida father filed a wrongful death lawsuit alleging that Google’s Gemini chatbot contributed to his 36-year-old son’s suicide after weeks of intensive interaction. According to the complaint, the system fostered a collapsing alternate reality, presented itself as sentient, and guided the user through increasingly detached thinking. Thirty-eight sensitive-query flags were triggered inside Google’s system. No human intervened.
In Canada, a family has filed a civil claim alleging ChatGPT helped an 18-year-old plan a mass shooting that killed eight people at a school in British Columbia. The lawsuit claims company employees flagged the account months earlier and recommended alerting police, a recommendation leadership declined.
Most people are not using AI this way. They are drafting emails, checking symptoms, and helping their children with homework.
But systems used by hundreds of millions cannot be designed only for the average case.
We wouldn’t let our children sleep over at a house without knowing the parents. Yet we allow our families to engage with systems we do not understand, designed by people we have never met, operating without any shared rules for how they should guide, refuse, or redirect human behavior.
A woman could not tell the difference between a chatbot and a lawyer. A man could not tell the difference between a chatbot and a conscious being. In both cases, no one was standing next to them when it mattered.
Social media took 20 years to reach a courtroom reckoning. AI has arrived there before most Americans could explain what it is.
Regulation matters. Privacy protections matter.
Neither solves this.
The missing layer is education built into the system itself. Not instructions buried in terms of service, but intentional design that teaches users what the technology is, what it is not, and when to stop trusting it — a modern equivalent of the racing form and the teller. Clear signals. Defined boundaries. Human guardrails where judgment begins to matter.
We have done this before.
In 1969, TV personality Fred Rogers sat before a skeptical Senate subcommittee and made a simple case. Television was not going away. Children were already watching it. The question was whether anyone would build programming worthy of their trust.
He persuaded Democratic Sen. John Pastore of Rhode Island to fund public television.
What followed was not just more television. It was better television. Sesame Street. Mister Rogers’ Neighborhood. Programming designed by people who understood that access without guidance is not empowerment.
AI needs that same layer now.
In a Harvard University study, researchers built an AI tutor around established teaching methods. Students learned significantly more than peers in an active learning classroom. The difference was not just the model. It was the structure. Educators decided how it would respond, what it would refuse, and how it would guide users when they were uncertain.
That is AI with a teacher built in.
What we are seeing in courtrooms is AI without one.
My grandfather did not teach me to gamble. He taught me how to read the form and made sure he was standing next to me when it counted.
Right now, AI has no form.
And nobody is standing at the window.
Bryan Rotella is the managing partner and chief legal strategist of GenCo Legal.
