Widespread student use of AI needs code of conduct: ‘Too big to ignore’


Artificial intelligence is fast becoming an integral part of everyday life. But as the technology explodes, the debate intensifies on how to harness it properly and ethically. This Washington Examiner series, The Integration Era, will look at how AI is being used responsibly in Congress, how its usage is causing headaches in schools, and how Congress and courts are addressing abuses that target vulnerable people and threats to intellectual property. Read Part 1 here.

When a Massachusetts high schooler used ChatGPT to help draft a history assignment, the school responded with discipline. His parents sued, saying their son was unfairly penalized for using artificial intelligence as a research assistant — not a ghostwriter.

The legal battle was the latest flashpoint in a growing national tug-of-war over whether generative AI should be treated in classrooms as a cheating tool or a research tool.

And just in time for finals, AI giants are targeting students directly. OpenAI CEO Sam Altman posted to X on April 3 that ChatGPT Plus would be “free for college students in the U.S. and Canada through May.” Less than a week later, Elon Musk matched the offer, promoting free access to xAI’s SuperGrok for students with .edu emails.

This rapid-fire rollout signals something bigger: a race to win over the next generation of users, even as schools scramble to define what responsible use looks like.

Classrooms playing catch-up

Some universities have embraced the shift, while others remain cautious.

Rebecca Winthrop, a senior fellow at the Brookings Institution who co-teaches a seminar at Georgetown University, allows students to use AI tools to aid their research — but only if it’s paired with rigorous discussion.

“We tell our students that they can definitely use AI in the research process,” Winthrop said. “We have them present and analyze their case verbally … you can certainly tell … how much they actually know and how much they’ve absorbed and how critically they’re thinking.”

Winthrop said she’s worried younger students lack the developmental foundation to navigate AI responsibly. Her analogy: Generative AI is like giving babies shoes that help them skip the crawling phase.

“Crawling builds spatial awareness, wires the brain — it’s foundational. We don’t know yet what cognitive muscles kids are skipping with AI,” she said.

She’s co-leading the Brookings Global Task Force on AI in Education, which is assessing the risks and benefits of generative AI for children and teenagers worldwide. A key focus is preventing what some AI researchers are calling “cognitive deskilling,” which is when users become so reliant on AI tools that they lose the ability to think critically on their own.

Kevin Frazier, an AI innovation and law fellow at the University of Texas School of Law, said the concern isn’t hypothetical.

“There have been some pretty robust studies showing that even for folks who are professionals, lawyers, doctors, if they become too reliant on these tools, they do report an active kind of loss in critical thinking,” Frazier said. “And if we saw that replicated for our up-and-coming, you know, future leaders, that would be a severe issue.”

AI use is widespread — but still uneven

A recent report from Anthropic, the AI firm behind Claude, analyzed more than half a million real student interactions with its chatbot and found that students primarily use it to create and analyze academic material. These tasks, such as drafting practice questions or dissecting complex legal concepts, fall under higher-order thinking categories in Bloom’s Taxonomy.

In other words, students aren’t just using AI to get quick answers — they’re engaging it like a collaborative tutor.

Still, nearly half of student conversations with Claude were “direct,” meaning queries seeking instant answers or summaries. While many serve legitimate learning purposes, Anthropic acknowledged some of this activity could constitute academic dishonesty, depending on context.

Winthrop lauded Claude’s new “learning mode,” which avoids simply providing answers and instead emphasizes conceptual dialogue. This version of Claude “doesn’t give students … the answer, but it’s like this conversational … tool to help them learn better. It’s almost like an AI tutor,” she said.

Frazier echoed that sentiment, saying schools should aspire to form “human-machine teams” in the classroom.

“When it’s a human paired with an AI in that kind of team dynamic, then we actually see huge gains in productivity, huge gains in improvement in terms of the quality of the work product,” he said.

He added that basic literacy development in early grades is too important to be outsourced.

“If we shortchange that by over-relying on AI, that could be a real issue,” he said.

A law student’s POV: Use AI with caution, but don’t ignore it

Education and AI experts who spoke with the Washington Examiner underscored that the guardrails for AI use in K-12 classrooms are much different from the constraints that should be applied to university students.

Meanwhile, many jobs have already begun integrating AI into daily work. Some law firms are even forecasting diminished hiring as more attorneys rely on these tools to serve their clients, according to a white paper published by Reuters.

Cassidy Atchison, a second-year student at Marquette Law School, said AI tools are already baked into legal education — whether professors like it or not.

“It’s pretty professor dependent … they’ll either say explicitly, or it’s in the syllabus,” she said.

Atchison has used AI to understand unfamiliar legal concepts, such as business incorporation language, by asking for examples and explanations. However, she’s wary of relying on chatbots for case law after one professor showed how an AI-generated podcast misrepresented a court decision.

“You can’t be like, ‘Oh, well, AI told me this,’” Atchison said. “It’s your name on the line. It’s an integrity thing at the end of the day.”

She believes AI could reduce busywork and let students focus more on “legal analysis” and “strategizing.”

Frazier said some law schools are falling behind in preparing students for this new legal environment.

“Some elite institutions are leaning in — Vanderbilt, Miami, UT Austin all have AI in Law labs,” he said. “But at many schools, there’s no one teaching faculty how to use these tools, let alone helping students develop the skills they need.”

He warned that this gap could soon have real professional consequences.

“Lawyers can’t have their head in the sand when it comes to AI,” Frazier said. “It’s part of the technical competency the profession requires.”

AI as the new ‘Space Race’ with China?

Loni Mahanta, a Brookings fellow and former tech executive, argued that the stakes go beyond the classroom.

In a recent op-ed for The 74, she called AI education the “new Space Race” and warned that the United States risks falling behind countries that have introduced AI coursework as early as elementary school, such as China.

She urged policymakers to consider a modern-day version of the National Defense Education Act to build national AI literacy. However, she said the decentralized nature of U.S. education means nonprofit organizations, states, and companies will need to lead the charge in the meantime.

Toward a new code of conduct

Across the board, experts and students agreed that some kind of code of conduct is needed, if not a legal one, then at least an ethical one.

“There’s a lot of unknowns with AI,” Atchison said. “But it’s too big to ignore. We just have to figure out how to use it the right way.”


Winthrop argued that students need to build core cognitive skills before turning to tools such as chatbots. She said her research aims to help educators “figure out which muscles young people need to develop on their own, and which ones can be assisted by AI.”

Whether students cheat or innovate may depend less on the tool and more on how and when they use it.
