A ‘Department of AI’? Tech giants disagree over government regulatory effort



The companies leading the race in artificial intelligence agree on the importance of regulation but are divided on what agency should oversee the new technology. They are giving the government conflicting advice on the path forward, with major ramifications for innovation and security.

Google, OpenAI, and several other developers of artificial intelligence-powered software submitted comments to the National Telecommunications and Information Administration, part of the Department of Commerce, in response to its April “Request for Comment” on implementing AI accountability. More than 1,400 comments were received by the June 12 due date from industry groups, tech companies, and individuals alike. The comments will now be reviewed and used in formulating rules.


Nearly all tech groups agreed that some form of regulation is necessary but disagreed on how it should work. ChatGPT developer OpenAI and Microsoft called for a new national agency to enforce AI regulations and restrictions and to issue licenses to developers.

In contrast, Google, which developed the chatbot Bard, argued that the agencies needed to rein in AI already exist and should simply be granted the appropriate authority.

Leading AI developers have already presented their vision for a new agency to the legislative branch. OpenAI CEO Sam Altman told Congress in May that it should require licenses to develop advanced AI products through a new agency. Such a proposal would expand government powers and require new revenue to fund the agency.

Yet some in the industry, along with free market advocates, are skeptical of a single agency to regulate AI on the grounds that it would harm innovation and empower incumbent companies. They say that, for instance, a healthcare provider that uses AI is entirely different from a transportation company that uses the technology. Treating them the same would lead to inefficient rules for both. The proposed agency would either need to spread itself thin to account for all the possible ways AI is used or adopt a broad approach that fails to realize all the particulars.

Requiring AI developers to seek a license could also inhibit the creative work of open-source developers, who play an integral part in the sector’s development.

Some members of the AI industry believe the guidelines needed to regulate the technology already exist and that no new agency is necessary. “While A.I. may seem new, it has been in use for years across multiple industries and is already even considered by some existing rules and regulations,” AI training company Scale AI wrote in its public comments. For example, the Federal Trade Commission already has guidelines to rein in software used to engage in fraud or scams.

In fact, one consumer group said that the FTC could be the one to lead on AI regulations. “As the nation’s primary consumer protection agency with the remit to regulate large tracts of the nation’s commercial activity, the [FTC] appears to be the best-suited government body to take responsibility for A.I. accountability regulation,” Consumer Reports wrote in its public comments.

The FTC would not be the only agency, of course. The National Institute of Standards and Technology has already done much work to create a framework to ensure the technology is safe. The agency previously released its AI Risk Management Framework, which offers a plan for managing the dangers the software poses to individuals, organizations, and others. Google said in its public comment that it supports a “central agency like the [NIST] informing sectoral regulators overseeing A.I. implementation — rather than a ‘Department of A.I.’”

The drawback of letting NIST set the rules, though, is that the agency lacks the authority to enforce them. It would need additional power from Congress to succeed.


The Biden administration has also released its Blueprint for an AI Bill of Rights, which laid out the priorities developers need to keep in mind regarding the ethical problems raised by the technology.

The European Parliament also passed legislation that may become the most comprehensive AI regulation yet. The AI Act would ban the use of AI facial recognition in public spaces and AI-driven predictive policing software. It would also set new transparency requirements for software like ChatGPT.

© 2023 Washington Examiner
