Generative artificial intelligence models have had a tumultuous few weeks. Multiple image generators have displayed controversial inaccuracies, and an employee of a major tech company has publicly raised concerns about his company's model, warning that it generates sexually harmful and violent images.
AI is the subject of constant controversy. Some are quick to argue that recent events highlight a need for government intervention, but in fact, they show the opposite: the market working to discipline AI, prompting companies to improve their models and make them more useful and appropriate.
In late February, after public criticism from market participants, Google removed its AI model's ability to generate images of people because the model had spit out some incredibly inaccurate images of historical figures. When prompted to create pictures of the Founding Fathers, for example, the model would portray them as black.
Meta ran into similar problems with its image generator, which portrayed professional football players as women, colonial Americans as Asian, and the Founding Fathers as black. It appears to have corrected the problem: its image generator now responds to the same prompts with more accurate images.
In both cases, the models failed and feedback from the market catalyzed the companies to employ a solution. Unfortunately, both failures resulted from the companies' teams aiming to make their models "woke." Google's "Objectives for AI" list includes all sorts of do-good phrases, such as "be socially beneficial" and "avoid creating or reinforcing unfair bias," but nowhere on its list does "be truthful and accurate" appear.
Creating truthful and accurate AI models should be the primary goal of AI developers, and the market is working to push these models toward that end. If models are useless and inaccurate, then people won't use them and companies won't make money from them.
Neither Google's model nor Meta's was answering prompts accurately, and feedback from market participants prompted the companies to correct their mistakes and override much of the models' diversity, equity, and inclusion biases.
These issues also serve as an important reminder that AI is as fallible as the humans who create it. Models, whether large language models such as ChatGPT or image generators such as Meta's Imagine with Meta AI, should not be treated as omniscient and should be met with healthy skepticism, just like any information consumed through news media or by surfing the web.
Some models, such as Microsoft's image creator, have also had problems with producing violent or sexual material. An engineer from Microsoft has brought attention to these concerns with the company's image applications, taking his concerns to the press and the Federal Trade Commission.
While going to the government was not necessary, bringing concerns to the press and making people, especially parents and teachers, aware of the risks involved with using Microsoft's AI model can effect positive change because these risks will cause certain market participants to avoid it. Microsoft's spokespeople have reiterated that the company is committed to addressing these concerns, as models that produce inappropriate content will lead certain consumers to refrain from using its products.
Companies that manufacture the inputs for these models are also doing their part. For example, Nvidia, one of the main manufacturers of the chips that power AI, is committed to supporting initiatives that prioritize responsible AI development and offers a wide range of products to help developers tailor their models, protect privacy, and set boundaries that the models won't cross.
AI developers do not need the government lurking over their shoulders, intervening in attempts to make models more accurate. All that would do is turn AI into the next political football, stifling innovation in the process. The market is providing solutions in real time. Calls for regulation are not productive.
Benjamin Ayanian (@BenjaminAyanian) is a contributor for Young Voices, a public relations firm and talent agency for young, pro-liberty commentators.
