One man’s plan to prevent bias in artificial intelligence


Artificial intelligence is a fascinating technological advancement that will undoubtedly transform human civilization. The main question, however, is how.

Will it be a benevolent advancement humankind can use for good? Or will it be compromised and manipulated in ways that further erode humanity? These are two critical questions to which no one seems to know the answer. Given AI’s rapid technological rise, solutions are needed sooner rather than later.


The many concerns surrounding artificial intelligence are well documented. One of the most prevalent is the risk of censorship and bias infiltrating the technology. Mike Matthys, co-founder of the Institute for a Better Internet, shared his thoughts on how AI can remain free of the indoctrination or bias that would corrupt it.

“The easiest and most logical ways for a biased programmer to influence the AI is to limit the training data to a viewpoint-biased set of data or to simply disallow certain types of inputs or questions,” Matthys said. “The AI software itself is optimized to generate answers that are considered to be correct according to the training data. It would be exceedingly complex for a programmer to write AI software that is designed to generate ‘wrong’ answers based on the training data.”
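A toy sketch makes this failure mode concrete. The Python snippet below is purely illustrative: the sources, topics, and stances are hypothetical and come from neither Matthys nor any real vendor. It stands in a trivial “model” that answers by majority vote over its training data, so filtering the corpus to one source category flips the answer without touching the algorithm itself.

```python
# A minimal sketch of viewpoint-filtered training data, assuming a toy
# majority-vote "model." All sources, topics, and stances are hypothetical.
from collections import Counter

# Hypothetical training documents: (source_type, topic, stance)
CORPUS = [
    ("gov_report",   "policy_x", "support"),
    ("gov_report",   "policy_x", "support"),
    ("indep_blog",   "policy_x", "oppose"),
    ("indep_blog",   "policy_x", "oppose"),
    ("indep_blog",   "policy_x", "oppose"),
    ("wire_service", "policy_x", "mixed"),
]

def train(corpus, allowed_sources=None):
    """Tally stances per topic; the optional source filter is the biased step."""
    table = {}
    for source, topic, stance in corpus:
        if allowed_sources is not None and source not in allowed_sources:
            continue  # viewpoint filtering: whole perspectives are never seen
        table.setdefault(topic, Counter())[stance] += 1
    return table

def answer(model, topic):
    """Return the majority stance among whatever data survived filtering."""
    counts = model.get(topic)
    return counts.most_common(1)[0][0] if counts else "no data"

full     = train(CORPUS)                                  # all sources
gov_only = train(CORPUS, allowed_sources={"gov_report"})  # government-only

print(answer(full, "policy_x"))      # -> "oppose" (majority of the full corpus)
print(answer(gov_only, "policy_x"))  # -> "support" (conforms to the gov narrative)
```

The point is the one Matthys makes: the software faithfully optimizes against its training data, so the bias enters through what that data is allowed to contain.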

“For example, if only government sources are used, the AI will generate answers that conform to the government narrative,” Matthys added. “Or if only right-wing sources are used, then the AI will be more likely to generate answers that conform to the right-wing perspective. This is similar to how bias shows up in Google Search where some sources of input information are prioritized over others based on viewpoint or based on a subjective reputational score that favors government and mainstream media sources.”
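The search analogy can be sketched the same way. Everything below is an invented illustration; the multipliers and source categories are assumptions and do not describe Google’s actual ranking system.

```python
# Hypothetical ranking sketch: a subjective reputation multiplier per source
# type is folded into the relevance score. Numbers are invented for illustration.
REPUTATION = {"government": 1.5, "mainstream": 1.3, "independent": 0.7}

results = [
    {"url": "indep.example/analysis", "type": "independent", "relevance": 0.90},
    {"url": "gov.example/statement",  "type": "government",  "relevance": 0.60},
    {"url": "news.example/story",     "type": "mainstream",  "relevance": 0.70},
]

def score(result):
    # The subjective multiplier can outweigh pure topical relevance.
    return result["relevance"] * REPUTATION[result["type"]]

for r in sorted(results, key=score, reverse=True):
    print(f'{score(r):.2f}  {r["url"]}')
# 0.91  news.example/story
# 0.90  gov.example/statement
# 0.63  indep.example/analysis   <- the most relevant result ranks last
```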

Matthys also suggested preventive measures: a set of universal protocols to keep prominent AI vendors from building bias into their systems. He identified four pillars that all AI programs should be required to follow, acting as guardrails predicated on safety, neutrality, transparency, and accountability. The guardrails would also apply to content moderation, Matthys advised.

“Safety means that the AI answers are not imminently harmful to any person or group of persons. Neutrality means that the AI should not pick sides between viewpoints — except to protect against imminent harm,” Matthys told the Washington Examiner. “Transparency means that each AI tool would be required to publish in understandable detail the sources of its training data and how it was designed to ensure safety and neutrality. Accountability means that the AI users would have a simple mechanism to dispute AI answers with the AI vendor and an independent entity where users may appeal the initial dispute resolution.”

“Appeals would be resolved based on the four guardrails, such as whether the AI Answers were imminently harmful or not and whether the vendor complied with transparency requirements,” Matthys said.
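Matthys does not specify how the transparency disclosures or the dispute-and-appeal mechanism would be represented in practice. One hypothetical way to make those two pillars concrete is a machine-readable record like the sketch below, where every field name and value is an assumption for illustration.

```python
# A hypothetical schema for the transparency and accountability pillars.
# Neither Matthys nor any vendor has specified this; field names are assumptions.
from dataclasses import dataclass

@dataclass
class TransparencyDisclosure:
    vendor: str
    model_name: str
    training_sources: list[str]     # published "in understandable detail"
    safety_measures: list[str]      # how imminently harmful answers are screened
    neutrality_measures: list[str]  # how side-taking between viewpoints is avoided

@dataclass
class Dispute:
    user_id: str
    answer_text: str             # the disputed AI answer
    claimed_violation: str       # e.g. "safety" or "neutrality"
    vendor_resolution: str = ""  # first-line resolution by the AI vendor
    appealed: bool = False       # escalation to the independent entity

disclosure = TransparencyDisclosure(
    vendor="ExampleAI",          # hypothetical vendor
    model_name="assistant-v1",
    training_sources=["public web crawl", "licensed news archive"],
    safety_measures=["output harm classifier"],
    neutrality_measures=["balanced source sampling"],
)

dispute = Dispute(
    user_id="user-42",
    answer_text="(disputed answer text)",
    claimed_violation="neutrality",
)
dispute.appealed = True  # the user escalates past the vendor's initial resolution
print(dispute)
```

Under this sketch, an appeal would then be judged against the four guardrails rather than against a vendor’s own policies, as Matthys describes.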

Additionally, there are legal issues to consider. As the political divide over social media content moderation has descended into societal tribalism, many have raised the potential liabilities surrounding Section 230 of the Communications Decency Act. Matthys addressed this issue when discussing the potential liability of AI vendors and programmers.

“As creators of their own AI software algorithms, the AI vendors would NOT be protected by the existing Section 230 liability shield, which protects social media and search platforms today. For example, AI vendors may face more liability for defamation or inciting violence if their AI software generates information that harms a person or group of persons,” Matthys said. “AI vendors are similar to publishers who effectively create the information generated by their AI platforms, which is different from social media platforms that share content generated by independent users.”

Furthermore, regarding AI’s potential societal impact, many science-fiction stories and movies depict a dystopian future involving AI, or a “rise of the machines.” While we are probably a long way from Terminator robots assassinating humans or molding themselves into human forms, AI’s unpredictability calls for regulation and protection. Matthys’s suggestions would serve as essential procedures for maintaining the technology’s integrity.


“Without any regulation, we should be concerned. There are many industries that are already regulated, and most of these should include AI regulation for safety. These industries would include automotive safety, critical utility infrastructure for power/water/rail/electricity/telecom, airports and air safety, hospitals, and obviously anything related to the military or law enforcement. AI should assist human decision-makers with information, but AI should not be enabled to make decisions that may affect any form of safety,” Matthys said.

“The challenge is to enable the productivity and lifestyle enhancements that AI can provide while ensuring AI cannot impact the safety and fairness of our lives, infrastructure, and political systems,” Matthys concluded.
