Employers are liable for discriminatory hiring learned by artificial intelligence: EEOC


Corporations that use artificial intelligence for hiring and employee performance tracking are legally liable for discriminatory outcomes produced by the algorithms, according to a top federal official charged with enforcing workplace laws.

At a Federalist Society discussion Tuesday, Keith Sonderling, commissioner of the Equal Employment Opportunity Commission, said companies cannot use the fact that a computer derived an outcome as a legal defense for discriminatory practices.


Artificial intelligence is used by 83% of large employers for some type of employee decision-making, according to recent studies, and can be found in nearly every aspect of employment, from hiring to firing.

The technology is used to filter resumes, while chatbots answer applicant questions and schedule interviews. AI is used to monitor productivity and safety, and also predict an applicant’s potential for success.

As the trend grows, new legal questions regarding liability for employment discrimination are being presented to courts.

AI is “only as good as those who ‘feed the machine,'” Sonderling wrote in the University of Miami Law Review.

“[A]ddressing algorithmic bias can present a ‘whack-a-mole’ problem, where the new algorithm — re-engineered to have less negative impact on members of one protected group — now has an increased adverse impact on another protected group,” scholar Kelly Cahill Timmons wrote in “Pre-Employment Personality Tests, Algorithmic Bias, and the Americans with Disabilities Act.”

For example, an algorithm can be intentionally fed discriminatory criteria in pursuit of a diversity outcome, downgrading job applicants because they do or do not meet certain characteristics, such as age or race, that have nothing to do with skill or merit. Likewise, an algorithm can “inherit” discriminatory practices already present at a company and apply them to future applicant pools.

In a 2022 lawsuit filed by the EEOC, the federal government alleges three English-language tutoring organizations discriminated against older applicants by adjusting their AI systems to automatically filter out job seekers above a certain age.

The companies rejected more than 200 applicants because of their age, according to the EEOC, setting parameters that automatically rejected female applicants over the age of 55 and male applicants over the age of 60.
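To illustrate the mechanism the EEOC describes, here is a minimal, hypothetical sketch of how a parameterized screening filter of this kind operates. The cutoff values mirror those alleged in the complaint, but the names and code are illustrative only, not the actual companies' software:

```python
# Hypothetical age/sex screening filter, illustrating the kind of
# automated rule alleged in the EEOC lawsuit: female applicants over 55
# and male applicants over 60 are rejected before any human review.

AGE_CUTOFFS = {"female": 55, "male": 60}

def screen_applicant(age, sex):
    """Return True if the applicant passes the automated filter."""
    return age <= AGE_CUTOFFS[sex]

applicants = [
    {"name": "A", "age": 54, "sex": "female"},
    {"name": "B", "age": 56, "sex": "female"},
    {"name": "C", "age": 61, "sex": "male"},
]

# Only applicants under the cutoffs survive the filter.
passed = [a["name"] for a in applicants if screen_applicant(a["age"], a["sex"])]
```

A rule like this discriminates facially on both age and sex, and under the EEOC's position, the employer remains liable even though the rejections are executed entirely by software.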

“Even when technology automates the discrimination, the employer is still responsible,” EEOC Chairwoman Charlotte Burrows said in a press release about the lawsuit. “This case is an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative. Workers facing discrimination from an employer’s use of technology can count on the EEOC to seek remedies.”

Another lawsuit, filed in February, alleges that Workday, a popular human resources platform that uses artificial intelligence, denied an applicant jobs on the basis of his race, age, and disability in violation of federal anti-discrimination law, including Title VII of the Civil Rights Act of 1964. In this case, plaintiff Derek Mobley accuses the designers of Workday’s AI system of discriminating against black applicants, those over the age of 40, and people with mental disabilities such as depression and anxiety.

“These AI programs can truly help companies lawfully meet their goals of diversifying the workforce,” Sonderling, who was appointed by former President Donald Trump, told the Washington Examiner. “However, if they’re using these to intentionally get people of certain protected characteristics, age, race, gender, religion, national origin, they cannot be used to intentionally select those people for the jobs.”

Workday has denied wrongdoing, telling KRON4 News, “We believe this lawsuit is without merit. At Workday, we’re committed to responsible AI. Our decisions are guided by our AI ethics principles, which include amplifying human potential, positively impacting society, and championing transparency and fairness.”

Many companies have opened diversity, equity, and inclusion (DEI) offices which often make company-wide policies regarding racial- and gender-consciousness training, hiring, and other procedures. Similarly, the federal government, through a first-day executive order from President Joe Biden and one earlier this year, has incentivized these considerations across government and in the private sector.

While Sonderling says companies can set diversity goals in a legal way, ultimately making a hiring decision based on anything other than skill and merit would be an “unlawful decision.”

That requires a human element and finding the “right division of labor between artificial intelligence and human resources personnel — between using AI to improve human decision-making and delegating decision-making entirely to algorithms.”

“You can use those [DEI] statements as generic statements to help diversify your applicant pool, to get more people who historically wouldn’t apply to your job,” Sonderling said. If the algorithm produces discriminatory results, however, “there’s no defense that ChatGPT told me, ‘This is the best job candidate, the best job description.'”

Employment discrimination law rests on legislation often passed more than 30 years ago, and it has largely not been updated to account for new technology.

There are also several issues inherent in AI that make oversight and regulation particularly difficult.

For example, the “black box” problem, in which it is impossible to know how an algorithm arrived at a particular outcome, makes it difficult for employers and investigators to determine where discrimination might be coming from.

Sonderling told the Washington Examiner that regulating how algorithms are written, so as to not inject bias, would be “very complicated and takes an expertise that the federal government or people at the agencies are certainly lacking now.”

While black box considerations are “beyond our scope,” the ultimate outcome is “right in our wheelhouse,” he said.

Detecting discrimination is also more difficult, as employees are often unaware they are being assessed using this technology.

Part of the solution for Sonderling is for employers to “remember that AI is not a panacea for all employment challenges; personal human intervention must continue to play fundamental and critical roles in employment decisions.”


Recently, thousands of technology leaders, including Elon Musk, Steve Wozniak, and Andrew Yang, signed a letter asking for a pause on the “dangerous race” to improve AI.

The letter asks for “stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” stating such tech could pose “profound risks to society and humanity.”

© 2023 Washington Examiner
