As AI and Algorithms Grow, So Do Employment Law Questions

From virtual phone assistants to automated driving, artificial intelligence software is everywhere now, including in the workplace. But with the growing use of AI and algorithmic tools in hiring and employment decisions come questions about how the programs might impact labor and employment law. 

Law Week Colorado caught up with local employee-plaintiff attorneys to learn more about the legal questions springing up from the use of AI and algorithms in work. 

In simplest terms, artificial intelligence, or AI, refers to programs or algorithms that recognize patterns from data inputs and apply the learned patterns to new questions and tasks. But unlike simpler computer programs, AI can complete tasks that usually require human knowledge. 

Businesses of all sizes and industries have been increasingly using programs to help them make decisions around the hiring and retention of employees. Common examples of employment decision AI include resume screening programs, video interview analysis programs and even software to monitor and assess worker productivity. 

While these programs offer an array of benefits, federal agencies and local governments have been wary of places where algorithms break employment law. The overlap of AI and employment decisions is evolving and local attorneys predict that questions about the use of the software will also keep evolving. 

“Its application is infinite, almost, you can theoretically apply it in almost any decision or process. So the ways that it can go wrong and areas that it can go wrong go hand in hand with that,” said Kiron Kothari, an employment law attorney with Livelihood Law. Kothari represents employees in discrimination, harassment and labor law matters and previously served as an investigator and supervisor with the U.S. Equal Employment Opportunity Commission. 

Kothari said potential pitfalls when using AI in the workplace fall into two main categories: hiring decisions and decisions about existing employees. He added that while AI can be very useful for businesses, it can potentially reaffirm traditional workplace biases or fail to account for nuances when making decisions. And sometimes those pitfalls can cross the line into discrimination. 

“Unlike the human brain, it is really difficult for AI systems to make individualized assessments or a discernment. So, a lot can go wrong in that process,” said Kothari. Examples of when AI could break employment law include video interview software that could rate a foreign-born job candidate lower due to mannerisms or speech that aren’t the cultural norm, or resume screening programs that remove candidates who have different demographics, like race or gender, than the majority of a company’s existing employees. 

But while the use of AI is becoming increasingly common in the hiring pipeline, job candidates might not realize it’s being used. 

“I think that the type of litigation that you would see around these is going to be bigger, you know, disparate impact cases where experts are evaluating statistics of massive companies not hiring certain populations,” said Ben Lebsack, a partner at Lowrey Parady Lebsack. He added that even if AI filters out job candidates based on protected characteristics, it’s unlikely that applicants would ever realize that was the reason they were denied a position. 
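The statistical screen experts often start from in disparate impact analysis is the EEOC’s “four-fifths rule”: if one group’s selection rate falls below 80% of the highest group’s rate, the process is flagged for potential adverse impact. A minimal sketch of that arithmetic, using hypothetical group names and applicant counts (not drawn from any case in this article):

```python
# Sketch of the EEOC "four-fifths rule" screen: a group whose selection
# rate is below 80% of the highest group's rate is flagged as showing
# potential adverse impact. All group names and counts are hypothetical.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

def four_fifths_check(groups: dict) -> dict:
    """Return, per group, whether its rate falls below 4/5 of the top rate."""
    rates = {g: selection_rate(h, a) for g, (h, a) in groups.items()}
    top = max(rates.values())
    return {g: rate < 0.8 * top for g, rate in rates.items()}

# Hypothetical applicant pool: (hired, total applicants) per group.
pool = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(pool))
# group_b's 30% rate is below 0.8 * 48% = 38.4%, so it is flagged:
# {'group_a': False, 'group_b': True}
```

The four-fifths rule is only a rough screen; as Lebsack notes, litigation at the scale of large employers would rest on fuller expert statistical analysis, not this ratio alone.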

“I think it’s on the fringes right now,” said Deborah Yim, the founder of Primera Law Group and an employment law and civil rights litigation attorney. “Most employers right now are not required to put employees on notice of the fact that they’re using AI and algorithm-based technology in their hiring decisions.” 

Some states have passed laws requiring employers to disclose the use of AI to assess candidates. In 2021, New York City passed a law requiring employers to run a bias audit before using software to make employment decisions and notify job candidates and employees that they will be assessed with the software. And Illinois lawmakers in 2019 passed The Artificial Intelligence Video Interview Act, which requires employers that use AI to screen video interviews to disclose they do so to applicants and limits how long recordings can be kept or shared. 

The second place where the use of AI and algorithms could break employment law is within workplaces. 

In 2021, the EEOC launched an agency-wide initiative to look at and make recommendations for best practices in using AI in workplaces. In May 2022, it published its first guidance on how using software that relies on algorithms to make workplace decisions could discriminate against employees with disabilities in violation of the Americans with Disabilities Act. The guidance noted that relying only on algorithms to assess an employee or job candidate’s performance or capabilities could run the risk of excluding workers who have disabilities.

Yim noted that if businesses lean on AI to make an employment decision, such as layoffs or terminations, without factoring in reasonable accommodations, they could open themselves up to claims under the ADA. 

Lebsack, who has handled a significant number of sexual harassment cases in his practice, added that the use of AI programs to detect workplace harassment over email and chats might also create issues down the line. 

“It may just not catch harassment and then you have the problem of an employer just thinking that it’s got a solution and not actually observing the workplace,” said Lebsack. “But then I also worry about a more Black Mirror type of the workplace being completely monitored. It would probably stifle a lot of National Labor Relations Act protected concerted activity. A lot of that takes place over Slack, instant messages, that type of stuff, because people are working remotely and aren’t talking over the watercooler.”

While the use of AI and algorithms to make employment decisions opens up legal questions, attorneys say businesses shouldn’t shun the technology altogether but should instead take steps to ensure they’re aware of potential bias and aren’t solely relying on the programs to make important decisions. 

“It’s not all doom and gloom, of course. It can be really valuable when used correctly,” said Kothari. “We need to make sure that humans have an active role in making sure that the AI and algorithms technology is being used and is operating in the way that was intended and appropriate.”

Yim, who also advises small businesses on employment law matters, added that employers should consult with technology vendors to learn more about a tool’s potential blind spots when making decisions. “Are employers working with the vendors to make sure that the algorithms that are being used are fair, and not discriminatory? A lot of employers are blindly relying on the vendors,” said Yim. 
