Almost Half of Employers Use AI According to Littler Study, but Legal Risks Abound

As artificial intelligence continues to evolve, so do the attitudes of businesses towards its use. In Littler’s 2024 annual employer survey, almost half of the companies surveyed told the firm they used generative AI in some part of their business. 

According to the survey, the most common uses of AI were creating HR-related materials and running chatbots for internal questions, at 26% and 24%, respectively. 

Zoe Argento, a shareholder and co-chair of the privacy and data security practice group at Littler, told Law Week she was surprised some of the respondents weren’t more concerned about the use of AI for HR purposes. 

“There are a lot of ways that AI tools can go wrong,” said Argento. “AI can be incredibly useful, but you have to put a lot of guardrails in place to make sure that you get the benefits in a manner that outweighs the risk.” 

Behind the top two uses, the survey found companies were also using AI for job candidate interaction, employee development and performance management, at 16%, 11% and 7%, respectively. 

Usage also varies by company size and industry, according to the survey report, with larger companies and those in the tech industry using generative AI tools more frequently. 

Argento noted the use of AI in the workplace breaks down largely into four buckets, with the fourth appearing almost exclusively in the entertainment industry. The three main areas are tools for evaluating and assessing applicants and employees, tools that help employees with their work and tools that provide services to employees or the broader workforce. 

Argento added that using AI to evaluate or assess applicants or employees is the highest-risk use of AI in the workplace, and it’s also where she’s seeing the most regulation. The Colorado Legislature recently regulated that type of AI usage in the state. 

“The chief risk there is discrimination against employees or applicants in some form,” said Argento. “An AI tool may be biased, potentially in a manner that has a disparate impact on people in a protected category. And there are all sorts of ways in which a tool might be biased.” 

She noted regulators’ concerns go beyond AI tools leading to discrimination based on race and gender. 

“Regulators are also concerned that an AI tool might discriminate against an applicant or employee because they’ve been involved in labor organizing in some way, or because they have a disability,” said Argento.

The second risk around the use of AI is data protection, an area seeing increased regulation in Colorado, across the country and around the world. Argento said many of the laws proposed worldwide are at least inspired by the European Union AI Act.

“It takes a bifurcated approach of regulating both the developers and the deployers of the tool, with a particular focus on regulating what they call high-risk tools, which includes tools that would be used to make decisions about employment, hiring, promotion and termination,” said Argento. “The EU AI Act imposes a framework of obligations on developers largely derived from product regulations and a framework of obligations on deployers of AI tools largely derived from data protection laws.” 

She added that Colorado’s new AI law, Senate Bill 24-205, closely follows the approach of the EU AI Act. In particular, Colorado’s law imposes obligations on both developers and deployers of AI tools, with heavy obligations on both groups with respect to “high-risk” tools. These are tools that could serve as a substantial factor in a decision to provide or deny opportunities and services such as employment, health care, housing, education or insurance. 

In the case of “high-risk” tools, deployers may have to provide three different types of notice: a privacy policy, a notice upon deployment and a notice if an adverse action is taken based on the tool. Argento gave the example of an employer using an AI tool to determine whether to hire an applicant as a case where these requirements may apply. 

In addition to the notices, the law adds individual rights, risk assessments and risk management requirements. Crucially, it requires the deployer to exercise reasonable care to protect individuals from discrimination resulting from the use of the tool, Argento noted. 

For companies interested in using these tools while protecting themselves legally, Argento said vendor management, data control and data security are crucial. 

“These tools are virtually always provided by vendors. Companies should ask questions about the vendors’ data security, which party will control the data and whether the vendor has conducted disparate impact assessments on its tool,” said Argento. “Ideally companies build these questions into the RFP process. The vendor contract should also cover these points.” 

While 71% of the employers Littler surveyed are either moderately or very concerned about the challenges of complying with data protection and information laws when using AI for HR purposes, far fewer have taken action. 

Nearly 30% of those employers have not coordinated with the developers and providers of their AI tools to assess the tools’ risks, according to the survey. Just 13% have done so to a large extent, and 22% to a moderate extent. 

The good news, at least for the job security of these companies’ employees, is that 88% of companies surveyed were either not at all concerned or only slightly concerned about job displacement from the use of AI and other technologies. 
