New Guidance on AI Use in HR Processes


Artificial Intelligence, or AI, has been a topic of speculation for as long as we can remember, but ever since ChatGPT's release in November 2022, discussions around AI, including how we might use it and how it may affect (or even take over) our jobs, have become virtually unavoidable. Earlier this month, the Washington Post reported on an emerging concern among employers that employees' use of chatbots could leak company information. Many employers have banned various AI platforms at work, while others have sought to harness AI's power for applications across their organizations. Regardless of your personal feelings on the potential robot revolution, it has become clear that generative AI and its consequences are not something employers can ignore.

Currently, one of the key channels through which AI has crept into the workplace is hiring: many human resources managers have turned to AI to help determine which job candidates might be a good fit for a given position. The Equal Employment Opportunity Commission (EEOC) recently released guidance on this issue, stating, in sum, that the use or involvement of AI tools will not shield employers from claims of discriminatory employment practices.

AI & Supervised Machine Learning

In evaluating AI tools for potential use, it is helpful to have a basic understanding of how they work, which is not always readily apparent. When AI is used in human resources, it is often through supervised machine learning (SML), which uses algorithms to predict which job candidates are likely to succeed. This is accomplished by “training” the program on application materials from employees who have succeeded in the past; the program then evaluates new application materials against those patterns and scores applicants based on who it believes will succeed. More tech-savvy employers are even beginning to use these tools to perform facial analysis in interviews, evaluating an interviewee’s attention span, optimism, or other traits. However, because AI is dependent on the material on which it is trained, it can end up reinforcing stereotypes or inadvertently dismissing diverse candidates who differ in some way from those who have succeeded in the past.
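
To make those mechanics concrete, the sketch below shows what the training-and-scoring loop can look like in practice. It is purely illustrative: the toy resumes, the “succeeded” labels, and the choice of a simple text model (Python’s scikit-learn library) are our assumptions, not how any particular vendor’s product works.

    # A minimal sketch of supervised machine learning for resume screening.
    # Everything here is illustrative: the toy resumes, the labels, and the
    # model choice are assumptions, not any particular vendor's product.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # "Training" data: application materials from past employees, labeled by
    # whether each employee succeeded in the role (1) or not (0).
    past_resumes = [
        "BS computer science, 5 years Python, led a platform migration",
        "self-taught developer, open source contributor, strong references",
        "MBA, 2 years sales experience, exceeded quota twice",
        "recent graduate, marketing internship, varsity team captain",
    ]
    succeeded = [1, 1, 0, 0]

    # Convert text to numeric features (TF-IDF) and fit a classifier that
    # learns which patterns in the text correlate with past success.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(past_resumes, succeeded)

    # New applicants are scored against whatever patterns the model absorbed,
    # relevant or not. If past hires skewed toward one group, word choices or
    # backgrounds shared by that group can skew the scores too.
    new_resumes = ["7 years Python, mentored junior engineers"]
    print(model.predict_proba(new_resumes)[:, 1])  # estimated chance of success

Note that the model is never given a rule saying “prefer candidates like our past hires”; it simply absorbs whatever distinguishes the successful examples, which is exactly how historical bias gets replicated.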

Because AI is a relatively new tool in the human resources sphere, there are few employment laws specifically targeted at AI. However, the EEOC and the Department of Justice (DOJ) have issued guidance regarding compliance with the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act.

ADA Compliance

The DOJ guidance specifies that an employer utilizing an AI test must ensure that the test is accessible to all applicants regardless of disability, or must provide a procedure for requesting reasonable accommodations that does not hurt an applicant’s chances of getting the job. The EEOC guidance is more specific, giving several examples of technologies that could be implicated, including employee monitoring software that scores employees based on their keystrokes or other activity, video interview software that analyzes facial expressions and speech patterns, and “virtual assistants” that ask employees about their job qualifications. The EEOC’s position in the guidance is clear: if you use AI software that discriminates against a protected class, you could be held liable, regardless of whether the platform claims to have been “audited for compliance” with applicable laws.

Certain best practices can help an employer avoid an ADA violation, including clearly informing applicants and employees that reasonable accommodations are available to people with disabilities, providing clear instructions for requesting those accommodations, and giving applicants and employees notice of what the tool is designed to measure, how it measures those characteristics, and how the process could affect people with disabilities. Employers should also use algorithmic decision-making tools only to measure characteristics that are necessary for the job. Lastly, employers should ask vendors (1) whether the tool has recently been audited for bias, and (2) to confirm that the tool does not ask job applicants questions likely to prompt disclosure of a disability, unless the questions relate to a reasonable accommodation request.

Title VII Compliance

The EEOC guidance further advises that to avoid a violation of Title VII, employers should consider a few questions before implementing an algorithmic decision-making tool.

  1. Does the tool have the potential to adversely affect certain groups on the basis of race, color, sex, or national origin?
  2. If the tool has an adverse impact, can the employer show that the selection procedure is job-related and consistent with business necessity? As with the ADA, employers should only test for characteristics that are necessary to perform the job.
  3. Even if the selection procedure is job-related and consistent with business necessity, is there a less discriminatory alternative available?
  4. Finally, as with the ADA, has the vendor taken steps to evaluate whether the tool causes a lower selection rate for members of a protected class under Title VII? (A simple version of this selection-rate check is sketched below.)
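
To show what a selection-rate check can involve, here is a minimal sketch of an adverse-impact calculation using the “four-fifths rule” of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The group names and numbers are hypothetical, and a real audit would involve counsel and proper statistical analysis.

    # A minimal, hypothetical adverse-impact check. A selection rate below
    # four-fifths (80%) of the highest group's rate is generally regarded as
    # evidence of adverse impact under the EEOC's Uniform Guidelines.
    applicants = {"group_a": 200, "group_b": 150}  # applicants per group
    selected = {"group_a": 60, "group_b": 30}      # advanced by the AI tool

    rates = {group: selected[group] / applicants[group] for group in applicants}
    highest_rate = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest_rate
        flag = "potential adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%} ({ratio:.0%} of highest): {flag}")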

Employers should use caution when implementing AI in their hiring practices and continually re-evaluate those practices to ensure they do not adversely affect a protected class. While AI may be intended to decrease human bias, it can ultimately replicate that bias, even unintentionally, if it is not properly trained. Because of these risks, HR professionals should consult legal counsel before implementing a new AI tool.

Wilson Jarrell is an attorney and Hannah LaChance is a law clerk at Barran Liebman LLP. For questions about AI in the workplace or for any other employment matters, contact Wilson at 503-276-2181 or wjarrell@barran.com.

