As artificial intelligence (AI) tools become increasingly common in the workplace, employers should consider the legal risks of AI use. While AI offers efficiency and convenience, its use in employment decision-making — particularly hiring and firing — could raise concerns about compliance with anti-discrimination laws.
Recent surveys indicate that a majority of HR professionals use AI, most commonly in recruitment. Whether this involves screening resumes, assessing interview performance, or recommending candidates, the adoption of AI in these contexts must be approached with caution.
AI’s “Black Box” Problem
One of the most critical limitations of AI is its inability to explain its decision-making processes. Many AI tools operate as “black box” systems: users can see what they input and what the AI produces, but not how the system arrived at its conclusions. Unlike human decision-makers, AI cannot articulate its rationale.
AI’s lack of transparency creates significant legal challenges when employers need to justify employment decisions. In the event of litigation, employers may find it difficult or impossible to defend a decision made by an AI system. Simply pointing to an algorithm is not a viable legal defense; employers must be able to articulate a legitimate, non-discriminatory rationale supporting their employment decisions.
The Illusion of Neutrality
A common misconception is that AI, by virtue of being machine-driven, is objective. However, AI systems are trained on human-created data and inevitably reflect the biases embedded within that data. This is especially problematic with closed AI models, which are trained on limited datasets curated by a small group of individuals.
For example, consider a closed AI system used to identify ideal candidates for an electrician position. If the training data consists primarily of resumes from previously hired electricians — most of whom are men — the system may learn to associate men with the ideal candidate profile and disproportionately recommend male applicants.
Even open AI models, which are trained on broader, publicly available data, are not immune to bias. Because men have historically made up a disproportionate share of electricians in the general workforce, such programs may still generate biased outcomes that favor men.
In both cases, the use of AI may perpetuate existing disparities rather than eliminate them. This creates legal risk under Title VII and similar state anti-discrimination laws if an employer’s use of an AI algorithm in hiring results in gender-based disparities.
Bias at Scale
AI has the capacity to process information at a scale far beyond human capabilities. This is a double-edged sword; while it enables faster decision-making, it also amplifies any existing bias. A human might apply flawed reasoning to a handful of candidates, whereas an AI tool can apply flawed reasoning to thousands of applicants.
Additionally, AI systems are often designed to optimize results based on user preferences or past outcomes. This can create a form of confirmation bias — the AI essentially becomes a “yes machine,” seeking to give the user the desired result without engaging in any risk assessment.
Risks in the Hiring Process
AI is already widely used in hiring, often without employers fully appreciating the extent of its role. Common platforms use AI to filter resumes and rank candidates. Some employers also rely on AI-powered video interview tools that evaluate tone, facial expressions, and word choice to assess a candidate’s potential fit.
These practices raise several legal concerns. Research shows that some AI systems struggle to interpret the facial expressions of Black candidates accurately, potentially resulting in discriminatory outcomes. Additionally, applicants with disabilities — such as autism — may engage differently in interviews, and AI may incorrectly interpret these differences as poor interview performance.
The Equal Employment Opportunity Commission (EEOC) has cautioned against such technologies, noting that they may create unlawful disparate impacts. Employers must also consider implications under the Americans with Disabilities Act (ADA) and similar laws, especially when using tools that may penalize candidates for behavior unrelated to job performance.
Employee Use of AI and Confidentiality Risks
Beyond HR applications, AI is becoming a common tool for general workplace tasks, from content creation to research. Surveys indicate that half of employees use AI tools at work. Despite this widespread use, many employers have no policy regarding AI.
Employers are encouraged to develop policies regulating employee use of AI, including clear guidance on what data can and cannot be shared with AI platforms. One significant area of concern is confidentiality and protection of intellectual property. If employees upload sensitive or proprietary information into open AI platforms, that data may no longer be confidential or protected.
Risk Assessment
While there are clear risks associated with AI in hiring and employment, not all uses carry the same level of legal exposure. High-risk uses include allowing AI to analyze or score interview performance and delegating hiring decisions directly to AI. Lower-risk applications include summarizing applicant materials, automating interview scheduling, and generating drafts of emails.
Employers can mitigate the risks of AI tools by ensuring human oversight in employment-related decision-making, documenting the rationale behind all employment decisions, and establishing internal policies governing AI use.
Employers that proactively assess and manage AI-related risks will be best positioned to harness the benefits of this evolving technology while avoiding costly legal pitfalls. When in doubt, consult experienced employment counsel before implementing or expanding your use of HR-related AI tools.
Sean Ray is an attorney at Barran Liebman LLP, where he represents employers on a wide range of employment issues. Contact him at 503-276-2135 or sray@barran.com.
Avery Tunstill is a law clerk at Barran Liebman LLP, where she partners with attorneys in client trainings, legal research, and the drafting of employment policies and handbooks.
