Abstract
Excerpted From: Jordan Dailey, Algorithmic Bias: AI and the Challenge of Modern Employment Practices, 21 UC Law Business Journal 215 (April, 2025) (226 Footnotes)
In 2014, the e-commerce giant Amazon began using its own artificial intelligence (AI) program to review job applications with the goal of finding “top talent.” However, by the following year, the company realized that its recruiting tool was flawed. Amazon's hiring system was trained to vet applicants using ten years of historical data that mainly reflected male candidates. Because of the information it was provided, the program taught itself that male applicants were preferable and penalized résumés that included the word “women,” as in “captain of the women's soccer team.” A group of developers at Amazon worked to combat the issue but ultimately concluded that it was too difficult to stop the system from finding ways to discriminate against female candidates. On account of the failed efforts, the company removed the tool from hiring in 2017 and ceased the automated recruiting project altogether.
While AI has great potential in the employment field, the Amazon dilemma provides an example of how this technology is far from perfect. Because bias is a result of human nature, it can go unnoticed and influence the underpinnings of automated software. When employers assign hiring responsibilities to AI programs, they run the risk of replicating and potentially amplifying human biases. This is known as algorithmic bias, and it occurs when AI and similar technology systematically disadvantage certain groups. If left unmonitored, biased software can create a disparate impact on marginalized groups even without the programmer's intention to discriminate. To address this issue effectively and prevent future concerns, employers ought to ensure that their algorithmic hiring software aligns with current regulations. This can be achieved through incorporating direct human involvement and offering clear insights into the automated candidate selection process.
This note seeks to evaluate the algorithmic bias problem as it relates to hiring and employment decision-making. The first section provides an overview of current AI technology and its utility within the hiring space. The second section introduces the algorithmic bias issue and presents the challenges that come with using AI in employment. The third section examines current federal, state, and local regulations with respect to artificial intelligence. The fourth section discusses whether there is a need for additional legislation and how employers can comply with current laws. Finally, the fifth section proposes ways that employers, businesses, and software developers can improve AI technology and ensure algorithmic fairness going forward.
[. . .]
AI is here to stay, and for good reason. Algorithmic systems can save time, effort, and money while allowing employers to focus more on innovation and less on the tedious tasks that come with hiring. The sky is the limit for AI, and there is no telling what this technology will be capable of in the next 10 years. However, employers and businesses should be aware of the challenges that algorithmic software can create if the right measures are not taken to prevent bias. AI is only as good as the data it is given, and it cannot eliminate prejudice and underrepresentation without human intervention. Because of AI's growing popularity, businesses, employers, and software developers must take responsibility for ensuring that their products do not reflect unjustifiable bias.
The absence of federal AI regulation is no excuse for employers to forgo compliance with existing laws. Employers should take action now to account for discrimination that may be lingering within their hiring software. To prevent potential misuse and avoid civil liability, employers should ensure that their hiring methods comply with laws like Title VII and the ADA. As AI continues to be used in employment decisions across the U.S., state regulation to combat algorithmic bias will likely increase. Perhaps New York City's Local Law 144 will serve as a model for governments at the local, state, and federal levels. Given the prospect of liability, however, employers should be prepared to address AI bias regardless of whether new regulations materialize.
Artificial intelligence, while impressive, is nowhere near perfect. Until the day comes when this technology can operate worry-free, there will be a continuous need to ensure that businesses and employers are doing their part to maintain fairness. Therefore, steps should be taken to reduce the potential for disparate impact. In combating bias, human intervention and transparency are key. Employers must work closely with software developers and vendors to assess their technology and look for ways to improve their systems. Because human behavior allows for the existence of bias within artificial intelligence, it is imperative that action is taken to counteract the consequences that algorithmic software may create.
J.D. University of California, College of the Law, San Francisco, 2025; B.A. Political Science, Washington State University, 2021.