
The Impact of Artificial Intelligence on HR Processes

August 12, 2024

Artificial intelligence (AI) has the potential to reshape the nature of work and its future possibilities. AI tools are being integrated across a variety of fields to increase efficiency in the workplace, including HR, and this fast-moving field seemingly releases new tools and products every week. According to the Society for Human Resource Management, 25% of HR managers utilize artificial intelligence tools in the workplace.1 In a recent survey of HR professionals, 70% of respondents said that artificial intelligence will increasingly shape the future of human resources and people management.2 Because artificial intelligence has the potential to change how we work and because the field moves so quickly, human resources professionals, people managers, operational managers, union professionals and others who work with labor and employment law need a basic understanding of how it operates. They should also take account of the emerging regulatory frameworks that may come to govern its use in the workplace.

AI technologies raise a number of questions. Will AI compromise worker privacy? Will the algorithms reproduce bias in decision-making and perpetuate discrimination? Will AI lead to mass job displacement? This blog post provides an introduction to artificial intelligence and discusses the emerging legal regulations concerning AI in the workplace.

Artificial Intelligence Explained

Artificial intelligence refers to computer systems capable of executing tasks that in the past only humans could perform. The kinds of artificial intelligence that may be utilized in the workplace are categorized based on what they can do. Narrow AI, classified as “weak,” is found in tools that focus on a single task and have a limited range of abilities.3 General AI and Artificial Super Intelligence are classified as “strong” and are capable of performing at the same level as a human or surpassing a human, respectively. Neither of these strong forms exists yet.3

While powerful and game-changing, the artificial intelligence frequently used in the workplace is Narrow AI. This kind of AI can take many forms, including reactive machine types (systems with no memory, designed to perform a narrow, specific task) and more sophisticated limited memory types (a form of AI that can draw on past and present data to accomplish tasks). Chatbots that engage with potential applicants are an example of Weak AI.

General Artificial Intelligence and Artificial Super Intelligence are speculatively believed to be capable of self-awareness—the stuff of science fiction fantasies. These types of AI may also have a “theory of mind,” demonstrating an understanding of human emotions and intelligence and expressing emotions of their own. General Artificial Intelligence and Artificial Super Intelligence may even someday mirror Data from Star Trek, HAL from 2001: A Space Odyssey or even (heaven forbid) the Terminators that descend from Skynet.

Different kinds of AI use different types of technological architecture and data inputs to create the foundation for intelligent interactions. Examples include:

  • Large Language Models (LLMs): These AI systems are trained on massive amounts of text data, enabling them to generate almost human-quality writing, translate languages and answer questions in a way that mirrors the data that the system draws upon. HR professionals may draw on this kind of artificial intelligence to quickly create presentations and other training tools
  • Machine Learning Models: These algorithms are trained on specific datasets to perform tasks like image recognition, spam filtering and fraud detection. They are often specialized. For example, they might be used to analyze resumes for keywords or screen out unqualified candidates (a minimal illustrative sketch follows this list)
  • Deep Learning Models: This type of machine learning model is inspired by the structure of the human brain. Deep learning has a significant capacity for generalization and adaptation, and it excels at tasks involving complex data like images, speech and video. Deep learning systems often outperform traditional algorithms and, in some tasks, exceed human performance. In the workplace, they could be used for facial recognition security systems or analyzing customer sentiment in social media posts
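To make the machine learning bullet above concrete, here is a minimal, hypothetical sketch of a resume-screening classifier written in Python with scikit-learn. The resume snippets, labels and output shown here are invented purely for illustration; this is not any vendor's actual tool, and a real screening system would require far more data, validation and bias auditing before it could responsibly inform employment decisions.

```python
# A minimal, hypothetical sketch of a machine learning model trained on a
# labeled dataset of resume snippets. All data below is invented for
# illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: resume text labeled 1 (matched the role) or 0 (did not).
resumes = [
    "5 years of payroll administration and HRIS reporting",
    "Managed benefits enrollment and employee onboarding",
    "Retail cashier experience, no HR background",
    "Line cook, kitchen prep and food safety certification",
]
labels = [1, 1, 0, 0]

# TF-IDF converts text into numeric features; logistic regression learns
# which features correlate with the positive label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(resumes, labels)

# Score a new resume. A human reviewer should still examine the result,
# since the model simply reproduces whatever patterns (and biases) exist
# in its training data.
new_resume = ["Coordinated onboarding and payroll for a 50-person office"]
probability = model.predict_proba(new_resume)[0][1]
print(f"Estimated match probability: {probability:.2f}")
```

The sketch also illustrates why such tools raise the compliance questions discussed later in this post: the model's output is only as reliable as its training data, and any screening decision based on it remains the employer's responsibility.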

What Are the Possibilities and Limitations of Artificial Intelligence?

Artificial intelligence promises time-saving efficiency along with possibilities for reducing costs and streamlining work. In this way, it has the potential to rapidly transform the workplace. Legal commentators, including scholars, advocates, people managers and human resources professionals, are grappling with its implications for labor and employment law. Artificial intelligence can assist people managers and HR professionals by automating many tasks, like creating initial drafts of training presentations, reviewing contracts for potential issues, drafting job descriptions, reviewing employee resumes and applications, interacting with applicants via chatbot and onboarding new employees.

For all its promise, artificial intelligence has the potential to negatively disrupt established norms and expectations in the workplace. These tools have the potential to replace human workers. For example, professionals with hard-won skills related to research, writing, graphic design and creative arts may be made redundant. There is even evidence that people managers and human resources professionals may be negatively impacted by the uncritical adoption of artificial intelligence tools to streamline employee performance monitoring and employment decisions.

There is also evidence that artificial intelligence tools are only as good as the human beings who train them.4 For this reason, some commentators have argued that AI has the potential to reproduce the biases of those who created it.5 Bias can arise at many stages of the execution and implementation of AI.6 Stories abound of algorithms operating in ways that reproduce hierarchies and exclusions around race, gender and disability.7

It is also important to note AI’s substantive limitations. The use of artificial intelligence still requires human oversight in many cases. This technology can be described like a puppy: a useful tool that is eager to help but needs training and oversight to reach its potential. There have been circumstances in which artificial intelligence tools fabricated, or “hallucinated,” inaccurate information. This is especially challenging in efforts to accurately reflect the complexity of legal frameworks and regulations. In a widely discussed case, lawyers were fined for citing fake legal cases created by ChatGPT in legal briefs; ChatGPT invented the cases, and the lawyers incorporated them into their own filings without review.8 Even the most comprehensive artificial intelligence tools require human oversight, engagement and critical review to ensure accuracy.

Existing Legal Frameworks for Regulating Artificial Intelligence

Unlike the European Union, with its Artificial Intelligence Act,9 the U.S. does not have comprehensive legislation regulating the use and impact of artificial intelligence. As a result, employers remain largely free to use AI in a variety of ways. While no comprehensive federal law currently exists, President Biden’s administration has taken action and some legislation has been introduced in Congress.

In 2023, President Biden issued Executive Order 14110 (“Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”). In the executive order, President Biden called for coordinated agency action to mitigate the risks of artificial intelligence as it relates to equity and civil rights, data privacy, consumer protection, technological innovation and advancement and other matters.10 The executive order also discusses future risks to the U.S. workforce, directs the creation of a report on the impact of AI in the workplace and directs agencies to develop best practices to mitigate the potential harms of artificial intelligence.11 The White House also issued a blueprint for an AI Bill of Rights.12

Various agencies have issued guidance and other materials related to artificial intelligence to answer the call of President Biden’s executive order. For example, the Department of Labor issued a field assistance bulletin clarifying how the Fair Labor Standards Act applies when employers use artificial intelligence for scheduling, timekeeping, labor management, wage tracking, medical leave and other functions. According to this bulletin, the use of artificial intelligence does not eliminate employer liability when such systems fail or break the law; employers remain responsible for properly implementing and applying the law. In the wake of President Biden’s executive order, the Office of Federal Contract Compliance Programs issued a document providing guidance for federal contractors on equal employment opportunity-related issues.13 The Department of Labor also responded to the executive order by issuing “Artificial Intelligence and Worker Well-Being: Principles for Developers and Employers,” a set of principles designed to address worker well-being as employers implement artificial intelligence.14

Even before the 2023 executive order, however, there was some agency action in regulating artificial intelligence in the workplace. In 2021, the EEOC launched an agency-wide initiative designed to ensure the fair and consistent use of emerging artificial intelligence technologies in the workplace in accordance with civil rights protections.15 The initiative aims to ensure that the use of artificial intelligence in workplace decisions, as it applies to employers, employees, job applicants, vendors and others, complies with civil rights laws.

As part of this initiative, the EEOC issued a technical assistance document in 2022 clarifying that automated decision-making tools could be used in ways that violate the Americans with Disabilities Act.16 Employers can be liable for violating the ADA even when their actions are unintentional, arise from artificial intelligence or rely on automated decision-making tools. For example, liability may arise when automated decision-making tools or algorithms intentionally or unintentionally screen out individuals with disabilities from the application pool, even though the screening is performed by AI-driven tools rather than a human.

The EEOC also recently clarified that employers should take a proactive stance in assessing their use of artificial intelligence in the workplace to ensure that they do not run afoul of their compliance obligations. These interventions are only part of a larger EEOC project designed to make clear that discrimination in violation of the law can arise from the use of automated decision-making tools, algorithms and artificial intelligence.17

Despite these clarifications from the EEOC and other federal agencies, it may not be easy for plaintiffs to successfully bring claims related to algorithmic discrimination in the workplace. For example, in Saas v. Major, Lindsey & Africa, L.L.C., a Maryland federal district court rejected a claim of algorithmic discrimination as speculative and insufficient on the facts because the plaintiff failed to point to a specific employment practice by the employment agency.18 Even more problematically, it was not clear that the recruiting firm had used artificial intelligence to make determinations about individuals.

Proposed Federal Regulations

While Congress has not passed comprehensive federal legislation related to AI in the workforce, some bills have been introduced for consideration. For example, Senator Bob Casey of Pennsylvania introduced two bills that attempt to address AI in the workplace: S.2419, the No Robot Bosses Act,19 and S.262, the Stop Spying Bosses Act.20 S.2419 limits how covered employers (those with more than 11 employees) may use automated decision-making technology. Covered employers may not rely solely upon automated decision-making technology to make employment decisions. If an employer uses automated decision-making technology to make a decision, it must first validate the efficacy of the system and its compliance with applicable anti-discrimination statutes. The employer must also make appropriate informational disclosures to the employee in plain language when using automated decision-making technology.

S.262 was introduced in 2023 “[t]o prohibit, or require disclosure of, the surveillance, monitoring, and collection of certain worker data by employers, and for other purposes.”21 Though this bill has not passed, in 2022 National Labor Relations Board General Counsel Jennifer Abruzzo expressed concern that various monitoring technologies and automated management tools may be used in ways that interfere with workers’ ability to exercise their Section 7 rights to engage in protected activity under the National Labor Relations Act.22

More legislation is likely forthcoming from the federal government. In May 2024, a bipartisan working group of senators led by Majority Leader Chuck Schumer issued a “roadmap” of legislative priorities for innovation, regulation and oversight of artificial intelligence technology. The report was crafted after a series of educational forums that brought 150 artificial intelligence experts to Congress to share their insights. The working group identified a number of priorities for Congress, including increasing funding for innovation, adopting nationwide safety standards, protecting national security interests, addressing privacy issues by regulating the use of “deepfakes” in elections and fake images of intimate conduct, ensuring training and innovation in artificial intelligence for schools and companies and protecting the interests of workers.23

In relation to employment, the bipartisan working group recommended bringing together diverse stakeholders to discuss how artificial intelligence will shape the future of work. It also recommended adopting significant measures for training, reskilling and upskilling workers in industries impacted by artificial intelligence.

State-Based Regulation

In the U.S., the federal government is only one of the regulatory regimes that HR professionals and people managers must consider when addressing compliance. Savvy legal professionals understand that state and local laws can significantly impact compliance issues. States have begun to regulate artificial intelligence in three spaces: data privacy, consumer protection and employment.

Colorado is at the forefront of passing such measures across a variety of areas that artificial intelligence impacts. Colorado’s legislature passed SB24-205, the Colorado AI Act.24 The Act requires developers of high-risk artificial intelligence systems to exercise reasonable care in developing them. Developers must provide consumer disclosures related to high-risk artificial intelligence, perform specified risk assessments and engage in risk management.25 If developers comply with all the provisions of the Act, they enjoy a rebuttable presumption that they have exercised reasonable care. Colorado’s SB24-205 not only addresses consumer protection issues but also regulates and monitors high-risk uses of artificial intelligence, including its use in employment-related decisions. It is designed to limit the impact of algorithmic discrimination, i.e., circumstances in which the use of artificial intelligence results in differential treatment or impact on the basis of protected categories like race, gender, sexual orientation, disability and others.26 The Colorado AI Act is not yet settled law. The Act goes into effect in 2026, and the legislature has been encouraged to improve it through clarification. A similar bill was considered in Connecticut but failed after Governor Ned Lamont threatened to veto it.27

Utah has also passed consumer protection regulations related to artificial intelligence—the Artificial Intelligence Policy Act.28 This Act requires disclosure when individuals interact with artificial intelligence and creates a state agency, the Office of Artificial Intelligence Policy, which has rule-making ability over AI programs and the ability to create regulatory exemptions.

States have also passed healthcare-related restrictions on the use of artificial intelligence.29 Colorado has passed laws limiting the use of AI in healthcare decisions related to insurance premiums and other health-related measures, and Pennsylvania and Georgia are considering similar legislation. New York’s Superintendent of Financial Services issued a directive making clear that the efficiencies of artificial intelligence should not be gained at the expense of consumers.

In terms of privacy in employment, states have also adopted other types of legislation. In 2020, novel Illinois legislation addressing the use of artificial intelligence in video interviews took effect.30 The Artificial Intelligence Video Interview Act requires that employers using this technology provide information and notice to prospective interviewees. It also requires that employers obtain consent from interviewees and prohibits sharing interview videos.

Other states are currently considering laws designed to prohibit artificial intelligence-related algorithmic discrimination in the workplace and limit the use of AI-driven automated decision making. For example, bills designed to mitigate the risks of algorithmic discrimination that arises from the use of artificial intelligence or automated decision-making tools have been introduced in California,31 Hawaii32 and Illinois.33 Connecticut, Texas and Washington have passed legislation creating state-based initiatives to examine issues related to artificial intelligence and suggest solutions.

One of the most comprehensive proposed bills is Massachusetts Bill H.1873, creatively named “An Act Preventing a Dystopian Work Environment.” This Act requires that employers and vendors disclose the use of AI and protect the accuracy and security of employee data. It also regulates the use of AI for automated oversight, tracking productivity and decision-making.

One area that has been especially challenging, particularly in politics and advertising, is the use of fake AI-generated voices and images (often called “deepfakes” or synthetic media). In March 2024, Tennessee passed the ELVIS Act (Ensuring Likeness, Voice and Image Security Act of 2024). This act regulates the use of such content and defines an individual’s voice as a personal right worthy of protection.34 Ten states currently restrict the use of AI-generated content in political advertisements, and 14 states ban its use in non-consensual sexual materials.35

Stay Ahead of Workplace Legal Changes With Tulane Law

As AI continues to evolve, legal frameworks will need to adapt to ensure fairness and protect worker rights. Employers that wish to use artificial intelligence to screen potential employees and manage existing employees will have to adopt additional measures to ensure compliance, including disclosure and system validation to minimize the impact of algorithmic discrimination and bias.

Stay ahead of the curve with Tulane Law’s online Master of Jurisprudence in Labor & Employment Law. Our targeted curriculum will help you track emerging legislation at the state and federal levels and navigate the emerging landscape of agency rules. With innovative courses that incorporate insights from intellectual property, social media law and the regulation of artificial intelligence, we are helping the next generation of visionary legal professionals grow.

Schedule a call with our admissions team today to get started.

Sources
  1. Retrieved on August 7, 2024, from shrm.org/topics-tools/news/technology/ai-adoption-hr-is-growing
  2. Retrieved on August 7, 2024, from hrexchangenetwork.com/hr-tech/articles/will-ai-replace-human-resources
  3. Retrieved on August 7, 2024, from ibm.com/think/topics/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks
  4. Retrieved on August 7, 2024, from pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html
  5. Retrieved on August 7, 2024, from hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
  6. Retrieved on August 7, 2024, from chapman.edu/ai/bias-in-ai.aspx
  7. Retrieved on August 7, 2024, from wsj.com/articles/rise-of-ai-puts-spotlight-on-bias-in-algorithms-26ee6cc9
  8. Retrieved on August 7, 2024, from hbr.org/2023/09/eliminating-algorithmic-bias-is-just-the-beginning-of-equitable-ai
  9. Retrieved on August 7, 2024, from europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
  10. Retrieved on August 7, 2024, from whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  11. Retrieved on August 7, 2024, from whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
  12. Retrieved on August 7, 2024, from whitehouse.gov/ostp/ai-bill-of-rights/
  13. Retrieved on August 7, 2024, from dol.gov/agencies/ofccp/ai/ai-eeo-guide
  14. Retrieved on August 7, 2024, from dol.gov/general/AI-Principles
  15. Retrieved on August 7, 2024, from eeoc.gov/ai
  16. Retrieved on August 7, 2024, from eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence
  17. Retrieved on August 7, 2024, from eeoc.gov/joint-statement-enforcement-civil-rights-fair-competition-consumer-protection-and-equal-0
  18. Retrieved on August 7, 2024, from scholar.google.com/scholar_case?case=7736063661252278773&hl=en&as_sdt=6&as_vis=1&oi=scholarr
  19. Retrieved on August 7, 2024, from congress.gov/bill/118th-congress/senate-bill/2419/cosponsors?s=1&r=1&q=%7B%22search%22%3A%5B%22No+robot+bosses%22%5D%7D
  20. Retrieved on August 7, 2024, from congress.gov/bill/118th-congress/senate-bill/262/cosponsors
  21. Retrieved on August 7, 2024, from congress.gov/bill/118th-congress/senate-bill/262/text
  22. Retrieved on August 7, 2024, from nlrb.gov/news-outreach/news-story/nlrb-general-counsel-issues-memo-on-unlawful-electronic-surveillance-and
  23. Retrieved on August 7, 2024, from politico.com/live-updates/2024/05/15/congress/schumers-roadmap-on-ai-bills-00157828
  24. Retrieved on August 7, 2024, from leg.colorado.gov/bills/sb24-205
  25. Retrieved on August 7, 2024, from employmentlawworldview.com/could-artificial-intelligence-create-real-liability-for-employers-colorado-just-passed-the-first-u-s-law-addressing-algorithmic-discrimination-in-private-sector-use-of-ai-systems-us/
  26. Retrieved on August 7, 2024, from employmentlawworldview.com/artificial-intelligence-ai-employment-discrimination-laws-proposed-in-six-states-what-employers-need-to-know-us/
  27. Retrieved on August 7, 2024, from perkinscoie.com/en/news-insights/states-begin-to-regulate-ai-in-absence-of-federal-legislation.html
  28. Retrieved on August 7, 2024, from le.utah.gov/~2024/bills/static/SB0149.html
  29. Retrieved on August 7, 2024, from govtech.com/artificial-intelligence/georgia-joins-list-of-states-looking-to-limit-ai-in-health-decisions
  30. Retrieved on August 7, 2024, from bakersterchi.com/new-illinois-statute-among-the-first-to-address-aiaided-job-recruiting
  31. Retrieved on August 7, 2024, from forbes.com/sites/alonzomartinez/2024/07/19/californias-two-pronged-approach-to-regulating-ai-in-employment-and-beyond/
  32. Retrieved on August 7, 2024, from capitol.hawaii.gov/sessions/session2024/bills/SB2524_.HTM
  33. Retrieved on August 7, 2024, from senatorcervantes.com/news/press-releases/50-cervantes-moves-legislation-to-protect-workers-from-discriminatory-use-of-ai
  34. Retrieved on August 7, 2024, from wapp.capitol.tn.gov/apps/Billinfo/default.aspx?BillNumber=HB2091&ga=113
  35. Retrieved on August 7, 2024, from multistate.us/insider/2024/4/5/more-and-more-states-are-enacting-laws-addressing-ai-deepfakes