
October, 2022

NYC Employers May Want to Rethink the Value-Cost Proposition of Their AI Hiring Tools

All employers in New York City that rely on artificial intelligence (AI) in their hiring processes are potentially subject to new requirements, beginning January 1, 2023, that they ensure their AI tool has undergone a “bias audit” within the past year, provide advance notice to job applicants regarding the use of the tool, and publicly post on the employer’s website a summary of the results of the most recent bias audit and the distribution date of the tool.  The law imposes hefty civil penalties on employers that fail to comply.

For months, employers have viewed this new requirement (the first of its kind in the country) with both puzzlement and foreboding, awaiting further guidance from the city as to what exactly it means by a bias audit and who should be providing the requisite certification.  The New York City Department of Consumer and Worker Protection has issued proposed rules to address those questions, with a public hearing scheduled for 11 am on Monday, October 24, 2022.

The proposed rules are notable for:

  • their definition of when an AI tool (which they refer to as an “automated employment decision tool” or “AEDT”) is subject to the new law;
  • their definition of who needs to conduct the bias audit;
  • their specification of what data needs to be analyzed and what information must be posted on employers’ websites; and
  • their requirements for how the data should be posted and what additional notices must be provided.

When the use of an AI tool is subject to audit

The proposed rules narrow the scope of covered technology through their definition of an AEDT.  The law states that a bias audit is required whenever an AI tool is used “to substantially assist or replace discretionary decision making.”  The Department of Consumer and Worker Protection is proposing that the standard of substantial assistance or replacement applies in three situations:

  • when the employer relies solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered;
  • when an employer uses such a simplified output as one of a set of criteria and weights that output more than any other criterion in the set; or
  • when an employer uses a simplified output to overrule or modify conclusions derived from other factors including human decision-making.

In other words, if the employer is relying on the AI tool to do the heavy lifting on screening applicants at any stage of the hiring process, then it will likely need to comply with the bias audit requirement.  If, on the other hand, the AI tool is used more casually, as but one factor for consideration in screening candidates or perhaps to help flag those who may be the best match for the job description, and it carries no more weight than other criteria, then it would apparently fall outside the audit requirement.

Notably, employers that are dabbling with AI and still relying mostly on human decision-making may initially be exempt from the bias audit requirement, but as the AI tool gets “smarter,” so to speak, and better understands the types of factors it should be looking for when screening applicants, those employers will need to revisit that analysis.  Once the tool becomes a relied-upon and predominant factor at any stage of the screening process, the proposed rules indicate that the bias audit requirement will apply.

Who should be conducting the bias audit

The proposed rules state that an “independent auditor” should conduct the bias audit, meaning someone who is not involved in using or developing the AI tool.  This means that even if the employer did not develop the tool, as the user it cannot rely on its own in-house staff to audit the results of the tool for possible bias.  And if the tool is provided by a vendor, then the proposed rules seem to contemplate that the vendor will retain an independent third party to conduct the audit.

What data needs to be analyzed

The third party conducting a bias audit is tasked with calculating two sets of numbers:

  • the “selection rate” – calculated by dividing (1) the number of individuals in a particular gender, racial or ethnic category who were selected to advance to the next level in the hiring process or were assigned to a particular classification by the AI tool by (2) the total number of individuals in that gender, racial or ethnic category who had applied or were considered for the position; and
  • the “impact ratio” – for which the calculation depends on whether the AI tool is being used to select and eliminate people, or whether it is being used to score and categorize them.  If the tool handles selections, then the impact ratio is calculated as (1) the selection rate for a specific category divided by (2) the selection rate for the most selected category.  If the tool rates or scores candidates, then the impact ratio is calculated as (1) the average score of all individuals in a category divided by (2) the average score of individuals in the highest scoring category.

This is fairly standard statistical analysis for claims of adverse impact, meant to flag employment practices that appear neutral on their face but have a discriminatory effect on a protected group.  The EEOC and other government agencies typically apply a four-fifths rule or 80 percent guideline, whereby an impact ratio of less than 80 percent raises a red flag that there is an adverse impact.  Notably, though, the proposed rules require only that employers post the selection rate/average score and the impact ratio; nothing in the law or the proposed rules requires employers to educate job applicants on what those numbers mean.
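
To make the arithmetic concrete, here is a minimal sketch, in Python and using entirely invented applicant counts, of how an auditor might compute selection rates and impact ratios for a tool that selects candidates to advance, and then apply the four-fifths guideline.  Nothing in the law or the proposed rules prescribes any particular code or methodology; this simply restates the calculations described above.

    # Illustrative only: hypothetical applicant counts by category.
    # Selection rate = number selected / total applicants in the category.
    # Impact ratio   = category selection rate / highest selection rate.
    applicants = {
        # category: (selected to advance, total applicants)
        "Male":   (48, 120),
        "Female": (27,  90),
    }

    selection_rates = {
        category: selected / total
        for category, (selected, total) in applicants.items()
    }
    highest_rate = max(selection_rates.values())

    for category, rate in selection_rates.items():
        impact_ratio = rate / highest_rate
        flag = "below 0.80 - potential adverse impact" if impact_ratio < 0.80 else "ok"
        print(f"{category}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")

On these hypothetical numbers, the Female category’s impact ratio of 0.75 (a 0.30 selection rate divided by the 0.40 rate of the most selected category) falls below the 80 percent guideline and would be flagged, although, as noted, the employer’s posting obligation extends only to the numbers themselves.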

How the data should be posted

Employers are required to make available on their websites:

  • the date of the most recent bias audit;
  • a summary of the results, including selection rates and impact ratios for all categories; and
  • the date the employer began using the AI tool.

The proposed rules require that this posting be clear and conspicuous on the careers or jobs section of the employer’s website, but they allow employers to meet that requirement with an active and clearly identified hyperlink to the data.

Additional notice requirements

Employers are also required, under the law, to provide candidates with at least 10 business days’ notice that they will be using an AI tool and of the job qualifications and characteristics that the tool will assess, and to allow candidates to request an alternative selection process or accommodation.  The proposed rules state that this notice can be posted on the employer’s website, included in the job posting, or individually distributed to job candidates.

Finally, employers must also provide candidates and employees with notice as to the type of data collected by the AI tool, the source of that data, and the employer’s data retention policy.  The proposed rules provide that employers need to either post this information on their website or post notice to candidates on their website that the information is available upon written request (and then comply with those requests within 30 days).

Next steps for employers

Employers that would like to comment on the proposed rules can use the contact information in the rules to participate in the public hearing by phone or Zoom.  With the January effective date rapidly approaching, employers should review what AI tools they currently use in their hiring processes and how they are used.  If the tools are provided through a vendor, then the employer should consult with the vendor on whether it has conducted a bias audit and what information it can share as to the results of the audit.  If the employer has developed its own AI tools, it should look for an independent third party that can perform the requisite selection and impact analysis.  Employers should also plan to make space on their websites for posting the results of their audit and the various notices required under the law.

Flummoxed employers are encouraged to seek legal advice on complying with the new law.  Employers may also want to revisit their hiring processes and determine whether the efficiencies gained through the tools exceed the administrative burdens of the New York City law.  That analysis will differ across industries and organizations.

By Tracey I. Levy


January, 2022

NYC Pushes Emerging National Trend to Regulate AI in Hiring Decisions

By Alexandra Lapes and Tracey I. Levy

New York City is near the forefront of an emerging trend of regulating an employer’s use of artificial intelligence in employment decisions.  Employers and HR professionals use AI to collect and scan large applicant pools for qualified candidates and to target job listings to potential applicants on websites like LinkedIn and Indeed.  Artificial intelligence tools make it possible for employers to expand their search for more qualified and diverse candidates, screen for candidates with the most advantageous skills, increase efficiency, and streamline their hiring process.

But concerns about the implications of the technology in potentially magnifying biased decision-making are leading legislative bodies to impose precautionary measures.  Illinois was the first to legislate in this area in 2019, and in 2021 there were 17 states that had proposed laws to regulate artificial intelligence.  Opponents of using artificial intelligence cite studies showing that tools to assess a candidate’s skills and adeptness, which are based on a predetermined set of criteria created by humans with their own implicit biases, have the potential to reproduce bias and create unfair decision-making on a much broader scale.

Further fueling the legislative trend, in October 2021, the Equal Employment Opportunity Commission (EEOC) launched an initiative on artificial intelligence and algorithmic fairness to examine the use of AI, people analytics, and big data in hiring and other employment decisions.  The EEOC will establish a working group and launch a series of listening sessions with key stakeholders to gather information on the impact of AI in employment decisions, to examine more closely how technology changes the way employment decisions are made, and to ensure those technologies are used fairly and consistently with federal equal employment opportunity laws.  The EEOC ultimately plans to issue guidance for employers to ensure fairness in AI algorithms.  In addition, the Commission implemented extensive training for its investigators in 2021 on the use of AI in employment practices.

New York City Is at the Forefront

New York City has stepped forward with the most stringent law in the country to date, which will require that, by January 1, 2023, employers only use automated employment decision tools that were independently audited for bias no more than one year prior to their use.  The law defines an automated employment decision tool as “any computerized process, derived from machine learning, statistical modeling, data analysis, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, which is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”  The “bias audit” required means an impartial evaluation by an independent auditor and must include testing to assess the tool’s disparate impact on persons in any category reported on the federal EEO-1 Component 1 form (currently race/ethnicity and sex).
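
By way of illustration only: one common way to test a scoring tool for disparate impact, sketched below in Python with invented numbers, is to compare average scores across categories, here the two EEO-1 sex categories; the same computation would be repeated for each race/ethnicity category.  This is a hypothetical example, not a methodology prescribed by the law.

    # Hypothetical only: impact ratios for a scoring tool, by EEO-1 sex category.
    scores = {
        "Female": [68, 74, 71, 77, 70],
        "Male":   [82, 79, 85, 80, 84],
    }

    averages = {category: sum(vals) / len(vals) for category, vals in scores.items()}
    top_average = max(averages.values())

    for category, average in averages.items():
        ratio = average / top_average  # compare to the highest scoring category
        print(f"{category}: average score {average:.1f}, impact ratio {ratio:.2f}")

On these made-up figures, the Female category’s impact ratio is roughly 0.88, which would sit above the four-fifths benchmark that federal agencies typically apply in adverse impact analysis.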

The law also imposes various disclosure requirements:

  • Website Posting – Employers must post on their website the summary of the results of the most recent audit conducted and the tool’s distribution date; and
  • Notice of usage – Applicants and employees who reside in New York City must be given 10 business days’ advance notice that the tool is being used to assist in evaluation of that individual for an employment decision, and the job qualifications and characteristics the automated tool will be using to assess the person’s candidacy.

Finally, an applicant or employee is granted the right to request that an alternative selection process or accommodation be used, instead of the automated tool.

Notably, if not already disclosed publicly on the employer’s website, information about the type of data collected for the automated employment decision tool, the source of that data, and the employer or employment agency’s data retention policy must be made available to any candidate or employee within 30 days of a written request.  Each day that an employer uses an automated assessment tool that does not comply with the law is considered a separate violation, as is each failure to provide the proper notice, and each violation carries a penalty that ranges from $500 to $1,500.

More Such Laws Are Likely Coming

California and Washington, D.C. currently have bills pending that target the use of artificial intelligence in employment.  If passed, the D.C. law would be similar to the New York City law, and would go even further: a covered entity that takes adverse action based in whole or in part on the results of an algorithmic eligibility determination would have to provide the individual with notice of any information the entity used to make the determination and the ability to submit corrections to that information, and, if the individual submits corrections, the individual could request that the entity conduct a reasoned reevaluation of the relevant algorithmic eligibility determination, performed by a human using the corrected data.

Illinois’s law differs from New York City’s approach in that it restricts the use of video interview technology in the hiring process.  Like New York City, Illinois requires employers who use the covered AI hiring tools to disclose that use to the candidate prior to use, and it further requires them to obtain the candidate’s consent and to explain to the candidate how the AI works and what general types of characteristics it uses to evaluate applicants.  Maryland adopted still another approach in 2020: it prohibits employers from using facial recognition technology during job interviews without the applicant’s consent.

Further Considerations

Details of what is required to comply with the New York City bias audit have yet to be determined, including whether a vendor’s confirmation of such an assessment is sufficient for the product’s use in any setting, or whether an independent audit will be required for each employer.  In either case, it is left to vendors and employers to absorb the cost of compliance.

In addition, for candidates who request an alternative selection process, the law does not address what type of accommodation is required or under what conditions, and it remains unclear whether employers must grant such requests.

Employers should continue to monitor for any additional guidance New York City publishes, and should review and revise their policies and practices to reflect the notices and disclosures required by the law.
