July 11, 2022

Maximize Your Talent Pool: 3 Ways to Ensure Your AI Is Not Screening Out Qualified Job Candidates with Disabilities

My son has been applying for hourly, entry-level summer jobs. Through this process, I am seeing first-hand, from the applicant’s perspective, the varying ways in which software, algorithms and artificial intelligence now dominate the job application process at this level. When I was his age, I walked into different establishments, asked to speak with a manager, completed a paper application and then moved on, hoping to be called for a phone or in-person interview. Now, from his experience:

  • Walk-in applicants are directed to a website, either that of the business itself or of a third-party vendor, to find postings for open positions;
  • All applications are submitted electronically, through on-line portals;
  • Initial screenings may be conducted by a series of questions texted to the applicant, and a wrong answer can shut the process down with a polite but firm response that the organization will not be proceeding further with the application at this time;
  • A next-level screening may be a video “interview,” in which the applicant self-records, on the applicant’s phone with video enabled, responses to scripted questions posed by the organization and then submits the video through a portal for review – perhaps by a human, but more likely first run through software that screens for certain substantive responses and stylistic behaviors; and
  • Scheduling of actual, in-person interviews may proceed through text or email, and the delayed responder may lose the interview opportunity, but the quick responder may, eventually, meet with an actual human being, in-person or via videoconference.

My son is very tech-savvy and was unfazed by the AI aspects of his experience.  I, on the other hand, thought of the challenges my friends and I have all had, to varying degrees, in helping older family members use some of the same technologies deployed in my son’s job application process.  And I wondered: how would any of them be able to apply for an entry-level job under the current processes?  So too, the EEOC says, with regard to individuals with disabilities.

In recently issued Guidance, the EEOC considers the myriad ways in which automated processes and artificial intelligence can reject individuals with disabilities who would be qualified to do the job if provided a reasonable accommodation.  The EEOC recommends that employers account for this in various ways, including:

  • Notice – provide clear notice and instructions for applicants to request a reasonable accommodation in the context of the application process;
  • Relevance – assess algorithmic decision-making tools to confirm they measure only necessary skills and do not screen out individuals with certain disabilities; and
  • Disclosure of Process – disclose in advance which traits an algorithmic tool measures, how it measures them, and how applicants with certain disabilities might score less favorably.

Artificial intelligence, we need to remember, is only as good as the information that was first used to program it.  The biases of the program designers and developers can influence the types of questions posed, or the way information is presented or analyzed, and thereby produce outcomes that disproportionately impact individuals with certain protected characteristics.  While some software vendors test for these disparate impact outcomes and will certify their products as having been tested to be “bias-free,” the EEOC cautions that those “bias-free” certifications pertain only to Title VII-protected characteristics: race, sex, national origin, religion, and color.  Disabilities are unique to each individual, as is the requirement to provide reasonable accommodations for individuals with disabilities who are otherwise able to perform the essential functions of a job, and automated tools may therefore affect individuals with disabilities differently, even when they share the same diagnosed condition.
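The disparate impact testing behind such certifications commonly draws on the EEOC’s traditional “four-fifths rule,” which compares each group’s selection rate against the most successful group’s rate.  The Python sketch below, using hypothetical group labels and counts, shows the basic arithmetic; note that it operates entirely on group categories, which is precisely why it cannot capture the individualized nature of disability.

    # Four-fifths rule check on a screening tool's outcomes.
    # All group labels and counts here are hypothetical.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of a group's applicants who passed the screen."""
        return selected / applicants if applicants else 0.0

    def four_fifths_check(groups: dict) -> dict:
        """groups maps group name -> (selected, applicants); flags any
        group whose rate falls below 80% of the highest group's rate."""
        rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
        top = max(rates.values())
        return {g: (r, r >= 0.8 * top) for g, r in rates.items()}

    # Hypothetical screening outcomes for two groups of applicants
    results = four_fifths_check({"Group A": (48, 100), "Group B": (30, 100)})
    for group, (rate, ok) in results.items():
        print(f"{group}: selection rate {rate:.2f}, within four-fifths: {ok}")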

The EEOC encourages employers to develop and select tools that only measure abilities or qualifications that are truly necessary for the job.  While particularly resonant in the context of accommodating individuals with disabilities, the EEOC’s recommended approach will equally help employers to avoid inadvertently screening out individuals based on other protected characteristics.

Central to the EEOC’s guidance is encouraging employers to equip individuals with disabilities with sufficient information about the employer’s job application process so they know when they may need a reasonable accommodation to succeed in that process.  Particularly in the current tight labor market, where employers are casting ever-wider nets to attract job applicants, a reassessment of screening tools can help to ensure that readily available candidates are not rejected prematurely for reasons unrelated to their actual ability to perform the job.

By Tracey I. Levy

January 17, 2022

NYC Pushes Emerging National Trend to Regulate AI in Hiring Decisions

By Alexandra Lapes and Tracey I. Levy

New York City is near the forefront of an emerging trend of regulating an employer’s use of artificial intelligence in employment decisions.  Employers and HR professionals use AI to collect and scan large applicant pools for qualified candidates and to target job listings to potential applicants on websites like LinkedIn and Indeed.  Artificial intelligence tools make it possible for employers to expand their search for more qualified and diverse candidates, screen for candidates with the most advantageous skills, increase efficiency, and streamline their hiring process.

But concerns that the technology may magnify biased decision-making are leading legislative bodies to impose precautionary measures.  Illinois was the first to legislate in this area, in 2019, and by 2021, 17 states had proposed laws to regulate artificial intelligence.  Opponents of using artificial intelligence cite studies showing that tools for assessing a candidate’s skills and aptitude, because they are based on a predetermined set of criteria created by humans with their own implicit biases, have the potential to reproduce bias and create unfair decision-making on a much broader scale.

Further fueling the legislative trend, in October 2021 the Equal Employment Opportunity Commission (EEOC) launched an initiative on artificial intelligence and algorithmic fairness to examine the use of AI, people analytics, and big data in hiring and other employment decisions.  The EEOC will establish a working group, launch a series of listening sessions with key stakeholders, and gather information on the impact of AI on employment decisions, with the goals of examining more closely how technology changes the way employment decisions are made and of ensuring those technologies are used fairly and consistently with federal equal employment opportunity laws.  The EEOC ultimately plans to issue guidance for employers to ensure fairness in AI algorithms.  In addition, the Commission implemented extensive training in 2021 for its investigators on the use of AI in employment practices.

New York City Is at the Forefront

New York City has stepped forward with the most stringent law in the country to date, which will require that, by January 2, 2023, employers only use automated employment decision tools that were independently audited for bias no more than one year prior to their use.  The law defines an automated employment decision tool as “any computerized process, derived from machine learning, statistical modeling, data analysis, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, which is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”  The required “bias audit” is an impartial evaluation by an independent auditor and must include testing to assess the tool’s disparate impact on persons in any federal EEO Component 1 form category (currently race/ethnicity and sex).
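To make the audit concept concrete, the sketch below computes per-category “impact ratios” – each category’s selection rate divided by the highest category’s rate – across the two Component 1 axes.  The data and category labels are hypothetical, and, as discussed below, the actual content of a compliant bias audit awaits further guidance.

    from collections import defaultdict

    # Each record: (sex, race/ethnicity, whether the tool advanced the
    # candidate).  These outcomes are hypothetical.
    outcomes = [
        ("F", "Hispanic or Latino", True),
        ("F", "White", True),
        ("M", "White", False),
        ("M", "Black or African American", True),
        ("F", "Asian", False),
    ]

    def impact_ratios(records, field):
        """Selection rate per category, divided by the highest category's rate."""
        passed, total = defaultdict(int), defaultdict(int)
        for rec in records:
            category, advanced = rec[field], rec[-1]
            total[category] += 1
            passed[category] += int(advanced)
        rates = {c: passed[c] / total[c] for c in total}
        top = max(rates.values())
        return {c: round(rates[c] / top, 2) for c in rates}

    print("By sex:", impact_ratios(outcomes, 0))
    print("By race/ethnicity:", impact_ratios(outcomes, 1))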

The law also imposes various disclosure requirements:

  • Website Posting – Employers must post on their website the summary of the results of the most recent audit conducted and the tool’s distribution date; and
  • Notice of usage – Applicants and employees who reside in New York City must be given 10 business days’ advance notice that the tool will be used to assist in evaluating that individual for an employment decision, along with the job qualifications and characteristics the automated tool will use to assess the person’s candidacy (a sketch of the notice-deadline arithmetic follows this list).
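
As a minimal sketch of the notice-deadline arithmetic referenced above, the function below counts back 10 business days from a planned assessment date.  It assumes “business days” means weekdays and ignores holidays, which an actual compliance calendar would need to account for; the assessment date shown is hypothetical.

    from datetime import date, timedelta

    def latest_notice_date(assessment_date: date, business_days: int = 10) -> date:
        """Step backward from the assessment date, counting only weekdays."""
        d, remaining = assessment_date, business_days
        while remaining > 0:
            d -= timedelta(days=1)
            if d.weekday() < 5:  # Monday=0 .. Friday=4
                remaining -= 1
        return d

    print(latest_notice_date(date(2023, 1, 20)))  # prints 2023-01-06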

Finally, an applicant or employee is granted the right to request that an alternative selection process or accommodation be used instead of the automated tool.

Notably, if not already disclosed publicly on the employer’s website, information about the type of data collected for the automated employment decision tool, the source of that data, and the employer or employment agency’s data retention policy must be made available to any candidate or employee within 30 days of a written request.  Each day that an employer uses an automated assessment tool that does not comply with the law is considered a separate violation, as is each failure to provide the proper notice, and each violation is subject to a penalty that ranges from $500 to $1,500.

More Such Laws Are Likely Coming

California and Washington, D.C. currently have bills pending that target the use of artificial intelligence in employment.  If passed, the D.C. law would be similar to the New York City law but would go even further: a covered entity that takes adverse action based in whole or in part on the results of an algorithmic eligibility determination would have to provide the individual with notice of any information the entity used to make the determination and the ability to submit corrections to that information; if the individual submits corrections, the individual may additionally request that the entity conduct a reasoned reevaluation of the relevant algorithmic eligibility determination, performed by a human using the corrected data.

Illinois’s law differs from New York City’s approach in that it restricts the use of video interview technology in the hiring process.  Like New York City, Illinois requires employers who use the restricted AI hiring tools to disclose that use, obtain the candidate’s consent beforehand, and explain to the candidate how the AI works and what general types of characteristics it uses to evaluate applicants.  Maryland adopted still another approach in 2020, prohibiting employers from using facial recognition technology during job interviews without the applicant’s consent.

Further Considerations

Details of what is required to comply with the New York City bias audit have yet to be determined, including whether a vendor’s confirmation of such an assessment is sufficient for the product’s use in any setting, or whether an independent audit will be required for each employer.  In either case, it is left to vendors and employers to absorb the cost of compliance.

In addition, for those who request an alternative selection process, the law does not address what type of accommodation is required or under what conditions, and it remains unclear whether employers must grant such requests.

Employers should continue to monitor for any additional guidance New York City publishes, and should review and revise their policies and practices to reflect the notices and disclosures required by the law.
