• December 7, 2022

Guidance issued last week by the Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) warns employers that they could be breaching the Americans with Disabilities Act (ADA) if they use AI-based software designed to shortlist candidates that fails to create an equitable experience for job seekers with disabilities.

Crucially, employers are reminded that, even if they elect to deploy software using algorithms wholly designed by specialist third-party providers, they themselves could be acting unlawfully should the platform disadvantage candidates with disabilities.

This can occur as a result of the software itself being inaccessible, or discriminatory in how traits are evaluated and scored. In such cases, if the employer fails to recognize the problem and act by providing "reasonable accommodations," it could be breaking the law.

Biased assessments

Nowadays, employers are increasingly turning to third-party software platforms such as XOR and eSkill to improve efficiency by automating the process of screening out unsuitable or unqualified candidates as early as possible in the recruitment cycle.

Common applications of the software include aptitude testing, resume scanning, chatbot screening and video interviewing.

These are underpinned by a core technology stack that may include aspects of machine learning, natural language processing and computer vision.

The problem with utilizing these efficiency tools is that the underlying algorithms used to identify promising candidates are overwhelmingly modeled off data sets comprising mainstream, standardized and typical behavioral traits.

AI feeds off data; more specifically, it seeks consistent patterns in data, which it then uses to make assumptions and predictions. AI handles non-conformist edge cases poorly, and when it comes to human beings, disability is just about the most compelling edge case there is.
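To see why pattern-seeking penalizes edge cases, consider a deliberately toy screening rule. This is an illustrative sketch only, not any vendor's actual algorithm: it "learns" what typical looks like from majority data, then rejects anyone who deviates from it, whether or not the deviation is relevant to the job.

```python
# Illustrative only: a toy z-score screen. It learns the "typical" pattern
# from historical (majority) data, then penalizes anyone who deviates --
# which is exactly how atypical candidates get screened out.
from statistics import mean, stdev

def fit_typical_profile(training_scores):
    """Learn the 'typical' pattern from historical data."""
    return mean(training_scores), stdev(training_scores)

def screen_candidate(score, profile, max_deviation=2.0):
    """Pass candidates whose behavior matches the learned pattern."""
    mu, sigma = profile
    z = abs(score - mu) / sigma
    return z <= max_deviation  # atypical traits are rejected, relevant or not

# Trained almost entirely on one behavioral pattern
# (a hypothetical timed-test metric, e.g. words per minute):
typing_speeds = [72, 75, 70, 74, 73, 71, 76, 74]
profile = fit_typical_profile(typing_speeds)

print(screen_candidate(73, profile))  # typical candidate: passes
print(screen_candidate(35, profile))  # e.g. a motor impairment: rejected
```

The rejected candidate might be just as capable at the actual job; the model only knows that they do not match the pattern it was shown.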

In the two guidance documents issued by the EEOC and DOJ entitled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” and “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring” respectively, several practical examples of the ways in which AI algorithmic hiring tools can create unequal experiences for candidates with disabilities are laid out.

These might include timed or gamified tests that are plainly inaccessible to candidates with motor or visual impairments, who may find it difficult to use a mouse or view the screen, particularly when forced to operate at speed.

Another example may be video analysis software that measures facial expressions or speech patterns. Such software is likely to automatically mark down an individual on the autism spectrum who may possess a non-standard eye gaze or someone with a mild speech impediment.

Frustratingly, such tests are often just packaged into the software as standard even if they bear little correlation to the skills required for the specific job on offer.

Software and chatbots used to rapidly scan resumes may raise an automatic red flag over gaps in employment history. Yet for candidates with disabilities, those gaps may simply result from prolonged medical treatment, or indeed from discrimination by other employers during previous job applications.
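A hedged sketch of how such a gap rule might look in practice (the field names and the six-month threshold are invented for illustration, not drawn from any real product):

```python
# Illustrative only: a naive gap-detection rule of the kind the guidance
# warns about. The rule cannot tell medical leave from anything else.
from datetime import date

def employment_gaps(jobs, min_gap_months=6):
    """Return gaps (in days) between consecutive jobs beyond the threshold."""
    jobs = sorted(jobs, key=lambda j: j["start"])
    gaps = []
    for prev, nxt in zip(jobs, jobs[1:]):
        gap_days = (nxt["start"] - prev["end"]).days
        if gap_days > min_gap_months * 30:
            gaps.append(gap_days)
    return gaps

def auto_flag(jobs):
    """Red-flag any candidate with a long gap -- no questions asked."""
    return len(employment_gaps(jobs)) > 0

resume = [
    {"start": date(2015, 1, 1), "end": date(2018, 6, 30)},
    {"start": date(2019, 9, 1), "end": date(2022, 11, 1)},  # ~14-month gap
]
print(auto_flag(resume))  # True: flagged, though the gap may be medical leave
```

The rule fires identically whether the gap was a sabbatical, a layoff or a hospital stay, which is precisely the kind of blunt proxy the guidance cautions against.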

Finally, algorithms lack the nuance to factor in how effective a candidate might be in their role if the reasonable accommodations required by law under the ADA were actually in place; instead, they rely on typical baseline scenarios.

For example, a candidate with ADHD or PTSD may struggle in a test designed to evaluate how well they cope with distractions but such a test does not take into account the fact that at work they may be entitled to a desk in a quiet location or have dispensation to wear noise-canceling headphones.

Transparent communication

Commenting on the guidance, EEOC Chair Charlotte A. Burrows said, “New technologies should not become new ways to discriminate. If employers are aware of the ways AI and other technologies can discriminate against persons with disabilities, they can take steps to prevent it.”

Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division added, “Algorithmic tools should not stand as a barrier for people with disabilities seeking access to jobs. This guidance will help the public understand how an employer’s use of such tools may violate the Americans with Disabilities Act.”

There are no silver bullets when it comes to overcoming AI and algorithmic biases against people with disabilities in job hiring, not least because such bias is significantly pre-dated by, and continues to occur alongside, good old-fashioned ableism from human beings, who are, of course, also the ones designing the software.

However, what is likely to move the needle somewhat is better all-round communication.

This starts with the employer who needs to communicate clearly with prospective candidates about how exactly the assessment tools work, what traits they seek to identify and perhaps, most importantly, any behaviors that have the potential to negatively skew the results.

This then gives candidates with disabilities the opportunity to raise any concerns early on – particularly if the employer is explicit about the capacity for reasonable accommodations within their digital assets and documentation.

To this end, all in-house recruitment staff need extra training on what reasonable accommodations, such as the use of multiple submission formats, practically look like in this context.

Employers can also achieve a lot by routinely asking third-party software vendors what their accessibility provisions are and buying only from those who demonstrate a track record in thinking about accessibility, thereby creating an important pressure point within the market.

Equally, candidates need to be open, especially about inaccessible or discriminatory experiences they have endured with automated assessment tools.

Job seekers are invariably busy and keen to move on to the next opening, but unless they provide feedback to employers about the difficulties they have had, important learning opportunities may be lost.

Everybody needs to keep talking, but hopefully employers will now be more aware of the extent of their responsibility and less likely to think it's OK, as is far too often the case where digital accessibility is concerned, simply to pass the buck.
