
Guidance issued last week by the Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) warns employers that they could be breaching the Americans with Disabilities Act (ADA) if they use AI-based candidate-shortlisting software that fails to create an equitable experience for job seekers with disabilities.

Crucially, employers are reminded that, even if they elect to deploy software using algorithms wholly designed by specialist third-party providers, they themselves could be acting unlawfully should the platform disadvantage candidates with disabilities.

This can occur because the software itself is inaccessible, or because it is discriminatory in how traits are evaluated and scored. In such cases, if the employer fails to recognize this and act by providing "reasonable accommodations," they could be breaking the law.

Biased assessments

Nowadays, employers are increasingly turning to third-party software platforms such as XOR and eSkill to improve efficiency by automating the process of screening out unsuitable or unqualified candidates as early as possible in the recruitment cycle.

Common uses of the software include aptitude testing, resume scanning, chatbots and video interviewing.

These are underpinned by a core technology stack that may include aspects of machine learning, natural language processing and computer vision.

The problem with these efficiency tools is that the underlying algorithms used to identify promising candidates are overwhelmingly modeled on data sets comprising mainstream, standardized and typical behavioral traits.

AI feeds on data; more specifically, it seeks consistent patterns in data, which are then used to make assumptions and predictions. AI copes poorly with non-conformist edge cases, and when it comes to human beings, disability is just about the most compelling edge case there is.
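As a simplified, entirely hypothetical illustration of this dynamic (the data, threshold and model are invented for this sketch, not taken from any real product), a screening model trained only on typical response patterns will treat an atypical but equally valid pattern as a poor match:

```python
# Hypothetical sketch: a screening model trained only on "typical"
# response times learns a narrow band of acceptable behavior.
from statistics import mean, stdev

# Training data: task completion times (seconds) drawn from mostly
# non-disabled past candidates -- the "consistent pattern".
typical_times = [42, 45, 39, 47, 44, 41, 46, 43]

mu, sigma = mean(typical_times), stdev(typical_times)

def fits_pattern(completion_time, max_z=2.0):
    """Accept only candidates whose timing matches the learned pattern."""
    z = abs(completion_time - mu) / sigma
    return z <= max_z

# A candidate using assistive technology finishes the same task
# correctly, just more slowly -- and is rejected as a statistical outlier.
print(fits_pattern(44))  # typical pace: accepted
print(fits_pattern(90))  # slower assistive-tech user: rejected
```

The model never asks whether the slower candidate got the task right; deviation from the majority pattern is itself treated as failure.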

The two guidance documents, entitled "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees" (EEOC) and "Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring" (DOJ), lay out several practical examples of the ways in which algorithmic hiring tools can create unequal experiences for candidates with disabilities.

These might include timed or gamified tests that are plainly inaccessible to candidates with motor or visual impairments, who may find it difficult to use a mouse or view the screen, particularly when having to operate at speed.
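To make the mechanism concrete, here is a hypothetical sketch (the scoring formula and numbers are invented for illustration) of how a hard time limit turns input speed, rather than ability, into the deciding factor:

```python
# Hypothetical sketch of a timed, gamified test: the score counts tasks
# completed inside a hard time limit, so input speed -- not ability --
# dominates the result.
TIME_LIMIT = 60  # seconds allowed for the whole test

def gamified_score(seconds_per_task, accuracy, time_limit=TIME_LIMIT):
    """Score = number of correct tasks completed before time runs out."""
    tasks_attempted = int(time_limit // seconds_per_task)
    return int(tasks_attempted * accuracy)

# Two candidates with identical accuracy; the second has a motor
# impairment that slows mouse input.
print(gamified_score(seconds_per_task=3, accuracy=0.9))   # fast input: 18
print(gamified_score(seconds_per_task=10, accuracy=0.9))  # slow input: 5
```

Both candidates answer with the same accuracy; the clock alone produces the gap in scores.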


Another example may be video analysis software that measures facial expressions or speech patterns. Such software is likely to automatically mark down an individual on the autism spectrum who may possess a non-standard eye gaze or someone with a mild speech impediment.

Frustratingly, such tests are often packaged into the software as standard, even if they bear little relation to the skills required for the specific job on offer.

Software and chatbots used to rapidly scan resumes may raise an automatic red flag over gaps in employment history. Yet, for candidates with disabilities, such gaps may simply reflect prolonged medical treatment, or indeed discrimination by other employers during previous job applications.
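A hypothetical sketch of such a rule (the six-month threshold and the dates are invented for illustration) shows how bluntly it operates, because the cause of the gap never enters the logic:

```python
# Hypothetical sketch of a resume-screening rule that red-flags any
# employment gap longer than six months, with no regard for its cause.
from datetime import date

def flag_gaps(jobs, max_gap_days=180):
    """Return True if any gap between consecutive jobs exceeds the limit."""
    jobs = sorted(jobs, key=lambda j: j["start"])
    for prev, nxt in zip(jobs, jobs[1:]):
        if (nxt["start"] - prev["end"]).days > max_gap_days:
            return True  # automatic red flag; the reason is never asked
    return False

resume = [
    {"start": date(2018, 1, 1), "end": date(2020, 6, 30)},
    # 14-month gap -- in this example, prolonged medical treatment
    {"start": date(2021, 9, 1), "end": date(2023, 5, 1)},
]
print(flag_gaps(resume))  # True: the candidate is screened out
```

Whether the gap reflects treatment, caring responsibilities or past discrimination, the rule returns the same verdict.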

Finally, algorithms lack the nuance to factor in how effective a candidate might be in the role if the reasonable accommodations required by law under the ADA were actually in place; instead, they rely on typical baseline scenarios.

For example, a candidate with ADHD or PTSD may struggle in a test designed to evaluate how well they cope with distractions. Such a test does not account for the fact that at work they may be entitled to a desk in a quiet location, or permitted to wear noise-canceling headphones.

Transparent communication

Commenting on the guidance, EEOC Chair Charlotte A. Burrows said, “New technologies should not become new ways to discriminate. If employers are aware of the ways AI and other technologies can discriminate against persons with disabilities, they can take steps to prevent it.”

Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division added, “Algorithmic tools should not stand as a barrier for people with disabilities seeking access to jobs. This guidance will help the public understand how an employer’s use of such tools may violate the Americans with Disabilities Act.”

There are no silver bullets when it comes to overcoming AI and algorithmic biases against people with disabilities in hiring, not least because such bias is significantly pre-dated by, and continues to occur alongside, good old-fashioned ableism from human beings, who are, of course, also the ones designing the software.

However, what is likely to move the needle somewhat is better all-round communication.

This starts with the employer, who needs to communicate clearly with prospective candidates about how exactly the assessment tools work, what traits they seek to identify and, perhaps most importantly, any behaviors that could negatively skew the results.

This then gives candidates with disabilities the opportunity to raise any concerns early on, particularly if the employer is explicit about the capacity for reasonable accommodations within its digital assets and documentation.

To this end, all in-house recruitment staff need extra training on what reasonable accommodations, such as the use of multiple submission formats, practically look like in this context.

Employers can also achieve a lot by routinely asking third-party software vendors what their accessibility provisions are and buying only from those who demonstrate a track record in thinking about accessibility, thereby creating an important pressure point within the market.

Equally, candidates need to be open too, especially about inaccessible or discriminatory experiences they have endured with automated assessment tools.

Job seekers are invariably busy and keen to move on to the next opening, but unless they provide feedback to employers about the difficulties they have had, important learning opportunities may be lost.

Everybody needs to keep talking, but employers should now be even more aware of the extent of their responsibility, and less likely to think it is acceptable, as is far too often the case where digital accessibility is concerned, simply to pass the buck.

