
Employers And AI Vendors Scramble To Set Bias Auditing Bar

March 13, 2023

Companies are increasingly relying on AI technologies to make hiring decisions, so vendors, accountants, employers, and authorities are vying to create standards for bias-proofing these tools.

Seventy-nine percent of firms use AI for hiring and recruiting, according to a February 2022 poll from the Society for Human Resource Management, and more stringent compliance regulations are soon to follow in jurisdictions like New York City and the European Union.

With this new legislation, it's "game time," as Shea Brown, CEO of AI consultancy BABL AI Inc., put it.

Rules aimed at preventing bias in AI typically require some form of audit of the technology. Yet the AI auditing sector is still in its infancy, and best practices and industry standards are continually evolving.

According to Mona Sloane, a senior researcher at the New York University Center for Responsible AI, there is currently no industry standard for AI audits. Rather than waiting for a government body to declare, "This is what an audit of hiring AI must look like," firms are racing to set the norm by implementing it themselves.

The US Equal Employment Opportunity Commission has said it will focus its enforcement efforts on AI-linked bias, though it has limited jurisdiction to impose new restrictions on vendors or employers.

During a January 31 hearing on employers' use of AI tools, which covered auditing techniques for AI technologies, Republican EEOC Commissioners Keith Sonderling and Andrea Lucas pointed out that no vendors had been invited.

The conversation, according to Chad Sowash, a former human resources consultant and current host of a popular podcast about HR and recruitment, exposes a gulf between the industry that creates and manages algorithmic hiring tools and government authorities.

"I'm not certain that the EEOC is ready for this rapidly changing and fluid topic," he said.

Carrying Out The Audits

For years, tech watchdogs have warned that AI can reinforce bias. Jiahao Chen, owner of AI auditing firm Responsible Artificial Intelligence LLC, says that the data used to build a given tool may reflect social inequities if those inequities are not addressed.

"Women and minorities have always been discriminated against in the workplace," Chen said. "The data we have still contains the history of that."

The EEOC filed its first case against iTutorGroup, a provider of English-language tutoring services, in May, alleging that the company had set up its internet recruitment program to automatically exclude older applicants. Meanwhile, lawsuits are appearing alleging discrimination in workplace AI systems.

Last month, a man sued Workday Inc., alleging that its AI-powered recruitment and screening tools disproportionately discriminate against Black, disabled, and over-40 candidates. The case, filed in the US District Court for the Northern District of California, is unusual in targeting a hiring platform rather than an employer.

New York City will soon become the first US jurisdiction to impose notice and audit requirements on AI-based automated employment decision-making tools. In the European Union, a more comprehensive AI bill has been proposed.

The two common audit frameworks are the outcome audit and the process audit. New York City's law requires an outcome audit, which measures bias in final employment decisions. The EU AI Act would mandate a process audit, which considers how the algorithm generates its recommendations.

Vendors and employers frequently cite the EEOC's 1978 Uniform Guidelines on Employee Selection Procedures. Among other things, these guidelines establish the "four-fifths rule," which flags a hiring test when the selection rate for a protected group is less than 80% of the rate for the most-selected group.
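The four-fifths comparison is simple arithmetic; the sketch below illustrates it, with the function name, group labels, and counts all invented for the example.

```python
# Illustration of the four-fifths rule: compute each group's selection
# rate, compare it to the highest group's rate, and flag any group whose
# ratio falls below 0.8 (a possible sign of adverse impact).
def four_fifths_check(selected, applicants):
    """Return {group: rate ratio} for groups below the 80% threshold."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items() if rate / top < 0.8}

# Hypothetical numbers: group_a selected at 50%, group_b at 30%.
selected = {"group_a": 50, "group_b": 30}
applicants = {"group_a": 100, "group_b": 100}
print(four_fifths_check(selected, applicants))  # → {'group_b': 0.6}
```

Here group_b's selection rate (0.3) is only 60% of group_a's (0.5), so the tool would be flagged under the rule.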

But because the tool itself doesn't make the final decision, the bias to look for in an AI tool lies in the recommendations it gives a hiring manager, according to Brown of BABL AI.

"We should be concerned about the influence the tool has on how these people make decisions, rather than the disparate impact that occurs when the prospective employer actually decides," he said.

After some initial confusion, New York City has stated that it will be up to employers to show evidence of an audit. Nonetheless, multiple employers may rely on the same audit as long as their data is included. Harver and HireVue are two vendors that have taken on responsibility for monitoring compliance with that regulation.

Several vendors and auditors welcomed the flexibility, saying compliance should be a shared responsibility.

"If I modify something in my algorithm, it will affect all my clients," said Frida Polli, chief data scientist of HR technology company Harver. "This vendor-level inspection is very sensible in my opinion."

The EEOC's "Rule of Thumb"

The four-fifths rule is just a "rule of thumb," not a guarantee that a tool is unbiased, EEOC Chair Charlotte Burrows said during the January hearing. To determine discriminatory impact, courts have typically used more complex statistical analysis.
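One common form of that more rigorous analysis is a significance test on the gap between selection rates, where a difference of roughly two to three standard deviations is often treated as meaningful. The sketch below shows a two-proportion z-test; the function name and the counts are invented for illustration.

```python
import math

# Two-proportion z-test: how many standard deviations apart are the
# selection rates of two groups, given their selected/applicant counts?
def two_proportion_z(sel_a, n_a, sel_b, n_b):
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)          # pooled selection rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 50 of 100 selected in one group, 30 of 100 in another.
z = two_proportion_z(50, 100, 30, 100)
print(round(z, 2))  # → 2.89
```

A z of about 2.89 means the gap exceeds the two-standard-deviation mark, a stronger signal than the four-fifths screen alone.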

Employers are required to check artificial intelligence technologies for bias against people with disabilities, and they should have procedures in place to offer reasonable accommodations, according to guidance released in May by the EEOC and the Department of Justice. If an assessment requires timed written responses, there should be an alternative way to complete it, such as recording verbal responses.

However, auditors and vendors say that testing how people with disabilities fare with a given product can be challenging, because the Americans with Disabilities Act bars employers from asking a person to disclose a disability.

Venkatasubramanian, a professor at Brown University and co-author of the Biden administration's Blueprint for an AI Bill of Rights, told the EEOC at the hearing that vendors take their audit cues straight from regulators. The federal government has not released guidance on minimizing AI-driven workplace bias based on traits other than disability status, such as age or gender.

In his observation, vendors use recommendations like the EEOC's to determine what to test, and only those things get tested.

Transparency Initiatives

Before such requirements existed, several vendors hired outside auditors or performed audits on their own. Vendors and auditors say that sharing their procedures has helped allay public concerns as scrutiny grows.

Yet some have also encountered skepticism. HireVue came under fire in 2021 for allegedly "audit-washing," or misrepresenting the findings of an O'Neil Risk Consulting & Algorithmic Auditing assessment of its products.

The company was also criticized for its hiring-related facial analysis tools, which it abandoned in January 2021, shortly after the Electronic Privacy Information Center complained to the Federal Trade Commission that they were discriminatory and intrusive.

Transparency, according to Lindsey Zuloaga, chief data scientist at HireVue, is the solution. Though not required to, HireVue and other major HR technology providers publish ethics policies and other materials on their websites describing how their systems work.

"We've been under a lot of scrutiny because we did something novel and unique, and there were a lot of assumptions about how our technology functioned," Zuloaga said. "During my time at HireVue, I've seen us make a lot of progress toward learning that if we just open up and communicate about precisely what we do, it really helps debunk a lot of stereotypes or worries that folks have had."

Author: John Liu
