What Is an AI Bias Audit and Does Your Business Need One?

AI Compliance Documents Team · 21 min read

Two-Sentence Summary

A bias audit is a statistical test that measures whether an AI hiring tool produces different selection rates for different demographic groups — and New York City requires one every year for any employer using automated tools to screen job candidates. This article explains how the audit works, what "selection rate" and "impact ratio" actually mean, who has to do it, what the penalties are for skipping it, and how federal discrimination law connects to the same underlying analysis.

If you've recently heard the phrase "bias audit" and your instinct was to assume it was either very technical or very optional, you're in good company. Most businesses encounter this term when they're already behind — either because a vendor mentioned it, a lawyer flagged it, or they found out New York City has been fining employers for not conducting one since July 2023.

The good news is that a bias audit is more understandable than it sounds. It's a structured test of whether an AI hiring tool produces different outcomes for different demographic groups. That's essentially the whole concept. The law that requires it, New York City's Local Law 144, is specific about what the test has to measure, who has to do it, and what you have to do with the results.

This article explains all of it — what a bias audit is, what the law requires, who it applies to, what the penalties are, and where this is heading beyond New York City. Every factual claim about the law is drawn from the enacted statute and implementing regulations, cited inline throughout.

What a Bias Audit Is

A bias audit, in the context of New York City's Local Law 144, is an impartial evaluation of an automated employment decision tool (AEDT) to assess the tool's performance across racial, ethnic, and sex categories. The technical term for what the audit measures is "disparate impact" — whether the tool produces meaningfully different selection rates for different demographic groups.

Think of it this way. Your AI hiring tool looks at applications and decides who gets through to the next stage. Some percentage of applicants in any given demographic group will pass that screen. The audit asks: is that percentage significantly lower for some groups than for others? And if so, by how much?

The standard measurement is something called an impact ratio. The impact ratio compares the selection rate for a given demographic group to the selection rate for the most selected group. If your tool selects 60% of white applicants but only 40% of Black applicants, the impact ratio for Black applicants is 0.67. A ratio below 0.80 — the "four-fifths rule" — has historically been treated under EEOC guidance as evidence of adverse impact. (EEOC Uniform Guidelines on Employee Selection Procedures)
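As a concrete illustration of that arithmetic, here is a minimal sketch in Python. The group names and selection rates are hypothetical, invented for the example; the 0.80 cutoff is the four-fifths threshold described above:

```python
# Four-fifths rule check with hypothetical selection rates.
selection_rates = {"Group A": 0.60, "Group B": 0.40}

# Impact ratio: each group's rate divided by the highest rate observed.
highest = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / highest
    status = "below 0.80 threshold" if ratio < 0.80 else "at or above 0.80"
    print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} ({status})")
```

Running this prints an impact ratio of 1.00 for Group A and 0.67 for Group B, matching the example above.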

New York City's rules require the audit to calculate selection rates and impact ratios for each race/ethnicity category and each sex category as defined in the EEO-1 form categories — the same demographic categories employers already use in federal EEOC reporting. (6 RCNY § 5-301)

Where the Legal Requirement Comes From

The requirement is in New York City Administrative Code Section 20-871, which was enacted as Local Law 144 of 2021. The full city rules implementing the statute are at Title 6 of the Rules of the City of New York, Section 5-301, adopted by the NYC Department of Consumer and Worker Protection (DCWP). The law and its implementing rules together form what's typically called the "AEDT rules."

The law says it is unlawful for an employer or employment agency to use an automated employment decision tool to substantially assist or replace discretionary decision making for employment decisions unless:

  • The AEDT has been the subject of a bias audit conducted within the past year
  • The results of that audit have been made publicly available on the employer's website
  • The employer has provided certain notices to candidates and employees

Enforcement began on July 5, 2023. (NYC Department of Consumer and Worker Protection — Automated Employment Decision Tools)

The law applies to employers who use automated employment decision tools for decisions about candidates or employees who work in New York City. "Automated employment decision tool" is defined as any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output — including a score, classification, or recommendation — that is used to substantially assist or replace discretionary decision making for employment decisions. (NYC Admin. Code § 20-870)

What the Audit Actually Involves

The bias audit has specific technical requirements set out in the NYC DCWP rules. Here's what the implementing regulations actually require. (6 RCNY § 5-301)

The auditor must be impartial. The rule requires the audit to be conducted by an "independent auditor" — meaning someone who is not involved in using, selling, or developing the AEDT and who has no financial interest in the tool. Your vendor cannot audit their own tool. Your internal data science team cannot conduct the audit. It must be a genuinely independent third party.

The data source. The audit must be conducted using historical data about the tool's performance if the employer has at least 100 people in each sex category and 100 people in each race/ethnicity category in the data. If the employer doesn't have sufficient historical data, the auditor may use other data — data from multiple employers using the same tool, or test data — but the auditor must explain why in the audit summary.

The calculation. The auditor calculates the selection rate for each race/ethnicity category and sex category as classified in EEO-1 categories. The EEO-1 categories for race/ethnicity are: Hispanic or Latino, White (not Hispanic or Latino), Black or African American (not Hispanic or Latino), Native Hawaiian or Other Pacific Islander (not Hispanic or Latino), Asian (not Hispanic or Latino), American Indian or Alaska Native (not Hispanic or Latino), and Two or More Races (not Hispanic or Latino).

The impact ratio. For each category, the auditor calculates the impact ratio: the category's selection rate divided by the selection rate of the most-selected category. The auditor also calculates the impact ratio for each sex/race combination category if the data is sufficient.

The summary. The auditor produces a summary of results that must be publicly posted on the employer's website. The summary must include the date of the audit, the source and dates of data used, the number of individuals in each category included in the audit, the selection rate for each category, and the impact ratio for each category.
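Putting those steps together, here is a hedged sketch of the core calculation, assuming a pandas DataFrame with one row per applicant the tool scored. The column names and the tiny dataset are illustrative only; the rule mandates the calculation, not a schema:

```python
import pandas as pd

# Hypothetical audit data: one row per applicant the tool scored.
df = pd.DataFrame({
    "race_ethnicity": [
        "White", "White", "White",
        "Black or African American", "Black or African American",
        "Asian", "Asian",
    ],
    "selected": [1, 1, 0, 1, 0, 1, 0],
})

# Selection rate: share of applicants in each category who passed the screen.
rates = df.groupby("race_ethnicity")["selected"].mean()

# Impact ratio: each category's rate divided by the most-selected category's rate.
ratios = rates / rates.max()

# The publicly posted summary must report headcount, selection rate,
# and impact ratio for each category.
summary = pd.DataFrame({
    "n": df.groupby("race_ethnicity").size(),
    "selection_rate": rates.round(2),
    "impact_ratio": ratios.round(2),
})
print(summary)
```

A real audit runs the same arithmetic over far larger datasets and repeats it for each sex category and, where the data allows, each sex/race intersection.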

Timing. The audit must have been conducted in the twelve months before the tool is used. If you conduct an audit in January 2026, the audit is valid through January 2027. After that, you need a new audit.
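A simple recency check captures the timing rule. This sketch treats "within the past year" as 365 days, which is an approximation for illustration rather than language from the rule:

```python
from datetime import date, timedelta

def audit_is_current(audit_date: date, use_date: date) -> bool:
    # The audit must have been conducted within the year before the tool is used.
    return timedelta(0) <= use_date - audit_date <= timedelta(days=365)

print(audit_is_current(date(2026, 1, 15), date(2026, 12, 1)))  # True
print(audit_is_current(date(2026, 1, 15), date(2027, 3, 1)))   # False: new audit needed
```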

This is a technical process. The auditor needs access to the tool's output data — who it selected, who it didn't, and what demographic categories they fall into. It requires actual numbers, actual calculations, and a formal write-up. It is not a policy review or a design inspection; it is a measurement of real outcomes.

Who Needs a Bias Audit Under NYC LL144

The law applies to employers and employment agencies who use an AEDT in employment decisions for roles located in New York City. This includes:

  • Decisions about candidates for a role
  • Decisions about promotion or other employment decisions about current employees

The employer does not need to be based in New York City. If your company is headquartered in Dallas and you have employees in New York City, and you use an AI tool that helps make decisions about those employees, you're covered.

The scope is limited to tools that "substantially assist or replace" discretionary decision making. A tool that merely stores data and lets a human manually review it probably doesn't qualify. A tool that ranks candidates, assigns scores, generates recommendations, or automatically filters applicants — those are the target.

If you use a third-party platform like an applicant tracking system or a talent intelligence tool, you need to check whether it uses machine learning, statistical modeling, or algorithmic scoring. Many do. The law applies to the employer, not just the vendor. You cannot outsource the audit obligation to the platform provider.

What Happens If You Don't Do One

The NYC Department of Consumer and Worker Protection can impose civil penalties for violations. The penalty amounts are:

  • Up to $500 for a first violation, and for each additional violation occurring on the same day as the first (§ 20-872(a))
  • $500 to $1,500 for each subsequent violation

And violations compound. Each day a non-compliant AEDT is used gives rise to a separate violation, and each failure to provide a required notice to a candidate or employee counts as a separate violation as well. For a busy HR department processing hundreds of applications a month, that adds up fast.
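To see how quickly that compounds, here is an illustrative back-of-the-envelope sketch. The usage volume and notice-failure counts are assumptions invented for the example; only the per-violation dollar figures come from the statute:

```python
# Hypothetical exposure estimate; not a prediction of an actual DCWP assessment.
FIRST_VIOLATION_MAX = 500     # up to $500 (NYC Admin. Code § 20-872(a))
SUBSEQUENT_MAX = 1_500        # up to $1,500 per subsequent violation

days_of_use = 90              # assumed: non-audited tool used daily for a quarter
notice_failures = 300         # assumed: candidates screened without required notice

violations = days_of_use + notice_failures
max_exposure = FIRST_VIOLATION_MAX + (violations - 1) * SUBSEQUENT_MAX
print(f"{violations} violations -> up to ${max_exposure:,}")
# 390 violations -> up to $584,000
```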

In addition to fines, the DCWP can require employers to cease using the tool and can require corrective action. The agency has authority to investigate complaints, conduct audits, and require employers to provide documentation. (NYC Department of Consumer and Worker Protection — Enforcement)

Beyond NYC's specific penalties, using a biased hiring tool exposes an employer to federal civil rights liability. The EEOC has been explicit that AI hiring tools producing discriminatory outcomes can violate Title VII of the Civil Rights Act, regardless of whether the discrimination was intentional. (EEOC Technical Assistance on Artificial Intelligence) The bias audit requirement is NYC's mechanism for making employers catch and address disparate impact before it generates individual discrimination claims.

What a Bias Audit Does Not Do

Understanding the limits of a bias audit matters as much as understanding what it does.

A bias audit does not certify that a hiring tool is fair. It measures whether the tool's current outcomes show meaningful disparities between demographic groups. If the tool has low disparate impact today, that's a good sign, but it doesn't mean the tool is unbiased in any deeper sense, and it doesn't mean it will stay that way as the tool continues to train on new data.

A bias audit does not evaluate the quality of the tool's decisions. An AI that consistently rejects the most qualified candidates across all demographic groups equally would pass a bias audit. The audit tests for equal treatment across groups, not for the accuracy or merit of the tool's selections.

A bias audit does not protect you from other legal obligations. Even a tool with a clean bias audit can violate the Illinois Human Rights Act, the EEOC's guidelines, or other state laws if it's used without proper notice, without human review, or in ways that otherwise violate those statutes. The NYC bias audit satisfies NYC LL144's audit requirement. It doesn't satisfy everything else.

A bias audit does not prevent future problems. It's a snapshot. If a tool is retrained, fine-tuned, or updated, the prior audit may no longer reflect current performance. This is one reason NYC requires annual audits rather than a one-time certification.

How to Get a Bias Audit Done

There are several paths, depending on your situation.

If you're using a major enterprise platform, ask your vendor whether they have retained an independent auditor and whether they make the audit results available. Some larger platforms — including some applicant tracking systems and talent intelligence tools — have arranged audits at the platform level and make summary results available to customers. If your vendor provides this, verify that the auditor is genuinely independent, that the data used reflects actual employment decisions (not just demonstration data), and that the audit was conducted within the past year.

If your vendor doesn't provide this, or if you want to verify their compliance independently, you will need to retain an independent auditor. A number of firms now specialize in AI bias auditing. You're looking for a vendor with a statistician or data scientist who can work with your tool's output data, apply the required calculations, and produce a compliant summary. The audit itself is a quantitative exercise — it doesn't require legal interpretation.

If you don't have sufficient historical data, talk to the auditor about options. The NYC rules allow auditors to use other sources of data if historical data is insufficient, as long as the audit summary explains the limitation.

Before the audit, make sure you can provide the auditor with: a dataset of tool outputs (who was selected, who wasn't), the demographic data associated with those candidates or employees (using EEO-1 categories), and documentation of how the tool was used (what decisions it was used to make, what threshold was used for selection).
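As a rough sketch of what that handoff might look like, here is a hypothetical export file. Every column name here is invented for illustration; the rule specifies the required calculations, not a file format:

```python
import pandas as pd

# Hypothetical data package for the independent auditor.
audit_export = pd.DataFrame({
    "applicant_id":   ["A-001", "A-002", "A-003"],
    "sex":            ["Female", "Male", "Female"],              # EEO-1 sex category
    "race_ethnicity": ["Asian", "White", "Hispanic or Latino"],  # EEO-1 category
    "tool_score":     [82, 64, 71],                              # raw AEDT output
    "selected":       [True, False, True],                       # passed the screen?
})
audit_export.to_csv("aedt_audit_export.csv", index=False)
```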

After the audit, post the required summary on your website and maintain the documentation. If the audit reveals a significant disparity, you'll need to address it — either by adjusting how the tool is used, requesting that the vendor retrain or reconfigure the model, or discontinuing the tool.

The Notice Requirements Under LL144

The bias audit is only one part of LL144 compliance. The law also requires employers to provide specific notices before using the tool.

For job candidates, the employer must provide notice at least ten business days before using the tool, informing them that an AEDT will be used in the assessment, listing the job qualifications and characteristics the tool will use, and providing instructions for how to request an alternative selection process or accommodation. (NYC Admin. Code § 20-871)

For current employees, similar notice must be given at least ten business days before the tool is used.

The notice must also include a link to the publicly posted bias audit summary.

These notice requirements are separate from the audit. Even a perfectly compliant bias audit doesn't satisfy the notice obligation. Both must be done.

What Illinois and Federal Law Add

NYC's LL144 is the only US law that specifically mandates an annual third-party bias audit. But it's not the only law that creates pressure to conduct one.

Illinois HB3773, which took effect January 1, 2026, makes it a civil rights violation under the Illinois Human Rights Act to use AI in employment decisions in a way that "has the effect of" discriminating based on protected class characteristics. (775 ILCS 5/2-102(L)) The law doesn't require an annual third-party audit by name, but to demonstrate that your AI tools are not producing discriminatory effects — which is what you'd need to show in defense of an IDHR charge — a bias audit is the most direct evidence available.

The EEOC's position, stated in its October 2021 initiative on AI and algorithmic fairness, is that AI-assisted employment decisions are subject to Title VII, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA), and that employers are responsible for the discriminatory outcomes of AI tools even if the tools are provided by a third party. (EEOC AI Initiative) A bias audit, while not required under federal law, is the kind of evidence that demonstrates good-faith effort to comply.

The federal Uniform Guidelines on Employee Selection Procedures, adopted jointly by the EEOC, the Department of Labor, the Department of Justice, and the Civil Service Commission, are the foundational framework for disparate impact analysis in employment selection. Those guidelines have applied to employment selection procedures since 1978 and have always covered tests, scoring systems, and other selection mechanisms. AI hiring tools are selection procedures. The guidelines apply. (EEOC Uniform Guidelines)

Where This Is Heading

As of March 2026, NYC is the only US jurisdiction that mandates an annual third-party bias audit by name. But the direction of travel is clear.

Several states are actively considering or have recently considered bias audit requirements. The EU AI Act, which begins applying to high-risk AI systems in August 2026, requires conformity assessments and ongoing monitoring for AI systems used in employment — the underlying requirement is similar in spirit to a bias audit, even if it uses different terminology. (Regulation (EU) 2024/1689)

The more immediate practical pressure is procurement. Large enterprise clients, government contractors, and regulated industries are beginning to ask vendors and service providers whether their AI tools have been audited. Institutional buyers are treating bias audit documentation the way they treat SOC 2 reports: as a baseline requirement for doing business. That trend is accelerating.

If you're a company that sells AI tools or uses them in client-facing work, the question of whether you have a bias audit is increasingly a business question, not just a compliance question.

Where to Start

If you're an employer with employees in New York City and you use any AI tool in hiring or employment decisions, the first step is determining whether the tool qualifies as an AEDT under the law's definition. If it does, verify whether your vendor has conducted an audit, confirm the audit's independence and recency, check whether the required notice is going to candidates and employees, and confirm the summary is posted on your website.

If you're not sure whether your tool qualifies, describe what it does and apply the definition. Does it use machine learning, statistical modeling, data analytics, or AI? Does it produce a score, classification, or recommendation? Is that output used to substantially assist or replace a hiring decision? If the answer to all three is yes, you have an AEDT and you need a bias audit.

This is not a process you can leave entirely to vendors. The obligation belongs to the employer. The documentation needs to exist. And if someone files a complaint, the first thing DCWP will ask for is your audit summary and your notice records.

Our AI Bias Audit Template includes the audit summary format, candidate notice templates, and posting checklist required under LL144 — and our NYC Local Law 144 compliance package covers the full set of employer obligations from audit through notice requirements.


Sources: every statutory and regulatory fact in this article was verified against the enacted law and implementing regulations cited inline above.


Why Bias Testing Looks at Outcomes, Not Intentions
Imagine a teacher creates a final exam that she genuinely believes is fair. She didn't write the questions to favor any particular group of students. She used the same textbook material everyone had access to. She gave everyone the same amount of time. From her perspective, the test is completely neutral. But when the grades come in, students who speak English as a second language fail at three times the rate of native English speakers — not because the subject matter is harder for them, but because the questions are phrased in unnecessarily complex English that tests language fluency more than subject knowledge. The teacher didn't intend to discriminate. But the result is discriminatory.

This is exactly the logic behind bias audits for AI hiring tools. The law doesn't ask whether the programmer intended to build a biased tool. It doesn't ask whether the company wanted to discriminate. It asks a simpler, more measurable question: do the outcomes look different for different groups? If your AI resume screener advances 40% of white applicants but only 20% of Black applicants, the numbers speak for themselves — regardless of what the algorithm was "trying" to do. That gap is what employment law calls adverse impact, and it's been the standard for evaluating discrimination in hiring since 1978.

The reason the law focuses on outcomes rather than intentions is practical. You can't look inside an AI model and identify a line of code that says "discriminate against this group." Modern machine learning models are complex enough that even their creators often can't fully explain why a particular applicant received a particular score. But you can look at the results — who got through the screen and who didn't — and measure whether those results break down along demographic lines. That measurement is something an independent auditor can perform, document, and publish. It's concrete and verifiable in a way that "we didn't mean to discriminate" never can be.

This is why NYC Local Law 144 requires selection rate calculations and impact ratios broken down by the same race, ethnicity, and sex categories used in federal EEO-1 reporting. The audit doesn't ask the tool to explain itself. It measures what the tool actually does to real applicants, compares those outcomes across groups, and flags any disparity that crosses the four-fifths threshold established by the federal Uniform Guidelines. It's the difference between trusting a restaurant's claim that their kitchen is clean and sending a health inspector to look.

Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.
