Your Hiring Software Probably Uses AI. Here's Why That Matters Now.
Tags: AI hiring, employment law, Illinois HB3773, NYC LL144, Colorado SB24-205, bias audit


AI Compliance Documents Team · 14 min read

Two-Sentence Summary

Most businesses using hiring software don't realize that their applicant tracking systems, resume screeners, and performance tools legally count as 'artificial intelligence' under new state laws in Illinois, Colorado, New York City, and California. If those tools help make decisions about who gets hired, fired, promoted, or scheduled, the employer — not the software vendor — is responsible for making sure they don't discriminate and that employees are properly notified.

Most employers who are affected by the new AI employment laws don't think of themselves as companies that "use AI." They think of themselves as companies that use software. An applicant tracking system. A resume screening tool. An interview scheduling platform. A workforce analytics dashboard.

That's a completely reasonable way to think about it. Five years ago, it was the only way to think about it. But the legal landscape has shifted, and the distinction between "software" and "AI" now matters in ways it didn't before.

Here's the situation. Multiple states and jurisdictions have passed laws that specifically regulate the use of artificial intelligence in employment decisions. Illinois, Colorado, New York City, and California all have laws on the books — some already in effect, some taking effect this year. The underlying federal anti-discrimination statutes, including Title VII of the Civil Rights Act (42 U.S.C. § 2000e-2), have always applied to hiring tools that produce discriminatory outcomes, and that includes AI-driven ones. And the definitions of "artificial intelligence" in these state and city laws are broad enough to cover tools that most employers don't think of as AI at all.

So the first question isn't "are we compliant?" The first question is "are we using AI?" And for a surprising number of businesses, the honest answer is "we don't actually know."

What Counts as AI Under These Laws

Let's start with what the laws actually say. The definitions vary by jurisdiction, but they share a common structure.

Illinois defines artificial intelligence as "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (775 ILCS 5/2-102(N)). That definition also explicitly includes generative AI.

New York City's Local Law 144 uses the term "automated employment decision tool" and defines it as "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons" (NYC Admin. Code § 20-870).

Colorado's SB 24-205 focuses on "high-risk artificial intelligence systems" — AI systems that make, or are a substantial factor in making, "consequential decisions" about consumers in areas including employment (C.R.S. § 6-1-1701 et seq.).

Read those definitions and then think about the software your HR team uses every day.

Does your applicant tracking system rank or score resumes? That's a system generating a recommendation based on inputs — which fits Illinois's definition. Does your screening tool filter candidates based on keywords, qualifications, or "fit" scores? That's a simplified output used to assist decision making — which fits NYC's definition. Does your interview platform use any kind of assessment that produces a score? That's a classification derived from data analytics.

None of these tools need to be labeled "AI" by the vendor for the law to apply. What matters is what the tool does, not what it's called.

The Tools You Might Not Realize Qualify

The obvious ones are the tools that market themselves as AI-powered. If a vendor's website says "AI-driven talent matching" or "machine learning-powered screening," you can be fairly confident that tool falls under these laws.

But many of the tools that qualify don't advertise themselves that way. Here are some categories worth looking at.

Resume parsing and screening tools. If your ATS does anything more than store resumes — if it ranks them, scores them, highlights "top candidates," or filters out applicants who don't meet certain criteria using any kind of algorithm — it likely qualifies. The algorithm doesn't need to be sophisticated. Illinois's law specifically calls out the use of zip codes as a proxy for protected classes (775 ILCS 5/2-102(L)(1)), but that's not the only proxy that exists. A rules-based system that filters on employment gaps, school names, or prior job titles can also produce discriminatory outcomes.

Workforce scheduling and management platforms. Some scheduling tools use predictive algorithms to determine shift assignments, flag attendance patterns, or forecast staffing needs. Illinois's law covers AI used in "discharge, discipline, tenure, or the terms, privileges, or conditions of employment" (775 ILCS 5/2-102(L)(1)). If an algorithm influences who gets hours, who gets flagged for performance issues, or who gets scheduled for undesirable shifts, that's an employment decision covered by the statute.

Performance review tools. Any platform that aggregates employee data and produces a performance score, a "flight risk" rating, or a promotion recommendation is generating an AI output that affects employment terms. If that score influences who gets promoted or who gets put on a performance improvement plan, it's within scope.

Pre-employment assessment platforms. Video interview analysis, personality assessments, skills testing, and cognitive ability screening — if any of these produce a score or classification that helps determine whether a candidate advances, they qualify under NYC's LL144 definition. That definition covers any computational process that "issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making" (NYC Admin. Code § 20-870). That's true even if a human makes the final decision. The law covers tools that substantially assist human decision-makers, not just tools that replace them entirely.

Chatbots and automated outreach. If you use a chatbot to conduct initial candidate screening — asking qualifying questions and determining who advances to a human interview based on the responses — that chatbot is making or assisting in an employment decision.

The common thread is this: if a tool processes information about a person and produces any kind of output that influences an employment-related decision about that person, it's almost certainly covered.

Why "The Vendor Handles Compliance" Isn't How This Works

This is the assumption that trips up the most companies. A business buys an AI-powered hiring tool, the vendor says it's compliant, and the business assumes that's the end of the conversation.

It isn't. Under every major AI employment law passed so far, the employer bears the primary compliance obligation. Colorado's law makes this explicit by separating "developers" (who build the systems) from "deployers" (who use them) and placing independent legal duties on both (C.R.S. § 6-1-1701 et seq.). The developer may have its own obligations — Colorado requires developers to provide deployers with documentation and disclosures — but the deployer's obligations exist regardless.

NYC Local Law 144 places the duty squarely on the employer or employment agency: it is unlawful for an employer to use an AEDT unless the tool has been the subject of a bias audit conducted within the past year, the results are publicly posted, and specific notices have been given to employees and candidates (NYC Admin. Code § 20-871). The employer can't satisfy those requirements by pointing to the vendor.

Illinois requires the employer — not the vendor — to provide notice to employees that AI is being used (775 ILCS 5/2-102(L)(2)). And the underlying civil rights violation — using AI that has the effect of discriminating — is the employer's liability, not the vendor's.

What does this mean practically? It means you need to know what the tool does, how it works, what data it uses, and whether its outputs are fair. Your vendor should be able to help. But "should be able to help" and "handles it for you" are very different things.

What "Has the Effect Of" Means and Why It's the Phrase That Matters Most

Illinois's law uses a phrase that's easy to read past but is actually the most important language in the statute. It is a civil rights violation for an employer to use artificial intelligence "that has the effect of subjecting employees to discrimination on the basis of protected classes" (775 ILCS 5/2-102(L)(1)).

"Has the effect of." Not "intentionally discriminates." Not "knowingly discriminates." The effect is what the law cares about.

This is not a new legal concept. Disparate impact theory has been part of employment discrimination law since the Supreme Court's decision in Griggs v. Duke Power Co. in 1971 (401 U.S. 424), and it's codified in Title VII (42 U.S.C. § 2000e-2(k)). What's new is its explicit application to AI in state-level statutes. And what makes AI uniquely risky in this context is that AI systems learn from data, and data reflects the world as it has been — not the world as it should be.

Here's a concrete example. Suppose a resume screening tool is trained on five years of a company's hiring data. During those five years, the company hired mostly from certain universities, mostly people with certain career trajectories, mostly people in certain zip codes. The AI learns to favor those patterns — not because anyone told it to, but because those patterns correlate with "successful hires" in the training data. If those patterns also correlate with race, gender, or age, the AI will reproduce that correlation in its recommendations. It will screen out qualified candidates from underrepresented groups at a higher rate, and no one at the company will have decided to do that.

That's algorithmic discrimination. It's the whole reason these laws exist. And the only way to know whether it's happening is to test for it — which is exactly what these laws require. NYC's LL144 mandates annual bias audits that calculate selection rates and impact ratios across demographic categories (6 RCNY § 5-301). Colorado requires deployers to complete impact assessments and conduct annual reviews (C.R.S. § 6-1-1701 et seq.).
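The arithmetic behind those audits is straightforward to state: compute each group's selection rate, then divide by the highest group's rate to get an impact ratio. Here is a minimal sketch in Python. The group names and counts are hypothetical illustration data, not audit results, and the 0.8 "four-fifths" figure mentioned in the comment is an EEOC guideline for flagging adverse impact, not a threshold set by LL144 itself.

```python
# Minimal sketch of the selection-rate / impact-ratio math used in
# LL144-style bias audits. Counts below are made-up illustration data.

def selection_rates(outcomes):
    """outcomes: {category: (selected, total)} -> {category: selection rate}"""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(rates):
    """Each category's selection rate divided by the highest category's rate."""
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

outcomes = {
    "group_a": (120, 400),  # 30% selection rate
    "group_b": (45, 300),   # 15% selection rate
}
ratios = impact_ratios(selection_rates(outcomes))
print(ratios)  # {'group_a': 1.0, 'group_b': 0.5}

# A ratio well below 0.8 (the EEOC four-fifths guideline) is a common
# red flag — here group_b is selected at half group_a's rate.
```

LL144's implementing rules require these ratios to be calculated and published annually; the audit report itself has additional requirements beyond this arithmetic.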

The Practical Starting Point

If you've gotten this far and you're thinking "I need to figure out whether this applies to me," here are the steps that make sense right now.

Start with an inventory. Make a list of every software tool your company uses that touches hiring, promotion, performance evaluation, scheduling, discipline, or termination. For each one, find out whether it uses machine learning, AI, predictive analytics, algorithmic scoring, or automated decision-making. Ask the vendor directly. Get the answer in writing. Our AI bias audit template includes an inventory framework built specifically for employment contexts.
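The inventory step can be as simple as one structured record per tool. A minimal sketch, assuming nothing beyond the steps described above — the field names, and the idea of tracking whether the vendor's answer is in writing, are our own bookkeeping suggestions, not a format any statute prescribes:

```python
# Hypothetical inventory record for HR software tools. Field names are
# illustrative assumptions, not a statutory or regulatory format.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HRToolRecord:
    name: str
    vendor: str
    decisions_touched: List[str]  # e.g. ["hiring", "scheduling", "discipline"]
    uses_algorithmic_scoring: Optional[bool] = None  # None = vendor not yet asked
    vendor_answer_in_writing: bool = False

inventory = [
    HRToolRecord("Applicant tracking system", "ExampleVendor",
                 ["hiring"], uses_algorithmic_scoring=True),
    HRToolRecord("Shift scheduler", "OtherVendor", ["scheduling"]),
]

# Tools whose AI status is still unknown are the first vendor follow-ups.
unknown = [t.name for t in inventory if t.uses_algorithmic_scoring is None]
print(unknown)  # ['Shift scheduler']
```

The point of the `None` default is to make "we haven't asked the vendor yet" visible and queryable, rather than letting unanswered questions blend in with "no."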

For any tool that qualifies, find out what data it uses, what outputs it produces, and how those outputs influence employment decisions. This isn't a legal analysis — it's just understanding what's happening inside your own operation. Many employers go through this exercise and are surprised by what they find.

Then look at where you operate. If you have employees, candidates, or applicants in Illinois, Colorado, New York City, California, or any of the other states with AI or comprehensive privacy laws, you probably have obligations. Our multi-state employer AI disclosure kit consolidates the notice and disclosure requirements across jurisdictions so you're not tracking each state separately. The specifics vary by jurisdiction — you can read the enacted statutes at ilga.gov (Illinois), leg.colorado.gov (Colorado), legistar.council.nyc.gov (NYC), and leginfo.legislature.ca.gov (California) — but the baseline is consistent: know what AI you're using, assess whether it's fair, notify the people it affects, and document the whole thing.

This isn't something that requires a six-month project. For most mid-size companies, the inventory takes a week. The vendor conversations take another week or two. The documentation — the assessments, notices, and policies — takes more time, but it's time you'll be glad you spent when the alternative is reacting to a complaint without any documentation to point to.

If none of this was on your radar before today, that's fine. These laws are new. The regulatory landscape is moving fast. There is no version of this where everyone already knows everything. The only thing that matters is that you're paying attention now.

What Is Algorithmic Discrimination?
Imagine a teacher is grading essays, but instead of reading them, she feeds them into a computer program that gives each one a score. The program was trained by studying essays that got good grades in the past. Sounds fair, right? But what if, in past years, the teacher unconsciously gave higher grades to students with certain names or from certain neighborhoods? The computer would learn those same biases — not because anyone told it to be unfair, but because the old data it learned from was already unfair. That's algorithmic discrimination in a nutshell: a computer system producing unfair results because the patterns it learned reflect human biases baked into the data.

In hiring, this plays out in real and measurable ways. A resume screening tool might learn that 'successful hires' in the past tended to come from certain universities, live in certain zip codes, or have certain career paths. If those patterns happen to line up with race, gender, or age — which they often do, because of decades of inequality in education and employment — the AI will favor candidates who fit those patterns and screen out everyone else. Nobody programmed it to discriminate. It just learned from a world that already had discrimination built in.

What makes this especially tricky is that it's invisible. A hiring manager using a biased AI tool doesn't see the bias — they just see a ranked list of candidates and assume the tool did its job. The candidates who were filtered out never know it happened. That's why laws like Illinois HB3773 and NYC Local Law 144 focus on the effect of the tool, not the intent behind it. If the result is that one group of people gets screened out at a higher rate, that's a legal problem — even if nobody meant for it to happen.

The fix isn't to stop using AI in hiring — it's to test it. That means regularly checking whether the tool's recommendations show patterns that track protected characteristics like race, gender, age, or disability. NYC's law actually requires an annual bias audit that calculates selection rates across demographic groups. The idea is simple: if you're going to let a machine help make decisions about people's careers, you have a responsibility to make sure it's being fair.

Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.
