
Your Hiring Software Probably Uses AI. Here's Why That Matters Now.
Two-Sentence Summary
Most businesses using hiring software don't realize that their applicant tracking systems, resume screeners, and performance tools legally count as "artificial intelligence" under new state and local laws in Illinois, Colorado, New York City, and California. If those tools help make decisions about who gets hired, fired, promoted, or scheduled, the employer — not the software vendor — is responsible for making sure they don't discriminate and that employees are properly notified.
Most employers who are affected by the new AI employment laws don't think of themselves as companies that "use AI." They think of themselves as companies that use software. An applicant tracking system. A resume screening tool. An interview scheduling platform. A workforce analytics dashboard.
That's a completely reasonable way to think about it. Five years ago, it was the only way to think about it. But the legal landscape has shifted, and the distinction between "software" and "AI" now matters in ways it didn't before.
Here's the situation. Multiple states and jurisdictions have passed laws that specifically regulate the use of artificial intelligence in employment decisions. Illinois, Colorado, New York City, and California all have laws on the books — some already in effect, some taking effect this year. The underlying federal anti-discrimination statutes, including Title VII of the Civil Rights Act (42 U.S.C. § 2000e-2), have always applied to hiring tools that produce discriminatory outcomes, and that includes AI-driven ones. And the definitions of "artificial intelligence" in these state and city laws are broad enough to cover tools that most employers don't think of as AI at all.
So the first question isn't "are we compliant?" The first question is "are we using AI?" And for a surprising number of businesses, the honest answer is "we don't actually know."
What counts as AI under Illinois, Colorado, New York City, and California employment laws?
Illinois defines AI as any machine-based system that infers how to generate predictions, recommendations, or decisions from input data (775 ILCS 5/2-102(N)). NYC's Local Law 144 covers any computational process derived from machine learning that produces a score or classification used to assist hiring decisions. Colorado's SB 24-205 targets high-risk AI systems that are a substantial factor in consequential employment decisions.
Let's start with what the laws actually say. The definitions vary by jurisdiction, but they share a common structure.
Illinois defines artificial intelligence as "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (775 ILCS 5/2-102(N)). That definition also explicitly includes generative AI.
New York City's Local Law 144 uses the term "automated employment decision tool" (AEDT) and defines it as "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons" (NYC Admin. Code § 20-870).
Colorado's SB 24-205 focuses on "high-risk artificial intelligence systems" — AI systems that make, or are a substantial factor in making, "consequential decisions" about consumers in areas including employment (C.R.S. § 6-1-1701 et seq.).
Read those definitions and then think about the software your HR team uses every day.
Does your applicant tracking system rank or score resumes? That's a system generating a recommendation based on inputs — which fits Illinois's definition. Does your screening tool filter candidates based on keywords, qualifications, or "fit" scores? That's a simplified output used to assist decision making — which fits NYC's definition. Does your interview platform use any kind of assessment that produces a score? That's a classification derived from data analytics.
None of these tools need to be labeled "AI" by the vendor for the law to apply. What matters is what the tool does, not what it's called.
Which HR software tools qualify as AI under state employment laws even without AI branding?
Resume parsers that rank or score applicants, workforce scheduling platforms that use predictive algorithms, performance review tools that generate flight-risk ratings, and pre-employment assessments that issue a score or classification all qualify — even if the vendor never uses the word "AI." Under NYC Local Law 144, any computational process producing a simplified output that substantially assists hiring decisions is covered.
The obvious ones are the tools that market themselves as AI-powered. If a vendor's website says "AI-driven talent matching" or "machine learning-powered screening," you can be fairly confident that tool falls under these laws.
But many of the tools that qualify don't advertise themselves that way. Here are some categories worth looking at.
Resume parsing and screening tools. If your ATS does anything more than store resumes — if it ranks them, scores them, highlights "top candidates," or filters out applicants who don't meet certain criteria using any kind of algorithm — it likely qualifies. The algorithm doesn't need to be sophisticated. Illinois's law specifically calls out the use of zip codes as a proxy for protected classes (775 ILCS 5/2-102(L)(1)), but that's not the only proxy that exists. A rules-based system that filters on employment gaps, school names, or prior job titles can also produce discriminatory outcomes; a short sketch at the end of this section shows how.
Workforce scheduling and management platforms. Some scheduling tools use predictive algorithms to determine shift assignments, flag attendance patterns, or forecast staffing needs. Illinois's law covers AI used in "discharge, discipline, tenure, or the terms, privileges, or conditions of employment" (775 ILCS 5/2-102(L)(1)). If an algorithm influences who gets hours, who gets flagged for performance issues, or who gets scheduled for undesirable shifts, that's an employment decision covered by the statute.
Performance review tools. Any platform that aggregates employee data and produces a performance score, a "flight risk" rating, or a promotion recommendation is generating an AI output that affects employment terms. If that score influences who gets promoted or who gets put on a performance improvement plan, it's within scope.
Pre-employment assessment platforms. Video interview analysis, personality assessments, skills testing, and cognitive ability screening — if any of these produce a score or classification that helps determine whether a candidate advances, they qualify under NYC's LL144 definition of a computational process that issues a "score, classification, or recommendation" used to "substantially assist or replace discretionary decision making" (NYC Admin. Code § 20-870). That's true even if a human makes the final decision: the law covers tools that substantially assist human decision-makers, not just tools that replace them entirely.
Chatbots and automated outreach. If you use a chatbot to conduct initial candidate screening — asking qualifying questions and determining who advances to a human interview based on the responses — that chatbot is making or assisting in an employment decision.
The common thread is this: if a tool processes information about a person and produces any kind of output that influences an employment-related decision about that person, it's almost certainly covered.
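To make the resume-screening point concrete, here is a minimal, hypothetical sketch in Python of a rules-based screener. Nothing in it is drawn from any real vendor's product; the school, zip code, and employment-gap criteria are invented purely to show how a tool with no machine learning at all can still issue a score that screens people in or out.

```python
# Hypothetical illustration only -- not any vendor's actual logic.
# A "rules-based" screener with no machine learning can still produce
# a simplified output that influences who advances.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    zip_code: str
    months_since_last_job: int
    school: str


PREFERRED_SCHOOLS = {"State University", "Tech Institute"}  # assumed criteria
PREFERRED_ZIP_PREFIXES = {"606", "602"}                     # geography used as a filter


def screen(applicant: Applicant) -> bool:
    """Return True if the applicant advances to human review."""
    score = 0
    if applicant.school in PREFERRED_SCHOOLS:
        score += 1
    if applicant.zip_code[:3] in PREFERRED_ZIP_PREFIXES:
        score += 1  # zip codes can proxy for race or national origin
    if applicant.months_since_last_job <= 6:
        score += 1  # employment gaps can proxy for caregiving or disability
    return score >= 2  # a score threshold is a classification
```

A filter like this never references a protected class, yet every one of its criteria can correlate with one. That is exactly the "effect" the statutes are written to catch.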
Why can't employers rely on their AI vendor to handle employment law compliance?
Under every major AI employment law, the employer bears the primary compliance obligation — not the vendor. Colorado's SB 24-205 places independent legal duties on "deployers" regardless of what developers provide. NYC Local Law 144 requires the employer to commission an annual bias audit, post results publicly, and deliver candidate notices — obligations that cannot be satisfied by pointing to the vendor.
This is the assumption that trips up the most companies. A business buys an AI-powered hiring tool, the vendor says it's compliant, and the business assumes that's the end of the conversation.
It isn't. Under every major AI employment law passed so far, the employer bears the primary compliance obligation. Colorado's law makes this explicit by separating "developers" (who build the systems) from "deployers" (who use them) and placing independent legal duties on both (C.R.S. § 6-1-1701 et seq.). The developer may have its own obligations — Colorado requires developers to provide deployers with documentation and disclosures — but the deployer's obligations exist regardless.
NYC Local Law 144 places the duty squarely on the employer or employment agency: it is unlawful for an employer to use an AEDT unless the tool has been the subject of a bias audit conducted within the past year, the results are publicly posted, and specific notices have been given to employees and candidates (NYC Admin. Code § 20-871). The employer can't satisfy those requirements by pointing to the vendor.
Illinois requires the employer — not the vendor — to provide notice to employees that AI is being used (775 ILCS 5/2-102(L)(2)). And the underlying civil rights violation — using AI that has the effect of discriminating — is the employer's liability, not the vendor's.
What does this mean practically? It means you need to know what the tool does, how it works, what data it uses, and whether its outputs are fair. Your vendor should be able to help. But "should be able to help" and "handles it for you" are very different things.
What does "has the effect of" mean in Illinois's AI employment law, and why does it matter?
Illinois prohibits AI that "has the effect of subjecting employees to discrimination" (775 ILCS 5/2-102(L)(1)) — not AI that intentionally discriminates. This disparate-impact standard, rooted in Griggs v. Duke Power Co. (1971) and Title VII (42 U.S.C. § 2000e-2(k)), means an employer is liable if an AI tool produces discriminatory outcomes even when no one intended bias.
Illinois's law uses a phrase that's easy to read past but is actually the most important language in the statute. It is a civil rights violation for an employer to use artificial intelligence "that has the effect of subjecting employees to discrimination on the basis of protected classes" (775 ILCS 5/2-102(L)(1)).
"Has the effect of." Not "intentionally discriminates." Not "knowingly discriminates." The effect is what the law cares about.
This is not a new legal concept. Disparate impact theory has been part of employment discrimination law since the Supreme Court's decision in Griggs v. Duke Power Co. in 1971 (401 U.S. 424), and it's codified in Title VII (42 U.S.C. § 2000e-2(k)). What's new is its explicit application to AI in state-level statutes. And what makes AI uniquely risky in this context is that AI systems learn from data, and data reflects the world as it has been — not the world as it should be.
Here's a concrete example. Suppose a resume screening tool is trained on five years of a company's hiring data. During those five years, the company hired mostly from certain universities, mostly people with certain career trajectories, mostly people in certain zip codes. The AI learns to favor those patterns — not because anyone told it to, but because those patterns correlate with "successful hires" in the training data. If those patterns also correlate with race, gender, or age, the AI will reproduce that correlation in its recommendations. It will screen out qualified candidates from underrepresented groups at a higher rate, and no one at the company will have decided to do that.
That's algorithmic discrimination. It's the whole reason these laws exist. And the only way to know whether it's happening is to test for it — which is exactly what these laws require. NYC's LL144 mandates annual bias audits that calculate selection rates and impact ratios across demographic categories (6 RCNY § 5-301). Colorado requires deployers to complete impact assessments and conduct annual reviews (C.R.S. § 6-1-1701 et seq.).
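For readers who want to see what that testing actually computes, here is a minimal sketch of the selection-rate and impact-ratio arithmetic the DCWP rules describe. The applicant counts are invented, and a real LL144 bias audit must be performed by an independent auditor following the published rules; this only illustrates the math.

```python
# Minimal sketch of selection-rate / impact-ratio arithmetic.
# Counts are invented; a real LL144 bias audit must be conducted by an
# independent auditor under the DCWP rules (6 RCNY § 5-301).

applicants = {  # category -> counts of applicants selected vs. total
    "Male":   {"selected": 180, "total": 400},
    "Female": {"selected": 120, "total": 400},
}

# Selection rate: share of applicants in each category who were selected.
selection_rates = {
    cat: counts["selected"] / counts["total"] for cat, counts in applicants.items()
}

# Impact ratio: each category's selection rate divided by the highest rate.
highest_rate = max(selection_rates.values())
impact_ratios = {cat: rate / highest_rate for cat, rate in selection_rates.items()}

for cat in applicants:
    print(f"{cat}: selection rate {selection_rates[cat]:.2f}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
# Male: selection rate 0.45, impact ratio 1.00
# Female: selection rate 0.30, impact ratio 0.67
```

In this made-up example, one group is selected at two-thirds the rate of the other. Whether a given ratio is legally acceptable depends on the facts, but a gap like that is precisely what an audit is designed to surface before a regulator or plaintiff does.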
What is the practical starting point for employers who need to assess AI employment law compliance?
Start with a software inventory: list every tool touching hiring, promotion, performance, scheduling, or termination, then ask each vendor in writing whether it uses machine learning, algorithmic scoring, or predictive analytics. For tools that qualify, document what data they use and how outputs influence decisions — this inventory is the foundation every state law's compliance framework requires.
If you've gotten this far and you're thinking "I need to figure out whether this applies to me," here are the steps that make sense right now.
Start with an inventory. Make a list of every software tool your company uses that touches hiring, promotion, performance evaluation, scheduling, discipline, or termination. For each one, find out whether it uses machine learning, AI, predictive analytics, algorithmic scoring, or automated decision-making. Ask the vendor directly. Get the answer in writing. Our AI bias audit template includes an inventory framework built specifically for employment contexts.
For any tool that qualifies, find out what data it uses, what outputs it produces, and how those outputs influence employment decisions. This isn't a legal analysis — it's just understanding what's happening inside your own operation. Many employers go through this exercise and are surprised by what they find.
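There is no mandated format for this inventory. A simple structured record per tool is usually enough to start; the sketch below assumes a plain CSV and uses illustrative field names, which you should adapt to whatever your counsel or auditor asks for.

```python
# One possible way to structure the inventory -- field names are illustrative,
# not drawn from any statute's required format.

import csv

INVENTORY_FIELDS = [
    "tool_name",             # e.g., the ATS or assessment platform
    "vendor",
    "uses_ml_or_scoring",    # vendor's written answer: yes / no / unknown
    "employment_decisions",  # hiring, promotion, scheduling, discipline, termination
    "data_inputs",           # resumes, video, performance metrics, attendance
    "outputs",               # score, ranking, classification, recommendation
    "jurisdictions",         # where affected candidates or employees are located
    "vendor_response_date",
]


def write_inventory(rows: list[dict], path: str = "ai_tool_inventory.csv") -> None:
    """Write the inventory to a CSV so it can be shared with counsel or auditors."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=INVENTORY_FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```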
Then look at where you operate. If you have employees, candidates, or applicants in Illinois, Colorado, New York City, California, or any of the other states with AI or comprehensive privacy laws, you probably have obligations. Our multi-state employer AI disclosure kit consolidates the notice and disclosure requirements across jurisdictions so you're not tracking each state separately. The specifics vary by jurisdiction — you can read the enacted statutes at ilga.gov (Illinois), leg.colorado.gov (Colorado), legistar.council.nyc.gov (NYC), and leginfo.legislature.ca.gov (California) — but the baseline is consistent: know what AI you're using, assess whether it's fair, notify the people it affects, and document the whole thing.
This isn't something that requires a six-month project. For most mid-size companies, the inventory takes a week. The vendor conversations take another week or two. The documentation — the assessments, notices, and policies — takes more time, but it's time you'll be glad you spent when the alternative is reacting to a complaint without any documentation to point to.
If none of this was on your radar before today, that's fine. These laws are new. The regulatory landscape is moving fast. There is no version of this where everyone already knows everything. The only thing that matters is that you're paying attention now.
What Is Algorithmic Discrimination?
Imagine a teacher is grading essays, but instead of reading them, she feeds them into a computer program that gives each one a score. The program was trained by studying essays that got good grades in the past. Sounds fair, right? But what if, in past years, the teacher unconsciously gave higher grades to students with certain names or from certain neighborhoods? The computer would learn those same biases — not because anyone told it to be unfair, but because the old data it learned from was already unfair. That's algorithmic discrimination in a nutshell: a computer system producing unfair results because the patterns it learned reflect human biases baked into the data.
In hiring, this plays out in real and measurable ways. A resume screening tool might learn that 'successful hires' in the past tended to come from certain universities, live in certain zip codes, or have certain career paths. If those patterns happen to line up with race, gender, or age — which they often do, because of decades of inequality in education and employment — the AI will favor candidates who fit those patterns and screen out everyone else. Nobody programmed it to discriminate. It just learned from a world that already had discrimination built in.
What makes this especially tricky is that it's invisible. A hiring manager using a biased AI tool doesn't see the bias — they just see a ranked list of candidates and assume the tool did its job. The candidates who were filtered out never know it happened. That's why laws like Illinois HB3773 and NYC Local Law 144 focus on the effect of the tool, not the intent behind it. If the result is that one group of people gets screened out at a higher rate, that's a legal problem — even if nobody meant for it to happen.
The fix isn't to stop using AI in hiring — it's to test it. That means regularly checking whether the tool's recommendations show patterns that track protected characteristics like race, gender, age, or disability. NYC's law actually requires an annual bias audit that calculates selection rates across demographic groups. The idea is simple: if you're going to let a machine help make decisions about people's careers, you have a responsibility to make sure it's being fair.
References
- [1] Title VII of the Civil Rights Act, 42 U.S.C. § 2000e-2 (Unlawful Employment Practices)
- [2] NYC Local Law 144 (Int. 1894-2020), Automated Employment Decision Tools
- [3] NYC DCWP Rules for Automated Employment Decision Tools, 6 RCNY § 5-301
- [4] Griggs v. Duke Power Co., 401 U.S. 424 (1971)
- [5] Colorado SB 24-205, Consumer Protections for Artificial Intelligence
- [6] California Legislature (CCPA/ADMT statutes)
Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.