
Was an Algorithm the Reason You Didn’t Get the Job? Your Rights Under California’s New AI Rules

You applied for a job you were qualified for. Maybe you spent hours tailoring your resume and cover letter. Maybe you had the right experience, the right background, the right credentials. Then, sometimes within minutes, you received a rejection. More often, you never heard anything back at all.

It may not have been a human who decided you weren’t a fit: it might have been an AI algorithm. Until recently, the law had not clearly addressed what happens when those systems discriminate.

That changed on October 1, 2025, when new California regulations on AI hiring discrimination took effect, making California the first state to comprehensively address AI discrimination in employment under its civil rights framework. Under these rules, employers cannot hide behind the complexity of their technology or their contracts with vendors. If an algorithm discriminates against you, your employer is responsible.

Here’s what you need to know about what the new law covers, how AI tools can harm employees from protected groups, and what you can do if you believe an automated system cost you a job or a promotion.

What Is an Automated Decision System?

According to the updated California Code of Regulations, § 11008.1(a), an automated decision system, or ADS, is defined as: 

“[A] computational process that makes a decision or facilitates human decision making regarding an employment benefit […] An Automated-Decision System may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing methods.”

In other words, it’s a computer program that shapes employment-related decisions. The definition is broad on purpose: it covers not just the tools that make final decisions, but also any system that can meaningfully shape a human’s decisions about someone else’s employment.

The tools themselves are already everywhere. Large and mid-size employers across the Bay Area and beyond now routinely use ADS tools to:

  • Screen and filter resumes before a human recruiter sees them, eliminating candidates who don’t match a pre-set profile
  • Conduct and evaluate automated video interviews, analyzing word choice, vocal tone, and facial expressions to generate a candidate score
  • Administer gamified assessments and cognitive tests that generate numerical rankings used to advance or eliminate applicants
  • Deliver job advertisements algorithmically, directing listings to certain demographic groups while excluding others and shaping who even has the opportunity to apply
  • Monitor employee productivity in real time and generate performance scores that inform decisions about promotions, raises, discipline, and termination

Many applicants and employees encounter these tools without knowing it. A request to complete a brief “assessment” before speaking with a recruiter, or a video interview platform with no human on the other end, is a common sign that an ADS is in use.

California’s New AI Hiring Rules: What Changed on October 1, 2025

What is California’s automated decision system law?

On October 1, 2025, regulations developed by the California Civil Rights Council took effect, amending the regulatory framework of the Fair Employment and Housing Act (FEHA) to address the use of artificial intelligence in employment. These are formal clarifications of how FEHA’s existing anti-discrimination protections apply to automated tools. The practical effect is significant: every protection California workers have always had against employment discrimination now explicitly applies to decisions made or influenced by AI.

The law includes two key provisions:

  • Using an ADS in a way that discriminates based on protected characteristics is unlawful.
  • An employer does not need to have chosen an ADS for discriminatory reasons, or even to know that the tool was producing discriminatory outcomes. The employer is still liable for the tool’s disparate impact.

These basic standards apply to all California businesses with five or more employees.

The regulations also address transparency. Under the related Automated Decision-Making Technology rules developed in parallel by the California Privacy Protection Agency, employers and businesses using ADS tools are required to provide employees and applicants with pre-use notice that explains:

  • When and how automated tools are being used
  • What the basis for any decision is
  • What rights the individual has to opt out or request human review

Employees who are evaluated, scored, or ranked by an automated system have the right to ask for a human to review that decision instead.

The Vendor Problem: “The AI Did It” Isn’t a Defense

Can an employer blame the AI vendor if its tool discriminates and the employer didn’t know about it? No. That’s the whole point of the new regulations.

Many companies that use AI hiring tools did not build those tools themselves. They purchased or licensed them from third-party vendors, including well-known platforms used by major employers across industries. The implicit assumption has often been that if a vendor’s algorithm discriminates, the responsibility lies with the vendor, not the employer who chose to use it.

The new regulations reject that assumption entirely. The rules extend FEHA liability to an employer’s agent, which is defined as any person or entity acting on behalf of an employer, directly or indirectly, to perform functions traditionally performed by the employer. That definition explicitly includes third-party vendors conducting applicant recruitment, screening, hiring, performance evaluation, or any other employment function using an automated decision system. The employer who deployed the tool is liable for what the tool does.

This interpretation is consistent with a significant development in federal court. In Mobley v. Workday, Inc., a federal judge in the Northern District of California allowed discrimination claims to proceed against Workday on the theory that its AI screening tools functioned as a gatekeeper in the hiring process. The case, which is ongoing, signals that AI vendors themselves may face direct liability. 

For employees, the practical upshot is this: your employer can’t protect itself from liability just by delegating responsibility to a third-party AI hiring solution. The company that rejected your application is responsible for that decision, even if someone else built the tool it used to make it.

How AI Tools Can Discriminate Against Protected Groups

Understanding how algorithmic discrimination actually happens matters for employees who are trying to assess whether their situation is worth taking to a lawyer. In short, these tools are trained on data produced by human decisions, which means they are prone to replicating human biases, but without the judgment of an actual person.

Biased Training Data

Most AI hiring tools are trained on historical data, such as records of who was hired in the past, who succeeded, and what their profiles looked like. If those historical patterns reflect discrimination (and in most industries, multiple recent studies suggest they do), the algorithm learns to replicate them. The system doesn’t understand that it is perpetuating inequality; it simply identifies patterns and repeats them.

For example, the MIT Media Lab’s landmark Gender Shades study by researcher Joy Buolamwini demonstrated that facial analysis AI performs dramatically worse on women and on people with darker skin tones. In other words, an employer using video interview software could be discriminating on the basis of these protected characteristics simply because the AI was trained on biased data.
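
To make the mechanism concrete, here is a deliberately toy Python sketch of “bias in, bias out.” Everything in it is hypothetical: the resume terms, the hiring outcomes, and the scoring rule are invented for illustration, not drawn from any real product.

```python
# Toy sketch: a scorer that never sees gender can still inherit a gendered
# pattern from its training data. All terms and outcomes are invented.
from collections import Counter

# Hypothetical historical records: (terms on resume, was the candidate hired?)
history = [
    ({"python", "rugby_team"}, True),
    ({"python", "chess_club"}, True),
    ({"java", "rugby_team"}, True),
    ({"python", "womens_soccer"}, False),
    ({"java", "womens_chess_club"}, False),
]

hired_terms, rejected_terms = Counter(), Counter()
for terms, hired in history:
    (hired_terms if hired else rejected_terms).update(terms)

def score(resume_terms: set[str]) -> int:
    """Reward terms that co-occurred with past hires, penalize the rest."""
    return sum(hired_terms[t] - rejected_terms[t] for t in resume_terms)

print(score({"python", "rugby_team"}))         # high: matches past hires
print(score({"python", "womens_chess_club"}))  # low: inherits the old bias
```

No one told the scorer to penalize “womens_chess_club”; it simply learned that the term co-occurred with past rejections. That is the pattern the new regulations hold employers accountable for, whether or not anyone intended it.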

Thoughtless Assessment Designs

Many screening tools are intended to judge an applicant’s abilities or fitness for the job. However, that can lead to problems for candidates with:

  • Disabilities: Tools that measure reaction time, physical dexterity, or specific cognitive patterns may screen out candidates with certain disabilities. 
  • Accents: Vocal tone analysis may disadvantage non-native English speakers. 
  • Darker Skin: Facial expression scoring may introduce racial bias.

The EEOC has addressed this directly in its guidance on artificial intelligence and algorithmic fairness, noting that assessments that appear neutral on their face can produce discriminatory outcomes across disability, national origin, and gender lines.

The Proxy Problem

California’s new regulations include an important concept: a “proxy” is defined as a characteristic or category closely correlated with a protected class under FEHA. An ADS does not need to consider race or gender directly to produce discriminatory outcomes; it only needs to rely on a variable that closely tracks those characteristics. Employment gaps, zip codes, graduation years, alma maters, and even the specific software programs listed on a resume can function as proxies for protected characteristics. The AI never says it is discriminating. The outcome does.
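
As a hypothetical illustration of how this plays out statistically, the Python sketch below screens applicants using only a single neighborhood-derived score. The protected characteristic is never an input, yet the selection rates split sharply by group. All of the numbers are invented.

```python
# Hypothetical proxy discrimination: the screen is race-blind on its face,
# but its one input correlates with group membership, so outcomes diverge.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected class (never shown to the screen)
    # The proxy: a neighborhood-derived score that tracks group membership,
    # for example because of residential segregation.
    zip_score = random.gauss(0.70 if group == "A" else 0.40, 0.15)
    applicants.append((group, zip_score))

def passes_screen(zip_score: float, cutoff: float = 0.55) -> bool:
    """The 'model': a simple threshold on the proxy feature."""
    return zip_score >= cutoff

for g in ("A", "B"):
    scores = [s for grp, s in applicants if grp == g]
    rate = sum(passes_screen(s) for s in scores) / len(scores)
    print(f"Group {g}: selection rate {rate:.0%}")  # roughly 84% vs. 16%
```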

Scheduling and Availability Screening

Tools that filter candidates based on scheduling flexibility or availability for overtime may disproportionately screen out parents (particularly mothers) as well as employees with religious observances, those managing disabilities, or those undergoing medical treatment. The AI Now Institute has documented how algorithmic management tools in workplace settings can entrench systemic disadvantages for caregivers and people with health conditions.

Each of these mechanisms can give rise to a viable discrimination claim under FEHA. The legal standard is disparate impact: the same doctrine established by the Supreme Court in Griggs v. Duke Power Co., which held that facially neutral employment practices that produce discriminatory outcomes violate civil rights law. California’s ADS regulations make clear that this principle applies fully to automated tools.
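
For a sense of how disparate impact is often measured in the first instance, here is a minimal sketch of the federal “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines, applied to hypothetical applicant counts. California’s FEHA analysis is not limited to this ratio, but it is a common screening benchmark.

```python
# Minimal sketch of the EEOC "four-fifths" rule of thumb, with invented
# numbers: compare each group's selection rate to the highest group's rate.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_group_a = selection_rate(selected=60, applicants=100)  # 60%
rate_group_b = selection_rate(selected=30, applicants=100)  # 30%

impact_ratio = rate_group_b / rate_group_a
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.50

# A ratio below 0.80 is generally treated as evidence of adverse impact
# under the federal guideline.
if impact_ratio < 0.80:
    print("Below four-fifths: the screen warrants disparate impact scrutiny")
```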

Your Questions Answered 

Can an employer use AI to screen out job applicants in California?

Yes, but only within strict legal limits. An employer may use automated decision systems in the hiring process, but any ADS that produces discriminatory outcomes against a protected class under FEHA is unlawful, regardless of whether the discrimination was intentional. Employers also have affirmative obligations around notice, transparency, and the availability of human review.

Is AI hiring discrimination illegal in California?

Yes. Under FEHA and the regulations that took effect October 1, 2025, using an ADS that creates a disparate impact on a protected class is unlawful. Liability turns on outcome, not intent. An employer that uses a tool with a discriminatory effect cannot avoid accountability by claiming ignorance of how the algorithm worked.

What are my rights if an algorithm rejected my job application?

If you believe an automated tool played a role in a discriminatory hiring or employment decision, you have several rights under California law. You may file a complaint with the California Civil Rights Department, which enforces FEHA. You may request that a human being review any automated decision affecting your employment. And you have the right to legal representation in pursuing a claim. 

Importantly, California’s regulations require employers to retain ADS-related records, including data inputs, outputs, decision criteria, and audit results, for at least four years. This means the evidence of what a tool did and how it affected your candidacy must be preserved, and can be obtained through the litigation process.

Can I opt out of AI decision-making?

Under California’s Automated Decision-Making Technology rules, employers are required to provide pre-use notice before using an ADS to make or contribute to an employment decision. That notice must explain:

  • What the tool does
  • What data it uses 
  • What rights you have (including the right to opt out and to request a human alternative) 

If you were not given this notice, or if your request for human review was denied, those facts are relevant to a potential claim.

What to Do If You Suspect an Algorithm Cost You a Job or Promotion

If you suspect an automated system played a role in a hiring rejection, a failure to be promoted, or an adverse performance evaluation, the steps you take immediately can significantly affect your ability to pursue a legal claim.

Document Everything

Save the job posting, your application materials, any automated communications you received, rejection notices, and the complete timeline of the process. Pay attention to whether the rejection came unusually quickly. In many cases, a near-instant rejection is a strong indicator of automated screening. Write down any platforms or tools you were asked to use.

Look for Signs of ADS Use in the Process

Were you directed to a third-party assessment platform before speaking with anyone at the company? Did you participate in a video interview with no human present? Were you asked to complete timed cognitive tests or games? These are common formats for ADS tools and should be documented.

Research the Employer’s Tools

Many employers disclose their use of AI hiring tools in privacy notices, terms of use, or job application agreements. Prior employment discrimination complaints against the same employer, or against the vendor whose tool was used, may be searchable through public records. The Society for Human Resource Management (SHRM) publishes guidance for HR professionals on complying with ADS regulations. Reading this from an employee’s perspective reveals exactly what employers are required to do, and therefore what failures to look for.

Compare Your Experience to Others Where Possible

If you are aware of candidates from outside your protected class who were less qualified but were advanced in the same process, document what you know. Patterns across multiple applicants are among the strongest forms of evidence in disparate impact cases.

Contact an Employment Attorney as Early as Possible

Evidence in AI discrimination cases can be complex and time-sensitive. Employers are required under California’s regulations to retain ADS records for at least four years, but an attorney can move quickly to ensure those records are preserved and to identify the right legal theory for your situation. 

The American Bar Association has noted that workers often underestimate the strength of their claims in algorithmic discrimination cases precisely because the mechanism of harm is invisible. An attorney experienced in employment discrimination can help you assess what happened and what recourse is available.

Don’t Let Machines Discriminate Against You

California’s new AI employment regulations reflect a clear principle: the sophistication of a tool does not exempt an employer from the most basic obligation in employment law. Every worker in California deserves to be evaluated on their actual qualifications, not on what an algorithm infers from a proxy, a pattern, or a data point that has nothing to do with their ability to do the job.

If you were rejected by an employer that used automated hiring tools, passed over for a promotion by an AI performance scoring system, or evaluated in a way that felt arbitrary or unexplained, you may have legal options that are stronger than you realize.

Le Clerc & Le Clerc, LLP represents employees in California who have experienced workplace discrimination in all its forms. That includes discrimination driven by the tools employers choose to deploy. Contact us today for a confidential consultation to discuss your situation and understand your rights.
