California Sets Precedent: New Regulations Govern AI in Employment Decisions
California has once again positioned itself as a regulatory leader, this time by enacting comprehensive new regulations that govern the use of Artificial Intelligence (AI) and Automated-Decision Systems (ADS) in employment decisions. Effective October 1, 2025, these regulations aim to prevent discriminatory outcomes and ensure a more equitable hiring and employment landscape within the state. This move by California is significant, as the state has a history of pioneering legislation in areas like privacy and data protection, often setting a precedent that other jurisdictions eventually follow.
Understanding the New Regulatory Landscape
At its core, the new regulatory framework makes it unlawful to deploy an Automated-Decision System (ADS) that results in discrimination against any applicant or employee based on protected categories outlined in California’s Fair Employment and Housing Act (FEHA). The definition of an ADS is broad, encompassing any computational process that makes or facilitates employment-related decisions, particularly those derived from or utilizing artificial intelligence, machine learning, algorithms, statistics, or other data processing techniques. This definition explicitly includes AI, which is characterized as a machine-based system that infers how to generate outputs, such as predictions, content, recommendations, or decisions, from the input it receives.
The regulations provide illustrative examples of tasks that ADS commonly perform in the employment context. These include using computer-based assessments, tests, questions, puzzles, or games to make predictive assessments about applicants or employees, or to measure their skills, reaction times, personality traits, aptitudes, attitudes, or cultural fit. ADS are also used for screening, categorizing, and recommending candidates, directing targeted job advertisements, screening resumes for specific keywords, analyzing online interview data such as facial expressions and word choice, and processing applicant or employee data acquired from third parties.
Addressing Discriminatory Impacts of AI
A critical aspect of these regulations is their focus on preventing adverse impact discrimination. Employers can be held liable not only for intentional discrimination but also if their facially neutral ADS selection tools disproportionately screen out individuals from protected groups—such as those based on race, age, gender, or disability—unless the practice is proven to be job-related and consistent with business necessity. This mirrors existing principles under FEHA but applies them specifically to the context of AI and automated decision-making.
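The regulations do not prescribe a particular statistical test for adverse impact, but a common first screen is the four-fifths rule drawn from federal selection guidelines: if one group's selection rate falls below 80 percent of the most-selected group's rate, the tool warrants closer review. The sketch below illustrates that arithmetic; the group labels and counts are hypothetical.

```python
# Four-fifths rule sketch: flag groups whose selection rate falls below
# 80% of the highest group's rate. Labels and counts are hypothetical;
# the regulations do not mandate this (or any) particular test.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the best-performing group's.

    outcomes maps group label -> (number selected, number of applicants).
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes from an ADS resume screen.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(outcomes).items():
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```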
The regulations explicitly caution against the use of ADS that analyze characteristics like an applicant’s tone of voice, facial expressions, or other physical attributes or behaviors, as these can inadvertently discriminate against individuals based on race, national origin, gender, disability, or other protected characteristics. Similarly, systems that assess an applicant’s or employee’s skills, dexterity, reaction time, or other abilities may disproportionately affect individuals with certain disabilities or other protected statuses. To mitigate such risks, employers may need to implement reasonable accommodations consistent with FEHA’s protections for religious creed and disability. The regulations also make clear that an ADS cannot, on its own, satisfy the individualized assessment required when considering an applicant’s criminal history, and that medical or psychological inquiries conducted through an ADS remain largely prohibited.
The regulations also newly define the concept of a "proxy": a facially neutral characteristic or category that is closely correlated with a protected category under FEHA. This acknowledges that even seemingly neutral data points can stand in for protected characteristics, leading to indirect discrimination.
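How a proxy is detected in practice is left to the employer. One common screen, sketched below with hypothetical data, measures the statistical association between a facially neutral feature (here, ZIP code) and a protected category using Cramér's V; the feature choice and the 0.5 flag threshold are illustrative, not drawn from the regulations.

```python
# Proxy screen sketch: measure association between a facially neutral
# feature and a protected category with Cramér's V (0 = none, 1 = perfect).
# The feature, data, and 0.5 threshold are illustrative only.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(feature: list[str], protected: list[str]) -> float:
    f_cats = sorted(set(feature))
    p_cats = sorted(set(protected))
    table = np.zeros((len(f_cats), len(p_cats)))
    for f, p in zip(feature, protected):
        table[f_cats.index(f), p_cats.index(p)] += 1
    chi2 = chi2_contingency(table, correction=False)[0]
    return float(np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1))))

# Hypothetical check: does ZIP code track a protected category too closely?
zip_codes = ["94110", "94110", "94501", "94501", "94110", "94501"]
category = ["x", "x", "y", "y", "x", "y"]
v = cramers_v(zip_codes, category)
print(f"Cramér's V = {v:.2f}" + ("  <- possible proxy" if v > 0.5 else ""))
```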
The Role of Anti-Bias Testing and Due Diligence
Notably, the regulations provide a roadmap for employers seeking to defend against discrimination claims arising from the use of ADS. Evidence of "anti-bias testing or similar proactive efforts to avoid unlawful discrimination" is considered relevant. This includes the quality, efficacy, recency, and scope of such testing, as well as its results and the employer’s response to those results. This language strongly encourages employers to implement robust due diligence measures.
Employers are advised to take protective steps, such as conducting regular audits of their ADS for bias, and requiring vendors to certify that their systems have undergone thorough testing to detect and address bias. Documenting these efforts, including the methodology, findings, and remediation steps, will be crucial for demonstrating compliance and mitigating liability.
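Because the quality, scope, recency, and results of testing are what the regulations treat as relevant evidence, that documentation benefits from a consistent structure. The sketch below shows one hypothetical schema for an audit record; the field names and file format are illustrative, not prescribed by the regulations.

```python
# Hypothetical audit record capturing what the regulations treat as relevant
# evidence: the testing's methodology, scope, recency, results, and the
# employer's response. Schema and field names are illustrative only.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class BiasAuditRecord:
    system_name: str      # which ADS was tested
    audit_date: str       # establishes recency
    methodology: str      # e.g., four-fifths rule on pass-through rates
    scope: list[str]      # decisions and protected categories covered
    findings: str         # results of the testing
    remediation: str      # the employer's response to those results

record = BiasAuditRecord(
    system_name="resume-screener-v2",        # hypothetical vendor tool
    audit_date=date.today().isoformat(),
    methodology="four-fifths rule on screening pass-through rates",
    scope=["initial resume screen", "all FEHA-protected categories"],
    findings="impact ratio of 0.74 for one group at the screening stage",
    remediation="feature removed from the model; retest ratio 0.93",
)

# Append to a durable log; note that ADS data is itself a retained record.
with open("bias_audit_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```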
Extended Record Retention and Liability
Beyond addressing discrimination, the regulations introduce significant changes to recordkeeping requirements. Employers and covered entities must now preserve personnel and other employment records for a minimum of four years from the later of the date the record was made or the date of the personnel action. This is an extension from the previous two-year requirement.
The scope of records subject to this extended retention period is broad, including selection criteria, automated decision system data, applications, personnel records, membership and referral records, and any other records created or received by the employer that relate to employment practices and affect any employment benefit, applicant, or employee. "Automated-decision system data" specifically includes any data used in or resulting from the application of an ADS, such as data provided by or about individuals, or data reflecting employment decisions or outcomes, as well as data used to develop or customize an ADS for a specific employer.
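The retention deadline itself is simple arithmetic: four years after whichever is later, the date the record was made or the date of the personnel action. The sketch below computes it with hypothetical dates.

```python
# Retention-deadline sketch: the four-year clock runs from the LATER of the
# date the record was made and the date of the personnel action.
# Dates are hypothetical; this is arithmetic, not legal advice.
from datetime import date

def retention_deadline(record_made: date, personnel_action: date) -> date:
    start = max(record_made, personnel_action)
    try:
        return start.replace(year=start.year + 4)
    except ValueError:  # Feb 29 start with no Feb 29 four years later
        return start.replace(year=start.year + 4, day=28)

record_made = date(2025, 10, 15)      # e.g., ADS scoring data generated
personnel_action = date(2026, 1, 10)  # e.g., hiring decision communicated
print("Retain at least until:", retention_deadline(record_made, personnel_action))
```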
Furthermore, the regulations extend liability for ADS-driven discrimination to an employer’s "agent." An agent is any person acting on behalf of an employer to exercise a function traditionally exercised by the employer, or to carry out any other FEHA-regulated activity, including recruitment, screening, hiring, promotion, or decisions regarding pay, benefits, or leave, particularly when conducted through an ADS.
Key Takeaways for Employers and Vendors
The implementation of these new regulations necessitates concrete actions from businesses. Employers utilizing ADS in their employment decision-making processes, as well as AI vendors whose products are used in the employment arena, should:
- Identify all ADS currently in use for employment decisions.
- Review and update record retention policies to ensure compliance with the new four-year minimum.
- Implement a comprehensive anti-bias testing program, including establishing a clear plan for the frequency and nature of testing, and meticulously documenting the process, results, and any subsequent actions taken.
- Revise existing anti-discrimination and reasonable accommodation policies to explicitly address the use of ADS and AI technologies.