California’s New ADMT Regulation: Key Takeaways for Design, Legal, and Compliance Teams

Introduction:
Most people don’t realize how often algorithms decide for them. Whether they get hired, promoted, denied credit, or watched at work, what used to be invisible is now visible, and under regulation.
On July 24, 2025, the California Privacy Protection Agency (CPPA) finalized the first-ever regulation in the United States focused specifically on Automated Decisionmaking Technology (ADMT). The rulemaking began in November 2024 and was shaped by input from industry, civil society, and regulators.
These new rules mark a turning point for how businesses use algorithmic systems to evaluate, monitor, or make decisions about people. In their scope, the final rules strike a balance between innovation and consumer protection. While the text avoids the term “AI,” the ADMT regulation is, in effect, California’s first regulatory framework for AI-powered decision-making.
The rules apply to a wide range of technologies, especially those used in hiring, credit scoring, education, housing, and employee monitoring. They introduce new rights for consumers, including the right to opt out, access information, and receive plain-language notices.
What Is Automated Decision-Making Technology (ADMT)?
Automated Decision-Making Technology, or ADMT, refers to any system, software, or process that uses personal information and computation in decision-making. This could include making a decision, executing a decision, or assisting human decision-making. ADMT includes tools based on machine learning, statistics, or algorithmic processing, whether or not they are labeled as AI.
Under the CPPA regulation (Section 7001), profiling is classified as a type of Automated Decision-Making Technology (ADMT). Profiling refers to the automated processing of personal data to evaluate or predict a person’s behavior, performance, interests, reliability, or location. When used in this way, profiling falls under the scope of ADMT rules.
Common examples of ADMT include resume screening tools, productivity or attention monitors, credit scoring systems, facial or speech recognition, and Bluetooth, Wi-Fi, or geolocation trackers.
What’s not considered ADMT?
Routine tools like spam filters or file sorters are not included, unless they are used in ways that substantially replace human judgment. A business may be exempt if the decision process involves meaningful human involvement. But this standard is strict.
To qualify, the human reviewer must:
- Be trained to interpret and override the ADMT’s output
- Actively analyze both automated and non-automated data
- Have the authority to change or reverse the decision
In its official commentary, the CPPA clarifies that this type of human oversight is expected to fall to senior staff, not low-level reviewers. If a system makes decisions that affect access to jobs, credit, housing, or services, without real-time, qualified human control, it likely qualifies as ADMT.
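Because these criteria are cumulative, a hypothetical internal check could encode them as a simple conjunction. All names below are illustrative shorthand, not terms from the regulation's text:

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    # The three conditions the CPPA sets for meaningful human involvement
    trained_to_interpret_and_override: bool  # can read and overrule the ADMT's output
    weighs_automated_and_other_data: bool    # actively analyzes both data types
    authority_to_change_decision: bool       # can change or reverse the decision

def involvement_is_meaningful(r: Reviewer) -> bool:
    """All three conditions must hold; failing any one means the
    process likely still counts as ADMT."""
    return (r.trained_to_interpret_and_override
            and r.weighs_automated_and_other_data
            and r.authority_to_change_decision)
```

A rubber-stamp reviewer who lacks the authority to reverse the system's decision fails the last check, so the exemption would not apply.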
When do California’s ADMT rules apply?
The ADMT regulation applies when a business uses automated systems to make or shape decisions that have legal or similarly significant effects on an individual.
These include decisions related to:
- Housing
- Insurance
- Education access
- Criminal justice outcomes
- Essential goods and services
The regulation also applies to extensive profiling through systematic observation. This includes:
- Continuous video or audio monitoring
- Geolocation, Bluetooth, or Wi-Fi tracking
- Use of biometric identifiers (e.g., facial, speech, or emotion detection)
- Productivity or behavior monitoring in the workplace
As defined in the regulation, profiling refers to any automated processing used to analyze or predict behavior, interests, or performance. If a business uses ADMT for any of the above purposes, the regulation applies, unless an exception is met.
Certain uses of ADMT are excluded from opt-out and notice obligations if they are strictly limited to:
- Security or fraud prevention
- Protecting physical safety
- Performing a service specifically requested by the consumer
Even for these excepted use cases, the business must show there is no reasonable alternative and may need to justify that to regulators. In all other cases, if ADMT is used in a way that affects someone’s rights or access to opportunity, the full set of compliance obligations applies.
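Put together, the trigger-and-exception logic above can be sketched roughly as follows. The category names are illustrative shorthand for the purposes and decision areas listed in this section, not regulatory terms:

```python
# Decision areas with legal or similarly significant effects (illustrative labels)
SIGNIFICANT_DECISION_AREAS = {
    "housing", "insurance", "education_access",
    "criminal_justice", "essential_goods_and_services",
}

# Uses excluded from opt-out and notice obligations when strictly limited to them
EXCEPTED_PURPOSES = {
    "security_or_fraud_prevention",
    "physical_safety",
    "consumer_requested_service",
}

def notice_and_optout_required(decision_areas: set, purposes: set,
                               extensive_profiling: bool) -> bool:
    """Rough sketch: obligations attach when ADMT shapes a significant
    decision or performs extensive profiling, unless the use is strictly
    limited to excepted purposes."""
    if purposes and purposes <= EXCEPTED_PURPOSES:
        # Strictly limited to exceptions; the business must still be able
        # to show there is no reasonable alternative.
        return False
    return extensive_profiling or bool(decision_areas & SIGNIFICANT_DECISION_AREAS)
```

Note the subset check: mixing an excepted purpose with any other purpose (e.g., fraud prevention plus marketing) means the use is no longer "strictly limited," so the obligations still apply.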
Consumer rights under the ADMT regulation:
California’s new rules on Automated Decision-Making Technology (ADMT) grant consumers three core rights:
1. Right to Pre-Use Notice:
Before using ADMT, businesses must provide a plain-language notice that explains:
- The purpose of using the ADMT
- What personal data is used
- Whether a human is involved and their role
- How consumers can opt out or use an alternative process
- How to access additional ADMT information (logic, parameters, outputs)
If the human reviewer cannot overrule the system, that must be disclosed. The notice must be shown before use and in the format the business normally uses to interact with the consumer.
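As a sketch, the required contents of a pre-use notice can be treated as a completeness checklist. The field names below are hypothetical, chosen only to mirror the bullets above:

```python
# Hypothetical mapping of required notice fields to what each must explain
REQUIRED_NOTICE_FIELDS = {
    "purpose":               "why the ADMT is being used",
    "personal_data_used":    "what personal information it processes",
    "human_role":            "whether a human is involved and what they can do",
    "optout_or_alternative": "how to opt out or use an alternative process",
    "more_info_access":      "how to access logic, parameters, and outputs",
}

def missing_fields(notice: dict) -> list:
    """Return the required fields that are absent or empty."""
    return [name for name in REQUIRED_NOTICE_FIELDS if not notice.get(name)]
```

Under this sketch, the disclosure that a reviewer cannot overrule the system would live in `human_role`, and a notice with any missing field would be flagged before the ADMT is put into use.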
2. Right to Opt Out:
Consumers can opt out of ADMT use in most situations, especially when it involves profiling or marketing. Businesses must provide a process that is simple, clearly labeled, and free of dark patterns. Along with this process, they must offer at least two opt-out methods, one of which matches the way the business normally interacts with the consumer (e.g., online or by phone).
Opt-out must be offered when ADMT is used for profiling, behavioral advertising, or training machine-learning models on consumer data.
Opt-out is not required when ADMT is used exclusively for:
- Security or fraud prevention
- Physical safety
- Certain hiring or academic assessments (if non-discriminatory and narrowly focused)
- Internal allocation of work or compensation (with limits)
A business may offer a right to appeal to a qualified human reviewer instead of an opt-out. Denied requests must be documented with justification (e.g., suspected fraud).
3. Right to access ADMT Information
Consumers can request:
- The purpose of ADMT use
- Logic and data inputs
- How the output influenced the decision
- Role of any human involvement
- Key parameters and their effect
- Results of fairness or reliability testing
If the tool is used repeatedly, future decision logic must also be explained. Responses must be plain-language, securely delivered, and still meaningful even if trade secrets are withheld.
What this means for design and compliance teams:
Compliance with California’s ADMT regulation requires more than legal checklists. It demands clear, human-centered coordination across teams:
- Design and product teams must ensure opt-outs, notices, and access flows are clear and free of dark patterns.
- Legal and compliance teams must locate ADMT use, verify exceptions, and prepare for user requests.
- Human reviewers must be properly trained and authorized, especially when used to meet compliance.
To prepare for January 1, 2027, companies should:
- Audit all ADMT use across products
- Identify systems that trigger compliance
- Draft or update pre-use notices
- Review opt-out flows for legal clarity
- Document fairness, oversight, and human review
- Set up response plans for access requests and complaints
From Legal Risk to Responsible Design: FairPatterns’ Role:
At FairPatterns, we help companies detect privacy dark patterns and redesign manipulative UX before it becomes a legal risk. Our AI-powered solution turns complex, unclear flows, like those involving automated decision-making, into transparent, trust-first experiences.
Our tool helps you:
- Detect harmful patterns across any taxonomy
- Replace them with tested, compliant designs in plain language
- Prevent future risks by supporting legal-safe design at the mockup stage
Whether you're building opt-out flows, pre-use notices, or ADMT interfaces, we don’t just help you comply. We help you design for trust.
Want to future-proof your platform for fairness? 👉 Connect with us
💫 Regain your freedom online.
References:
Nisenbaum, A. C. (2025, August 5). California Finalizes CCPA Regulations on Cybersecurity Audits, Risk Assessments, and Automated Decisionmaking: Key Provisions and Implications. National Law Review. https://natlawreview.com/article/california-finalizes-ccpa-regulations-cybersecurity-audits-risk-assessments-and
California Privacy Protection Agency. (2023, December 8). Draft Automated Decisionmaking Technology regulations (board meeting materials). https://cppa.ca.gov/meetings/materials/20231208_item2_draft.pdf
California Finalizes Groundbreaking Regulations on AI, Risk Assessments, and Cybersecurity: What Businesses Need to Know. (2025, August 7). Ogletree. https://ogletree.com/insights-resources/blog-posts/california-finalizes-groundbreaking-regulations-on-ai-risk-assessments-and-cybersecurity-what-businesses-need-to-know/