Key new privacy laws
California added multiple statutes that directly amend or build on the California Consumer Privacy Act (CCPA) and related privacy regimes. The main 2025 privacy enactments highlighted in the wrap-up include:
- California Opt Me Out Act (browser universal opt-out)
  - Requires browser makers to offer a built-in universal opt-out signal that applies across all sites, which will dramatically increase the volume of do-not-sell/do-not-share signals businesses must honor.
  - Expected to significantly affect digital advertising and marketing strategies that rely on cross-site tracking.
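For context, browsers already transmit one such universal opt-out signal, Global Privacy Control (GPC), as a `Sec-GPC: 1` request header. The statute's exact technical mechanism is not specified here, but a minimal server-side sketch of honoring a GPC-style signal (function and field names are illustrative, not from the bill) might look like:

```python
def honors_universal_opt_out(headers: dict[str, str]) -> bool:
    """Return True if the request carries a universal opt-out signal.

    Checks for the Global Privacy Control header ("Sec-GPC: 1"), one
    existing universal opt-out mechanism. HTTP header names are
    case-insensitive, so keys are normalized before lookup.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


def apply_privacy_preferences(headers: dict[str, str], profile: dict) -> dict:
    """Mark a (hypothetical) user profile as opted out of sale/sharing
    when a universal opt-out signal is present."""
    if honors_universal_opt_out(headers):
        profile = {**profile, "sell_data": False, "share_data": False}
    return profile
```

Because the signal applies across all sites automatically, businesses would need to treat it the same as an explicit per-site do-not-sell/do-not-share request.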
- Expanded data broker duties (SB 361 and DELETE Act changes)
  - Data brokers must now disclose whether they collect highly sensitive categories (e.g., sexual orientation, citizenship status, biometric data, government ID numbers) and whether they sell or share data with foreign entities, government agencies, law enforcement, or generative AI developers.
  - Enforcement under the DELETE Act is strengthened, including doubled daily fines and treating failure to process deletion requests as a sanctionable offense.
- Health and location data protections (AB 45)
  - Extends existing bans on certain geofencing practices to prohibit collecting, using, selling, sharing, or retaining personal information linked to precise locations of clinics and reproductive health centers.
  - Bars in-person health care providers from using geofencing for identification or advertising, creates restrictions on disclosing identifiable research data to law enforcement, and grants a private right of action for violations.
Frontier AI and chatbot transparency
The legislature adopted first‑of‑its‑kind AI transparency laws targeting “frontier” AI developers and AI chatbots, especially in sensitive contexts.
- Transparency in Frontier Artificial Intelligence Act
  - Applies to "frontier" AI developers with at least 500 million USD in revenue whose systems pose heightened safety risks.
  - Requires a public safety framework on the developer's site, disclosures to the state Office of Emergency Services, and third-party audits indicating which safety standards are used.
- Companion chatbot and disclosure measures
  - Companion or conversational chatbots must clearly disclose that they are AI-generated and not human, with special protections aimed at minors in certain implementations.
  - These measures are intended to reduce deception risks and align consumer-facing interfaces with broader AI transparency goals.
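The disclosure requirement above is straightforward to implement in a consumer-facing interface. A minimal sketch (the wording, function name, and minor-specific reminder are hypothetical, not statutory text):

```python
# Illustrative: a companion chatbot surfacing an AI disclosure with its
# first reply, as the transparency measures require.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."


def with_ai_disclosure(reply: str, *, user_is_minor: bool = False) -> str:
    """Prefix a chatbot reply with a clear AI disclosure.

    The extra line for minors is a hypothetical example of the kind of
    additional safeguard such measures contemplate.
    """
    disclosure = AI_DISCLOSURE
    if user_is_minor:
        disclosure += " This assistant is not a substitute for a trusted adult."
    return f"{disclosure}\n\n{reply}"
```

The compliance burden lies less in the code than in ensuring the disclosure is prominent, persistent, and not buried in terms of service.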
Sector-specific AI and tech rules
California also pushed AI rules deeper into specific verticals like health care, employment, and antitrust.
- Health-related AI marketing (AB 489)
  - Prohibits use of titles or terms that imply AI developers or deployers are licensed health professionals when they are not, to avoid misleading patients.
  - Targets generative AI tools and virtual assistants used in health and wellness contexts.
- Pricing algorithms and antitrust (AB 325)
  - Makes it unlawful to enter into contracts to share or use common pricing algorithms when this restrains trade, responding to concerns over algorithmic price-fixing.
  - Signals that competition law will increasingly focus on algorithmic collusion and shared AI tools.
- AI liability in civil cases (AB 316)
  - Bars defendants who develop, use, or modify AI systems from arguing that "the AI did it" as a standalone defense by claiming the system autonomously caused the harm.
  - Encourages responsible oversight and human accountability in AI deployment.
Automated decision-making and employment AI
Separate from the headline “frontier AI” statute, California finalized or advanced broad rules for automated decision‑making technology (ADMT), including in employment.
- Statewide ADMT rules for significant decisions
  - Treat as ADMT any technology that processes personal data and replaces or substantially replaces human decision-making in "significant decisions" such as employment, credit, or housing.
  - Require pre-use notices, logic disclosures, and opt-out rights for consumers affected by such tools, with compliance deadlines stretching into 2027.
- Employment-focused AI regulations and guidance
  - Final CCPA-based regulations impose stringent requirements on employers' use of AI in hiring, promotion, and other workforce decisions, effective January 1, 2026.
  - Employers must conduct risk assessments for employment-related ADMT and report on high-risk processing to the California Privacy Protection Agency on a scheduled basis.
What to expect next
The legislative wrap-up notes that more than twenty additional privacy and AI bills will carry over into the second half of the 2025-2026 session, including measures on workplace surveillance, geolocation bans, insurance privacy, and further CCPA/CIPA tweaks. Federal efforts and potential DOJ challenges have not slowed California's trajectory, suggesting that regulated entities should treat California as a de facto national baseline for AI and privacy compliance planning.




