On November 13, North Carolina Attorney General (AG) Jeff Jackson and Utah AG Derek Brown, along with the Attorney General Alliance, announced a task force in conjunction with generative artificial intelligence (AI) developers, including OpenAI and Microsoft, to identify and develop consumer safeguards within AI systems as these technologies continue to rapidly proliferate.
The task force will provide a mechanism for state AGs to work with technology companies, law enforcement, and AI experts so that AGs can better insulate the public from AI risks as new systems come online. The effort will include the development of technical safeguards to protect the public from potential harm, with a particular focus on child safety. As part of the effort, stakeholders will create a “standing forum” to facilitate coordination.
This latest state initiative augments AI governance legislation passed in California, Colorado, Texas, and Utah over the past two years. These laws require AI developers and deployers to meet certain AI framework criteria addressing public safety, potential bias, and transparency. The California, Colorado, and Texas laws take effect in 2026, while the Utah law took effect in 2024. State legislators and regulators have been exploring ways to enact or ensure consumer AI safeguards while balancing innovation interests.
Further, state regulators do not necessarily require AI-specific legislation to regulate AI. State AGs have signaled that they will use traditional laws to protect consumers from allegedly harmful uses of AI. Several state AGs, including those of Massachusetts, New Jersey, New York, Oregon, and Texas, have warned that they will enforce generally applicable consumer protection laws, privacy laws, and even anti-discrimination laws to regulate AI within an ever-evolving regulatory framework. Indeed, in late 2024, Texas AG Ken Paxton announced a settlement with health care technology company Pieces Technology under the Texas Deceptive Trade Practices – Consumer Protection Act. The enforcement action represents the first settlement under a state consumer protection act involving generative AI.
Why It Matters
While state AGs will continue to pursue AI-related enforcement actions, the newly formed, bipartisan AG AI task force suggests that state AGs are willing to work with, negotiate with, and learn from AI experts and stakeholders. AI developers and deployers should take note and position themselves accordingly, while continuing to adhere to a patchwork of state AI regulation and existing consumer protection laws, often with complex and varied requirements. The AG task force provides an opportunity for state-level engagement that companies can consider at the front end of AI system development, which may provide "regulatory sandbox"-style innovation opportunities while mitigating regulatory risks. Engaging relevant stakeholders and consulting experienced outside counsel will help companies navigate these opportunities and minimize legal exposure within an ever-changing AI landscape.