A coalition of U.S. state attorneys general has issued a formal warning to leading artificial intelligence companies, urging them to adopt stronger safeguards against psychologically harmful chatbot behavior. The letter, signed by dozens of attorneys general through the National Association of Attorneys General, was sent to Microsoft, OpenAI, Google, and ten additional AI developers: Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.
The action follows a series of alarming incidents in which AI chatbot interactions were linked to mental health crises, including cases involving self-harm and violence. The attorneys general argue that current systems have produced sycophantic or "delusional" responses that reinforced users' distress, conduct they say may violate state consumer protection and safety laws.
The letter calls for mandatory third-party audits of large language models, transparent incident-reporting procedures, and user notifications when a chatbot generates harmful content. The attorneys general also request pre-release safety testing to identify and mitigate risks posed by generative AI systems before they reach the public.
The intervention intensifies the ongoing conflict between state regulators and the federal government over AI oversight. While the current administration promotes a national framework favoring industry flexibility, states continue to assert authority, prompting the president to signal an upcoming executive order aimed at curbing state-level AI regulation.