Anthropic Introduces New Data Policy Requiring User Choice on AI Training

Anthropic has announced a major update to its data policies, giving all Claude Free, Pro, and Max users until September 28 to decide whether their conversations and coding sessions can be used to train future AI models. The change marks a significant departure from the company’s earlier stance, when consumer chat data was not used for training and was deleted within 30 days. Under the new policy, data will be retained for up to five years unless users opt out.

The update does not affect enterprise, education, or government customers, whose data will remain excluded from training. Anthropic said the shift is intended to strengthen model safety and improve Claude’s reasoning, coding, and analysis capabilities.

The move reflects mounting industry pressure as leading AI companies, including OpenAI and Google, face heightened scrutiny over data retention and privacy practices. With training data seen as a crucial advantage in the AI race, Anthropic’s decision underscores the balancing act between advancing model performance and maintaining user trust.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes about the space in a tone accessible to the average reader.
