Anthropic has announced a major update to its data policies, giving all Claude Free, Pro, and Max users until September 28 to decide whether their conversations and coding sessions can be used to train future AI models. The change marks a significant departure from the company’s earlier stance, when consumer chat data was not used for training and was deleted within 30 days. Under the new policy, data will be retained for up to five years unless users opt out.
The update does not affect enterprise, education, or government customers, whose data will remain excluded from training. Anthropic said the shift is intended to strengthen model safety and improve Claude’s reasoning, coding, and analysis capabilities.
The move reflects mounting industry pressure as leading AI companies, including OpenAI and Google, face heightened scrutiny over data retention and privacy practices. With training data seen as a crucial advantage in the AI race, Anthropic’s decision underscores the balancing act between advancing model performance and maintaining user trust.