Anthropic, widely recognized for its focus on responsible AI development, has disclosed an internal error that led to the unintended release of sensitive software files from its Claude Code platform. The mistake occurred during the rollout of version 2.1.88, when a packaging error exposed nearly 2,000 source code files and more than 512,000 lines of code, revealing key architectural components of the product.
The exposure was identified by security researcher Chaofan Shou, who flagged it shortly after the release. Anthropic attributed the event to human error in its release process rather than a security breach. Because the company has so prominently emphasized its commitment to AI safety, the lapse stands out against the way it positions itself publicly.
Claude Code, a developer-focused AI tool, has been gaining traction in enterprise and technical markets, which underscores the potential competitive implications of the exposure in a rapidly evolving AI software landscape.