Anthropic accidentally exposed 512,000+ lines of Claude Code source code. Here's what leaked, why it matters, and what it means for the AI industry.
Matilda
Anthropic is Having a Month
Anthropic's Worst Week: Two Major Leaks in Seven Days Shake the AI Industry

Anthropic has spent years building its reputation as the responsible, safety-first AI company. It publishes detailed research on AI risk. It employs some of the sharpest minds in the field. It speaks loudly and often about the weight of building transformative technology.

So when the company accidentally exposed hundreds of thousands of lines of internal source code, not once but twice in a single week, the AI world took notice. This is the story of how one of Silicon Valley's most trusted AI labs had its most embarrassing seven days on record, and what it means for the future of AI development.

What Leaked, and Why It Matters

On a Tuesday that Anthropic's engineering team will likely not forget, the company pushed out version 2.1.88 of its Claude Code software package. Hidden inside that routine update was something that was never meant to be public: a file containing nearly 2,000 source code fil…