Claude Code's Source Code Leaked via npm
I’ve written about Claude Code before ↗ and it’s a tool I use every day. Yesterday, it emerged that Anthropic had accidentally shipped the full source code for Claude Code in an npm release.
To be clear, this is only the source code for Claude Code itself, Anthropic’s coding assistant that runs primarily in the terminal. It is not the source of the AI model itself, nor of the Claude app.
Here’s what actually happened, what people found inside, and why I think this is mostly just an embarrassing mistake rather than the catastrophic event some are making it out to be.
What Happened#
On March 31st, security researcher Chaofan Shou discovered that a recent npm release of Claude Code included a source map file. For the non-developers: when code gets packaged up for distribution, it gets compressed and minified into something that’s basically unreadable to humans. It still works fine, but if you opened it up and tried to make sense of it, you’d just see a wall of characters. A source map is a separate file that reverses that process. It maps the compressed code back to the original, readable source. Developers keep these around for debugging, but they’re meant to stay internal. You don’t publish them to the world.
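To make that concrete, here’s a stripped-down sketch of what a source map file looks like (the file names and contents are illustrative, not from the actual leak). The key field is `sourcesContent`: when a build tool embeds it, the map carries the complete original source verbatim, which is why publishing one is equivalent to publishing the source itself.

```json
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/query-engine.ts"],
  "sourcesContent": [
    "// the complete, original TypeScript source, embedded verbatim\nexport async function runQuery(prompt: string) { /* ... */ }"
  ],
  "mappings": "AAAA,SAAS"
}
```

Anyone who downloads the package can feed a map like this to standard tooling and get the readable TypeScript files back out.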
Anthropic accidentally shipped one to the public.
The result was roughly 1,900 TypeScript files and over 500,000 lines of readable code, essentially the entire Claude Code codebase, sitting right there in the npm package for anyone to extract. Researchers found it, the code spread quickly across GitHub, and by the time Anthropic pulled the release and started issuing DMCA takedowns, the cat was thoroughly out of the bag.
How Does This Happen?#
This is actually a pretty common class of mistake. npm packages are defined by a package.json file that specifies what gets included when you publish. A misconfigured .npmignore file or a missing exclusion in the files field, and suddenly your source maps, internal docs, or test fixtures are shipping to production.
The leading theory is that someone on the team enabled richer source maps to debug some rate-limiting issues they were having, and then forgot to exclude them before the next publish.
Anthropic confirmed as much in a statement to the press:
Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.
It was a build pipeline oversight. The kind of thing that can happen to any team, and has happened to plenty of others.
What’s Inside#
This is where it gets interesting. The leaked code reveals that Claude Code is a genuinely sophisticated piece of software, not just a thin wrapper around an API.
The architecture is built on Bun (not Node.js), uses React with Ink for terminal UI rendering, and has a modular tool-based system with around 40 built-in tools, each with its own permission gates. The query engine alone, which handles all the LLM API calls, streaming, caching, and orchestration, is reportedly around 46,000 lines.
Multi-agent orchestration is a big part of it. Claude Code can spawn sub-agents (internally called “swarms”) to handle complex tasks in parallel, each running in its own context with specific tool permissions. If you’ve used the Agent tool in Claude Code, you’ve seen this in action.
There’s a persistent memory system, which I’ve talked about before ↗, stored as files that maintain context about you and your projects across sessions. Seeing the implementation details confirms what the experience already suggested: this is one of the features that makes Claude Code actually useful rather than just clever.
The Most Interesting Stuff#
Beyond the architecture, the leak exposed some features and behaviors that raised eyebrows.
Anti-distillation tricks. The code includes mechanisms designed to poison the output if someone tries to train another model on Claude’s responses. Essentially, fake tool descriptions and misleading instructions get baked into the output in ways that would confuse a model trying to learn from it. This is Anthropic’s attempt to protect against competitors scraping Claude’s behavior to train cheaper models. Now that the actual tool list and real instructions are visible, this defense is somewhat undermined.
Undercover mode. There’s a flag that instructs Claude not to mention itself, its model name, or Anthropic in commit messages or outputs. The stated purpose is to avoid leaking internal model codenames, but it also raised concerns about AI-generated code being quietly contributed to open source projects without disclosure. I can see both sides of this. If you’re an Anthropic engineer using Claude Code to help with a commit to a public repo, you probably don’t want “Claude Opus 4.7” in the commit metadata. But the optics aren’t great.
A frustration detector. A regex-based system that scans user input for angry or profane language and logs events. This is pretty straightforward pattern matching, not some deep emotional intelligence system.
Unreleased features. The code references a bunch of stuff that hasn’t shipped yet: “dream mode” for background memory consolidation, “coordinator mode” for spinning up multiple workers in parallel, “ultra plan” and “ultra review” for long-running remote operations, auto/AFK modes that act while you’re away, and references to unreleased models including something codenamed Capiara/Mythos.
Feature flags everywhere. The code uses tons of feature flagging, which is pretty standard, but the sheer number of flags and environment variables suggests this codebase moves fast and experiments constantly.
Why It’s Not a Huge Deal#
Here’s where I’ll probably diverge from some of the more dramatic coverage. I don’t think this is actually that significant in terms of real damage.
The model is the product, not the harness. Claude Code is an impressive piece of engineering, but it’s ultimately orchestration software that makes API calls to Claude. The actual intelligence, the thing that makes it useful, lives in the model weights on Anthropic’s servers. Those weren’t leaked. A competitor could rebuild the entire Claude Code CLI from scratch using the leaked source and it would still need access to Claude’s API (or another model’s) to do anything.
Most of this was already knowable. If you’ve used Claude Code extensively, you already knew about the tool system, the memory architecture, the sub-agent spawning, and the permission gates. The leaked code confirms implementation details, but the overall architecture wasn’t exactly a mystery. Anthropic’s own documentation describes most of these systems.
The “competitive advantage” argument is overstated. Yes, competitors can now see exactly how Anthropic implemented certain features. But the open-source AI coding tool space is already vibrant and moving fast. Tools like Aider, OpenCode, and others have independently arrived at similar architectures. The ideas aren’t secret; the execution quality and the model behind it are what matter.
Anti-distillation being exposed is annoying, not devastating. The poison-pill approach was always security through obscurity. Now that it’s visible, Anthropic will need to adjust their approach, but this was never going to be a long-term defense anyway. This is a cat and mouse game that will continue regardless.
The unreleased features are just a product roadmap. Every software company has unreleased features sitting in their codebase. Knowing that Anthropic is working on background agents and parallel worker coordination is interesting but not exactly surprising given the direction the entire industry is heading.
Embarrassing#
The real impact here is reputational. Anthropic positions themselves as the “safety-first” AI company, the adults in the room. Having their flagship developer tool leak because of a build configuration mistake is… not a great look for that brand.
The Takeaway#
A source map file shipped in an npm package. It happens. The code it exposed is impressive engineering but not the secret sauce that makes Claude useful. The unreleased features are interesting but not shocking. The anti-distillation tricks and undercover mode are worth discussing but not scandalous.