
Claude Code's Source Just Leaked and the Internet Already Documented Everything

AI Engineering

Five hundred twelve thousand lines of TypeScript. Nineteen hundred files. The entire unminified source code of Claude Code, Anthropic's AI coding agent, exposed through an npm source map that should never have been published.

A security researcher found it this morning. By afternoon, the code had been archived to GitHub, dissected in blog posts, and turned into interactive exploration tools. The repository hit 5,000 stars in under an hour.

Anthropic, the safety-first AI company, just shipped its source code to the public registry.

How it happened

Source maps are debugging files. They translate minified production code back to the original source, so developers can trace errors to the exact line that caused them. They're useful in development. They're a liability in production.

The @anthropic-ai/claude-code npm package included a .map file that referenced the complete original TypeScript source, hosted on Anthropic's own cloud storage. Anyone who looked at the package contents could follow the reference and download everything.
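To make the mechanism concrete, here is a minimal sketch of what a v3 source map file contains. The interface below is the standard shape; the demo values are invented, not from the leaked package. In Claude Code's case the map reportedly pointed at externally hosted files rather than inlining them, but either way the `sources` entries hand over the original code:

```typescript
// The standard v3 source map shape (simplified). "sources" lists the
// original files; "sourcesContent", when present, embeds them verbatim.
interface SourceMapV3 {
  version: number;
  sources: string[];          // paths or URLs pointing at the original source
  sourcesContent?: string[];  // optional inline copy of each source file
  mappings: string;           // VLQ-encoded position data
}

// Everything the map exposes, whether inline or by reference.
function exposedSources(map: SourceMapV3): { path: string; content?: string }[] {
  return map.sources.map((path, i) => ({
    path,
    content: map.sourcesContent?.[i],
  }));
}

// An invented example of a map that inlines its source:
const demoMap: SourceMapV3 = {
  version: 3,
  sources: ["src/cli.ts"],
  sourcesContent: ["export const main = (): void => console.log('hi');"],
  mappings: "AAAA",
};
```

A few lines of script like this, or an off-the-shelf tool, can walk a published `.map` and recover every file it references.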

A misconfigured .npmignore or files field in package.json is the most likely cause. A single line in a config file. That's all it takes.
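Concretely, the failure mode usually looks like this hypothetical manifest fragment (not Anthropic's actual one): a `files` allowlist that includes an entire build directory, maps and all.

```json
{
  "name": "my-cli",
  "version": "1.0.0",
  "files": [
    "dist"
  ]
}
```

Because `"files": ["dist"]` publishes everything under `dist/`, any `.map` files the bundler emits ship too, unless they are excluded explicitly (for example with a `!dist/**/*.map` negation entry or an `.npmignore` rule) or the bundler is configured not to emit maps for release builds. Running `npm pack --dry-run` before publishing prints the exact file list that would be uploaded.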

Anthropic responded fast. They pushed an update that stripped the source map and pulled old versions from the registry. But the code was already everywhere.

What people found inside

The findings paint a detailed picture of how a production AI coding agent actually works.

Scale. A 785KB main entry point. Forty-plus discrete tools, each with its own permission gates. A base tool definition spanning 29,000 lines. The full system prompt reportedly exceeds 24,000 tokens when tools are included.

Architecture. Claude Code uses a multi-agent system. It spawns specialized worker agents for different tasks: planning, exploring codebases, executing subtasks. Each agent runs with its own context and toolset. The orchestration layer coordinates them in parallel.
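The pattern can be sketched in a few dozen lines. All names below are hypothetical; this is the shape of the idea, not Claude Code's actual implementation:

```typescript
// A minimal sketch of the multi-agent pattern: each worker agent gets its
// own toolset and isolated context, and an orchestrator fans tasks out to
// specialized workers in parallel.
type Tool = (input: string) => string;

interface Agent {
  name: string;
  tools: Record<string, Tool>; // each worker carries its own toolset
  context: string[];           // and its own isolated context
  run(task: string): Promise<string>;
}

function makeWorker(name: string, tools: Record<string, Tool>): Agent {
  const context: string[] = [];
  return {
    name,
    tools,
    context,
    async run(task) {
      context.push(task); // context is never shared across workers
      // A real agent would loop here: call the model, pick a tool,
      // observe the result, repeat until the subtask is done.
      return `${name}: done(${task})`;
    },
  };
}

// The orchestration layer: one specialized worker per role, all in parallel.
async function orchestrate(tasks: Record<string, string>): Promise<string[]> {
  return Promise.all(
    Object.entries(tasks).map(([role, task]) => makeWorker(role, {}).run(task))
  );
}
```

The key design choice visible in the leak is isolation: a planning agent's context does not pollute an exploration agent's, so each stays within its token budget.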

Unreleased features. This is where it gets interesting. The source revealed several features that haven't been announced.

One feature, codenamed KAIROS, outlines a persistent, always-on assistant that maintains cross-session memory, subscribes to PR updates, sends push notifications, and even runs a nightly "dreaming" process to consolidate what it has learned. Another, called ULTRAPLAN, offloads complex planning to a remote container with up to thirty minutes of dedicated thinking time.

And then there's Buddy. A full Tamagotchi-style companion pet system. Eighteen species including a capybara and a ghost. Rarity tiers. Cosmetic hats. Five personality stats. A deterministic gacha system. Apparently planned for an April teaser rollout.

The community moved fast

Within hours of the leak, the internet had already produced:

Archived repositories. Multiple GitHub repos preserving the full source, some with organized breakdowns of every module and tool definition. Over a thousand forks within the first day.

System prompt catalogs. Dedicated repos documenting every piece of the system prompt: tool descriptions, sub-agent instructions, utility prompts, and the XML-based constraints used to coordinate agent teams.

Interactive tools. Someone published an MCP server to npm that lets any compatible client explore the leaked source interactively. Another developer started a Python rewrite based on the architecture.

Architectural analysis. Blog posts breaking down the tool permission system, the agent spawning patterns, and how Claude Code manages context across parallel workers. The kind of documentation that would normally take months of reverse engineering.

What this actually means

Two things are true at the same time.

First, this is embarrassing for Anthropic. A company that positions itself around safety and careful AI development shipped a debugging artifact to a public package registry. The code itself doesn't contain model weights or training data. It's a client application. But the optics matter when trust is your brand.

Second, the leaked code is genuinely impressive. The level of engineering in the tool system, permission gates, and multi-agent orchestration is substantial. Developers studying this codebase are getting a master class in building production AI agents. The documentation the community is creating will influence how the next generation of coding tools gets built.

The Hacker News crowd was split. One camp argued the source code doesn't matter because the real value is the underlying model. "The source code of the slot machine is not relevant to the casino manager." The other camp pointed out that a company asking you to trust it with your codebase should be able to secure its own.

Both sides have a point. The model is the product. The client code is the delivery mechanism. But the delivery mechanism has access to your files, your terminal, and your git history. Knowing exactly how it works isn't trivial information.

For builders, the takeaway is practical. The best AI coding tools use patterns that are now fully documented in the open. Agent orchestration, tool permission systems, context management across parallel workers. These are hard problems with working solutions. The solutions are now public. Use them.
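As one illustration, a tool permission gate can be sketched in a handful of lines. The names and shapes here are invented, not lifted from the leaked source: each tool declares the permissions it needs, and a gate checks a user-granted allowlist before anything runs.

```typescript
// A hedged sketch of a tool permission gate (illustrative names only).
type Permission = "read" | "write" | "execute";

interface GatedTool {
  name: string;
  requires: Permission[];            // what the tool declares it needs
  run: (input: string) => string;
}

// Wrap a tool so it can only run if every required permission was granted.
function gate(tool: GatedTool, granted: Set<Permission>) {
  return (input: string): string => {
    const missing = tool.requires.filter((p) => !granted.has(p));
    if (missing.length > 0) {
      throw new Error(`${tool.name} denied: missing ${missing.join(", ")}`);
    }
    return tool.run(input);
  };
}

const readFile: GatedTool = {
  name: "ReadFile",
  requires: ["read"],
  run: (path) => `contents of ${path}`,
};
```

The point of the pattern is that the check lives in one place, outside every tool, so adding a forty-first tool cannot accidentally bypass it.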
