Anthropic has inadvertently exposed the complete client-side source code for Claude Code, its terminal-based AI coding assistant. The leak occurred through a misconfiguration in the tool's npm packaging.
The incident, first identified by an intern at Fuzzland, traces back to a large source map file shipped in the latest version of the `@anthropic-ai/claude-code` package. That file provided direct access to a public Cloudflare R2 bucket containing the full, unobfuscated TypeScript source code.
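The mechanics of this kind of leak follow directly from the Source Map v3 format: a map file carries a `sources` array naming the original files, and often a `sourcesContent` array embedding their full text verbatim. The sketch below shows how original sources can be recovered from such a map. The interface, file names, and contents here are invented for illustration and are not taken from the actual leaked package.

```typescript
// Sketch: how a source map can expose original, unminified sources.
// `sources` and `sourcesContent` are standard fields of the
// Source Map v3 format shipped alongside many JS bundles.

interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
}

// Recover every original file embedded in a source map.
function extractSources(map: SourceMapV3): Map<string, string> {
  const recovered = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (typeof content === "string") {
      recovered.set(path, content); // original source, byte for byte
    }
  });
  return recovered;
}

// Hypothetical example of a map like those shipped with minified bundles.
const exampleMap: SourceMapV3 = {
  version: 3,
  sources: ["src/cli.ts"],
  sourcesContent: ['export const banner = "hello";\n'],
};

const files = extractSources(exampleMap);
console.log(files.get("src/cli.ts"));
```

Even when `sourcesContent` is absent, the `sources` paths alone can reveal internal directory layout, and in this case they reportedly pointed at a publicly readable bucket hosting the files themselves.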
The exposed code comprises 1,906 files, detailing internal APIs, tool orchestration logic, and permission systems. It also reveals dozens of feature flags for unreleased capabilities.
Within hours, the code was mirrored on GitHub, de-minified, and analyzed by the community. Some developers are already working on cleanroom replicas.
What is Claude Code?
Claude Code is Anthropic's entry into the burgeoning field of agentic development tools. Launched in early 2025, it allows developers to issue natural language commands directly in the terminal to perform tasks like code editing, file manipulation, and shell command execution.
The tool is a significant revenue driver for Anthropic, reportedly accounting for a substantial portion of the company's annual revenue run-rate. Its deep integration into local development environments, including permission prompts and sandboxing, is now laid bare by the leak.
What the Leak Exposes
The leaked client-side code reveals the core logic of the agent. This includes how Anthropic implements tool execution for bash, file I/O, and new 'computer use' capabilities.
It also exposes the full permission approval and bypass flows, the system prompts governing safety behavior, and telemetry hooks. The long list of feature flags sketches out an apparent product roadmap.
Community analysis has uncovered hidden toggles for features like "undercover mode," voice interaction, and autonomous agent triggers.
This is not the first such incident: earlier versions of the package also shipped exposed source maps in February 2025.
The Irony of the Leak
Anthropic has long positioned itself as a safety-focused alternative to competitors like OpenAI. Its messaging emphasizes responsible AI development and constitutional training.
The company even developed a Claude Code Security module to assist enterprises in identifying AI security vulnerabilities. The current leak, stemming from a basic packaging error, stands in stark contrast to this carefully cultivated image.
The online reaction ranges from schadenfreude to excitement among developers eager to study the architecture of a production-grade AI coding agent.
The community consensus appears to be that the widespread access to Claude Code's architecture is ultimately beneficial.
The code is now permanently available, with GitHub forks proliferating rapidly.
For the broader industry, the incident highlights how even tightly guarded AI products can be exposed through their build and release pipelines.
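One standard mitigation for this class of packaging mistake is to whitelist exactly what gets published, so build artifacts like `.map` files never reach the registry. A minimal sketch of a `package.json` using npm's `files` field is shown below; the package name and paths are illustrative, not Anthropic's actual configuration. (Running `npm pack --dry-run` before publishing lists the tarball contents, which makes stray files easy to spot.)

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

With `files` set, npm includes only the listed paths (plus a few always-included files such as `package.json` and `README`), so an accidentally generated source map is excluded by default rather than shipped by default.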
The future of Claude Code and Anthropic's approach to openness remains to be seen.
