[ALERT]: Claude Code Source Code Leaked
Snaplyze Digest
GitHub Repos · Intermediate · 2 min read · Mar 31, 2026 (Updated Apr 2, 2026)

“46,000 developers starred a leaked Claude Code rewrite in 12 hours”

In Short

A developer claims to have ported Anthropic's Claude Code agent harness to Python in one night after the source was exposed on March 31, 2026. The repo hit 46,249 stars and 54,947 forks within hours (verified April 1, 2026 via GitHub API). Creator Sigrid Jin — profiled in the Wall Street Journal for using 25 billion Claude Code tokens — built it using oh-my-codex orchestration. The Python implementation mirrors Claude Code's command and tool architecture but is not yet a runtime-equivalent replacement. A Rust port is in progress on the dev/rust branch.

Tags: ai agents · llm · python · open-source
Why It Matters
The practical pain point this digest is really about.

You know that feeling when you want to understand how a proprietary AI agent system wires its tools, orchestrates tasks, and manages context — but the source is closed? Claude Code users have built workflows around it without visibility into the harness architecture. This rewrite gives you a Python reference implementation that exposes the command routing, tool wiring, and runtime patterns that were previously opaque.

How It Works
The mechanism, architecture, or workflow behind it.

The Python port mirrors Claude Code's architecture through metadata shims. Load the manifest with `python3 -m src.main manifest` to see 70+ Python modules organized into subsystems like assistant, bootstrap, bridge, cli, commands, components, and tools. Run `python3 -m src.main commands` to list mirrored command entries. Run `python3 -m src.main tools` to see tool inventories. The CLI lets you route prompts across command/tool registries, bootstrap sessions, and run turn loops — but these are orchestration shims, not a working AI agent. You still need an LLM backend.
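The command/tool registry pattern described above can be sketched in a few lines. This is an illustrative assumption, not the repo's actual API: the names `CommandRegistry`, `register`, and `route` are hypothetical, and the real port organizes these across 70+ modules.

```python
# Hypothetical sketch of the command-registry shim pattern the port mirrors.
# CommandRegistry, register, and route are illustrative names, NOT the
# actual interface of the leaked rewrite.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Command:
    name: str
    description: str
    handler: Callable[[str], str]


class CommandRegistry:
    """Maps command names to handlers, as a CLI like `src.main` might."""

    def __init__(self) -> None:
        self._commands: Dict[str, Command] = {}

    def register(self, name: str, description: str):
        def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
            self._commands[name] = Command(name, description, fn)
            return fn
        return decorator

    def route(self, name: str, arg: str) -> str:
        if name not in self._commands:
            raise KeyError(f"unknown command: {name}")
        return self._commands[name].handler(arg)

    def list_commands(self) -> List[str]:
        return sorted(self._commands)


registry = CommandRegistry()


@registry.register("manifest", "List workspace subsystems")
def manifest(_arg: str) -> str:
    # Subsystem names taken from the digest text above.
    return "assistant, bootstrap, bridge, cli, commands, components, tools"


print(registry.list_commands())
print(registry.route("manifest", ""))
```

Routing prompts through a registry like this is pure orchestration: nothing here talks to a model, which is why the digest stresses that the shims alone are not a working agent.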

Key Takeaways
6 fast bullets that make the core value obvious.
  • Manifest CLI — why YOU care: Run `python3 -m src.main manifest` to see the full Python workspace structure with 70+ modules across subsystems like assistant, bootstrap, bridge, cli, commands, and tools. Gives you visibility into the workspace layout at a glance.
  • Command/Tool registries — why YOU care: Browse mirrored command and tool inventories that Claude Code uses internally. Run `python3 -m src.main commands --limit 20` or `python3 -m src.main tools --limit 20` to see what the harness exposes.
  • Parity audit — why YOU care: Run `python3 -m src.main parity-audit` to compare the Python workspace against the archived TypeScript snapshot. Shows you what's been ported and what's still missing.
  • Turn loop simulation — why YOU care: Run `python3 -m src.main turn-loop "your prompt" --max-turns 3` to simulate a multi-turn agent loop. Note: this is orchestration only, not a working agent without an LLM backend.
  • Rust port in progress — why YOU care: The dev/rust branch contains an active Rust rewrite aiming for faster, memory-safe execution. If you prefer systems languages, this may become the definitive version.
  • Reference data snapshots — why YOU care: The src/reference_data/ directory contains JSON snapshots of tools and commands from the archived source. You can study these to understand Claude Code's tool surface without accessing the original codebase.
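The turn-loop simulation in the bullets above can be sketched as a loop that delegates to a pluggable backend. Everything here is a hypothetical illustration of the pattern, not the repo's interface: `run_turn_loop`, the `(role, text)` history format, and the `[done]` stop marker are all assumptions, and the stub backend stands in for the LLM you would need to wire in.

```python
# Hypothetical turn-loop shim: orchestration only, no model attached.
# run_turn_loop and the [done] stop marker are illustrative assumptions,
# not the actual interface of `python3 -m src.main turn-loop`.
from typing import Callable, List, Tuple

Turn = Tuple[str, str]  # (role, text)


def run_turn_loop(
    prompt: str,
    backend: Callable[[List[Turn]], str],
    max_turns: int = 3,
) -> List[Turn]:
    """Drive a user/assistant exchange for up to max_turns turns."""
    history: List[Turn] = [("user", prompt)]
    for _ in range(max_turns):
        reply = backend(history)           # delegate to whatever LLM you wire in
        history.append(("assistant", reply))
        if reply.endswith("[done]"):       # stop condition is a stand-in
            break
        history.append(("user", "continue"))
    return history


def stub_backend(history: List[Turn]) -> str:
    """Stub in place of a real LLM: counts turns, terminates on the second."""
    turns = sum(1 for role, _ in history if role == "assistant")
    return f"turn {turns + 1} [done]" if turns >= 1 else f"turn {turns + 1}"


transcript = run_turn_loop("hello", stub_backend, max_turns=3)
```

Swapping `stub_backend` for a real model client is exactly the gap the digest flags: the port ships the loop, not the intelligence.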
Should You Care?
Audience fit, decision signal, and the original source in one place.

Who It Is For

If you're a developer curious about agent harness architecture, tool wiring patterns, or how production AI systems orchestrate commands — this gives you a reference implementation to study. Not useful if you need a working AI agent today (you still need an LLM backend). Not production-ready — the README explicitly states it's 'not yet a complete one-to-one replacement for the original system.'

Worth Exploring?

Worth exploring if you study agent architectures or want to understand Claude Code's design patterns. The Python code is readable and well-organized. However, understand the legal/ethical context: this is based on exposed proprietary source, and the creator acknowledges concerns about whether 'legal is the same as legitimate.' The project is hours old, experimental, and not production-ready. Check the included essay on AI reimplementation ethics before diving deep.
