“46,000 developers starred a leaked Claude Code rewrite in 12 hours”
A developer claims to have ported Anthropic's Claude Code agent harness to Python in one night after the source was exposed on March 31, 2026. The repo hit 46,249 stars and 54,947 forks within hours (verified April 1, 2026 via GitHub API). Creator Sigrid Jin — profiled in the Wall Street Journal for using 25 billion Claude Code tokens — built it using oh-my-codex orchestration. The Python implementation mirrors Claude Code's command and tool architecture but is not yet a runtime-equivalent replacement. A Rust port is in progress on the dev/rust branch.
You know that feeling when you want to understand how a proprietary AI agent system wires its tools, orchestrates tasks, and manages context — but the source is closed? Claude Code users have built workflows around it without visibility into the harness architecture. This rewrite gives you a Python reference implementation that exposes the command routing, tool wiring, and runtime patterns that were previously opaque.
The Python port mirrors Claude Code's architecture through metadata shims. Load the manifest with `python3 -m src.main manifest` to see 70+ Python modules organized into subsystems like assistant, bootstrap, bridge, cli, commands, components, and tools. Run `python3 -m src.main commands` to list mirrored command entries. Run `python3 -m src.main tools` to see tool inventories. The CLI lets you route prompts across command/tool registries, bootstrap sessions, and run turn loops — but these are orchestration shims, not a working AI agent. You still need an LLM backend.
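The registry-and-dispatch pattern those shims expose can be sketched in a few lines of Python. This is an illustrative assumption about the general architecture described above, not the repo's actual code; names like `Tool`, `ToolRegistry`, `register`, and `dispatch` are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of a tool registry with inventory and dispatch,
# in the spirit of the mirrored command/tool architecture.

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[str], str]  # takes a prompt fragment, returns a result

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        """Add a tool under its name, overwriting any previous entry."""
        self._tools[tool.name] = tool

    def inventory(self) -> List[str]:
        """Sorted tool names, analogous to a `tools` listing subcommand."""
        return sorted(self._tools)

    def dispatch(self, name: str, arg: str) -> str:
        """Route an argument to the named tool's handler."""
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name].handler(arg)

registry = ToolRegistry()
registry.register(Tool("echo", "repeat the input", lambda s: s))
registry.register(Tool("upper", "uppercase the input", str.upper))

print(registry.inventory())                  # ['echo', 'upper']
print(registry.dispatch("upper", "hello"))   # HELLO
```

The key design point, which the port makes visible, is that commands and tools are plain data in a registry that a turn loop routes through; without an LLM backend deciding which tool to call, the loop is just plumbing.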
If you're a developer curious about agent harness architecture, tool wiring patterns, or how production AI systems orchestrate commands — this gives you a reference implementation to study. Not useful if you need a working AI agent today (you still need an LLM backend). Not production-ready — the README explicitly states it's "not yet a complete one-to-one replacement for the original system."
Worth exploring if you study agent architectures or want to understand Claude Code's design patterns. The Python code is readable and well-organized. However, understand the legal/ethical context: this is based on exposed proprietary source, and the creator acknowledges concerns about whether "legal is the same as legitimate." The project is hours old, experimental, and not production-ready. Check the included essay on AI reimplementation ethics before diving deep.