“There are some legitimate concerns about Warp throughout these comments (telemetry, business model, etc.). But the one thing that really excites me is to have a full team working full-time on building the terminal that developers want to use.” — user kyeb, HN thread id=30921231
You know that feeling when you run a long command, the output scrolls past, and you have to hunt through raw scrollback text to find the error that matters? Traditional terminals treat all output as an undifferentiated stream of characters — you cannot click a command's output, copy just that block, or give an AI agent a clean unit to act on. Every workaround (piping to grep, tee to a file, tmux splits) adds friction without solving the underlying model. When AI coding agents enter the picture, the problem compounds: agents scraping raw scrollback text make mistakes that agents reading structured output would not.
Warp is written in Rust (98.2% of the codebase) and uses an Alacritty-derived terminal parsing engine with Tokio for async I/O and FontKit for rendering. The defining architectural decision is Blocks: each command and its output is wrapped into a discrete, selectable, filterable unit stored as a structured document rather than appended to a scrollback buffer. This makes output machine-readable by design. The Oz cloud platform extends Blocks into an agent runtime: multiple AI coding agents (Claude Code, Codex, Gemini CLI, or Oz's own) can each operate on their own Block thread in parallel, with full auditability. The UI framework crates (warpui_core, warpui) carry the MIT license; the rest of the client is AGPL-3.0; the Oz platform is fully proprietary.
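To make the Block idea concrete, here is a minimal Rust sketch of a session that stores each command and its output as a discrete, filterable unit instead of flat scrollback. All names (`Block`, `Session`, `failures`) are illustrative assumptions, not Warp's actual internals:

```rust
// Hypothetical sketch of the Block model: each finished command becomes one
// structured, addressable unit rather than lines appended to a scrollback.
#[derive(Debug)]
struct Block {
    id: usize,
    command: String,
    output: String,
    exit_code: i32,
}

struct Session {
    blocks: Vec<Block>,
}

impl Session {
    fn new() -> Self {
        Session { blocks: Vec::new() }
    }

    // Record a finished command as one structured unit.
    fn push(&mut self, command: &str, output: &str, exit_code: i32) {
        let id = self.blocks.len();
        self.blocks.push(Block {
            id,
            command: command.to_string(),
            output: output.to_string(),
            exit_code,
        });
    }

    // What a flat character stream cannot offer: filter down to just the
    // failing blocks, giving a human (or an AI agent) clean units to act on.
    fn failures(&self) -> impl Iterator<Item = &Block> {
        self.blocks.iter().filter(|b| b.exit_code != 0)
    }
}

fn main() {
    let mut session = Session::new();
    session.push("cargo build", "Compiling app v0.1.0\nFinished dev target", 0);
    session.push("cargo test", "test auth::login ... FAILED", 101);

    for block in session.failures() {
        println!("block {}: `{}` exited {}", block.id, block.command, block.exit_code);
    }
}
```

The point of the sketch is the data model, not the rendering: once output lives in discrete blocks, "copy this command's output" or "hand the failing block to an agent" become trivial lookups instead of scrollback archaeology.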
Software engineers who spend significant time in the terminal and want structured output they can navigate, copy, and feed to AI coding agents. Particularly relevant for teams already using Claude Code, Codex, or Gemini CLI who want a terminal purpose-built to work with those tools. Not a fit for engineers working in air-gapped or offline environments (Warp requires network access to function fully), for companies whose legal team rules out AGPL-3.0 in any internal tooling, or for developers who consider mandatory telemetry a non-starter — the history of network calls to segment.io and sentry...
Worth a personal install and one-week trial if you actively use AI coding agents: the Block model gives those agents cleaner output to parse, and the IDE-like input is a measurable improvement over raw shell prompts. For team or org-wide adoption, clear the AGPL-3.0 licensing implications with your legal team first; internal use may or may not trigger open-source obligations depending on your jurisdiction and deployment model. The ratio of 35 contributors to 3,248 open issues signals a maintainer team that is thin relative to demand, so expect slower issue resolution than a larger open-source project would offer. The Oz platform has no public pricing as of April 30, 2026, so the full cost of the agentic ti...