"The world's first AI assistant (OpenClaw) on a $5 chip. No Linux. No Node.js. Just pure C" — ssslvky1, Show HN post objectID 46925183 (2026-02-07)
You know that feeling when you want an always-on AI assistant but every option requires a VPS, a Raspberry Pi running 24/7, or a laptop left on overnight — all to serve something idle 99% of the time? Running agents on existing platforms means ongoing cloud costs, OS maintenance, or noisy home hardware. You want something that sits on a shelf, draws 0.5W from a USB port, and handles Telegram messages whenever they arrive — without a server to maintain.
You flash a pre-built binary onto an ESP32-S3 chip with one command, then configure your WiFi password, Telegram bot token, and AI API key through a serial terminal. The chip runs two parallel tasks: one core handles all network I/O (polling Telegram, sending replies, serving a WebSocket gateway) while the other runs the AI agent loop. When a Telegram message arrives, it is queued, the agent sends it to Claude or GPT over HTTPS, and the model can call tools (web search, get the current time, read or write files) up to 10 times before returning a final response. All state lives as plain markdown files on the chip's 12 MB SPIFFS flash partition: SOUL.md sets the personality, MEMORY.md holds long-term facts, and session history is stored as one JSONL file per chat.
If you are an embedded or firmware engineer curious about running LLM agent patterns on bare-metal hardware, this is a working reference you can flash and study today. If you want a self-hosted always-on AI assistant and you own or are willing to buy an ESP32-S3 dev board with 8 MB PSRAM, this is the cheapest path to zero-ongoing-cost operation. Not for you if you need production reliability, team access, or security hardening — the project explicitly has no Telegram auth and 94 open issues as of 2026-05-07.
Worth exploring as a reference implementation and weekend hardware project if you already own an ESP32-S3 board with 8 MB PSRAM. The dual-core architecture and markdown-file memory model are clean design choices worth studying. Do not deploy it in any shared or semi-public setting without adding Telegram auth first — the project's own TODO.md flags that anyone who finds your bot token can consume your API credits with zero friction.