GitHub Repos · intermediate · 3 min read · May 7, 2026

MimiClaw: Full LLM agent loop on a $10 chip — no Linux, 1.18 MB

“A ReAct AI agent with web search, cron scheduling, and persistent memory — running 24/7 at 0.5W on a $10 chip, no Linux and no monthly bill.”

Source · github.com

“The world's first AI assistant (OpenClaw) on a $5 chip. No Linux. No Node.js. Just pure C” — ssslvky1, Show HN post objectID 46925183 (2026-02-07)

You know that feeling when you want an always-on AI assistant but every option requires a VPS, a Raspberry Pi running 24/7, or a laptop left on overnight — all to serve something idle 99% of the time? Running agents on existing platforms means ongoing cloud costs, OS maintenance, or noisy home hardware. You want something that sits on a shelf, draws 0.5W from a USB port, and handles Telegram messages whenever they arrive — without a server to maintain.

embedded · edge-ai · esp32 · llm-agents · open-source · c · telegram

You flash a pre-built binary onto an ESP32-S3 chip with one command, then configure your WiFi password, Telegram bot token, and AI API key through a serial terminal. The chip runs two parallel tasks: one core handles all network I/O — polling Telegram, sending replies, serving a WebSocket gateway — while the other runs the AI agent loop. When a Telegram message arrives, it queues up, the agent sends it to Claude or GPT via HTTPS, and the model can call tools (web search, get current time, read or write files) up to 10 times before returning a final response. All state lives as plain markdown files on the chip's 12 MB SPIFFS flash partition — SOUL.md sets the personality, MEMORY.md holds long-term facts, and session history is stored as JSONL per chat.

01. ReAct agent loop with tool calling — the AI can search the web, check the time, and read or write files across up to 10 chained steps per message, handling multi-step tasks without you supervising each step
02. Persistent markdown memory — SOUL.md, USER.md, and MEMORY.md store personality and long-term facts as plain text on flash; you read and edit them directly via serial CLI with no opaque database to debug
03. Dual-core task split — Core 0 handles all network I/O and Core 1 runs the agent loop, separated by FreeRTOS queues so a slow LLM response does not block incoming messages from being received
04. Telegram interface with WiFi captive-portal onboarding — interact from any phone without installing an app; initial WiFi setup works through a browser without requiring a serial terminal connection
05. Cron scheduling built into firmware — the agent can schedule its own future tasks, and a heartbeat service periodically prompts the AI to act on items listed in HEARTBEAT.md
06. OTA firmware updates over WiFi — push firmware updates without physical access once the device is deployed on a shelf or in a cabinet
07. Runtime provider switching — toggle between Anthropic Claude and OpenAI GPT without reflashing; added in v0.1.1 via PR#27
Who it’s for

If you are an embedded or firmware engineer curious about running LLM agent patterns on bare-metal hardware, this is a working reference you can flash and study today. If you want a self-hosted always-on AI assistant and you own or are willing to buy an ESP32-S3 dev board with 8 MB PSRAM, this is the cheapest path to zero-ongoing-cost operation. Not for you if you need production reliability, team access, or security hardening — the project explicitly has no Telegram auth and 94 open issues as of 2026-05-07.

Worth exploring

Worth exploring as a reference implementation and weekend hardware project if you already own an ESP32-S3 board with 8 MB PSRAM. The dual-core architecture and markdown-file memory model are clean design choices worth studying. Do not deploy it in any shared or semi-public setting without adding Telegram auth first — the project's own TODO.md flags that anyone who finds your bot token can consume your API credits with zero friction.
