GitHub Repos · Intermediate · 3 min read · May 13, 2026

OpenHuman: A Rust Agent that Builds a Local Memory Tree

“This Rust desktop agent connects to 118+ of your apps, syncs every 20 minutes, and stores your entire work memory as plain text files on your machine—no cloud, no vector database, no lock-in.”

Source · github.com

“"OpenHuman is not AGI. But it is a meaningful architectural step closer, with better memory, better orchestration, and better tooling." — tinyhumans.gitbook.io/openhuman (verified 2026-05-13)”

You know that feeling when you ask an AI chatbot about something from last month's Slack thread and it has no idea what you are talking about? Every new chat starts from zero—no memory of your projects, your emails, your GitHub issues, or your notes. You spend the first 10–30 minutes of every complex session copy-pasting context before the AI can help. The information already exists across your tools, but no AI can access it unless you manually paste it in every time.

ai-agents · rust · local-first · personal-ai · desktop-app · memory · open-source

OpenHuman runs a 6-stage pipeline every 20 minutes: it pulls data from your connected apps via OAuth, breaks each piece into ≤3k-token Markdown chunks with content-addressed IDs (so duplicates never enter twice), stores them in a local SQLite database, runs scoring and entity extraction asynchronously in background workers, and builds three layers of summaries—per-source (one per Gmail label or Slack channel), per-topic (built lazily by entity frequency), and a single global daily digest.

When you ask a question, the retrieval layer queries this local store, runs the matching chunks through TokenJuice (which strips HTML to Markdown and shortens URLs), and injects the compressed result into your LLM's context. The hot path from ingestion through scoring never calls an LLM, keeping the UI responsive while heavy work (embeddings, summarization) runs in the background.
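To make the deduplication step concrete, here is a minimal Rust sketch of content-addressed ingestion. It assumes a hash-of-content ID and an in-memory seen-set; the names (`MemoryChunk`, `ingest`) and the non-cryptographic `DefaultHasher` are illustrative stand-ins, not OpenHuman's actual types—a real store would persist IDs in SQLite and likely use a cryptographic hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Illustrative stand-in for one ≤3k-token Markdown chunk.
struct MemoryChunk {
    id: u64,          // content-addressed: derived from the bytes themselves
    markdown: String, // normalized Markdown body
}

/// Derive the chunk ID from its content, so identical content always
/// maps to the same ID. (Assumption: the real project would use a
/// cryptographic hash rather than DefaultHasher.)
fn content_id(markdown: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    markdown.hash(&mut hasher);
    hasher.finish()
}

/// Ingest a chunk only if its content hash is unseen; a duplicate
/// pulled again on the next 20-minute sync is silently skipped.
fn ingest(seen: &mut HashSet<u64>, markdown: String) -> Option<MemoryChunk> {
    let id = content_id(&markdown);
    if !seen.insert(id) {
        return None; // already stored: duplicates never enter twice
    }
    Some(MemoryChunk { id, markdown })
}

fn main() {
    let mut seen = HashSet::new();
    let first = ingest(&mut seen, "## Standup notes\n- shipped v0.53".into());
    let second = ingest(&mut seen, "## Standup notes\n- shipped v0.53".into());
    assert!(first.is_some());
    assert!(second.is_none()); // exact re-sync is deduplicated
    println!("stored {} chunk(s)", seen.len());
}
```

Because the ID is a pure function of the chunk's bytes, re-pulling the same Gmail thread on the next sync yields the same ID and is skipped without any LLM call.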

01 · 118+ OAuth integrations — connects Gmail, GitHub, Slack, Notion, Stripe, Jira, Linear, Calendar, Drive, and 109 other services via one-click OAuth; auto-refreshes every 20 minutes so your AI has today's context without any manual input from you
02 · Memory Tree with deterministic chunking — ingests data as ≤3k-token Markdown chunks with content-addressed IDs stored in SQLite; the leaf lifecycle state machine (pending_extraction → admitted/dropped → buffered → sealed) means dropped chunks… (a sketch of the state machine follows this list)
03 · TokenJuice compression — strips HTML to Markdown and shortens URLs before any chunk enters the LLM context; the README claims "up to 80%" cost and latency reduction, making six months of email analysis cost single-digit dollars per the project's estimate
04 · Obsidian-compatible local vault — everything lands as plain .md files at ~/.openhuman/wiki/ so you can open, edit, or delete your memory in Obsidian, VS Code, or any text editor without touching the app itself
05 · Model routing — dispatches each task to the right LLM using reasoning, fast, or vision hints; supports 15+ providers plus Ollama for fully on-device processing with zero cloud calls (a minimal routing sketch also follows the list)
06 · Google Meet virtual camera (v0.53.22) — joins meetings as a virtual participant, takes live notes, and surfaces relevant memory during the call; added in the 2026-05-09 release with 69 merged pull requests
Who it’s for

You are a knowledge worker—engineer, researcher, consultant—who juggles Gmail, Slack, GitHub, and Notion daily and spends 10–30 minutes per AI session re-explaining context that already exists in your tools. OpenHuman is also worth studying if you are building local-first LLM infrastructure and want a Rust reference implementation for deterministic memory pipelines. This is not the right tool yet if you need mobile access, real-time sync (the hard polling floor is 20 minutes), or plan to embed this in a commercial product—GPL-3.0 requires you to release your own source code under the same license.

Worth exploring

Worth installing if you want to study a non-RAG memory architecture or evaluate local-first AI agent design in Rust—the 6-stage Memory Tree is a concrete reference implementation. Do not ship this in a production environment today: the project self-describes as Early Beta, issue #1595 is an active Rust panic on non-ASCII content in the memory ingestion core, and the gap between 3,336 GitHub stars and a 2-point Hacker News reception suggests the project has not yet faced serious peer engineering scrutiny.
