Letta (formerly MemGPT) is an agent framework with structured memory tiers — built for shipping customer-facing AI products. SynaBun is an MCP toolkit with 106 tools — built to plug into AI coding agents like Claude Code, Codex, OpenCode, and Gemini. Adjacent territory, different jobs.
| Feature | SynaBun | Letta (MemGPT) |
|---|---|---|
| Primary use case | MCP toolkit for AI coding agents | Agent framework for AI products |
| MCP tools | 106 (native) | 4 (via wrapper) |
| Memory architecture | Categorical + importance + recency | Structured blocks (core/archival/recall) |
| Default embedding | all-MiniLM-L6-v2 (local) | OpenAI / configurable |
| Embedding latency (p50) | ~12ms | ~240ms (cloud) / ~30-50ms (local) |
| End-to-end recall p50 | 17ms | 110ms (self-hosted) |
| Storage backend | SQLite + sqlite-vec | Postgres + pgvector |
| Runs offline (default) | Yes | Self-hosted only, requires local LLM |
| Self-hosted setup | `npm install -g synabun` | Docker Compose (Postgres + API + runtime) |
| Built-in agent SDK | No (uses host AI client) | Yes |
| Multi-tenant by design | No (single-user) | Yes |
| Browser automation | 38 tools | None |
| Social media extraction | 6 platforms | None |
| Visual whiteboard | Yes | None |
| Autonomous loops | Yes (cron) | Agent runtime loop |
| 3D memory visualization | Yes | No |
| Claude Code lifecycle hooks | 7 hooks | None |
| Native MCP server | Yes | Wrapper |
| Sidepanel support (Claude/Codex/OpenCode) | Yes | No |
| Managed cloud option | No | Yes (Letta Cloud, paid) |
| License | Apache 2.0 (no commercial fork) | Apache 2.0 + commercial cloud |
| Research lineage | Built for MCP era (2025-2026) | MemGPT paper (UC Berkeley, 2023) |
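As one illustration of the "categorical + importance + recency" row above, a recall ranking might blend vector similarity, the stored 1-10 importance score, and exponential recency decay. The weights and half-life below are assumptions for illustration, not SynaBun's actual values:

```typescript
// Illustrative ranking for a categorical + importance + recency memory store.
// Weights and the 30-day half-life are assumptions, not SynaBun's actual values.
interface Entry {
  similarity: number; // similarity from vector search, 0..1
  importance: number; // stored importance score, 1..10
  ageDays: number;    // days since the memory was last touched
}

function rankScore(e: Entry, halfLifeDays = 30): number {
  // Exponential decay: a memory loses half its recency weight every half-life.
  const recency = Math.pow(0.5, e.ageDays / halfLifeDays);
  return 0.6 * e.similarity + 0.25 * (e.importance / 10) + 0.15 * recency;
}
```

With a blend like this, a highly important but old memory can still outrank a fresh low-importance one when similarity is close.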
Structured memory model. The core/archival/recall split is genuinely well-designed. Core memory holds persona and preferences (always in context). Archival memory holds long-term facts (vector-searched). Recall memory holds chat history (filtered). The agent itself promotes and demotes memories between tiers. For products that need "the AI remembers who I am," this is the right model.
Agent framework. Letta is not just memory — it is a full agent runtime. If you are building a customer-facing AI product (support agent, sales copilot, in-app assistant), Letta gives you the agent loop, the memory model, and the SDK in one package.
Multi-tenant by design. Letta is built to host many agents serving many users. SynaBun assumes a single developer on a single machine. For multi-tenant products, Letta is the right architecture.
Research lineage. The original MemGPT paper has been cited extensively. The structured memory ideas are battle-tested and well-understood.
MCP-native. SynaBun was built as an MCP server from day one. 106 tools exposed natively over the MCP protocol. Letta has MCP support but it is a wrapper around the agent framework — the abstraction layer adds friction.
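The difference is visible at the protocol level: MCP exposes tools through JSON-RPC methods such as `tools/list` and `tools/call`, and a native server answers them directly rather than translating through a framework. A stripped-down dispatch sketch (the `remember` handler body is a stub):

```typescript
// Stripped-down sketch of native MCP tool dispatch. tools/list and tools/call
// are real MCP method names; the `remember` handler body here is a stub.
type ToolHandler = (args: Record<string, unknown>) => unknown;

const tools = new Map<string, { description: string; handler: ToolHandler }>();

tools.set("remember", {
  description: "Store a memory entry",
  handler: (args) => ({ stored: args.text }),
});

function handleRequest(method: string, params: Record<string, unknown> = {}): unknown {
  if (method === "tools/list") {
    // A native server enumerates its tools directly; no wrapper translation.
    return {
      tools: Array.from(tools.entries(), ([name, t]) => ({
        name,
        description: t.description,
      })),
    };
  }
  if (method === "tools/call") {
    const tool = tools.get(params.name as string);
    if (!tool) throw new Error(`unknown tool: ${params.name}`);
    return tool.handler((params.arguments ?? {}) as Record<string, unknown>);
  }
  throw new Error(`unsupported method: ${method}`);
}
```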
Tool surface. Browser automation (38 Playwright tools), social media extraction (6 platforms), 3D whiteboard, Claude Code hooks, Discord bots, Universal MCP Management. Letta's tool surface is smaller and centered on the memory model.
Latency. Local SQLite + local embeddings beat self-hosted Postgres + cloud embeddings by roughly 5-10x on the paths measured above (17ms vs 110ms end-to-end recall p50). For interactive coding sessions, this matters.
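Latency claims like these are easy to reproduce for your own setup. A minimal p50 harness, with nothing SynaBun- or Letta-specific in it:

```typescript
// Minimal p50 latency harness; nothing here is SynaBun- or Letta-specific.
function p50(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// Time an async operation (e.g. one recall round-trip) over `runs` iterations.
async function benchmark(op: () => Promise<unknown>, runs = 100): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await op();
    samples.push(performance.now() - start);
  }
  return p50(samples);
}
```

Point `benchmark` at one recall round-trip against each stack and compare the returned medians.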
Setup simplicity. One npm command vs a Docker Compose stack with Postgres + API + agent runtime. SynaBun is plug-and-play; Letta is "spin up the stack".
Developer-first ergonomics. SynaBun is built for the workflow of a developer using Claude Code. Auto-recall on session start. Auto-save on session end. Categorical organization that maps to projects + topics. Letta is built for AI products — its ergonomics are designed for product engineers integrating AI features, not developers using AI to write code.
Pick Letta if:
- you are building a customer-facing AI product
- you need structured memory tiers
- you want a multi-tenant agent framework
- your product needs an agent loop with built-in tool-calling + memory + persistence
- you want a managed cloud service to host the agents
Pick SynaBun if:
- you are a solo developer or small team using AI coding tools daily
- you want one MCP install for memory + browser + social + loops
- latency matters in your daily workflow
- you want a fully local stack
- you want first-class Claude Code/Codex/OpenCode/Gemini integration
Yes — and unlike SynaBun + Mem0, this combination makes some sense. Letta can serve as the agent framework for a customer-facing product, while SynaBun provides developer-side memory + tooling for the team building that product. The two memory stores stay separate (developer memory is not the same as product memory).
That said, most teams will end up picking one. Running both is operationally heavier than the value usually warrants.
Migrating from Letta to SynaBun: export archival memories via Letta's API → re-embed with all-MiniLM-L6-v2 → import via SynaBun's remember tool. Core memory does not have a clean SynaBun equivalent — Letta's "always-in-context" persona block is closer to a CLAUDE.md file than a memory entry. SynaBun's importance scoring (10 = foundational) is the closest mapping.
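The three migration steps can be sketched as a pipeline. All three function parameters are hypothetical stand-ins: a Letta archival export call, a local all-MiniLM-L6-v2 embedder, and SynaBun's remember tool:

```typescript
// Hedged migration sketch. All three parameters are hypothetical stand-ins:
// a Letta archival export, a local all-MiniLM-L6-v2 embedder, and SynaBun's
// remember tool. Error handling and batching are omitted.
interface ArchivalEntry {
  text: string;
}

async function migrate(
  exportArchival: () => Promise<ArchivalEntry[]>,
  embedLocal: (text: string) => Promise<number[]>,
  remember: (text: string, embedding: number[], importance: number) => Promise<void>,
): Promise<number> {
  const entries = await exportArchival();
  for (const entry of entries) {
    // Re-embed: vectors from Letta's embedding model are not compatible with
    // a different local model, so each entry must be embedded again.
    const embedding = await embedLocal(entry.text);
    // No clean core-memory equivalent exists, so everything lands in regular
    // memory at a default importance (foundational facts would merit 10).
    await remember(entry.text, embedding, 5);
  }
  return entries.length;
}
```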
One command. SQLite + local embeddings. 106 MCP tools.