SynaBun vs Letta (MemGPT)

MCP Toolkit vs. Agent Framework.

Letta (formerly MemGPT) is an agent framework with structured memory tiers — built for shipping customer-facing AI products. SynaBun is an MCP toolkit with 106 tools — built to plug into AI coding agents like Claude Code, Codex, OpenCode, and Gemini. Adjacent territory, different jobs.

| Feature | SynaBun | Letta (MemGPT) |
| --- | --- | --- |
| Primary use case | MCP toolkit for AI coding agents | Agent framework for AI products |
| MCP tools | 106 (native) | 4 (via wrapper) |
| Memory architecture | Categorical + importance + recency | Structured blocks (core/archival/recall) |
| Default embedding | all-MiniLM-L6-v2 (local) | OpenAI / configurable |
| Embedding latency (p50) | ~12ms | ~240ms (cloud) / ~30-50ms (local) |
| End-to-end recall (p50) | 17ms | 110ms (self-hosted) |
| Storage backend | SQLite + sqlite-vec | Postgres + pgvector |
| Runs offline by default | Yes | Self-hosted only; requires a local LLM |
| Self-hosted setup | `npm install -g synabun` | Docker Compose (Postgres + API + runtime) |
| Built-in agent SDK | No (uses host AI client) | Yes |
| Multi-tenant by design | No (single-user) | Yes |
| Browser automation | 38 tools | None |
| Social media extraction | 6 platforms | None |
| Visual whiteboard | Yes | None |
| Autonomous loops | Yes (cron) | Agent runtime loop |
| 3D memory visualization | Yes | No |
| Claude Code lifecycle hooks | 7 hooks | None |
| Native MCP server | Yes | Wrapper |
| Sidepanel support (Claude/Codex/OpenCode) | Yes | No |
| Managed cloud option | No | Yes (Letta Cloud, paid) |
| License | Apache 2.0 (no commercial fork) | Apache 2.0 + commercial cloud |
| Research lineage | Built for the MCP era (2025-2026) | MemGPT paper (UC Berkeley, 2023) |
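The "categorical + importance + recency" row can be made concrete with a small ranking sketch. Everything below is illustrative: the field names, weights, and half-life are our assumptions, not SynaBun's actual scoring implementation.

```typescript
// Illustrative only: how a categorical + importance + recency ranker
// might blend signals. Weights and fields are assumptions, not
// SynaBun's real internals.
interface Memory {
  category: string;   // e.g. "project:synabun" or "topic:testing"
  importance: number; // 1-10, where 10 = foundational
  savedAt: Date;
}

// Exponential recency decay: a memory loses half its freshness
// every `halfLifeDays` days.
function recencyScore(savedAt: Date, now: Date, halfLifeDays = 30): number {
  const ageDays = (now.getTime() - savedAt.getTime()) / 86_400_000;
  return Math.pow(0.5, ageDays / halfLifeDays);
}

// Blend vector similarity with importance and recency; boost memories
// whose category matches the active project.
function rank(
  similarity: number, // cosine similarity from vector search, 0..1
  mem: Memory,
  activeCategory: string,
  now = new Date(),
): number {
  const categoryBoost = mem.category === activeCategory ? 1.2 : 1.0;
  return (
    categoryBoost *
    (0.6 * similarity +
      0.25 * (mem.importance / 10) +
      0.15 * recencyScore(mem.savedAt, now))
  );
}
```

With this shape, a six-month-old note ranks below an identical fresh one even when the vector similarity is the same, which is the behavior the table's "recency" column implies.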

Where Letta wins

Structured memory model. The core/archival/recall split is genuinely well-designed. Core memory holds persona + preferences (always in context). Archival holds long-term facts (vector-searched). Recall holds chat history (filtered). The agent itself promotes/demotes memories between tiers. For products that need "the AI remembers who I am" this is the right model.
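The tier mechanics above can be sketched in a few lines. This is a minimal illustration of the core/archival/recall split, not Letta's API: the class, method names, and the substring "search" standing in for vector search are all ours.

```typescript
// Illustrative sketch of a three-tier memory model.
// Names are ours, not Letta's API.
type Tier = "core" | "archival" | "recall";

interface MemoryBlock {
  text: string;
  tier: Tier;
}

class TieredMemory {
  private blocks: MemoryBlock[] = [];

  add(text: string, tier: Tier = "recall"): MemoryBlock {
    const block = { text, tier };
    this.blocks.push(block);
    return block;
  }

  // Core memory is small and always included in the prompt.
  coreContext(): string[] {
    return this.blocks.filter((b) => b.tier === "core").map((b) => b.text);
  }

  // Stand-in for vector search over archival memory: a plain
  // substring match marks the spot.
  searchArchival(query: string): MemoryBlock[] {
    return this.blocks.filter(
      (b) => b.tier === "archival" && b.text.includes(query),
    );
  }

  // The agent promotes chat history into archival, and durable
  // facts from archival into always-in-context core.
  promote(block: MemoryBlock): void {
    if (block.tier === "recall") block.tier = "archival";
    else if (block.tier === "archival") block.tier = "core";
  }
}
```

The key property the real system shares with this toy: only core blocks ride along in every prompt; archival blocks must be searched for.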

Agent framework. Letta is not just memory — it is a full agent runtime. If you are building a customer-facing AI product (support agent, sales copilot, in-app assistant), Letta gives you the agent loop, the memory model, and the SDK in one package.

Multi-tenant by design. Letta is built to host many agents serving many users. SynaBun assumes a single developer on a single machine. For multi-tenant products, Letta is the right architecture.

Research lineage. The original MemGPT paper has been cited extensively. The structured memory ideas are battle-tested and well-understood.

Where SynaBun wins

MCP-native. SynaBun was built as an MCP server from day one. 106 tools exposed natively over the MCP protocol. Letta has MCP support but it is a wrapper around the agent framework — the abstraction layer adds friction.

Tool surface. Browser automation (38 Playwright tools), social media extraction (6 platforms), 3D whiteboard, Claude Code hooks, Discord bots, Universal MCP Management. Letta's tool surface is smaller and centered on the memory model.

Latency. Local SQLite + local embeddings beat self-hosted Postgres + cloud embeddings on every workload by 5-10x. For interactive coding sessions, this matters.

Setup simplicity. One npm command vs Docker compose with Postgres + API + agent runtime. SynaBun is plug-and-play; Letta is "spin up the stack".

Developer-first ergonomics. SynaBun is built for the workflow of a developer using Claude Code. Auto-recall on session start. Auto-save on session end. Categorical organization that maps to projects + topics. Letta is built for AI products — its ergonomics are designed for product engineers integrating AI features, not developers using AI to write code.

Picking by use case

Pick Letta if: you are building a customer-facing AI product, you need structured memory tiers, you want a multi-tenant agent framework, your product needs an agent loop with built-in tool-calling + memory + persistence, or you want a managed cloud service to host the agents.

Pick SynaBun if: you are a solo developer or small team using AI coding tools daily, you want one MCP install for memory + browser + social + loops, latency matters in your daily workflow, you want a fully local stack, or you want first-class Claude Code/Codex/OpenCode/Gemini integration.

Can I use both?

Yes — and unlike SynaBun + Mem0, this combination makes some sense. Letta can serve as the agent framework for a customer-facing product, while SynaBun provides developer-side memory + tooling for the team building that product. The two memory stores stay separate (developer memory is not the same as product memory).

That said, most teams will end up picking one. Running both is operationally heavier than the value usually warrants.

Migration notes

Migrating from Letta to SynaBun: export archival memories via Letta's API → re-embed with all-MiniLM-L6-v2 → import via SynaBun's `remember` tool. Core memory does not have a clean SynaBun equivalent — Letta's "always-in-context" persona block is closer to a CLAUDE.md file than a memory entry. SynaBun's importance scoring (10 = foundational) is the closest mapping.
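The middle step of that pipeline is a record-shape translation. The sketch below shows one plausible mapping; both interfaces are hypothetical and match no documented schema, and the default importance of 8 is our guess, not a SynaBun convention.

```typescript
// Hypothetical shapes: neither interface matches a documented schema.
interface LettaArchivalMemory {
  text: string;
  created_at: string; // ISO timestamp, kept for reference only
}

interface SynabunRememberPayload {
  content: string;
  category: string;
  importance: number; // 1-10, 10 = foundational
}

// Map one exported archival record into a remember-style payload.
// Everything in Letta's archival tier is long-term by definition,
// so default to a high-but-not-foundational importance.
// Re-embedding happens on the importing side, so only text travels.
function toRememberPayload(
  mem: LettaArchivalMemory,
  category = "imported:letta",
  importance = 8,
): SynabunRememberPayload {
  return { content: mem.text, category, importance };
}
```

A batch migration is then just export → `map(toRememberPayload)` → import, with persona blocks handled by hand as described above.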

Try SynaBun in 60 seconds.

One command. SQLite + local embeddings. 106 MCP tools.

Read the docs · GitHub