SynaBun Documentation

67 MCP tools for AI assistants — persistent memory, browser automation, social media tools, visual workspace, and more. One server, every AI editor.

v1.0 stable MCP Compatible Open Source

Overview #

SynaBun is a persistent vector memory system for AI assistants. Any MCP-compatible AI tool — Claude Code, Cursor, Windsurf, or any other — can connect and retain knowledge across sessions through semantic vector search.

Memories are stored in SQLite as vector embeddings, with rich metadata for payload-based filtering. When your AI needs to recall something, it searches by semantic meaning rather than exact keywords — so it finds the relevant memory even if you phrase things differently than when you originally stored it.

Key Capabilities #

  • 67 MCP tools — memory (8), browser automation (38), whiteboard (5), cards (5), discord (8), git (1), loop (1), tictactoe (1)
  • 100+ REST API endpoints — full HTTP API for external integrations and the Neural Interface
  • 7 Claude Code hooks — lifecycle hooks for automated memory capture, user learning, and context management (SessionStart, UserPromptSubmit, PreCompact, Stop, PreToolUse, PostToolUse x2)
  • Local embeddings by default — Transformers.js (all-MiniLM-L6-v2, 384 dimensions) with zero configuration, plus 12+ optional cloud providers
  • 3D Neural Interface — force-directed graph visualization of your memory at localhost:3344
  • Multi-project support — memories are tagged by project, searchable across all or filtered per-project
  • Hierarchical categories — parent/child category system with dynamic schema
  • Self-hosted, local-first — your data never leaves your infrastructure
Tip
SynaBun works with any MCP client. While it has deep Claude Code integration, it also works with Cursor, Windsurf, Continue, and any other tool that supports the Model Context Protocol.

How It Works #

When your AI assistant calls remember, SynaBun converts the content into a vector embedding using your configured embedding provider, then stores it in SQLite with metadata (category, project, importance, tags, related files). When it calls recall, SynaBun embeds the query and performs approximate nearest-neighbor search, returning the most semantically similar memories ranked by a composite score.

The composite scoring algorithm weights vector similarity, recency, importance, and payload filter matches to surface the most relevant memories — not just the most similar ones.
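Conceptually, each stored embedding is a plain float32 array serialized into a SQLite BLOB column alongside its payload. A minimal round-trip sketch (the helper names are illustrative, not SynaBun's actual internals):

```javascript
// Sketch: how a float32 embedding can round-trip through a SQLite BLOB.
// Helper names are illustrative, not SynaBun's real API.
function vectorToBlob(vec) {
  // Float32Array -> Node Buffer (the BLOB stored next to the metadata payload)
  return Buffer.from(new Float32Array(vec).buffer);
}

function blobToVector(blob) {
  // Buffer -> number[], reversing the serialization above
  return Array.from(
    new Float32Array(blob.buffer, blob.byteOffset, blob.byteLength / 4)
  );
}

const embedding = [0.12, -0.5, 0.33]; // 384 dimensions in practice
const blob = vectorToBlob(embedding);
const restored = blobToVector(blob);
```

Note that float32 storage loses a little precision versus JavaScript's float64 numbers, which is harmless for similarity search.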

Quick Start #

Prerequisites #

  • Node.js 22.5+ — for the MCP server, SQLite database (via built-in node:sqlite), and Neural Interface
  • No API keys required — local embeddings via Transformers.js work out of the box, or optionally configure a cloud provider

Installation #

Option 1: install from source

bash
# Clone the repository
git clone https://github.com/danilokhury/Synabun.git
cd synabun

# Install dependencies
npm install

# Configure environment
cp .env.example .env
# Edit .env only if using a cloud embedding provider (the local default needs no key)

# Start everything (SQLite + MCP server + Neural Interface)
npm start
Option 2: install globally via npm

bash
# Install globally via npm
npm install -g synabun

# Run the setup wizard
synabun setup

# Start all services
synabun start

What npm start does #

  1. Creates the SQLite database at data/memory.db with the correct schema and vector dimensions for your embedding model
  2. Downloads the local embedding model (Xenova/all-MiniLM-L6-v2, ~23MB ONNX) on first run
  3. Starts the MCP server (stdio and HTTP transports, compatible with all MCP clients)
  4. Starts the Neural Interface REST API on port 3344
  5. Opens the 3D Neural Interface in your browser

Connect to Claude Code #

Add SynaBun to your Claude Code MCP configuration:

json
// ~/.claude/.mcp.json
{
  "mcpServers": {
    "SynaBun": {
      "command": "node",
      "args": ["/path/to/synabun/mcp-server/index.js"],
      "env": {}
    }
  }
}
Note
The onboarding wizard (npm run setup) automatically generates the correct .mcp.json configuration for your installation path and selected embedding provider.

Verify Installation #

bash
# Check database exists
ls data/memory.db

# Check Neural Interface API
curl "http://localhost:3344/api/memories?limit=5"

# Open Neural Interface in browser
open http://localhost:3344

Architecture #

SynaBun follows a layered architecture: your AI assistant communicates via MCP, the MCP server handles tool dispatch, embeddings are generated via your chosen provider, and vectors are stored and searched in SQLite.

┌─────────────────────────────────────────────────────────────┐
│                     AI ASSISTANT                             │
│         (Claude Code, Cursor, Windsurf, etc.)               │
└───────────────────────┬─────────────────────────────────────┘
                        │  Model Context Protocol (stdio or HTTP)
                        ▼
┌─────────────────────────────────────────────────────────────┐
│                  SYNABUN MCP SERVER                         │
│                                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │ remember │  │  recall  │  │ reflect  │  │  forget  │   │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘   │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │ restore  │  │ memories │  │   sync   │  │ category │   │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘   │
└─────────────┬───────────────────────┬───────────────────────┘
              │                       │
              ▼                       ▼
┌─────────────────────┐  ┌───────────────────────────────────┐
│  EMBEDDING PROVIDER │  │         SQLITE (data/memory.db)   │
│                     │  │                                   │
│  • Transformers.js  │  │  Table: memories                  │
│  • Ollama (local)   │  │  Vectors: float32[]               │
│  • Google Gemini    │  │  Payload: category, project,      │
│  • Cohere           │  │    importance, tags, files,       │
│  • 12+ providers    │  │    timestamps, source             │
└─────────────────────┘  └───────────────────────────────────┘
                                        │
                                        ▼
                        ┌───────────────────────────────────┐
                        │    NEURAL INTERFACE               │
                        │    (Express REST API + 3D UI)     │
                        │    localhost:3344                 │
                        └───────────────────────────────────┘

Data Flow #

Writing a memory: AI calls remember(content, category, importance) → MCP server generates embedding via provider → stores vector + payload in SQLite → returns UUID.

Reading memories: AI calls recall(query) → MCP server embeds the query → SQLite performs ANN search → results filtered and re-ranked by composite score (similarity + recency + importance) → top N memories returned.

Neural Interface: Browser connects to the Express REST API on port 3344 → fetches memories as nodes → renders 3D force-directed graph with Three.js → allows search, trash, restore, backup, and sync operations.

Composite Scoring #

The recall ranking algorithm combines multiple signals into a final score:

javascript
// Simplified scoring formula
const score =
  vector_similarity * 0.60    // semantic closeness (primary signal)
  + recency_score  * 0.20    // newer memories ranked higher
  + importance     * 0.15    // user-set importance 1–10
  + project_boost  * 0.05;   // current project match bonus
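As a runnable sketch of the same idea (the weights come from the formula above; the cosine helper, the recency decay curve, and the normalization choices are illustrative assumptions, not SynaBun's exact implementation):

```javascript
// Illustrative re-ranking sketch using the documented weights.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function compositeScore({ queryVec, memory, now = Date.now() }) {
  const similarity = cosineSimilarity(queryVec, memory.vector);
  const ageDays = (now - memory.updatedAt) / 86_400_000;
  const recency = Math.exp(-ageDays / 30);   // assumed decay: half-life ~3 weeks
  const importance = memory.importance / 10; // normalize the 1-10 scale to 0-1
  const projectBoost = memory.projectMatch ? 1 : 0;
  return similarity * 0.60 + recency * 0.20 + importance * 0.15 + projectBoost * 0.05;
}
```

Candidates from the nearest-neighbor search would then be sorted by this score, so a slightly less similar but recent, important, same-project memory can outrank a stale exact match.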

MCP Tools #

SynaBun exposes 67 tools via the Model Context Protocol. The memory tools below are the core of the system. Browser, whiteboard, card, discord, git, loop, and tictactoe tools are documented in their respective sections.

| Tool | Purpose | Key Parameters |
| --- | --- | --- |
| remember | Store a new memory | content, category, importance, tags, project, source, related_files |
| recall | Semantic search | query, category, project, limit, min_score, min_importance, tags |
| reflect | Update an existing memory | memory_id, content, category, importance, tags, add_tags, related_files, related_memory_ids |
| forget | Soft-delete (move to trash) | memory_id |
| restore | Restore from trash | memory_id |
| memories | Browse recent or get stats | action (recent/stats/by-category/by-project), limit, category, project |
| sync | Detect stale memories | project |
| category | Manage categories | action (create/update/delete/list), name, description, parent, color |
| tictactoe | Play tic-tac-toe | action (start/move/state/end) |

remember #

Store a memory with content, category, importance, and optional metadata. Returns the full UUID of the created memory.

javascript
// Store a bug fix
remember({
  content: "Fixed race condition in auth middleware by adding mutex lock around token refresh logic. Root cause was concurrent requests hitting the token expiry check simultaneously.",
  category: "bug-fixes",
  importance: 8,
  tags: ["auth", "race-condition", "middleware"],
  project: "my-app",
  source: "self-discovered",
  related_files: ["src/middleware/auth.ts"]
})

The source field accepts: user-told, self-discovered, auto-saved. Importance ranges from 1 (trivial) to 10 (foundational).

recall #

Semantic search across all stored memories. Uses vector similarity to find relevant memories even if exact words don't match.

javascript
// Search with filters
recall({
  query: "authentication issues token refresh",
  category: "bug-fixes",
  project: "my-app",
  limit: 5,
  min_score: 0.3,
  min_importance: 5,
  tags: ["auth"]
})

All filter parameters are optional. Omitting category and project searches across everything, with a score boost for the current project.

reflect #

Update an existing memory. Provide only the fields you want to change. If content is updated, the embedding vector is regenerated automatically.

javascript
// Update importance and add a tag
reflect({
  memory_id: "8f7cab3b-644e-4cea-8662-de0ca695bdf2",
  importance: 9,
  add_tags: ["critical"]
})
Warning
The memory_id parameter requires the full UUID format (e.g. 8f7cab3b-644e-4cea-8662-de0ca695bdf2). Shortened IDs are not accepted. Use recall to get the full UUID first.

forget / restore #

forget moves a memory to trash (soft delete). restore brings it back. Trashed memories can also be managed from the Neural Interface trash panel.

javascript
// Soft-delete
forget({ memory_id: "8f7cab3b-644e-4cea-8662-de0ca695bdf2" })

// Restore from trash
restore({ memory_id: "8f7cab3b-644e-4cea-8662-de0ca695bdf2" })

memories #

Browse memories or get statistics. Useful at session start to get recent context.

javascript
// Get recent memories
memories({ action: "recent", limit: 10 })

// Get stats
memories({ action: "stats" })

// Browse by category
memories({ action: "by-category", category: "bug-fixes" })

// Browse by project
memories({ action: "by-project", project: "my-app", limit: 20 })

sync #

Detects memories whose related_files have changed since the memory was last updated. Compares file content hashes against stored checksums. Returns a list of potentially stale memories.

javascript
// Check all memories for staleness
sync({})

// Check only a specific project
sync({ project: "my-app" })

category_create / category_update / category_delete / category_list #

Manage the category taxonomy. Categories can have parent/child relationships. Creating or updating a category triggers a dynamic schema refresh so the recall tool's category descriptions update immediately.

javascript
// Create a parent category
category_create({
  name: "myproject",
  description: "All knowledge for My Project",
  is_parent: true,
  color: "#3b82f6"
})

// Create a child category
category_create({
  name: "myproject-bugs",
  description: "Bug fixes and known issues",
  parent: "myproject"
})

// Rename a category
category_update({ name: "myproject-bugs", new_name: "myproject-issues" })

// Delete, reassigning memories to another category
category_delete({ name: "myproject-issues", reassign_to: "myproject" })

// List all categories as a tree
category_list({ format: "tree" })

Whiteboard Tools #

SynaBun includes a shared visual canvas that your AI can see, draw on, and use for architecture diagrams, planning, and collaborative sketching. 5 MCP tools control the whiteboard.

| Tool | Purpose | Key Parameters |
| --- | --- | --- |
| whiteboard_read | Read current whiteboard state | (none) — returns all elements with IDs, positions, and properties |
| whiteboard_add | Add elements to the canvas | elements (array), layout (row/column/grid/center), coordMode (px/pct) |
| whiteboard_update | Update element properties | id, updates (position, size, content, color, rotation), coordMode |
| whiteboard_remove | Remove elements or clear canvas | id (omit to clear all) |
| whiteboard_screenshot | Capture whiteboard as JPEG | (none) |

Element Types #

  • text — Text boxes with configurable font size and color
  • list — Bulleted or numbered lists
  • shape — Rectangles, circles, pills, and drawn circles
  • arrow — Connectors with auto-anchor snapping to element centers
  • pen — Free-form drawing strokes with configurable width
  • image — Images with automatic compression (max 1920×1920)
  • section — Wireframe blocks (navbar, hero, sidebar, content, footer, card, form, modal, grid)
javascript
// Add a diagram with auto-layout
whiteboard_add({
  elements: [
    { type: "text", content: "Auth Flow", fontSize: 24 },
    { type: "shape", shape: "rect", content: "Login" },
    { type: "shape", shape: "rect", content: "Token" },
    { type: "arrow", from: "Login", to: "Token" }
  ],
  layout: "row"
})
Tip
Always call whiteboard_read before placing elements — it returns the viewport dimensions so your AI can position elements correctly. Use coordMode: "pct" for responsive layouts.

Card Tools #

Memory cards are floating panels in the Neural Interface that display memory content. Your AI can open, arrange, pin, and screenshot cards to build visual workspaces for research, investigations, or reference material.

| Tool | Purpose | Key Parameters |
| --- | --- | --- |
| card_list | List all open cards | (none) — returns UUID, position, size, compact/pinned state |
| card_open | Open memory as floating card | memoryId, left, top, coordMode (px/pct), compact |
| card_close | Close card(s) | memoryId (omit to close all) |
| card_update | Move, resize, or pin cards | memoryId, updates (position, size, compact, pinned), coordMode |
| card_screenshot | Capture card workspace | (none) — captures all open cards in current layout |

javascript
// Open a memory as a compact card
card_open({
  memoryId: "8f7cab3b-644e-4cea-8662-de0ca695bdf2",
  left: 50,
  top: 10,
  coordMode: "pct",
  compact: true
})

Discord Tools #

8 MCP tools for full Discord server management. Requires a DISCORD_BOT_TOKEN in your .env file. Set DISCORD_GUILD_ID for a default server.

| Tool | Purpose | Key Actions |
| --- | --- | --- |
| discord_guild | Server info & overview | info, channels, members, roles, audit_log |
| discord_channel | Channel management | create, edit, delete, list, permissions |
| discord_role | Role management | create, edit, delete, list, assign, remove |
| discord_message | Send & manage messages | send, edit, delete, pin, unpin, react, bulk_delete, list |
| discord_member | Member moderation | info, kick, ban, unban, timeout, nickname |
| discord_onboarding | Server setup | get, set_welcome, set_rules, set_verification, set_onboarding |
| discord_webhook | Webhook management | create, edit, delete, list, execute |
| discord_thread | Thread management | create, archive, unarchive, lock, delete |

javascript
// Send a message
discord_message({
  action: "send",
  channel: "general",
  content: "Deployment complete! v2.1.0 is live."
})

// Create a role
discord_role({
  action: "create",
  name: "Beta Tester",
  color: "#3b82f6",
  mentionable: true
})
Note
All Discord tools use an action parameter to select the operation. Channel types include: text, voice, category, announcement, forum, and stage.

Autonomous Loops #

The loop tool enables autonomous iteration — your AI can run long-running tasks with up to 50 iterations and configurable time caps (up to 480 minutes). Progress is tracked via a journal across context compactions.

| Action | Purpose | Key Parameters |
| --- | --- | --- |
| start | Begin a new loop | task, iterations (max 50), max_minutes (max 480), context, template |
| stop | Force stop active loop | session_id |
| status | Check loop state | session_id |
| update | Update progress journal | session_id, summary, progress |

javascript
// Start a monitoring loop
loop({
  action: "start",
  task: "Check the deploy status every iteration and report any failures",
  iterations: 20,
  max_minutes: 120
})

Git Tool #

The git MCP tool provides version control operations directly through the MCP protocol — status, diff, commit, log, and branch management.

| Action | Purpose | Key Parameters |
| --- | --- | --- |
| status | Working tree status | path (repo absolute path) |
| diff | Show file changes | path, max_lines (default 500) |
| commit | Stage and commit | path, message, files (array) |
| log | Recent commit history | path, count (default 10) |
| branches | List all branches | path |

javascript
// Check repo status
git({
  action: "status",
  path: "/Users/me/my-project"
})

// Commit specific files
git({
  action: "commit",
  path: "/Users/me/my-project",
  message: "Fix auth middleware race condition",
  files: ["src/middleware/auth.ts"]
})

Neural Interface #

The Neural Interface is a 3D force-directed graph visualization of your memory at localhost:3344. Each memory is a node, related memories are connected by edges, and categories form visual clusters.

Core Features #

  • 3D force-directed graph — powered by Three.js with Unreal Bloom post-processing. Sun nodes for parent categories, planet nodes for children, star nodes for memories. Full camera controls (WASD + QE movement, mouse rotation/pan) with compass HUD.
  • 2D alternative — Pixi.js-based 2D view with minimap navigation and force-directed layout for performance-focused workflows.
  • Live search — search across all memories in real-time with filters (category, project, importance, tags). Matching nodes highlight and camera animates to them.
  • Memory inspector — click any node to see full content, metadata, related files, and tags in a detail panel.
  • Trash panel — browse, restore, or permanently delete trashed memories without leaving the browser.
  • Sync panel — run the staleness check and see which memories reference files that have changed.
  • Backup & restore — export all memories as a JSON snapshot, import from a snapshot to a new SQLite instance.
  • Category logos — projects can have custom logos displayed as node decorations.

IDE Features #

  • Skills Studio — browse, create, edit, import/export Claude Code skills and agents. Multi-file support, icon upload, validation, and install/uninstall management.
  • Automation Studio — visual workflow builder for multi-step automations with trigger/condition/action chains.
  • Multi-tab terminal — PTY-backed terminal with ANSI color support (256-color + truecolor), git branch tracking, command history, session lock, and split view.
  • File explorer — project file tree with custom folder colors, sort options, and collapsible sections.
  • Whiteboard canvas — shared drawing surface for AI and human collaboration. Free-form pen, text, shapes, arrows, images, and wireframe sections with undo/redo (50-level stack).
  • Floating memory cards — open memories as draggable, pinnable cards. Compact mode (220×120px mini-cards), persistent positioning, and workspace screenshots.
  • Cost widget — Claude API cost tracking with per-session breakdowns and model cost awareness. Collapsible and dockable.
  • Statistics dashboard — memory count by category/project, importance distribution, age distribution, and health checks (stale memories, orphaned categories).

Customization #

  • Custom skins — upload custom CSS themes for the entire interface.
  • Keybinds — 20+ bindable actions (toggle panels, launch tools, navigation, search). Import/export keybind configurations.
  • Workspaces — save and restore panel layouts, sidebar state, and terminal positions. Layout presets per variant (2D/3D).
  • Custom file icons — per-extension or per-filename icons with PNG/SVG support.
  • Guest access — generate 24-hour invite links with feature-level permission control. Cloudflare tunnel proxy support for remote access.

REST API #

The Neural Interface runs an Express server with 100+ REST endpoints. All MCP tool operations are also available via HTTP for external integrations:

bash
# Get all memories
GET http://localhost:3344/api/memories

# Search memories
POST http://localhost:3344/api/memories/search
{ "query": "auth bug", "limit": 5 }

# Get a specific memory
GET http://localhost:3344/api/memories/:id

# Create a memory
POST http://localhost:3344/api/memories
{ "content": "...", "category": "...", "importance": 5 }

# Update a memory
PATCH http://localhost:3344/api/memories/:id

# Delete (trash) a memory
DELETE http://localhost:3344/api/memories/:id

# List categories
GET http://localhost:3344/api/categories

# Get graph data for visualization
GET http://localhost:3344/api/graph

# Stats
GET http://localhost:3344/api/stats
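For example, a small Node client for the search endpoint might look like the sketch below. The endpoint path and body shape are taken from the examples above; the helper names are illustrative, and the response format is an assumption to verify against the API.

```javascript
// Sketch: calling the search endpoint from Node (built-in fetch, Node 18+).
function buildSearchRequest(query, { limit = 5, baseUrl = "http://localhost:3344" } = {}) {
  return {
    url: `${baseUrl}/api/memories/search`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, limit }),
    },
  };
}

async function searchMemories(query, opts) {
  const { url, options } = buildSearchRequest(query, opts);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}
```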

Claude Code Hooks #

SynaBun ships 7 lifecycle hooks for Claude Code that automate memory operations. Install them to get automatic context recall at session start and enforced memory capture after task completion.

| File | Event | Timeout | Purpose |
| --- | --- | --- | --- |
| session-start.mjs | SessionStart | 5s | Injects category tree, project detection, 5 binding directives (incl. user learning), and compaction recovery |
| prompt-submit.mjs | UserPromptSubmit | 3s | Multi-tier recall trigger system (7 priorities) — nudges AI to check memory and reflect on user patterns |
| pre-compact.mjs | PreCompact | 10s | Captures session transcript before context compaction for conversation indexing |
| stop.mjs | Stop | 3s | Enforces memory storage — blocks response if session isn't indexed or edits aren't remembered |
| post-remember.mjs | PostToolUse | 3s | Tracks edit count and clears enforcement flags when memories are stored |
| pre-websearch.mjs | PreToolUse | 2s | Blocks WebSearch/WebFetch during active browser sessions to prevent interference |
| post-plan.mjs | PostToolUse | 3s | Auto-stores plans as memories when exiting plan mode |

Installing Hooks #

bash
# Copy hooks to your Claude Code hooks directory
cp synabun/hooks/claude-code/*.mjs ~/.claude/hooks/

# Or configure via settings.json (recommended)
# See the Installation JSON section in the hooks documentation
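If you register the hooks via settings.json, an entry for the SessionStart hook might look like the sketch below. The path and timeout are illustrative; check the hooks documentation for the exact configuration the setup wizard generates.

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node ~/.claude/hooks/session-start.mjs",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```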

SessionStart hook #

Runs when Claude Code starts a new session. Injects the full category tree, project detection rules, 5 binding directives (session start recall, auto-remember, recall before decisions, compaction auto-store, user learning), and compaction recovery context. This is prepended to the session before the user types anything.

UserPromptSubmit hook #

Intercepts each user message before it reaches Claude. Analyzes the prompt against a 7-priority trigger system (recall tiers, non-English detection, Latin catch-all, user learning reflection) and injects context-aware nudges. Priority 7 is a quiet-only user learning nudge that fires after a configurable number of interactions.

PreCompact hook #

Fires before context compaction occurs. Captures the session transcript path and metadata, sets a pending-compact flag that the Stop hook will enforce — ensuring the compacted session gets indexed in memory before Claude can proceed.

Stop hook #

Runs after Claude completes a response. Enforces two requirements: (1) if a pending-compact flag exists, blocks Claude until the session is indexed via remember with category "conversations", and (2) if a pending-remember flag exists with 3+ unremembered edits, blocks until work is stored. Max 3 retries per flag to prevent infinite loops.

PostToolUse hook (post-remember) #

Matches Edit, Write, NotebookEdit, and mcp__SynaBun__remember tool calls. Tracks edit count (incrementing a pending-remember counter) and clears enforcement flags when memories are stored — conversations category clears the compact flag, other categories reset the remember counter.

PreToolUse hook (pre-websearch) #

Intercepts WebSearch and WebFetch tool calls. Checks if a SynaBun browser session is active or if the current loop uses the browser — if so, blocks the tool and injects a message telling Claude to use the SynaBun browser tools instead. Falls through gracefully if the Neural Interface is unreachable.
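The gating logic reduces to a small decision function. The sketch below is illustrative (the state field names are assumptions; the real hook reads the tool call from Claude Code and blocks it by returning a message):

```javascript
// Sketch of the pre-websearch gate: block WebSearch/WebFetch while a
// SynaBun browser session is in use. Field names are illustrative.
const GATED_TOOLS = new Set(["WebSearch", "WebFetch"]);

function decide(toolName, state) {
  if (!GATED_TOOLS.has(toolName)) return { allow: true };
  if (state.browserSessionActive || state.loopUsesBrowser) {
    return {
      allow: false,
      message: "A SynaBun browser session is active. Use the browser_* tools instead.",
    };
  }
  return { allow: true }; // fall through gracefully (e.g. interface unreachable)
}
```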

PostToolUse hook (post-plan) #

Fires when Claude exits plan mode via ExitPlanMode. Automatically stores the finalized plan as a memory in the appropriate plans category, ensuring architectural decisions and implementation strategies are captured without manual intervention.

Tip
Hook behavior can be customized via feature flags in the Neural Interface settings. Flags include conversationMemory (compaction auto-store), greeting (session greeting), userLearning (autonomous user observation), and userLearningThreshold (interaction count before reflection nudge).

Claude Code Integration #

SynaBun ships a single slash command — /synabun — that serves as the entry point for all memory-powered capabilities. Type it in Claude Code and an interactive menu appears:

[Screenshot: SynaBun interactive menu in Claude Code showing Brainstorm Ideas, Audit Memories, Memory Health, and Search Memories options]

/synabun — Command Hub #

The /synabun command presents an interactive prompt with these options:

| Option | What it does |
| --- | --- |
| Brainstorm Ideas | Cross-pollinate memories to spark creative ideas and novel connections. Uses multi-round recall with 5 query strategies (direct, adjacent, problem-space, solution-space, cross-domain) and synthesizes ideas traced back to specific memories. |
| Audit Memories | Validate stored memories against the current codebase for staleness. Runs 6 phases: landscape survey, checksum pre-scan, bulk retrieval, parallel semantic verification, interactive classification (STALE/INVALID/VALID/UNVERIFIABLE), and audit report. |
| Memory Health | Quick stats overview and staleness check of your memory system — total count, category distribution, stale file references. |
| Search Memories | Find something specific across your entire memory bank using semantic search. |

text
/synabun
Note
/synabun is the only slash command you need to remember. All capabilities — brainstorming, auditing, health checks, and search — are accessible from the interactive menu it displays. You don't need to invoke individual skills directly.

Category System #

Categories are the primary organizational axis in SynaBun. Each memory belongs to exactly one category. Categories support a parent/child hierarchy, custom colors, and descriptions that appear in the recall tool's schema hints.

Hierarchy #

Categories can have a parent, forming a tree structure. Parent categories act as top-level organizational branches. Child categories provide fine-grained routing within a branch. When routing a memory, use the most specific applicable category.

text
myproject (parent)
├── myproject-bugs        # Bug fixes and known issues
├── myproject-arch        # Architecture decisions
├── myproject-api         # API design notes
└── myproject-deploy      # Deployment and infrastructure

Routing Descriptions #

The description field of each category is surfaced in the MCP tool schema as a hint to AI assistants. A good description should explain what belongs here in routing language:

text
Good: "Bug fixes, known issues, and error resolutions for My Project"
Good: "Architecture decisions, system design patterns, and rationale"

Bad: "Bugs"
Bad: "My stuff"

Dynamic Schema Refresh #

When you create, rename, or update a category, SynaBun automatically refreshes the MCP tool schema. The next time Claude Code requests the tool manifest, it will see the updated category descriptions — no server restart required.

Built-in Categories #

| Category | Purpose |
| --- | --- |
| conversations | Indexed conversation sessions for cross-session recall |
| synabun | Knowledge about SynaBun itself (parent) |
| synabun/architecture | System architecture and data flow |
| synabun/mcp-tools | Tool behavior, quirks, usage patterns |
| synabun/hooks | Claude Code hook configuration |
| synabun/setup | Installation and onboarding |
| user-profile | Knowledge about the user as a person (parent) |
| user-profile/communication-style | Tone, formality, verbosity, language patterns, text quirks |

Embedding Providers #

SynaBun supports 12+ embedding providers. The provider is configured once in .env and used for all new memories. You can switch providers, but existing memories embedded with a different model will not be comparable — a re-index is required.

Tip — Fully Local Option
SynaBun uses Transformers.js with all-MiniLM-L6-v2 by default for completely offline, local-only embeddings. No API key required. The ~23MB ONNX model downloads once and runs entirely in Node.js — your data never leaves the machine.

| Provider | Base URL | Recommended Model | Notes |
| --- | --- | --- | --- |
| transformers | local (in-process) | all-MiniLM-L6-v2 | Default. 384 dimensions, fully local, no API key. |
| openai | api.openai.com | text-embedding-3-small | Best quality / cost ratio. Requires API key. |
| ollama | localhost:11434 | nomic-embed-text | Fully local. No API key. |
| gemini | generativelanguage.googleapis.com | text-embedding-004 | Google Gemini embeddings |
| cohere | api.cohere.ai | embed-english-v3.0 | Strong multilingual support |
| mistral | api.mistral.ai | mistral-embed | European data residency |
| voyage | api.voyageai.com | voyage-3 | High quality for code |
| nomic | api-atlas.nomic.ai | nomic-embed-text-v1.5 | Open source model |
| jina | api.jina.ai | jina-embeddings-v3 | Long context support |
| together | api.together.xyz | togethercomputer/m2-bert-80M-8k-retrieval | Serverless inference |
| fireworks | api.fireworks.ai | nomic-ai/nomic-embed-text-v1.5 | Fast inference |
| azure | your-resource.openai.azure.com | text-embedding-3-small | Azure OpenAI endpoint |
| bedrock | via AWS SDK | amazon.titan-embed-text-v2:0 | AWS managed, IAM auth |

Configuration #

SynaBun uses a namespaced .env format. Each SQLite instance and each embedding provider gets a unique <id> prefix. This allows multi-instance setups with different providers per instance.

SQLite Configuration #

.env
# Database file path (default: data/memory.db)
SQLITE_DB_PATH=data/memory.db

# SQLite runs embedded in Node.js via node:sqlite
# No port, no API key, no external process needed

Embedding Provider Configuration #

.env
# Format: EMBEDDING__<id>__<SETTING>
# Select active embedding provider
# Default: local Transformers.js (no config needed)
# EMBEDDING_ACTIVE=transformers

# Optional: OpenAI (requires API key)
# EMBEDDING_ACTIVE=openai_main
# EMBEDDING__openai_main__API_KEY=sk-your-api-key-here
# EMBEDDING__openai_main__BASE_URL=https://api.openai.com/v1
# EMBEDDING__openai_main__MODEL=text-embedding-3-small
# EMBEDDING__openai_main__DIMENSIONS=1536
# EMBEDDING__openai_main__LABEL=OpenAI Main

# Ollama (local, no API key needed)
# EMBEDDING__ollama__BASE_URL=http://localhost:11434
# EMBEDDING__ollama__MODEL=nomic-embed-text
# EMBEDDING__ollama__DIMENSIONS=768
# EMBEDDING__ollama__LABEL=Ollama Local

Server Configuration #

.env
# Neural Interface server port (default: 3344)
# NEURAL_PORT=3344

# Setup wizard completion flag
# SETUP_COMPLETE=false
Note
The DIMENSIONS value must match the output dimension of your chosen model. A mismatch will cause vector storage initialization in SQLite to fail. Check the model card for the correct value.
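A sketch of the guard this note implies: validate the vector length against the configured value before writing (illustrative, not SynaBun's actual error handling).

```javascript
// Refuse to store a vector whose length doesn't match the configured
// DIMENSIONS for the active provider. Function name is illustrative.
function assertDimensions(vector, configured) {
  if (vector.length !== configured) {
    throw new Error(
      `Embedding has ${vector.length} dimensions but DIMENSIONS=${configured}; ` +
      `check the model card and your .env`
    );
  }
  return vector;
}
```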

Browser Automation #

SynaBun integrates Playwright to give your AI assistant its own Chromium browser with persistent sessions. Your AI can navigate websites, interact with elements, fill forms, take screenshots, execute JavaScript, and extract structured data — all through natural language commands via MCP.

How It Works #

Each browser session runs a real Chromium instance managed by Playwright. Sessions maintain their own cookies, login state, and local storage across AI conversations — your AI stays logged into platforms without re-authenticating each time. Sessions can run headed (visible window) or headless.

Browser Tools (18) #

  • browser_navigate — Open any URL in the AI's browser
  • browser_click — Click elements by CSS selector, text content, ARIA role, or data-testid
  • browser_fill — Clear and fill input fields and textareas
  • browser_type — Type text character-by-character (for contenteditable elements like social media composers)
  • browser_snapshot — Get the page's accessible structure as text (token-efficient alternative to screenshots)
  • browser_screenshot — Capture the visible viewport as a base64 JPEG image
  • browser_content — Extract full page HTML or text content
  • browser_evaluate — Execute arbitrary JavaScript in the page context
  • browser_hover — Hover over page elements
  • browser_select — Select dropdown options
  • browser_press — Press keyboard keys and shortcuts
  • browser_wait — Wait for elements to appear, page load states, or timeouts
  • browser_scroll — Scroll the page or specific elements
  • browser_go_back — Navigate to the previous page
  • browser_go_forward — Navigate to the next page
  • browser_reload — Refresh the current page
  • browser_upload — Upload files through form inputs
  • browser_session — Create, list, or close browser sessions
Tip
Browser sessions persist their cookies and login state in a local storage file. This means your AI can stay logged into web apps, social media platforms, and admin panels across sessions without re-authenticating.

Social Media Automation #

Beyond the general browser tools, SynaBun includes 20 dedicated extraction tools for parsing social media content into structured JSON. These tools are faster and more reliable than using browser_snapshot because they use platform-specific DOM parsing.

Twitter / X #

  • browser_extract_tweets — Parses all visible tweets: author, handle, text, timestamp, URL, replies, reposts, likes, and view counts. Works on timelines, search results, and profile pages.
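As a sketch of the kind of structured record browser_extract_tweets returns, here is an interface built from the fields listed above. The exact JSON key names and the sample values are illustrative assumptions, not the documented schema:

```typescript
// Illustrative shape for one extracted tweet; real key names may differ.
interface ExtractedTweet {
  author: string;     // display name
  handle: string;     // @username
  text: string;
  timestamp: string;  // ISO 8601
  url: string;
  replies: number;
  reposts: number;
  likes: number;
  views: number;
}

// Hypothetical sample record.
const example: ExtractedTweet = {
  author: "Jane Doe",
  handle: "@janedoe",
  text: "Shipping the new release today!",
  timestamp: "2024-05-01T12:00:00Z",
  url: "https://x.com/janedoe/status/123",
  replies: 4,
  reposts: 12,
  likes: 87,
  views: 4500,
};
```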

TikTok #

  • browser_extract_tiktok_videos — Extract feed videos with handle, video URL, caption, likes, comments, saves, shares, and music info
  • browser_extract_tiktok_search — Parse search results with video URLs, handles, captions, and view counts
  • browser_extract_tiktok_profile — Extract profile info: name, handle, bio, followers, following, likes, plus video grid with URLs and views
  • browser_extract_tiktok_studio — Parse TikTok Studio content list: title, URL, date, privacy setting, and performance stats

Facebook #

  • browser_extract_fb_posts — Parse visible posts: author, author URL, text, timestamp, post URL, and reactions. Works on news feeds, group feeds, and business Pages.

WhatsApp #

  • browser_extract_wa_chats — Parse the WhatsApp Web sidebar: chat name, last message, time, unread count, muted/pinned status
  • browser_extract_wa_messages — Extract messages from an open conversation: sender, time, date, direction (in/out), and text content

Instagram #

  • browser_extract_ig_feed — Parse feed posts: username, caption, likes, comments, and timestamp
  • browser_extract_ig_profile — Extract profile data: bio, follower/following stats, post grid, and story highlights
  • browser_extract_ig_post — Single post with full comments (supports pagination)
  • browser_extract_ig_reels — Extract reels with engagement metrics and audio name/URL
  • browser_extract_ig_search — Parse explore and hashtag search results

LinkedIn #

  • browser_extract_li_feed — Feed posts with author, headline, text, reactions, comments, and article links
  • browser_extract_li_profile — Profile data: headline, location, connections, experience, education, and skills
  • browser_extract_li_post — Single post with full comments (supports pagination)
  • browser_extract_li_notifications — Notifications with text, timestamp, and read status
  • browser_extract_li_messages — Messaging conversations and active thread messages
  • browser_extract_li_search_people — People search results with headline and location
  • browser_extract_li_network — Network connections and connection suggestions

Other Platforms #

The general browser automation tools can interact with any web-based platform — YouTube, Reddit, Pinterest, or any site accessible in a Chromium browser. Use browser_navigate, browser_click, browser_type, and browser_content to automate interactions on any website.
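As a sketch, the sequence of tool calls an assistant might issue to read a thread on such a platform could look like the following. The tool names come from the list above; the argument shapes and selectors are assumptions, not the documented schemas:

```typescript
// Hypothetical tool-call sequence for scraping a page on an arbitrary site.
type ToolCall = { tool: string; args: Record<string, unknown> };

const workflow: ToolCall[] = [
  { tool: "browser_navigate", args: { url: "https://www.reddit.com/r/programming" } },
  { tool: "browser_wait",     args: { selector: "article" } },      // wait for content
  { tool: "browser_click",    args: { selector: "article a" } },    // open first thread
  { tool: "browser_content",  args: { format: "text" } },           // extract the text
];

console.log(workflow.map((c) => c.tool).join(" -> "));
```

In practice you never write this sequence yourself; the AI assistant composes it from your natural-language request.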

Use Case
Common social media automation workflows: content monitoring, competitive analysis, audience research, creator analytics, cross-platform reporting, and lead generation — all through conversational AI commands.

Vibe Coding with SynaBun #

Vibe coding is a development style where you collaborate with AI assistants conversationally — describing what you want in natural language while the AI writes, refactors, and debugs code. Tools like Claude Code, Cursor, and Windsurf have made vibe coding mainstream.

The Problem: AI Amnesia #

Without persistent memory, every AI coding session starts from zero. Your AI forgets architecture decisions, bug fixes you already resolved, coding conventions you established, and project context. You end up re-explaining the same things every session, wasting time and losing continuity.

How SynaBun Fixes It #

SynaBun gives your AI persistent memory that survives across sessions:

  • Automatic memory capture — Claude Code hooks automatically store important decisions, bug fixes, and patterns after each task
  • Semantic recall — Your AI finds relevant context by meaning, not exact keywords. Ask "how did we handle auth?" and it finds the memory about "JWT token refresh flow"
  • Drift detection — When source files change, SynaBun flags memories that reference those files as potentially stale
  • Multi-project isolation — Each project gets its own memory space, with cross-project search when needed
  • Session continuity — Conversation memory indexing captures session context at compaction events for cross-session recall

Beyond Memory: Complete AI Toolkit #

SynaBun provides 67 MCP tools that transform any compatible AI editor into a full development environment:

  • Browser automation — Test web apps, scrape data, interact with APIs through a real Chromium browser
  • Visual whiteboard — Architecture diagrams, flowcharts, and collaborative planning on a shared canvas
  • Floating cards — Pin research notes, investigations, and decisions to a visual workspace
  • Autonomous loops — Long-running AI tasks with configurable intervals
  • Skills studio — Build reusable AI workflows and custom commands
  • Social media tools — Interact with Twitter/X, TikTok, Facebook, WhatsApp, Instagram, and LinkedIn
Compatible Editors
SynaBun works with Claude Code (full hook integration), Cursor, Windsurf, Claude.ai, and any MCP-compatible AI tool. One installation, all 67 tools available to every connected assistant.

Self-Hosting #

SynaBun is designed to be self-hosted. All data lives in your SQLite instance and your local filesystem. No accounts, no telemetry, no phone-home.

Database Management #

SynaBun uses SQLite through Node.js's built-in node:sqlite module (requires Node.js 22.5+). The database is a single file at data/memory.db — no Docker, no separate process, no network port. Everything runs in the same Node.js process.

bash
# Database location
ls data/memory.db

# Backup the database (just copy the file)
cp data/memory.db data/memory-backup.db

# Inspect with sqlite3 CLI (optional)
sqlite3 data/memory.db ".tables"

Backup & Restore #

You can back up and restore memories as JSON snapshots through the Neural Interface or the CLI:

bash
# Export all memories to JSON
node synabun/scripts/backup.js --output ./memories-backup.json

# Restore from a backup
node synabun/scripts/restore.js --input ./memories-backup.json

# Or use the Neural Interface backup panel at:
# http://localhost:3344 → Settings → Backup
Warning
Backups include memory content and metadata but not the vector embeddings. On restore, all vectors are regenerated with your current embedding provider, so restoring under a different provider or model produces different vectors, but the content itself is preserved.

Multi-Instance Support #

You can run multiple SynaBun instances with separate SQLite databases — for example, one per project or one per environment. Each instance uses its own data/memory.db file in its working directory.

.env
# Database path (default: data/memory.db)
SQLITE_DB_PATH=data/memory.db

# Each SynaBun instance uses its own database file
# Simply run separate instances in different directories

Production Deployment #

For production deployments, run SynaBun as a Node.js process with a process manager like PM2. The SQLite database and local embeddings run entirely within the process — no external services needed:

bash
# Start with PM2
pm2 start npm --name synabun -- start

# Services running in single process:
# - SQLite database (data/memory.db)
# - MCP server + Neural Interface (:3344)

Contributing #

SynaBun is open source under the Apache 2.0 license. We welcome bug reports and feature requests. Pull requests are not accepted — the codebase is maintained solely by the SynaBun authors.

  • Bug reports: Open an issue on GitHub Issues with reproduction steps
  • Feature requests: Open an issue on GitHub Issues describing the problem and proposed solution
  • Forking: You are free to fork and modify SynaBun under Apache 2.0. Forks must use a different name (trademark policy).

See the full CONTRIBUTING.md for details, development setup, and the forking guide.