Picture this. You are deep in a codebase, your AI assistant tuned exactly how you like it. Your model preferences, your custom instructions, your allowed tools, your workflow. Then a collaborator joins your workspace to help with a feature. Right now, they have two options: use your AI setup and lose everything that makes their workflow fast, or work in a separate environment and lose the ability to touch your files. Both options are terrible.
We are building something different. We call it Bring Your Own CLI.
The Problem Nobody Talks About
AI-assisted development has a collaboration problem. Every AI coding tool today is fundamentally single-player. Your Claude Code instance, your API key, your settings, your context. That is fine when you work alone. But software is not built alone.
Remote pair programming, open source contributions, team debugging sessions, code reviews where you actually want to run the code. All of these scenarios hit the same wall: who owns the AI?
SynaBun already solved the first half of this. Our invite system lets you share your entire workspace with someone else. They see your terminal, your browser, your memory cards, your whiteboard. They can interact with your Claude Code session in real time. It works. People use it. But there is a fundamental limitation that keeps surfacing.
When a guest uses the host's CLI, they are borrowing someone else's brain. They lose their model preferences, their custom instructions, their tool permissions, everything that makes their AI workflow theirs.
What Bring Your Own CLI Actually Means
The concept is simple. When you get invited into someone's SynaBun workspace, you have a choice. Use the host's AI like before, or bring your own. If you bring your own, you provide your API key and optionally your model preference and custom instructions. The host's server spawns a separate Claude Code process just for you, running with your configuration but pointed at the host's project files.
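As a rough sketch, a guest's BringCLI configuration boils down to three fields. The names below are illustrative, not SynaBun's actual schema:

```typescript
// Hypothetical shape of a guest's BringCLI configuration.
// Only the API key is required; model and instructions are optional.
interface GuestCliConfig {
  apiKey: string;              // the guest's own Anthropic API key
  model?: string;              // optional model preference
  customInstructions?: string; // optional CLAUDE.md-style instructions
}

// Example of what a guest might submit (placeholder values only).
const guestConfig: GuestCliConfig = {
  apiKey: "sk-ant-placeholder", // never hard-code a real key
  model: "claude-opus-4",
  customInstructions: "Prefer strict TypeScript. Ask before deleting files.",
};
```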
Your AI. Their codebase. Same workspace.
Let that sink in for a second. Two developers, each with their own AI agent, both reading and writing to the same project. Each agent has its own personality, its own context, its own way of solving problems. One might be running Opus for deep architectural work while the other runs Sonnet for rapid iteration. One might have strict tool permissions while the other runs fully autonomous. Both operating on the same files, in real time.
Why This Changes Everything
This is not just a convenience feature. It fundamentally changes what AI-assisted collaboration looks like.
Each developer keeps their edge
Developers spend weeks tuning their AI setup. Custom CLAUDE.md instructions that encode their coding style. Model preferences based on the kind of work they do. Tool permissions configured for their risk tolerance. That tuning is valuable. Losing it every time you collaborate with someone is like being forced to use someone else's keyboard layout.
Parallel AI agents on one codebase
With BringCLI, two AI agents can work the same codebase simultaneously. One developer's Claude works on the backend API while the other's handles the frontend components. No branch juggling, no merge conflicts from working in isolation. Real-time, same-directory, multi-agent development.
Cost stays where it belongs
Each developer uses their own API key. The host is not footing the bill for everyone's AI usage. Cost tracking is per-user, transparent, and capped. The host sets a maximum spend per guest session. No surprises.
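A minimal sketch of what per-guest cost capping could look like on the server. The class and method names are hypothetical, not SynaBun's actual API:

```typescript
// Tracks a single guest session's spend against a host-set cap.
class GuestCostTracker {
  private spentUsd = 0;
  constructor(private readonly capUsd: number) {}

  // Record the cost of one completed turn; returns false once the cap
  // is reached, signaling the session should be terminated.
  record(turnCostUsd: number): boolean {
    this.spentUsd += turnCostUsd;
    return this.spentUsd < this.capUsd;
  }

  get total(): number {
    return this.spentUsd;
  }
}

const tracker = new GuestCostTracker(5.0); // host caps this guest at $5
tracker.record(0.45);                      // one turn cost 45 cents
```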
Trust without surrender
The host keeps full control. They can see every active guest CLI session, what model it is running, how much it has cost, and kill it instantly if needed. Guests get project file access but not the host's secrets, API keys, or system configuration. It is granular permission control, not all-or-nothing sharing.
The Architecture Under the Hood
We researched this extensively before writing a single line of code. The architecture we landed on is straightforward but deliberate.
When a guest enables BringCLI, they submit their Anthropic API key through the SynaBun interface. The key is encrypted at rest using AES-256-GCM with a host-side secret that never leaves the machine. When the guest starts a Claude session, the server spawns a dedicated process using the host's Claude Code binary but with the guest's API key injected into the environment.
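In Node, the encrypt-at-rest step could look roughly like this sketch using the built-in `node:crypto` AES-256-GCM cipher. The function names and storage format are illustrative assumptions, not the actual implementation:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a guest API key with AES-256-GCM under a 32-byte host secret.
function encryptApiKey(plaintext: string, secret: Buffer): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv("aes-256-gcm", secret, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store nonce, auth tag, and ciphertext together as one blob.
  return Buffer.concat([iv, tag, ct]).toString("base64");
}

// Decrypt in memory at spawn time; GCM verifies integrity via the auth tag.
function decryptApiKey(blob: string, secret: Buffer): string {
  const raw = Buffer.from(blob, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const ct = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", secret, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}

const hostSecret = randomBytes(32); // in practice, loaded from the host's keystore
const stored = encryptApiKey("sk-ant-placeholder", hostSecret);
```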
The guest's process is sandboxed. It operates on the host's project directory but gets a whitelisted environment. Only essential system variables pass through. The host's API keys, tokens, and secrets are stripped entirely. No SynaBun hooks are injected into the guest process either. It runs vanilla Claude Code with the guest's identity.
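Building that whitelisted environment might look like the sketch below. The helper name and the exact variable list are assumptions; the guest's key is injected as `ANTHROPIC_API_KEY`, the environment variable Claude Code reads:

```typescript
// Only explicitly safe variables pass through to the guest process.
const ENV_WHITELIST = ["PATH", "HOME", "TEMP", "LANG", "SHELL"];

function buildGuestEnv(
  hostEnv: Record<string, string | undefined>,
  guestApiKey: string
): Record<string, string> {
  const env: Record<string, string> = {};
  for (const name of ENV_WHITELIST) {
    const value = hostEnv[name];
    if (value !== undefined) env[name] = value;
  }
  // The guest's own key, never the host's. This env object would then be
  // passed as the `env` option to child_process.spawn for the guest CLI.
  env.ANTHROPIC_API_KEY = guestApiKey;
  return env;
}
```

Note the default-deny stance: anything not on the whitelist, including host secrets the server never anticipated, simply never reaches the guest process.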
Each guest gets their own WebSocket connection on a dedicated endpoint. The message protocol is identical to the host's, so the existing Claude panel UI works without changes. The server just routes messages to the correct process based on who sent them.
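The per-user routing can be sketched as a map from session identity to process; names here are illustrative, not the actual server code:

```typescript
type SessionId = string;

// Anything that can receive a message, e.g. a wrapper around a
// child process's stdin.
interface CliSession {
  send(message: string): void;
}

class CliRouter {
  private sessions = new Map<SessionId, CliSession>();

  register(id: SessionId, session: CliSession): void {
    this.sessions.set(id, session);
  }

  // Deliver a message from user `id` to their own process only;
  // returns false if no session exists for that user.
  route(id: SessionId, message: string): boolean {
    const session = this.sessions.get(id);
    if (!session) return false;
    session.send(message);
    return true;
  }

  unregister(id: SessionId): void {
    this.sessions.delete(id);
  }
}
```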
Security: The Hard Questions
Sharing a workspace with someone else's AI raises real security questions. We thought about every one of them.
- API key safety. Guest keys are encrypted at rest and only decrypted in memory at spawn time. They never appear in logs, error messages, or debug output, and key submission happens over HTTPS only.
- Host secret isolation. Guest processes receive a whitelisted environment. We do not just strip known secrets; we only pass through explicitly safe variables like PATH, HOME, and TEMP. Everything else is excluded by default.
- Resource protection. Each guest session has a cost cap, an idle timeout, and the host can set a maximum number of concurrent guest processes. Runaway agents get terminated automatically.
- File access. Guests can read and write project files, the same level of access they already have with terminal permission. The host explicitly opts in to this. No BringCLI without the host enabling it.
- Process lifecycle. When the WebSocket closes, the process dies. No orphaned agents running in the background.
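The lifecycle rules above can be sketched as a small wrapper around the guest process. Timeout values and names are illustrative assumptions:

```typescript
// Ties a guest process's lifetime to activity: every incoming message
// resets an idle timer, and kill() is called when the WebSocket closes
// or when the host terminates the session.
class GuestProcess {
  private idleTimer?: ReturnType<typeof setTimeout>;
  alive = true;

  constructor(private readonly idleMs: number) {
    this.touch();
  }

  // Call on every guest message to reset the idle timeout.
  touch(): void {
    if (this.idleTimer) clearTimeout(this.idleTimer);
    this.idleTimer = setTimeout(() => this.kill(), this.idleMs);
  }

  // Call on WebSocket close, idle timeout, or host kill.
  kill(): void {
    if (this.idleTimer) clearTimeout(this.idleTimer);
    this.alive = false; // in the real server: child.kill("SIGTERM")
  }
}
```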
What This Looks Like in Practice
You are working on a SynaBun project. A friend wants to help debug an issue. You generate an invite link, they join your workspace, and you enable the BringCLI permission. They paste their API key into a simple config modal, pick their preferred model, and click save.
Now your Neural Interface shows two active CLI sessions. Yours in the default blue accent. Theirs in amber. Both running side by side in the Claude panel. You can see that they are running Sonnet 4.6 and have spent $0.45 so far. They can see your project, your files, your terminal output. But their AI responses come from their own key, their own model, their own instructions.
They find the bug. Their Claude writes the fix. You review it in your own Claude session. Two perspectives, two AI agents, one shared truth: the code on disk.
The Bigger Picture
We think this is the beginning of something larger. Today, AI development tools are personal devices. Your AI, your context, your machine. But real software development is collaborative. The tools need to catch up.
BringCLI is our first step toward multiplayer AI development. Not the kind where everyone shares a single AI. The kind where every participant brings their own intelligence to the table, configured the way they work best, operating on shared ground.
Think about what comes next. A team of five developers, each with their own Claude instance, all working the same monorepo. Each one tuned for their domain. The frontend specialist's Claude knows React patterns cold. The backend engineer's Claude has strict type safety instructions. The DevOps person's Claude is configured for infrastructure-as-code. All of them reading from and writing to the same project, with real-time visibility into what every agent is doing.
That is not science fiction. The architecture supports it today; what remains is the engineering to build it out.
What We Are Building First
We are taking a phased approach. The first version focuses on the core loop: API key configuration, separate process spawning, cost tracking, and host controls. No custom MCP servers for guests yet. No guest-specific hooks. No session history persistence. Just the clean foundation that makes multi-agent collaboration possible.
Security hardening comes next. Environment whitelisting, cost cap enforcement, idle timeouts, concurrent process limits. Then the full UX: model selector, custom instructions, the host dashboard showing all active guest CLIs.
We are publishing this research now because the architecture is finalized and we want the community involved. If you are building with Claude Code and have opinions about how multi-user AI workspaces should work, we want to hear them.
SynaBun is open source. The research behind BringCLI, including the full architecture design, security analysis, and phased roadmap, is documented in our persistent memory system. Every decision, every trade-off, every rejected alternative is stored and searchable. That is what happens when your development tool has a brain that persists across sessions.
Follow the progress on GitHub or join the Discord. The future of AI development is not single-player.