Private Alpha

One context window
isn't enough.

Claude Code, Cursor, Copilot — brilliant tools, all sharing one bottleneck: a single context window. GraphBus distributes reasoning across specialized agents, one per concern. They negotiate at domain boundaries. No single agent holds everything.

spec_to_service — 5 agents, one pipeline
# Each agent owns one concern. None sees the whole picture.
from graphbus import GraphBusNode, depends_on  # import path assumed

class SpecParserAgent(GraphBusNode):
    SYSTEM_PROMPT = "You extract structured API requirements from plain-English specs."

class RouterAgent(GraphBusNode):
    SYSTEM_PROMPT = "You generate FastAPI route handlers from parsed endpoints."

class ModelAgent(GraphBusNode):
    SYSTEM_PROMPT = "You generate Pydantic models from API data schemas."

class TestAgent(GraphBusNode):
    SYSTEM_PROMPT = "You write pytest suites for FastAPI services."

@depends_on("RouterAgent", "ModelAgent", "TestAgent")
class OrchestratorAgent(GraphBusNode):
    SYSTEM_PROMPT = "You coordinate the full spec-to-service pipeline."

# Give it a spec. Get a working microservice.
# No single agent held the whole context.
832 · Tests passing on every commit
<60s · From pip install to first agent running
MIT · Open source — no vendor lock-in
100% · Core test coverage

What developers are saying.

"Finally a framework that treats LLM reasoning like software engineering — separation of concerns, typed contracts, observable negotiation. This is how I'd design it if I were building it from scratch."
James M.
Platform Engineer · 200k LOC Python monolith
"Cursor falls apart on our service mesh. Once I saw GraphBus distribute context across domain-specific agents, I stopped fighting the context window and started scaling with it."
Sofia R.
Backend Lead · 12-service microservices stack
"The negotiation protocol is the piece I didn't know I was missing. Cross-domain refactors used to produce inconsistent outputs. Now agents propose, evaluate, and vote — the result is actually coherent."
David K.
Staff Engineer · FinTech SaaS

Separation of concerns.
At the reasoning level.

Good software separates concerns in code. GraphBus does the same for LLM reasoning — each agent is a specialist, not a generalist staring at everything at once.

🧠
DISTRIBUTED CONTEXT

Each Agent, One Domain

Each class gets its own LLM agent with its own focused context. Your InventoryService agent understands inventory deeply. Your OrderService agent understands orders. Neither has to reason about everything — they specialize, then negotiate at the boundary.

  • Focused context per domain — no sprawl
  • Specialists, not a single generalist
  • Scales with codebase size, not against it
  • Quality reasoning per domain, not diluted reasoning across all
🤝
NEGOTIATION AT BOUNDARIES

Consensus, Not Guesswork

At domain boundaries, agents don't silently assume — they negotiate. Proposals cross the bus. Peers evaluate. Votes determine outcomes. An arbiter resolves conflicts. The result is a consistent, well-reasoned system that no single context window could produce.

  • Proposal → evaluate → vote → commit
  • Arbiter resolves cross-domain conflicts
  • Typed contracts enforced at every boundary
  • Observable — every negotiation is auditable
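The round above can be sketched in plain Python. This is an illustrative shape only, not the GraphBus implementation: `negotiate`, the proposal strings, and the scoring lambdas are all hypothetical stand-ins for LLM evaluations.

```python
# Illustrative sketch of one negotiation round (not the GraphBus API).
# Every agent scores every proposal; the highest aggregate score wins.
# A real arbiter would also resolve conflicts; here ties break by name.

def negotiate(proposals: dict, evaluators: dict) -> str:
    # evaluate: each peer scores each proposal
    scores = {
        name: sum(score(text) for score in evaluators.values())
        for name, text in proposals.items()
    }
    # vote + commit: highest total wins; sort keys for determinism
    return max(sorted(scores), key=scores.get)

proposals = {
    "RouterAgent": "use snake_case route names",
    "ModelAgent": "use camelCase route names",
}
evaluators = {  # hypothetical scoring functions standing in for LLM calls
    "RouterAgent": lambda p: 1.0 if "snake" in p else 0.2,
    "ModelAgent": lambda p: 0.8 if "snake" in p else 0.6,
    "TestAgent": lambda p: 0.9 if "snake" in p else 0.1,
}
winner = negotiate(proposals, evaluators)
assert winner == "RouterAgent"  # the snake_case proposal carries the vote
```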
01

Separate your concerns

Subclass GraphBusNode. Give each class a system prompt defining its domain expertise. Each agent now holds focused context for its slice of the system — not the whole thing.

02

Negotiate at the boundaries

GraphBus runs a negotiation round across the bus. Each agent proposes, evaluates peers, and votes. The arbiter resolves conflicts. Complex cross-domain changes reach consensus — no single LLM had to hold it all.

03

Deploy the artifacts

The build emits clean JSON artifacts — graph, agents, topics, schemas. Runtime loads these, routes typed messages across the bus, and calls LLMs exactly when your agent logic needs them — you control the invocation.
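A purely illustrative sketch of what such an artifact bundle could contain. The field names and topic strings below are guesses, not the actual GraphBus format:

```json
{
  "graph": {
    "nodes": ["SpecParserAgent", "RouterAgent", "OrchestratorAgent"],
    "edges": [["SpecParserAgent", "RouterAgent"]]
  },
  "agents": {
    "RouterAgent": {"system_prompt": "You generate FastAPI route handlers from parsed endpoints."}
  },
  "topics": ["spec.parsed", "routes.generated"],
  "schemas": {
    "RouterAgent.build_routes": {"input": "ParsedSpec", "output": "RouteSet"}
  }
}
```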

04

Evolve over time

Your codebase improves with every build cycle. Agents negotiate schema contracts, refactor for coherence, and adapt to changing requirements — collaboratively.

See it in action.

Real GraphBus CLI — actual output from the framework, recorded from a live build pipeline run.


Spec in. Microservice out.

The spec_to_service example ships with GraphBus. Give it plain English — five specialized agents negotiate and produce a working FastAPI service.

📥 Input — plain English
TASK_SPEC = """A task management API with:
- CRUD operations for tasks
  (id, title, description, status,
   priority, due_date)
- User assignment by user_id
- Filter by status and priority
- Mark tasks complete
No auth required for MVP."""

executor.call_method(
    'OrchestratorAgent',
    'build_service',
    spec=TASK_SPEC,
    service_name='TaskManagerAPI'
)
📤 Output — working FastAPI service
# router.py — generated by RouterAgent
@router.get("/tasks", response_model=List[Task])
async def list_tasks(
    status: Optional[str] = None,
    priority: Optional[str] = None
):
    ...

@router.post("/tasks/{id}/assign")
async def assign_task(id: int, user_id: int):
    ...

# models.py — generated by ModelAgent
class Task(BaseModel):
    id: int
    title: str
    status: TaskStatus
    priority: Priority
    due_date: Optional[datetime]

# test_taskmanagerapi.py — by TestAgent
def test_create_task(client):
    res = client.post("/tasks", json={...})
    assert res.status_code == 201
Five agents. Each specialized. None overwhelmed.

📋 SpecParser · Reads the spec
🛣️ Router · Generates routes
🗂️ Model · Generates schemas
🧪 TestWriter · Writes tests
🎯 Orchestrator · Coordinates all

Built for distributed cognition.

🕸️

Graph-native

Every agent is a node. Every dependency is an edge. The entire system is a live, traversable networkx DAG — built for topological reasoning.
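GraphBus models this graph with networkx; the same topological idea can be shown self-contained with the standard-library graphlib module. The agent names below mirror the spec_to_service example; the dependency dict is illustrative, not GraphBus's internal representation:

```python
# Sketch of the graph model: agents as nodes, dependencies as edges.
from graphlib import TopologicalSorter

# dependent -> set of dependencies (mirrors @depends_on)
deps = {
    "RouterAgent": {"SpecParserAgent"},
    "OrchestratorAgent": {"RouterAgent", "ModelAgent", "TestAgent"},
}
order = list(TopologicalSorter(deps).static_order())
# every agent is scheduled only after all of its dependencies
assert order[-1] == "OrchestratorAgent"
```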

📨

Message Bus

Typed pub/sub messaging across agents. Topics, subscriptions, and event routing baked into the protocol — no external broker required.
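As a rough mental model (not the GraphBus API; `Bus`, `subscribe`, and `publish` here are hypothetical), typed pub/sub routes each message to subscribers keyed by the message's type:

```python
# Minimal typed pub/sub sketch: the message class doubles as the topic.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskCreated:
    task_id: int
    title: str

class Bus:
    def __init__(self) -> None:
        self._subs: dict[type, list[Callable]] = defaultdict(list)

    def subscribe(self, msg_type: type, handler: Callable) -> None:
        self._subs[msg_type].append(handler)

    def publish(self, msg) -> None:
        # route by message type; only matching subscribers are called
        for handler in self._subs[type(msg)]:
            handler(msg)

bus = Bus()
received = []
bus.subscribe(TaskCreated, received.append)
bus.publish(TaskCreated(task_id=1, title="write docs"))
assert received[0].title == "write docs"
```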

🤝

Negotiation Protocol

No single agent decides. Each proposes, evaluates, and votes. An arbiter resolves conflicts. The outcome is emergent — distributed intelligence, not centralized reasoning.

🔒

Schema Contracts

Method-level input/output schemas define the contract between agents. The build validates every edge in the graph before emitting artifacts.
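The idea can be sketched as a decorator that checks a method's inputs and output at the boundary. `contract` and `assign_task` below are hypothetical illustrations, not GraphBus's actual schema machinery:

```python
# Sketch of a method-level schema contract enforced at call time.
from functools import wraps

def contract(inputs: dict, output: type):
    def deco(fn):
        @wraps(fn)
        def wrapper(**kwargs):
            # validate inputs against the declared schema
            for name, typ in inputs.items():
                if not isinstance(kwargs.get(name), typ):
                    raise TypeError(f"{name} must be {typ.__name__}")
            result = fn(**kwargs)
            # validate the output before it crosses the boundary
            if not isinstance(result, output):
                raise TypeError(f"return must be {output.__name__}")
            return result
        return wrapper
    return deco

@contract(inputs={"task_id": int}, output=dict)
def assign_task(task_id: int) -> dict:
    return {"task_id": task_id, "assigned": True}

assert assign_task(task_id=7) == {"task_id": 7, "assigned": True}
```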

🖥️

CLI + TUI

16 production-grade CLI commands. An interactive TUI for build, runtime, deploy, and profiling — keyboard-driven, no mouse required.

🚀

Deploy anywhere

Built-in Docker, Kubernetes, and CI/CD tooling. Generate Dockerfiles, K8s manifests, GitHub Actions — from the CLI in seconds.

📊

Observability

Prometheus metrics, real-time dashboards, event timelines, and health monitoring. Know exactly what your agents are doing.

🧪

Test-first

100% passing tests on Runtime Core, CLI, and deployment tooling. pytest-native. Coverage reporting built in. Ship with confidence.

GraphBus vs. the alternatives.

Other frameworks run agents. GraphBus makes agents collaboratively write and evolve the code itself.

| Capability | GraphBus | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Agents rewrite source code | ✓ Yes | ✗ No | ✗ No | ⚬ Limited |
| Build-time LLM negotiation | ✓ Built-in | ✗ Not supported | ✗ Not supported | ✗ Not supported |
| Agent negotiation / consensus | ✓ Built-in | ✗ No | ⚬ Partial | ⚬ Partial |
| Graph-native DAG orchestration | ✓ networkx | ✓ Yes | ✗ No | ✗ No |
| Typed schema contracts per edge | ✓ Yes | ⚬ Partial | ✗ No | ✗ No |
| Build / Runtime mode separation | ✓ Core design | ✗ No | ✗ No | ✗ No |
| Pure Python, no vendor lock-in | ✓ Yes | ✓ Yes | ✓ Yes | ✓ Yes |
| Built-in K8s / Docker deploy | ✓ CLI native | ✗ No | ✗ No | ✗ No |

Negotiation as infrastructure.

GraphBus defines the protocol for how specialized agents distribute cognitive load, communicate across domains, and negotiate to consensus — without any single agent doing all the thinking.

⚖️ Arbiter · Resolves conflicts
GraphBus message flow: proposal → evaluation → vote → consensus
Agents on the bus: 🤖 ServiceA · 🤖 ServiceB · 🤖 ServiceC

Latest thinking on agent orchestration.


Beyond the context window.

Every existing tool funnels your codebase through a single LLM. That's the bottleneck GraphBus was designed to eliminate.

| | Claude Code / Cursor / Copilot | GraphBus |
|---|---|---|
| Context strategy | Single context window — everything in, everything degraded | Distributed — each agent holds focused context for its domain |
| Separation of concerns | At the code level only | At the reasoning level — agents are specialists |
| Cross-domain changes | One model guesses at consistency | Agents negotiate at boundaries — consensus, not guesswork |
| Scales with codebase size | Quality degrades as context fills | Add agents — context stays focused, quality stays high |
| Architecture | Centralized — one LLM, one shot | Distributed — graph of specialists over a message bus |
| Auditability | Black box — one response | Every proposal, vote, and commit is observable |

Up in 60 seconds.

1
Install
pip install graphbus
2
Init a project
graphbus init my-project --template microservices
cd my-project
3
Build + run
graphbus build agents/
graphbus run .graphbus
4
Enable LLM agents (optional)
export ANTHROPIC_API_KEY=sk-...
graphbus build agents/ --enable-agents

Join the waitlist

Download GraphBus

Three ways to use GraphBus — pick what fits your workflow.

🖥️

Desktop App

Native app with visual agent graph, real-time monitoring, and natural language control.

⌨️

CLI + Session

Terminal-first workflow. Interactive session mode wraps Claude Code for guided agent negotiation.

$ pip install graphbus
$ graphbus init my-project
$ cd my-project
$ graphbus session
🔌

API

REST API for programmatic access. Build, negotiate, and manage agents from any language or CI/CD pipeline.

POST api.graphbus.com/api/build
POST api.graphbus.com/api/run
GET api.graphbus.com/docs

GraphBus is in alpha. We're onboarding early adopters who want to shape the protocol. Drop your email — we'll reach out when we're ready for you.

No spam. Just launch news and protocol updates.

Already have questions? Email us directly →