Claude Code, Cursor, Copilot — brilliant tools, single bottleneck. GraphBus distributes reasoning across specialized agents, one per concern. They negotiate at domain boundaries. No single agent holds everything.
```python
# Each agent owns one concern. None sees the whole picture.

class SpecParserAgent(GraphBusNode):
    SYSTEM_PROMPT = "You extract structured API requirements from plain-English specs."

class RouterAgent(GraphBusNode):
    SYSTEM_PROMPT = "You generate FastAPI route handlers from parsed endpoints."

class ModelAgent(GraphBusNode):
    SYSTEM_PROMPT = "You generate Pydantic models from API data schemas."

class TestAgent(GraphBusNode):
    SYSTEM_PROMPT = "You write pytest suites for FastAPI services."

@depends_on("RouterAgent", "ModelAgent", "TestAgent")
class OrchestratorAgent(GraphBusNode):
    SYSTEM_PROMPT = "You coordinate the full spec-to-service pipeline."

# Give it a spec. Get a working microservice.
# No single agent held the whole context.
```
Good software separates concerns in code. GraphBus does the same for LLM reasoning — each agent is a specialist, not a generalist staring at everything at once.
Each class gets its own LLM agent with its own focused context. Your InventoryService agent understands inventory deeply. Your OrderService agent understands orders. Neither has to reason about everything — they specialize, then negotiate at the boundary.
At domain boundaries, agents don't silently assume — they negotiate. Proposals cross the bus. Peers evaluate. Votes determine outcomes. An arbiter resolves conflicts. The result is a consistent, well-reasoned system that no single context window could produce.
Subclass GraphBusNode. Give each class a system prompt defining its domain expertise. Each agent now holds focused context for its slice of the system — not the whole thing.
GraphBus runs a negotiation round across the bus. Each agent proposes, evaluates peers, and votes. The arbiter resolves conflicts. Complex cross-domain changes reach consensus — no single LLM had to hold it all.
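The round described above can be sketched in plain Python. Everything here (`Proposal`, `run_round`, the callback signatures) is illustrative, not the GraphBus API:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical sketch of one negotiation round — names are illustrative,
# not the actual GraphBus API.

@dataclass
class Proposal:
    author: str
    change: str  # e.g. a proposed schema contract

def run_round(agents, propose, evaluate, arbiter):
    """Each agent proposes; peers vote; the arbiter breaks ties."""
    proposals = [Proposal(a, propose(a)) for a in agents]
    votes = Counter()
    for voter in agents:
        for p in proposals:
            # Agents evaluate peers' proposals, never their own.
            if voter != p.author and evaluate(voter, p):
                votes[p.change] += 1
    if not votes:
        return None
    ranked = votes.most_common()
    # Conflict: multiple proposals tie for first — the arbiter decides.
    tied = [change for change, n in ranked if n == ranked[0][1]]
    return arbiter(tied) if len(tied) > 1 else tied[0]
```

With three agents whose peers favor one proposal, `run_round` returns the majority choice; no single agent ever held the others' full context.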
The build emits clean JSON artifacts — graph, agents, topics, schemas. Runtime loads these, routes typed messages across the bus, and calls LLMs exactly when your agent logic needs them — you control the invocation.
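That flow can be sketched as plain JSON round-tripping. The artifact names and field shapes below are assumptions for illustration, not GraphBus's real output format:

```python
import json
from pathlib import Path

# Hypothetical artifact shapes — illustrative, not GraphBus's real format.
artifacts = {
    "graph.json":  {"nodes": ["RouterAgent", "ModelAgent"],
                    "edges": [["RouterAgent", "ModelAgent"]]},
    "topics.json": {"tasks.created": {"schema": "Task"}},
}

build_dir = Path(".graphbus")
build_dir.mkdir(exist_ok=True)
for name, payload in artifacts.items():
    (build_dir / name).write_text(json.dumps(payload, indent=2))

# Runtime side: reload the graph and route from plain data — no LLM call
# happens until agent logic asks for one.
graph = json.loads((build_dir / "graph.json").read_text())
```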
Your codebase improves with every build cycle. Agents negotiate schema contracts, refactor for coherence, and adapt to changing requirements — collaboratively.
Real GraphBus CLI — actual output from the framework. Hit play and watch the build pipeline run.
The spec_to_service example ships with GraphBus. Give it plain English — five specialized agents negotiate and produce a working FastAPI service.
```python
TASK_SPEC = """A task management API with:
- CRUD operations for tasks
  (id, title, description, status,
   priority, due_date)
- User assignment by user_id
- Filter by status and priority
- Mark tasks complete
No auth required for MVP."""

executor.call_method(
    'OrchestratorAgent',
    'build_service',
    spec=TASK_SPEC,
    service_name='TaskManagerAPI'
)
```
```python
# router.py — generated by RouterAgent
@router.get("/tasks", response_model=List[Task])
async def list_tasks(
    status: Optional[str] = None,
    priority: Optional[str] = None
):
    ...

@router.post("/tasks/{id}/assign")
async def assign_task(id: int, user_id: int):
    ...
```
```python
# models.py — generated by ModelAgent
class Task(BaseModel):
    id: int
    title: str
    status: TaskStatus
    priority: Priority
    due_date: Optional[datetime]
```
```python
# test_taskmanagerapi.py — by TestAgent
def test_create_task(client):
    res = client.post("/tasks", json={...})
    assert res.status_code == 201
```
Every agent is a node. Every dependency is an edge. The entire system is a live, traversable networkx DAG — built for topological reasoning.
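GraphBus builds this graph with networkx; the same traversal can be sketched with the standard library's `graphlib` to show what topological reasoning buys you:

```python
from graphlib import TopologicalSorter

# Agents are nodes; each @depends_on relationship is an edge.
# (Stdlib sketch of the idea — GraphBus itself uses networkx.)
deps = {
    "OrchestratorAgent": {"RouterAgent", "ModelAgent", "TestAgent"},
    "RouterAgent": set(),
    "ModelAgent": set(),
    "TestAgent": set(),
}

# A topological order guarantees every agent comes after its dependencies.
order = list(TopologicalSorter(deps).static_order())
```

The three specialists sort ahead of the orchestrator, so their contributions exist before anything tries to coordinate them.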
Typed pub/sub messaging across agents. Topics, subscriptions, and event routing baked into the protocol — no external broker required.
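A toy version of that routing (the `Bus` class below is a sketch of the concept, not GraphBus's implementation):

```python
from collections import defaultdict
from typing import Any, Callable

# Minimal in-process typed pub/sub — illustrative only, not the
# GraphBus protocol. Topics map to subscriber callbacks; publishing
# type-checks the payload against the topic's registered schema.
class Bus:
    def __init__(self):
        self.schemas: dict[str, type] = {}
        self.subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def topic(self, name: str, schema: type):
        self.schemas[name] = schema

    def subscribe(self, name: str, handler):
        self.subs[name].append(handler)

    def publish(self, name: str, payload):
        if not isinstance(payload, self.schemas[name]):
            raise TypeError(f"{name} expects {self.schemas[name].__name__}")
        for handler in self.subs[name]:
            handler(payload)

bus = Bus()
bus.topic("tasks.created", dict)
received = []
bus.subscribe("tasks.created", received.append)
bus.publish("tasks.created", {"id": 1, "title": "Ship it"})
```

A payload of the wrong type never reaches a subscriber; it fails at the boundary instead.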
No single agent decides. Each proposes, evaluates, and votes. An arbiter resolves conflicts. The outcome is emergent — distributed intelligence, not centralized reasoning.
Method-level input/output schemas define the contract between agents. The build validates every edge in the graph before emitting artifacts.
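A build-time check of that kind can be sketched like this (the method names and schema labels are hypothetical):

```python
# Hypothetical sketch of build-time edge validation: each method
# declares input/output schemas, and every edge in the graph must
# connect a producer's output schema to the consumer's input schema.
contracts = {
    # method: (input_schema, output_schema) — illustrative names
    "SpecParserAgent.parse": ("str", "Endpoints"),
    "RouterAgent.generate": ("Endpoints", "RouterCode"),
}

edges = [("SpecParserAgent.parse", "RouterAgent.generate")]

def validate(edges, contracts):
    """Fail the build if any edge's schemas don't line up."""
    for producer, consumer in edges:
        out_schema = contracts[producer][1]
        in_schema = contracts[consumer][0]
        if out_schema != in_schema:
            raise ValueError(f"{producer} -> {consumer}: "
                             f"{out_schema} != {in_schema}")

validate(edges, contracts)  # passes: Endpoints matches Endpoints
```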
16 production-grade CLI commands. An interactive TUI for build, runtime, deploy, and profiling — keyboard-driven, no mouse required.
Built-in Docker, Kubernetes, and CI/CD tooling. Generate Dockerfiles, K8s manifests, GitHub Actions — from the CLI in seconds.
Prometheus metrics, real-time dashboards, event timelines, and health monitoring. Know exactly what your agents are doing.
100% passing tests on Runtime Core, CLI, and deployment tooling. pytest-native. Coverage reporting built in. Ship with confidence.
Other frameworks run agents. GraphBus makes agents collaboratively write and evolve the code itself.
| Capability | GraphBus | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Agents rewrite source code | ✓ Yes | ✗ No | ✗ No | ⚬ Limited |
| Build-time LLM negotiation | ✓ Built-in | ✗ Not supported | ✗ Not supported | ✗ Not supported |
| Agent negotiation / consensus | ✓ Built-in | ✗ No | ⚬ Partial | ⚬ Partial |
| Graph-native DAG orchestration | ✓ networkx | ✓ Yes | ✗ No | ✗ No |
| Typed schema contracts per edge | ✓ Yes | ⚬ Partial | ✗ No | ✗ No |
| Build / Runtime mode separation | ✓ Core design | ✗ No | ✗ No | ✗ No |
| Pure Python, no vendor lock-in | ✓ Yes | ✓ Yes | ✓ Yes | ✓ Yes |
| Built-in K8s / Docker deploy | ✓ CLI native | ✗ No | ✗ No | ✗ No |
GraphBus defines the protocol for how specialized agents distribute cognitive load, communicate across domains, and negotiate to consensus — without any single agent doing all the thinking.
Every existing tool funnels your codebase through a single LLM. That's the bottleneck GraphBus was designed to eliminate.
| | Claude Code / Cursor / Copilot | GraphBus |
|---|---|---|
| Context strategy | Single context window — everything in, everything degraded | Distributed — each agent holds focused context for its domain |
| Separation of concerns | At the code level only | At the reasoning level — agents are specialists |
| Cross-domain changes | One model guesses at consistency | Agents negotiate at boundaries — consensus, not guesswork |
| Scales with codebase size | Quality degrades as context fills | Add agents — context stays focused, quality stays high |
| Architecture | Centralized — one LLM, one shot | Distributed — graph of specialists over a message bus |
| Auditability | Black box — one response | Every proposal, vote, and commit is observable |
```shell
pip install graphbus
graphbus init my-project --template microservices
cd my-project
graphbus build agents/
graphbus run .graphbus
```

```shell
# Enable LLM-backed agents during the build
export ANTHROPIC_API_KEY=sk-...
graphbus build agents/ --enable-agents
```
Three ways to use GraphBus — pick what fits your workflow.
Native app with visual agent graph, real-time monitoring, and natural language control.
Terminal-first workflow. Interactive session mode wraps Claude Code for guided agent negotiation.
REST API for programmatic access. Build, negotiate, and manage agents from any language or CI/CD pipeline.
GraphBus is in alpha. We're onboarding early adopters who want to shape the protocol. Drop your email — we'll reach out when we're ready for you.