GraphBus Documentation
A multi-agent orchestration protocol: you declare an intent, agents negotiate to achieve it at build time, then communicate via a typed message bus at runtime, with zero LLM cost per call.
Installation
GraphBus requires Python 3.9 or higher. Install from source while we prepare the PyPI release:
# Clone the repo
git clone https://github.com/graphbus/graphbus-core
cd graphbus-core
# Install (editable mode for development)
pip install -e .
# Verify
graphbus --version
# graphbus 0.1.0
For LLM-powered agent builds, you'll also need an API key:
# Claude (recommended)
export ANTHROPIC_API_KEY=sk-ant-...
# GraphBus works without an API key in static build mode
pip install graphbus is not available before the public launch. Join the waitlist at graphbus.com to be notified.
Hello World
Let's build a minimal GraphBus project from scratch. We'll create one agent, build it, and run it.
1. Initialize a project with intent
graphbus init hello-graphbus --intent "generate and log greeting messages"
cd hello-graphbus
The --intent flag records your plain-English goal for the project. Agents reference it during negotiation to stay on task.
2. Define an agent
Create agents/hello_service.py:
from graphbus_core import GraphBusNode, schema_method, subscribe

class HelloService(GraphBusNode):
    """Generates and logs greeting messages."""

    SYSTEM_PROMPT = """
    I generate friendly greeting messages.
    During build cycles, I negotiate with other agents to ensure
    my output schema is consistent and well-typed.
    """

    @schema_method(
        input_schema={},
        output_schema={"message": str, "timestamp": str}
    )
    def generate_message(self) -> dict:
        import datetime
        return {
            "message": "Hello from GraphBus!",
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat()
        }

    @subscribe("/Hello/MessageGenerated")
    def on_message(self, event):
        self.log(f"Received: {event.payload['message']}")
3. Build (static, no LLM)
graphbus build agents/
# [BUILD] Scanning agents/hello_service.py
# [BUILD] Discovered: HelloService
# [BUILD] Graph: 1 node, 0 edges
# [BUILD] Schema validated: generate_message → {message: str, timestamp: str}
# [BUILD] Artifacts written to .graphbus/
# [BUILD] Complete in 0.12s
4. Run the artifacts
graphbus run .graphbus/
# [RUNTIME] Loaded 1 agent (HelloService)
# [RUNTIME] Message bus initialized (1 topic)
# [RUNTIME] HelloService → "Hello from GraphBus!"
# [RUNTIME] Zero LLM calls made.
5. Run a negotiation with intent
Give agents a plain-English goal and let them negotiate how to achieve it:
export ANTHROPIC_API_KEY=sk-ant-...
graphbus negotiate .graphbus --intent "add error handling for clock failures"
# Intent: "add error handling for clock failures"
# [AGENT] HelloService: "I propose wrapping the timestamp call in a try/except"
# [ARBITER] Accepted (no other agents to consult)
# ✓ 1 file improved (agents/hello_service.py)
# ✓ Artifacts written to .graphbus/
.graphbus/ is ready to run deterministically, with no LLM cost at runtime.
Project Structure
A typical GraphBus project looks like this:
my-project/
├── agents/
│   ├── order_processor.py      ← GraphBusNode subclasses
│   ├── inventory_service.py
│   └── notification_service.py
├── .graphbus/                  ← Build artifacts (auto-generated)
│   ├── graph.json              ← Agent dependency graph
│   ├── agents.json             ← Agent metadata + source
│   ├── topics.json             ← Topic registry
│   └── build_summary.json      ← Build metadata
├── graphbus.yaml               ← Project config (optional)
└── tests/
The .graphbus/ directory is the output of a build. You deploy this, not the raw agent source. Think of it like a compiled binary, but as inspectable JSON.
Intent
Intent is the plain-English goal you give GraphBus. It's the first thing you declare, and it flows through every agent interaction, keeping negotiations on task and making the system's purpose auditable.
# Set intent at project creation
graphbus init my-service --intent "order processing pipeline with payment validation"
# Reference it in any negotiation
graphbus negotiate .graphbus --intent "add null amount validation to order processor"
Intent is immutable per negotiation: it's recorded in the negotiation history alongside every proposal and decision, giving you a permanent audit trail of why each change was made.
Why intent matters
- Focus: Agents check intent relevance before proposing. An agent outside the intent's scope stays quiet.
- Auditability: Every commit in .graphbus/ traces back to a specific intent. You always know why code changed.
- Institutional memory: Negotiation history (intent + proposals + decisions) is queryable via the GraphBus API, so your codebase accrues context over time.
Be specific: "add rate limiting to order processor" is a better intent than "improve the service".
Two Modes
GraphBus has two strictly separated execution modes. Understanding this separation is the key to understanding GraphBus.
🔨 Agents Active
- LLM agent instantiated per class
- Agents read their own source code
- Agents propose improvements via the bus
- Proposals are evaluated + voted on
- Arbiter resolves split decisions
- Consensus changes committed to source
- Build artifacts emitted to .graphbus/
⚡ Agents Dormant
- Loads .graphbus/ artifacts
- No LLM calls, ever
- Pure Python execution
- Pub/sub event routing via message bus
- Deterministic, auditable output
- Production-grade speed
- Deploy anywhere Python runs
The two modes are completely independent. You can build on your dev machine and deploy runtime artifacts to production with no Python build dependencies required.
GraphBusNode
GraphBusNode is the base class for every agent. Subclass it to create an agent.
from graphbus_core import GraphBusNode

class MyAgent(GraphBusNode):
    # Optional: LLM system prompt for build-mode reasoning
    SYSTEM_PROMPT = "Describe this agent's role and negotiation strategy."

    # Optional: mark as arbiter (resolves conflicts)
    IS_ARBITER = False

    # Optional: custom memory backend
    memory = None

    # Your methods go here...
Built-in methods
| Method | Description |
|---|---|
| self.publish(topic, payload) | Publish an event to a topic on the bus |
| self.log(message) | Structured log output (runtime-safe) |
| self.get_memory(key) | Read from the agent's memory store |
| self.set_memory(key, value) | Write to the agent's memory store |
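The memory methods behave like a per-agent key-value store. GraphBus's actual backend isn't documented here; a minimal dict-backed sketch of the semantics (the class name is hypothetical) looks like:

```python
class AgentMemory:
    """Hypothetical per-agent store mirroring get_memory/set_memory semantics."""

    def __init__(self):
        self._store = {}

    def get_memory(self, key, default=None):
        # Missing keys return a default rather than raising
        return self._store.get(key, default)

    def set_memory(self, key, value):
        self._store[key] = value

mem = AgentMemory()
mem.set_memory("greeting_count", 3)
print(mem.get_memory("greeting_count"))  # 3
print(mem.get_memory("missing"))         # None
```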
Decorators
GraphBus uses three decorators to declare the structure of your agent graph.
@schema_method
Declares a typed input/output schema for a method. This forms the contract between agents: the build validates every edge in the graph against these schemas before emitting artifacts.
@schema_method(
    input_schema={"order_id": str, "items": list},
    output_schema={"status": str, "total": float}
)
def process_order(self, order_id: str, items: list) -> dict:
    ...
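Under the hood, this kind of validation amounts to checking payload keys and types against the schema dict. GraphBus's real validator isn't shown in these docs; a minimal sketch of the idea (the function name is hypothetical):

```python
def validate_payload(payload: dict, schema: dict) -> list:
    """Return a list of violations of `schema` (field -> type) by `payload`."""
    errors = []
    for field, expected in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, "
                          f"got {type(payload[field]).__name__}")
    # Flag fields the schema does not declare
    for field in payload:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    return errors

schema = {"order_id": str, "items": list}
print(validate_payload({"order_id": "ord_123", "items": []}, schema))  # []
print(validate_payload({"order_id": 42}, schema))
# ['order_id: expected str, got int', 'missing field: items']
```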
@subscribe
Registers a handler for a topic on the message bus. The handler is called at runtime whenever an event is published to that topic.
@subscribe("/Order/Created")
def on_order_created(self, event):
    # event.topic, event.payload, event.timestamp
    self.log(f"New order: {event.payload['order_id']}")
@depends_on
Declares a dependency edge in the agent DAG. The build uses this to determine topological evaluation order.
from graphbus_core import depends_on

class NotificationService(GraphBusNode):
    @depends_on("OrderProcessor")
    def send_confirmation(self, order_id: str):
        # OrderProcessor is guaranteed to have run first
        ...
Message Bus
The GraphBus message bus is a typed pub/sub system. Topics are path-like strings (e.g. /Order/Created). Any agent can publish to any topic; any agent can subscribe to any topic.
# Publishing (in any @schema_method or handler)
self.publish("/Order/Created", {
    "order_id": "ord_123",
    "total": 49.99
})

# Subscribing (declare on the class)
@subscribe("/Order/Created")
def handle_order(self, event):
    print(event.payload["order_id"])  # ord_123
Topic conventions
| Pattern | Example | Use |
|---|---|---|
| /Domain/Event | /Order/Created | Domain events |
| /Agent/Action | /Arbiter/Resolved | Agent lifecycle |
| /System/Status | /System/Ready | System events |
At runtime, the bus provides message history tracking and statistics. Use graphbus inspect .graphbus/ to see all registered topics and subscriptions.
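The dispatch model described above can be sketched in a few lines of plain Python. This is not GraphBus's implementation, just an illustration of topic-keyed pub/sub with an event object carrying topic, payload, and timestamp, plus the history tracking the bus provides:

```python
import datetime
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Event:
    topic: str
    payload: dict
    timestamp: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc))

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers
        self.history = []                      # published events, for inspection

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        event = Event(topic, payload)
        self.history.append(event)
        # Synchronous dispatch to every handler registered on this topic
        for handler in self._subscribers[topic]:
            handler(event)

bus = MessageBus()
bus.subscribe("/Order/Created", lambda e: print(e.payload["order_id"]))
bus.publish("/Order/Created", {"order_id": "ord_123"})  # prints ord_123
```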
Build Artifacts
A successful build writes JSON artifacts to .graphbus/. These are the only files needed to run your project. For example, the graph.json from the Hello World build:
{
  "nodes": [
    {
      "id": "HelloService",
      "module": "agents.hello_service",
      "system_prompt": "...",
      "methods": ["generate_message"],
      "subscriptions": ["/Hello/MessageGenerated"]
    }
  ],
  "edges": [],
  "topological_order": ["HelloService"]
}
Inspect your artifacts at any time:
graphbus inspect .graphbus/
# Shows: agent graph, schemas, topic registry, build metadata
graphbus validate agents/
# Validates schemas and dependency edges without building
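Because the artifacts are plain JSON, you can also inspect them with a few lines of Python and no GraphBus import. The field names below follow the graph.json sample shown above:

```python
import json
from pathlib import Path

def summarize_graph(artifacts_dir: str) -> str:
    """One-line summary of a graph.json artifact."""
    graph = json.loads(Path(artifacts_dir, "graph.json").read_text())
    return (f"{len(graph['nodes'])} node(s), {len(graph['edges'])} edge(s); "
            f"order: {' -> '.join(graph['topological_order'])}")

# print(summarize_graph(".graphbus/"))
# e.g. "1 node(s), 0 edge(s); order: HelloService"
```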
Build Pipeline
When you run graphbus build agents/, this pipeline executes:
- DISCOVER_MODULES → Scan the target path for Python files
- DISCOVER_CLASSES → Find all GraphBusNode subclasses
- READ_SOURCE_CODE → Load source for each agent class
- EXTRACT_AGENT_METADATA → Parse methods, schemas, subscriptions, system prompts
- BUILD_AGENT_GRAPH → Construct a networkx DAG from @depends_on edges
- COMPUTE_TOPOLOGICAL_ORDER → Sort agents for evaluation order
- EMIT_BUILD_ARTIFACTS → Write graph.json, agents.json, topics.json
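GraphBus delegates the ordering step to networkx, per the pipeline above, but the computation itself is just Kahn's algorithm. A dependency-free sketch of the same ordering:

```python
from collections import deque

def topological_order(nodes, edges):
    """Kahn's algorithm. edges are (dependency, dependent) pairs,
    e.g. from @depends_on declarations."""
    indegree = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for dep, node in edges:
        children[dep].append(node)
        indegree[node] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for child in children[n]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(nodes):
        raise ValueError("dependency cycle detected")
    return order

# NotificationService @depends_on("OrderProcessor") yields this edge:
print(topological_order(
    ["NotificationService", "OrderProcessor"],
    [("OrderProcessor", "NotificationService")]))
# ['OrderProcessor', 'NotificationService']
```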
Running agent negotiation
After a static build, use graphbus negotiate to run LLM-powered improvements guided by a specific intent:
# Static build first (fast, no LLM)
graphbus build agents/
# Then negotiate with a specific intent
graphbus negotiate .graphbus --intent "improve error handling in order processor"
# Negotiate with a specific model
graphbus negotiate .graphbus --intent "add retry logic" --llm-model deepseek/deepseek-reasoner
The negotiate pipeline adds these steps on top of a built artifact:
- LOAD_INTENT → Record the user's plain-English goal; bind it to this negotiation
- ACTIVATE_AGENTS → Instantiate one LLM per node; each agent reads its own source
- INTENT_RELEVANCE_CHECK → Each agent evaluates whether the intent is relevant to its scope
- MULTI_ROUND_NEGOTIATION → Propose → evaluate → arbitrate → commit
- STORE_HISTORY → Negotiation log (intent + proposals + decisions) written to .graphbus/
graphbus build agents/ --enable-agents still works as a combined build+negotiate in one step. The recommended workflow is build once, negotiate many times: each negotiation is a focused, intent-driven improvement.
Negotiation Protocol
When agents are active, they communicate via structured Proposals on the bus:
class Proposal:
    agent_id: str        # who is proposing
    intent: str          # the user's goal this proposal serves
    target_file: str     # which file to change
    diff: str            # unified diff of proposed changes
    rationale: str       # LLM's reasoning tied back to intent
    affects: list[str]   # other agents whose schemas are impacted
Each affected agent evaluates the proposal and responds with an Evaluation:
class Evaluation:
    agent_id: str                 # who is evaluating
    proposal_id: str              # which proposal
    decision: str                 # "accept" | "reject"
    reasoning: str                # LLM's reasoning
    counter_proposal: str | None  # alternative diff if rejecting
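The docs don't spell out the exact consensus rule, but the flow they describe (unanimous accepts commit, split votes go to the Arbiter, and a missing arbiter means conservative rejection) can be sketched as:

```python
def resolve(decisions, arbiter_decision=None):
    """Resolve a proposal from its evaluations.

    decisions: each affected agent's "accept" or "reject" vote.
    arbiter_decision: the arbiter's call, if one is configured.
    Sketch only -- the real consensus rule may differ.
    """
    if all(d == "accept" for d in decisions):
        return "commit"
    if all(d == "reject" for d in decisions):
        return "reject"
    # Split vote: defer to the arbiter; with no arbiter,
    # reject by default (conservative behavior)
    if arbiter_decision == "accept":
        return "commit"
    return "reject"

print(resolve(["accept", "accept"]))                              # commit
print(resolve(["accept", "reject"]))                              # reject
print(resolve(["accept", "reject"], arbiter_decision="accept"))   # commit
```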
View the full negotiation history after a build:
graphbus inspect-negotiation
# Browse: proposals, evaluations, arbitration decisions, commits
The Arbiter
When agents disagree on a proposal (split vote), the Arbiter makes the final call. Designate any agent as the arbiter:
class ArbiterService(GraphBusNode):
    IS_ARBITER = True  # Mark as the system arbiter

    SYSTEM_PROMPT = """
    You are an impartial arbiter. When agents disagree on a proposal:
    - Favor changes that improve correctness and maintainability
    - Reject changes that introduce risk without clear benefit
    - Be conservative: when in doubt, reject
    - Provide clear reasoning for every decision
    """

If no agent sets IS_ARBITER = True, split proposals are rejected by default (conservative behavior).
Runtime Engine
The runtime loads your .graphbus/ artifacts and executes them as plain Python:
# Programmatic runtime usage
from graphbus_core.runtime import RuntimeExecutor, RuntimeConfig

config = RuntimeConfig(artifacts_dir=".graphbus/")
executor = RuntimeExecutor(config)
executor.start()

result = executor.invoke("HelloService", "generate_message", {})
print(result)  # {"message": "Hello from GraphBus!", "timestamp": "..."}

executor.stop()
Or use the CLI:
graphbus run .graphbus/
graphbus run .graphbus/ --watch # hot reload on artifact changes
graphbus run .graphbus/ --debug # interactive debugger
Event Routing
The runtime's event router dispatches published events to all registered subscribers. Routing is synchronous by default and happens in topological order:
# Agent A publishes
self.publish("/Order/Created", {"order_id": "ord_123"})

# Router dispatches to all @subscribe("/Order/Created") handlers
# across all loaded agents, in topological order

# Agent B receives
@subscribe("/Order/Created")
def handle_order(self, event):
    # event.topic = "/Order/Created"
    # event.payload = {"order_id": "ord_123"}
    # event.timestamp = datetime object
    ...
CLI โ Core Commands
| Command | Description |
|---|---|
| graphbus build <path> | Build agent graph and emit artifacts (static, no LLM) |
| graphbus negotiate <artifacts> --intent "..." | Run LLM agent negotiation toward a specific intent |
| graphbus run <artifacts> | Load artifacts and start the runtime ($0 LLM cost) |
| graphbus run <artifacts> --watch | Hot-reload when artifacts change |
| graphbus inspect <artifacts> | Inspect build artifacts (graph, agents, topics) |
| graphbus validate <path> | Validate schemas without building |
| graphbus tui | Launch interactive TUI |
CLI โ Development Tools
| Command | Description |
|---|---|
| graphbus init <name> --intent "..." | Scaffold a new project, recording the intent |
| graphbus generate agent <Name> | Generate agent boilerplate |
| graphbus negotiate <artifacts> --intent "..." | Run LLM agent negotiation toward a specific intent |
| graphbus negotiate <artifacts> --rounds N | Control how many negotiation rounds to run |
| graphbus inspect-negotiation | Browse full negotiation history (intent, proposals, decisions) |
| graphbus profile <artifacts> | Profile runtime performance |
| graphbus dashboard | Launch the web-based visualization dashboard |
| graphbus coherence <path> | Check inter-agent schema coherence |
| graphbus contract <path> | Validate all schema contracts |
CLI โ Deployment
| Command | Description |
|---|---|
| graphbus docker build | Generate a Dockerfile for your project |
| graphbus docker run | Build and run in Docker |
| graphbus k8s generate | Generate Kubernetes manifests |
| graphbus k8s deploy | Deploy to a Kubernetes cluster |
| graphbus ci github | Generate a GitHub Actions workflow |
| graphbus ci gitlab | Generate a GitLab CI pipeline |
Example: Hello GraphBus
The simplest complete example. Three agents (HelloService, PrinterService, LoggerService) negotiate greeting format improvements during build, then execute cleanly at runtime.
cd examples/hello_graphbus
python build.py # static build
ANTHROPIC_API_KEY=sk-... python build.py # agent build
python run.py # run artifacts
Example: MCP Integration
GraphBus ships an MCP (Model Context Protocol) server so any MCP-compatible client can invoke your agents as tools.
cd examples/hello_world_mcp
graphbus build agents/
graphbus run .graphbus/ --mcp
# Exposes agents as MCP tools on ws://localhost:8765
Example: News Summarizer Pipeline
A real multi-agent pipeline: one agent fetches articles, one summarizes, one formats the output. Agents negotiate a shared output schema during build. Runtime is pure Python with no LLM cost per summarization.
cd examples/news_summarizer
graphbus build agents/
graphbus run .graphbus/
Ready to build?
GraphBus is in alpha. Join the waitlist to get early access and shape the protocol.
Join the waitlist • Email us