CONCEPTS
Understanding the core abstractions that power Turbine.
§ RIG
A Rig is a workspace containing one or more AI agents with shared configuration. Rigs provide isolation and resource allocation boundaries.
rig: protocol
├─ agent-0 (claude)
├─ agent-1 (claude)
└─ agent-2 (codex)
Each rig can have its own rate limits, cost budgets, and provider preferences.
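As a rough sketch of how a rig might be described in code, the following TypeScript shape captures the fields mentioned above. The field names and types here are illustrative assumptions, not Turbine's actual configuration schema.

// Hypothetical rig definition -- field names are assumptions, not Turbine's real schema.
interface RigConfig {
  name: string;
  agents: { id: string; provider: "claude" | "codex" | "gemini" | "custom" }[];
  rateLimit?: { requestsPerMinute: number };  // per-rig rate limit
  budget?: { maxUsd: number };                // cost ceiling for the whole rig
  providerPreference?: string[];              // ordered fallback list
}

const protocolRig: RigConfig = {
  name: "protocol",
  agents: [
    { id: "agent-0", provider: "claude" },
    { id: "agent-1", provider: "claude" },
    { id: "agent-2", provider: "codex" },
  ],
  rateLimit: { requestsPerMinute: 60 },
  budget: { maxUsd: 25 },
  providerPreference: ["claude", "codex", "gemini"],
};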
§ CONVOY
A Convoy is a sequence of related tasks executed across multiple agents. Convoys track progress from initiation to completion.
convoy: auth-system
status: in_progress
progress: 67% (8/12 beads)
agents: [agent-0, agent-1]
Convoys enable high-level task tracking and cost attribution.
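A minimal sketch of a convoy record and its progress calculation, mirroring the fields shown above. The type and function names are assumptions made for illustration.

// Hypothetical convoy record -- field names are assumptions.
type ConvoyStatus = "pending" | "in_progress" | "complete";

interface Convoy {
  name: string;
  status: ConvoyStatus;
  beadsTotal: number;
  beadsDone: number;
  agents: string[];
}

// Progress as a whole-number percentage, e.g. 8/12 beads -> 67%.
function convoyProgress(c: Convoy): number {
  return c.beadsTotal === 0 ? 0 : Math.round((c.beadsDone / c.beadsTotal) * 100);
}

const authSystem: Convoy = {
  name: "auth-system",
  status: "in_progress",
  beadsTotal: 12,
  beadsDone: 8,
  agents: ["agent-0", "agent-1"],
};
// convoyProgress(authSystem) === 67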
§ BEAD
A Bead is an atomic unit of work. Beads are the smallest trackable unit in Turbine, representing individual tasks or requests.
bead: bd-a2f3
type: implementation
status: complete
agent: agent-0
tokens: 4,521
cost: $0.14
Beads flow through the system from creation to completion, tracking all associated costs.
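Since beads carry per-task token and cost figures, attributing cost to a convoy is just a sum over its beads. The record shape and helper below are an illustrative assumption, not Turbine's API.

// Hypothetical bead record and cost roll-up -- names are assumptions.
interface Bead {
  id: string;
  type: "implementation" | "review" | "test";
  status: "pending" | "in_progress" | "complete";
  agent: string;
  tokens: number;
  costUsd: number;
}

// Total cost attributed to a set of beads (e.g. all beads in one convoy).
function totalCost(beads: Bead[]): number {
  return beads.reduce((sum, b) => sum + b.costUsd, 0);
}

const bead: Bead = {
  id: "bd-a2f3",
  type: "implementation",
  status: "complete",
  agent: "agent-0",
  tokens: 4521,
  costUsd: 0.14,
};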
§ PROVIDER
A Provider is an AI model backend. Turbine supports multiple providers and handles routing, failover, and load balancing (a failover sketch follows the list below).
- Claude (Anthropic)
- Codex (OpenAI)
- Gemini (Google)
- Custom (self-hosted or other APIs)
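The sketch below shows one way failover across providers can work: try each backend in preference order and move on when one is unavailable or errors. The Provider interface and route function are assumptions for illustration, not Turbine's actual API.

// Failover sketch -- interface and function names are assumptions.
interface Provider {
  name: string;
  available(): boolean;             // e.g. false when rate-limited or down
  complete(prompt: string): Promise<string>;
}

// Try providers in preference order, falling back when one is unavailable.
async function route(providers: Provider[], prompt: string): Promise<string> {
  for (const p of providers) {
    if (!p.available()) continue;
    try {
      return await p.complete(prompt);
    } catch {
      // Treat a failed call like an unavailable provider and try the next one.
    }
  }
  throw new Error("no provider available");
}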
§ RATE LIMITING
Turbine maintains a sliding window of API usage across all providers. When a provider approaches its limit, requests are automatically routed to available alternatives.
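A minimal sliding-window counter illustrates the idea, assuming limits are expressed as requests per time window; the class and method names are assumptions, not Turbine internals.

// Sliding-window request counter -- a sketch, not Turbine's implementation.
class SlidingWindow {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  // Record a request and report whether the provider is still under its limit.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop entries that have fallen out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false; // caller routes elsewhere
    this.timestamps.push(now);
    return true;
  }
}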
§ PRIORITY ROUTING
Tasks can be assigned priority levels that influence provider selection (see the sketch after this list):
- Critical — Always uses the primary provider
- High — Prefers the primary provider, falls back if needed
- Normal — Uses the optimal available provider
- Low — Routes to cost-effective alternatives
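The sketch below maps each priority tier to a selection rule. The tiers mirror the list above; everything else (the ProviderInfo shape, the costPerToken field, treating the first entry as the primary) is an assumption made for illustration.

// Priority-based provider selection -- a sketch under assumed data shapes.
type Priority = "critical" | "high" | "normal" | "low";

interface ProviderInfo {
  name: string;
  available: boolean;
  costPerToken: number;
}

function selectProvider(priority: Priority, providers: ProviderInfo[]): ProviderInfo | undefined {
  const primary = providers[0];                           // assume the first entry is the primary
  const available = providers.filter((p) => p.available);
  switch (priority) {
    case "critical":
      return primary;                                     // always the primary provider
    case "high":
      return primary.available ? primary : available[0];  // prefer primary, fall back if needed
    case "normal":
      return available[0];                                 // first available provider (treated as optimal here)
    case "low":
      return [...available].sort((a, b) => a.costPerToken - b.costPerToken)[0]; // cheapest available
  }
}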