TURBINE

TECHNICAL WHITEPAPER

v0.1.0

§ 0. ABSTRACT

Turbine is a coordination layer for multi-agent AI systems. It addresses the fundamental challenges of orchestrating 10-30 concurrent AI coding agents: rate limit management, cost optimization, and unified observability. Turbine operates as a companion to existing orchestration frameworks, adding intelligence without replacing core functionality.

§ 1. THE COORDINATION PROBLEM

Modern AI-assisted development has evolved from single-agent interactions to swarms of autonomous coding agents working in parallel. This shift introduces coordination problems that single-agent tooling does not address:

  • Rate limits fragment across providers with no unified view
  • Cost attribution remains opaque at the agent and task level
  • Provider failover requires manual intervention
  • Monitoring demands constant terminal attention

§ 2. ARCHITECTURE

Turbine implements a non-invasive wrapper pattern. It reads existing state from orchestrator filesystems without modifying source data.
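As a sketch of this read-only pattern (the state-file schema and function names below are illustrative assumptions, not Turbine's actual on-disk format):

```typescript
import { readFileSync } from "fs";

// Hypothetical shape of an orchestrator's on-disk state file;
// the schema is an illustrative assumption.
interface OrchestratorState {
  agents: { id: string; status: string }[];
}

// Parse a raw state document into the typed shape above.
function parseState(raw: string): OrchestratorState {
  return JSON.parse(raw) as OrchestratorState;
}

// Open the state file read-only: Turbine observes, never mutates.
function readState(path: string): OrchestratorState {
  return parseState(readFileSync(path, "utf8"));
}
```

Because the wrapper only ever reads, the orchestrator remains the single writer of its own state, and removing Turbine leaves the underlying system untouched.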

CORE COMPONENTS

  • Rig — Agent workspace with shared configuration
  • Convoy — Sequence of related tasks across agents
  • Bead — Atomic unit of work
  • Provider — AI model backend
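The component hierarchy above can be sketched as a set of types; the field names are illustrative assumptions rather than Turbine's published schema:

```typescript
// Illustrative data model for the four core components.
// Field names are assumptions, not a published schema.
interface Provider { name: string }                    // AI model backend
interface Bead { id: string; provider: Provider }      // atomic unit of work
interface Convoy { id: string; beads: Bead[] }         // related tasks across agents
interface Rig { name: string; convoys: Convoy[] }      // workspace with shared config

// Example rollup across the hierarchy: total beads in a rig.
function countBeads(rig: Rig): number {
  return rig.convoys.reduce((n, c) => n + c.beads.length, 0);
}
```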

§ 3. RATE LIMIT ORCHESTRATION

Turbine provides unified rate-limit tracking across all configured providers. It maintains a sliding window of API usage per provider, predicts when a limit will be exhausted, and automatically routes requests to providers with available capacity.
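A minimal sliding-window tracker of the kind described, with routing to the first provider that has headroom. The class, method names, and limits are illustrative assumptions, not Turbine's API:

```typescript
// Sliding-window request tracker: records request timestamps and
// reports remaining capacity within the window.
class SlidingWindow {
  private timestamps: number[] = [];
  constructor(private limit: number, private windowMs: number) {}

  record(now: number = Date.now()): void {
    this.timestamps.push(now);
  }

  remaining(now: number = Date.now()): number {
    // Drop timestamps that have aged out, then report headroom.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    return Math.max(0, this.limit - this.timestamps.length);
  }
}

// Route to the first provider with remaining capacity, or report
// that every provider is exhausted.
function pickProvider(
  windows: Map<string, SlidingWindow>,
  now: number = Date.now(),
): string | null {
  for (const [name, w] of windows) {
    if (w.remaining(now) > 0) return name;
  }
  return null; // all providers exhausted
}
```

Exhaustion prediction then reduces to extrapolating the window's fill rate forward and comparing the projected crossing time against the window length.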

§ 4. COST TRACKING

Turbine attributes costs at four levels of granularity: per-provider, per-agent, per-convoy, and per-bead.
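Attribution at those four levels amounts to grouping per-bead cost records by the chosen key. A sketch, where the record fields are illustrative assumptions:

```typescript
// Each completed bead carries its cost plus its attribution keys.
// Field names are illustrative assumptions.
interface CostRecord {
  provider: string;
  agent: string;
  convoy: string;
  bead: string;
  usd: number;
}

// Sum costs grouped by any attribution dimension.
function costBy(
  records: CostRecord[],
  key: keyof Omit<CostRecord, "usd">,
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    totals.set(r[key], (totals.get(r[key]) ?? 0) + r.usd);
  }
  return totals;
}
```

Because every record carries all four keys, the same data answers "what did this provider cost?" and "what did this one bead cost?" without separate bookkeeping.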

§ 5. IMPLEMENTATION

Turbine ships as a Node.js CLI with an optional web dashboard. No modifications to existing configurations are required.

  npm install -g @turbine/cli
  turbine status
  turbine dashboard