PROVIDERS
Configure AI model providers for use with Turbine.
§ CLAUDE (ANTHROPIC)
Claude is the recommended primary provider for code generation tasks.
Setup
turbine config set claude.apiKey YOUR_API_KEY
Configuration
"claude": {
"priority": 1,
"rateLimit": 1000,
"model": "claude-3-opus",
"maxTokens": 4096
}
Rate limits: 1000 RPM (requests per minute) on the standard tier.
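For context, a provider block like the one above normally sits inside the full Turbine configuration file. The surrounding structure shown here (a top-level "providers" object) is an assumption for illustration; this page only documents the per-provider blocks:

```json
{
  "providers": {
    "claude": {
      "priority": 1,
      "rateLimit": 1000,
      "model": "claude-3-opus",
      "maxTokens": 4096
    }
  }
}
```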
§ CODEX (OPENAI)
OpenAI Codex for code completion and generation.
Setup
turbine config set codex.apiKey YOUR_API_KEY
Configuration
"codex": {
"priority": 2,
"rateLimit": 500,
"model": "gpt-4-turbo",
"maxTokens": 4096
}
Rate limits vary by tier; check the OpenAI dashboard for your limits.
§ GEMINI (GOOGLE)
Google Gemini for cost-effective background tasks.
Setup
turbine config set gemini.apiKey YOUR_API_KEY
Configuration
"gemini": {
"priority": 3,
"rateLimit": 2000,
"model": "gemini-pro",
"maxTokens": 8192
}
Gemini offers higher rate limits and lower costs for suitable tasks.
§ CUSTOM PROVIDERS
Add custom or self-hosted model providers:
"custom": {
"priority": 4,
"endpoint": "https://your-api.com/v1",
"apiKey": "YOUR_KEY",
"model": "custom-model",
"rateLimit": 100
}
Custom providers must implement the OpenAI-compatible API format.
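As a sketch of what "OpenAI-compatible" means in practice, the snippet below builds a chat completion request against the custom endpoint. The endpoint path suffix (/chat/completions), the payload fields, and the model name are illustrative assumptions based on the OpenAI API shape, not values documented by Turbine:

```python
import json
import urllib.request

# Hypothetical values: substitute your provider's endpoint, key, and model.
ENDPOINT = "https://your-api.com/v1/chat/completions"
API_KEY = "YOUR_KEY"

def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request (not yet sent)."""
    payload = {
        "model": "custom-model",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(build_request("Hello")) would send the request.
```

A provider that accepts this request shape and returns OpenAI-style JSON responses should be routable by Turbine like any built-in provider.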
§ PROVIDER PRIORITY
Provider priority determines routing order; lower numbers mean higher priority. When the primary provider hits its rate limit, requests automatically route to the next available provider.
Priority 1: claude → Primary choice
Priority 2: codex → First fallback
Priority 3: gemini → Second fallback
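The fallback behavior above can be sketched as a simple priority-ordered dispatch loop. This is an illustrative model of the routing logic, not Turbine's actual implementation; the provider names and priorities mirror the table above:

```python
class RateLimited(Exception):
    """Raised by a provider call when its rate limit is hit."""

# Priorities mirror the routing table above (lower number = tried first).
PROVIDERS = [
    {"name": "claude", "priority": 1},
    {"name": "codex", "priority": 2},
    {"name": "gemini", "priority": 3},
]

def route(request, call):
    """Try providers in priority order, falling through to the
    next one whenever the current provider is rate-limited."""
    for provider in sorted(PROVIDERS, key=lambda p: p["priority"]):
        try:
            return call(provider["name"], request)
        except RateLimited:
            continue  # this provider is saturated; try the next one
    raise RuntimeError("all providers rate-limited")

# Example: claude is rate-limited, so the request falls back to codex.
def fake_call(name, request):
    if name == "claude":
        raise RateLimited()
    return f"{name}:{request}"

print(route("hello", fake_call))  # prints "codex:hello"
```

If every configured provider is rate-limited, the request fails rather than queueing, so keeping at least one high-rate-limit fallback (such as gemini here) configured is a sensible default.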