Documentation Index

Fetch the complete documentation index at: https://docs.getpioneer.dev/llms.txt

Use this file to discover all available pages before exploring further.

This page lists the extended set of supported providers. Use the same connection flow from the Providers Overview unless a provider requires extra environment variables or endpoint details.

Direct Cloud Providers

Pioneer supports direct integrations for Anthropic, OpenAI, Google Gemini, OpenRouter, Azure OpenAI, and AWS Bedrock.

Local And Self-Hosted Runtimes

Pioneer can connect to local or self-hosted runtimes such as Ollama, LM Studio, llama.cpp, SGLang, vLLM, and LiteLLM. For local runtimes, remember that local means local to the gateway. If your desktop app connects to a remote gateway, the runtime must be reachable from that remote gateway.
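Because "local" means local to the gateway, a quick reachability check should be run on the gateway host itself. The sketch below is a hypothetical preflight, not part of Pioneer: it builds the base URL for an OpenAI-compatible local runtime (Ollama's default port is shown as an example) and queries its `/models` endpoint.

```python
# Hypothetical reachability check for a local runtime. Run this ON the
# gateway host: if the desktop app talks to a remote gateway, it is the
# gateway, not the desktop, that must reach the runtime.
import json
import urllib.request


def runtime_base_url(host: str = "localhost", port: int = 11434) -> str:
    """Build the OpenAI-compatible base URL (Ollama's default port shown)."""
    return f"http://{host}:{port}/v1"


def list_models(base_url: str) -> list[str]:
    """Query the runtime's /models endpoint; raises URLError if unreachable."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]


if __name__ == "__main__":
    print(list_models(runtime_base_url()))
```

If the call fails from the gateway host, fix network reachability (or the runtime's bind address) before adding the provider in Pioneer.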

CLI Providers

Pioneer includes provider adapters for CLI-based tools such as Claude Code, Gemini CLI, and Kilo CLI. CLI providers depend on the tool being installed and authenticated on the gateway host.
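Since CLI providers require the tool to be present on the gateway host, a minimal preflight can check the PATH there before configuring the provider. This is an illustrative sketch, and the binary names in the mapping are assumptions, not confirmed names used by Pioneer.

```python
# Hypothetical preflight for CLI providers: each tool must be installed
# (and separately authenticated) on the gateway host. Binary names below
# are assumptions for illustration only.
import shutil

CLI_TOOLS = {
    "claude-code": "claude",   # assumed binary name
    "gemini-cli": "gemini",    # assumed binary name
    "kilo-cli": "kilo",        # assumed binary name
}


def missing_tools(tools: dict[str, str]) -> list[str]:
    """Return provider names whose CLI binary is not on this host's PATH."""
    return [name for name, binary in tools.items() if shutil.which(binary) is None]
```

A PATH check only confirms installation; each tool's own login or auth flow still has to be completed on the gateway host.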

OpenAI-Compatible Providers

Many hosted providers expose OpenAI-compatible APIs. Pioneer can work with providers such as Groq, Mistral, xAI, DeepSeek, Together, Fireworks, Novita, Perplexity, Cohere, Venice, Cerebras, SambaNova, Hyperbolic, DeepInfra, Hugging Face, AI21, Reka, Baseten, Nscale, Anyscale, Nebius, Friendli, Lepton, SiliconFlow, AIHubMix, StepFun, Baichuan, Hunyuan, NVIDIA NIM, Doubao, and Qianfan. For these providers, the usual requirements are a provider entry in Pioneer, an API key or token, a base URL when needed, and an exact model ID.