Pioneer needs a model provider before it can answer in threads or run model-backed tasks. A provider is the place the gateway sends model requests: a cloud API, a local runtime, a CLI-backed model tool, or an OpenAI-compatible endpoint.

Provider setup is intentionally the same shape everywhere. Connect the desktop app to the gateway you want to configure, click the providers icon in the bottom bar to open Providers, choose the provider type, enter the required credential or endpoint details, save, and select a model in a thread.

The important detail is where the provider is saved. Provider configuration belongs to the gateway, not to the desktop app. Secret API keys and tokens are stored in that gateway’s keystore.db. If you configure a provider on Local and then switch to Work Gateway, the work gateway needs its own provider configuration and credentials.
Provider requests are made by the gateway host. A remote gateway needs network access to the provider even if your desktop computer already has access.
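Because the gateway host is what contacts the provider, it helps to verify reachability from that machine rather than from your desktop. A minimal sketch in Python; `api.anthropic.com` is only an example target, so substitute the endpoint of the provider you intend to configure:

```python
# Run this on the gateway host, not the desktop: the gateway is what must
# reach the provider. HOST is an example; use your provider's endpoint.
import socket
import ssl

HOST, PORT = "api.anthropic.com", 443  # example provider endpoint

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(sock, server_hostname=HOST):
            print(f"OK: {HOST}:{PORT} is reachable from this host")
except OSError as err:
    print(f"FAILED: cannot reach {HOST}:{PORT}: {err}")
```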

General Connection Flow

Use this flow for any supported provider:
1. Choose the gateway: connect the desktop app to the gateway that should own this provider configuration.
2. Open Providers: click the providers icon in the bottom bar.
3. Add the provider: choose the provider type and enter the required key, endpoint, region, deployment, or local runtime address.
4. Save on the gateway: save the provider configuration and confirm it appears for the current gateway.
5. Test with a thread: create a thread, select the provider and model, and send a short test message.

Some providers can list models automatically. Others require you to enter the exact model ID. If model listing does not work, copy the model ID from the provider dashboard or runtime and test it with a small prompt.
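One way to run that small-prompt test outside Pioneer is with the `openai` Python package against an OpenAI-compatible endpoint. This is a hedged sketch: the base URL, key, and model ID below are placeholders to replace with your provider's real values.

```python
# Smoke-test a model ID with a tiny prompt. The base URL, key, and model ID
# below are placeholders; copy the real values from your provider dashboard
# or local runtime.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="sk-placeholder",                        # placeholder key
)

resp = client.chat.completions.create(
    model="example-model-id",  # the exact ID from the dashboard or runtime
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
    max_tokens=5,
)
print(resp.choices[0].message.content)
```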

Supported Providers

Pioneer supports dedicated provider integrations, local runtimes, CLI providers, and OpenAI-compatible endpoints.
| Provider type | Supported providers |
| --- | --- |
| Direct cloud providers | Anthropic, OpenAI, Google Gemini, OpenRouter, Azure OpenAI, AWS Bedrock |
| Local and self-hosted runtimes | Ollama, LM Studio, llama.cpp, SGLang, vLLM, LiteLLM |
| CLI providers | Claude Code, Gemini CLI, Kilo CLI |
| OpenAI-compatible providers | Groq, Mistral, xAI, DeepSeek, Together, Fireworks, Perplexity, Cohere, Cerebras, Hugging Face, NVIDIA NIM, and others |
Use Other Providers when you need the broader compatibility list.
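The OpenAI-compatible group works because those providers expose the same request shape; only the base URL and key change. A sketch under that assumption, with endpoints that are illustrative rather than guaranteed current (check each provider's docs); Ollama's local OpenAI-compatible API typically ignores the key:

```python
# Sketch: for OpenAI-compatible providers, only the base URL and key differ.
# These endpoints are illustrative; confirm them in each provider's docs.
from openai import OpenAI

ENDPOINTS = {
    "groq":   ("https://api.groq.com/openai/v1", "GROQ_API_KEY"),  # placeholder key
    "ollama": ("http://localhost:11434/v1", "unused"),  # local runtime; key ignored
}

base_url, api_key = ENDPOINTS["ollama"]
client = OpenAI(base_url=base_url, api_key=api_key)
print([m.id for m in client.models.list().data])  # model IDs this endpoint serves
```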

Choosing a Provider

Use the provider you already have access to. If your organization manages a specific cloud account, start there. If you want local model execution, use a local runtime available from the gateway host. If you want a broad model catalog behind one integration, use a router or another compatible endpoint. Do not assume every model behaves the same way. Before using a model for tool-heavy work or scheduled tasks, confirm that your account can access it, that the model supports the kind of tool use you need, and that cost, latency, and context size are acceptable.
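One way to spot-check tool support is to send a throwaway tool definition and see whether the model emits a tool call. A minimal sketch against an OpenAI-compatible endpoint; the endpoint, key, and model ID are placeholders, and `get_time` is a dummy tool defined only for this test:

```python
# Rough pre-flight: check whether a model will emit a tool call before using
# it for tool-heavy or scheduled work. Endpoint, key, and model ID are
# placeholders; "get_time" is a throwaway tool defined only for this test.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder
    api_key="sk-placeholder",                        # placeholder
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_time",
        "description": "Return the current time.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

resp = client.chat.completions.create(
    model="example-model-id",  # placeholder
    messages=[{"role": "user", "content": "What time is it? Use the tool."}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:
    print("Tool call emitted:", msg.tool_calls[0].function.name)
else:
    print("No tool call; model answered in text:", msg.content)
```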
If a provider does not work as expected, work through these checks; a triage sketch for the network-related ones follows the list.
- Make sure you configured the provider on the same gateway you are currently using.
- Try entering a model ID manually. Some providers do not expose model listing in a way Pioneer can use.
- Check the key, token scope, account status, endpoint, region, deployment name, and gateway environment.
- Check network access, proxy settings, firewall rules, and DNS from the gateway machine.
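A minimal triage sketch in Python, run from the gateway machine. The host and bearer token are placeholders, and it assumes an OpenAI-compatible `/v1/models` route; the status-code comments follow common HTTP conventions rather than any specific provider's behavior:

```python
# Run from the gateway machine. Separates DNS failures, blocked connections,
# and auth problems. HOST and the bearer token are placeholders.
import socket
import urllib.error
import urllib.request

HOST = "api.example-provider.com"
URL = f"https://{HOST}/v1/models"  # assumes an OpenAI-compatible /models route

try:
    socket.getaddrinfo(HOST, 443)
except socket.gaierror as err:
    raise SystemExit(f"DNS lookup failed: {err}")  # fix DNS on the gateway host

req = urllib.request.Request(URL, headers={"Authorization": "Bearer sk-placeholder"})
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("Endpoint reachable, key accepted:", resp.status)
except urllib.error.HTTPError as err:
    # 401/403 usually points at the key, token scope, or account status.
    print("Endpoint reachable, server rejected the request:", err.code)
except urllib.error.URLError as err:
    # Connection-level failure: proxy, firewall, or routing on this machine.
    print("Network problem from this host:", err.reason)
```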