The agent loop lives in crates/agent. It is the execution engine for a thread turn after the gateway has accepted turn/start, persisted the initial state, and selected the provider/model.
The gateway does not call the provider directly for normal turns. It delegates to AgentManager, which owns a per-thread runtime task. That task receives commands such as StartTurn, runs one active turn at a time, and publishes durable events and live progress back to the gateway.
The agent loop is where Pioneer stops being a chat wrapper. It decides when to call the model, which tools the model may see, how tool results return to the model, when a failed tool deserves a retry, when a child task must be waited on, and when the final answer is allowed to close the turn.
The agent loop does not persist directly to SQLite. It emits durable events. The gateway persists those events and publishes committed notifications. See Gateway and Persistence Layer.
Why This Layer Exists
Provider APIs are not enough to run an assistant. Pioneer needs a layer that can repeatedly call a provider, execute real tools between provider calls, update UI state while work is happening, and preserve enough state to recover from interruptions. That responsibility belongs in pioneer-agent rather than in the gateway handlers because it is turn execution logic, not transport logic. It also does not belong in provider adapters because provider adapters should only translate common chat requests into provider-specific API calls.
Thread Runtime
run_agent_loop in crates/agent/src/agent_loop.rs is the per-thread command loop. It keeps the active turn handle, turn run id, cancellation token, and recovery context. A second running turn for the same thread is rejected, because the thread transcript and tool state are linear.
When a turn starts, the loop resolves the provider from ProviderRegistry, builds an ActiveTurnRequest, and spawns the chat execution task. Cancellation uses a short grace period before the runtime marks the turn interrupted.
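The single-active-turn rule can be sketched as a minimal guard, assuming a simple Option-based handle; the real run_agent_loop state is richer (turn run id, cancellation token, recovery context), and the names below are illustrative:

```rust
/// One runtime task per thread; at most one turn runs at a time.
struct ThreadRuntime {
    active_turn: Option<u64>, // id of the currently running turn, if any
}

impl ThreadRuntime {
    fn new() -> Self {
        Self { active_turn: None }
    }

    /// Returns Err when a turn is already running: the thread transcript
    /// and tool state are linear, so a concurrent turn is rejected.
    fn start_turn(&mut self, turn_id: u64) -> Result<(), &'static str> {
        if self.active_turn.is_some() {
            return Err("turn already active for this thread");
        }
        self.active_turn = Some(turn_id);
        Ok(())
    }

    /// Called when the turn reaches a terminal state (completed, failed,
    /// or interrupted after the cancellation grace period).
    fn finish_turn(&mut self) {
        self.active_turn = None;
    }
}
```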
Chat Mode And Agent Mode
Pioneer has two thread modes:

| Mode | Behavior |
|---|---|
| chat | Sends history and the current user message to the provider. No tools are exposed. The system prompt may still be compiled and passed as provider prompt payload. |
| agent | Builds a full prompt bundle, resolves skills, materializes MCP/tools/tasks, runs provider rounds, executes tool calls, feeds tool results back into the model, and stops when the model produces a final answer or the loop budget is exhausted. |
Both modes run through execute_chat_turn_flow. The gateway chooses the mode from the thread state and passes it into AgentManager::start_turn.
Chat mode exists for direct model conversations where tool access would be unnecessary or risky. Agent mode exists when Pioneer should be allowed to inspect files, run tools, call MCP servers, use skills, and coordinate subagents. This is a product-level distinction, not just a provider flag.
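The core consequence of the mode split is whether the provider ever sees tool definitions. A minimal sketch, with hypothetical types (the real request construction lives in execute_chat_turn_flow and is far more involved):

```rust
#[derive(Debug, PartialEq)]
enum ThreadMode {
    Chat,  // history + user message only; no tools exposed
    Agent, // full prompt bundle, tools, skills, tasks
}

/// Simplified stand-in for the common provider request.
struct ProviderRequest {
    messages: Vec<String>,
    tools: Option<Vec<String>>,
}

fn build_request(
    mode: ThreadMode,
    messages: Vec<String>,
    tool_defs: Vec<String>,
) -> ProviderRequest {
    ProviderRequest {
        messages,
        // Chat mode never exposes tools to the provider, regardless of
        // what tooling the thread could otherwise access.
        tools: match mode {
            ThreadMode::Chat => None,
            ThreadMode::Agent => Some(tool_defs),
        },
    }
}
```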
Agent Mode Flow
Model Input Construction
The model request is assembled in layers:

- The gateway supplies historical ChatMessage values from completed turns.
- The agent builds the current user message from UserInput.
- The agent resolves active skills and builds a skills prompt.
- The agent materializes task tools if task orchestration is available.
- The agent compiles the prompt bundle through pioneer-prompt.
- The agent materializes MCP tools and dynamic skill tools.
- The tool router exposes only model-visible tool definitions for the current round.
- Recovered tool context is appended as assistant tool-call and tool-result messages when a turn is being recovered.

The result is a ChatRequest containing messages, optional tools, parallel_tool_calls, and compiled_prompt. Provider adapters are responsible for mapping that common request into provider-specific API payloads.
The ordering is intentional. History comes from the gateway because it is persisted thread state. Skills, MCP tools, and task tools are resolved inside the agent because they are turn-scoped capabilities. The prompt is compiled after those decisions because skills and task orchestration can add dynamic prompt sections.
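The ordering can be sketched as a message-assembly function, assuming simplified inputs (the helper and enum names here are hypothetical, not crate APIs):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum MessageSource {
    History,              // persisted thread state from the gateway
    UserInput,            // the current user message
    RecoveredToolContext, // tool-call/result pairs reinserted on recovery
}

fn assemble_messages(history_len: usize, recovered_len: usize) -> Vec<MessageSource> {
    let mut out = Vec::new();
    // 1. Persisted history supplied by the gateway.
    out.extend(std::iter::repeat(MessageSource::History).take(history_len));
    // 2. The current user message built from UserInput.
    out.push(MessageSource::UserInput);
    // 3. Recovered tool context, appended only when the turn is being
    //    recovered, in original sequence order.
    out.extend(std::iter::repeat(MessageSource::RecoveredToolContext).take(recovered_len));
    out
}
```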
For the exact prompt sections and history compression rules, see Prompt And Context.
Tool Loop Guard
Agent mode uses ToolLoopGuard from pioneer-tools to prevent runaway loops. It limits provider rounds and total tool calls per turn. When the budget is exhausted, the prompt is refreshed with a final-answer instruction and tools are disabled so the model can finish from available evidence.
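A minimal sketch of such a budget, assuming two simple counters; the real ToolLoopGuard may track more state:

```rust
struct LoopBudget {
    rounds_left: u32,
    tool_calls_left: u32,
}

impl LoopBudget {
    fn new(max_rounds: u32, max_tool_calls: u32) -> Self {
        Self { rounds_left: max_rounds, tool_calls_left: max_tool_calls }
    }

    /// Returns false once the budget is exhausted; the caller then refreshes
    /// the prompt with a final-answer instruction and disables tools.
    fn begin_round(&mut self, tool_calls_this_round: u32) -> bool {
        if self.rounds_left == 0 || self.tool_calls_left < tool_calls_this_round {
            return false;
        }
        self.rounds_left -= 1;
        self.tool_calls_left -= tool_calls_this_round;
        true
    }
}
```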
Tool retry is separate from tool-loop budget. ToolRetryController classifies recoverable tool failures, decides whether another corrected tool call should be attempted, and can inject retry instructions into the dynamic prompt section.
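The retry decision can be thought of as a classification step, sketched here with made-up error kinds and a flat attempt cap; the real ToolRetryController's rules are its own:

```rust
#[derive(Debug, PartialEq)]
enum RetryDecision {
    Retry { instruction: &'static str }, // injected into the dynamic prompt section
    GiveUp,
}

fn classify(error_kind: &str, attempts_so_far: u32, max_attempts: u32) -> RetryDecision {
    if attempts_so_far >= max_attempts {
        return RetryDecision::GiveUp;
    }
    match error_kind {
        // Recoverable: the model can correct its arguments and try again.
        "invalid_arguments" => RetryDecision::Retry {
            instruction: "The previous tool call had invalid arguments; correct them and retry.",
        },
        "transient_io" => RetryDecision::Retry {
            instruction: "The tool failed transiently; retry the same call.",
        },
        // Everything else is treated as non-recoverable.
        _ => RetryDecision::GiveUp,
    }
}
```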
Durable Events
The agent emits durable events such as:

- PromptManifestCompiled
- TurnSkillsResolved
- SkillAuditEvents
- TurnLlmContextAppended
- ItemStarted
- ItemCompleted
- tool retry lifecycle events
- provider failure and recovery events
- TurnCompleted, TurnFailed, or TurnInterrupted

The gateway persists these events in CrudStore before publishing committed notifications to clients. This keeps UI state and database state aligned.
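The event vocabulary above can be sketched as an enum with a terminal-state check (variant names taken from the text, payloads elided; this is illustrative, not the crate's actual type):

```rust
#[derive(Debug, PartialEq)]
enum DurableEvent {
    PromptManifestCompiled,
    TurnSkillsResolved,
    TurnLlmContextAppended,
    ItemStarted,
    ItemCompleted,
    TurnCompleted,
    TurnFailed,
    TurnInterrupted,
}

/// Exactly one terminal event closes a turn.
fn is_terminal(event: &DurableEvent) -> bool {
    matches!(
        event,
        DurableEvent::TurnCompleted | DurableEvent::TurnFailed | DurableEvent::TurnInterrupted
    )
}
```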
Recovery Context
Some tool outputs are retained in turn_llm_context while the turn is active. If a provider failure or recovery attempt needs to reconstruct the LLM-visible state, the agent can reinsert those retained tool-result messages in sequence order. After the turn reaches a terminal state, the gateway deletes retained LLM context rows for that turn.
Recovery is intentionally tied to durable events. Provider failures are classified, recovery jobs are scheduled by the gateway, and successful or failed recovery attempts are reconciled back into the turn lifecycle.
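Reinsertion in sequence order can be sketched like this, assuming a simple retained-row shape (the actual turn_llm_context schema is not shown here):

```rust
#[derive(Debug, Clone)]
struct RetainedToolResult {
    sequence: u64,   // position in the original LLM-visible history
    payload: String, // the retained tool-result message
}

/// Rows may come back from storage unordered; sorting by sequence restores
/// the exact order the model originally saw.
fn rebuild_llm_context(mut rows: Vec<RetainedToolResult>) -> Vec<String> {
    rows.sort_by_key(|r| r.sequence);
    rows.into_iter().map(|r| r.payload).collect()
}
```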
Where Tooling Enters
The agent loop does not know how to run every tool itself. It asks pioneer-tools to build a runtime from built-ins plus extension bundles. MCP, skills, and tasks all become tool bundles, which means the provider sees one coherent tool list even though the backing implementations are very different.
That is the reason tool output policies live below the agent loop. The agent needs a model-visible result, but the gateway timeline, storage layer, and recovery layer may need different projections of the same output. See Tools System.
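The "one coherent tool list" idea can be sketched as flattening heterogeneous bundles into the single list the provider sees (bundle shape and names are assumptions for illustration):

```rust
struct ToolBundle {
    source: &'static str,    // e.g. "builtin", "mcp", "skill", "task"
    tool_names: Vec<String>, // definitions this bundle contributes
}

/// The provider is handed one flat tool list; which bundle backs each tool
/// is invisible at the model boundary.
fn model_visible_tools(bundles: &[ToolBundle]) -> Vec<String> {
    bundles
        .iter()
        .flat_map(|b| b.tool_names.iter().cloned())
        .collect()
}
```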
Related Pages
- Prompt And Context explains how compiled_prompt and messages are built.
- Provider System explains how ChatRequest reaches OpenAI, Anthropic, Gemini, Ollama, Bedrock, CLI providers, and compatible endpoints.
- Tasks And Subagents explains the task tools and child-thread execution path.
- Tools System explains routing, output projection, and retry classification.