Overview
The swarm workflow is triggered by setting force_swarm: true in the task context. It uses the same POST /api/v1/tasks endpoint as all other workflows — no separate endpoint is needed.
Swarm mode decomposes your query into subtasks, spawns persistent agents that work in parallel with inter-agent messaging, and synthesizes results into a unified response.
Submitting a Swarm Task
Endpoint
POST http://localhost:8080/api/v1/tasks
Request Body
{
  "query": "Your complex multi-faceted task",
  "session_id": "optional-session-id",
  "context": {
    "force_swarm": true
  }
}
Swarm-Specific Context Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| force_swarm | boolean | false | Required to trigger the swarm workflow |
| model_tier | string | (auto) | Model tier for agent execution: small, medium, large |
All standard task parameters (session_id, mode, model_tier, model_override, provider_override) work with swarm tasks.
The force_swarm flag must be set inside the context object, not as a top-level parameter. The swarm must also be enabled in the server configuration (workflows.swarm.enabled: true in features.yaml).
Example: Basic Swarm Task
cURL
curl -X POST http://localhost:8080/api/v1/tasks \
-H "Content-Type: application/json" \
-d '{
"query": "Compare AI chip markets across US, Japan, and South Korea",
"session_id": "swarm-demo",
"context": {
"force_swarm": true
}
}'
Response
{
  "task_id": "task-abc123...",
  "status": "STATUS_CODE_OK",
  "message": "Task submitted successfully",
  "created_at": "2025-11-10T10:00:00Z"
}
Headers:
X-Workflow-ID: Temporal workflow identifier (same as task_id)
X-Session-ID: Session identifier
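The same task can be submitted from Python. The following is a minimal sketch using the httpx library against the endpoint, payload, and response headers shown above; the timeout value and printed fields are illustrative assumptions.
import httpx

# Submit a swarm task by setting force_swarm inside the context object.
payload = {
    "query": "Compare AI chip markets across US, Japan, and South Korea",
    "session_id": "swarm-demo",
    "context": {
        "force_swarm": True
    },
}

resp = httpx.post("http://localhost:8080/api/v1/tasks", json=payload, timeout=30.0)
resp.raise_for_status()

task = resp.json()
print("task_id:", task["task_id"])
# The workflow identifier is also returned as a response header.
print("workflow_id:", resp.headers.get("X-Workflow-ID"))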
Submit + Stream
Use the combined endpoint to submit and get a stream URL in one call:
POST http://localhost:8080/api/v1/tasks/stream
curl -s -X POST http://localhost:8080/api/v1/tasks/stream \
-H "Content-Type: application/json" \
-d '{
"query": "Analyze competitive landscape of major cloud AI platforms",
"context": { "force_swarm": true }
}' | jq
Response (201 Created):
{
  "workflow_id": "task-def456...",
  "task_id": "task-def456...",
  "stream_url": "/api/v1/stream/sse?workflow_id=task-def456..."
}
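A minimal Python (httpx) sketch of the same submit-and-stream call is shown below; prefixing the relative stream_url with the server's base URL is an assumption about how that value is meant to be used.
import httpx

BASE_URL = "http://localhost:8080"

# Submit the task and receive a relative SSE stream URL in one call.
resp = httpx.post(
    f"{BASE_URL}/api/v1/tasks/stream",
    json={
        "query": "Analyze competitive landscape of major cloud AI platforms",
        "context": {"force_swarm": True},
    },
)
resp.raise_for_status()

body = resp.json()
# stream_url is relative, so join it with the server's base URL before connecting.
sse_url = BASE_URL + body["stream_url"]
print("workflow_id:", body["workflow_id"])
print("stream at:", sse_url)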
Monitoring Swarm Progress
SSE Event Stream
GET http://localhost:8080/api/v1/stream/sse?workflow_id={workflow_id}
Swarm-Specific Events
| Event Type | agent_id | Description |
| --- | --- | --- |
| WORKFLOW_STARTED | swarm-supervisor | Swarm workflow initialized |
| PROGRESS (planning) | swarm-supervisor | Task decomposition in progress |
| PROGRESS (spawning) | swarm-supervisor | Agents being assigned |
| PROGRESS (monitoring) | swarm-supervisor | Agents working in parallel |
| AGENT_STARTED | Agent name (e.g., takao) | Individual agent began execution |
| PROGRESS (iteration) | Agent name | Agent iteration progress |
| AGENT_COMPLETED | Agent name | Individual agent finished |
| PROGRESS (synthesizing) | swarm-supervisor | Combining results from all agents |
| WORKFLOW_COMPLETED | swarm-supervisor | Final synthesis complete |
Example SSE Output
data: {"type":"WORKFLOW_STARTED","agent_id":"swarm-supervisor","message":"Assigning a team of agents","timestamp":"..."}
data: {"type":"PROGRESS","agent_id":"swarm-supervisor","message":"Planning approach","timestamp":"..."}
data: {"type":"PROGRESS","agent_id":"swarm-supervisor","message":"Assigning 3 agents","timestamp":"..."}
data: {"type":"AGENT_STARTED","agent_id":"takao","message":"Agent takao started","timestamp":"..."}
data: {"type":"PROGRESS","agent_id":"takao","message":"Agent takao progress: iteration 1/25, action: tool_call","timestamp":"..."}
data: {"type":"AGENT_COMPLETED","agent_id":"takao","message":"Agent takao completed","timestamp":"..."}
data: {"type":"PROGRESS","agent_id":"swarm-supervisor","message":"Combining findings from 3 agents","timestamp":"..."}
data: {"type":"WORKFLOW_COMPLETED","agent_id":"swarm-supervisor","message":"All done","timestamp":"..."}
Task Status Response
GET http://localhost:8080/api/v1/tasks/{task_id}
When a swarm workflow completes, the status response includes swarm-specific metadata:
{
  "task_id": "task-abc123...",
  "status": "TASK_STATUS_COMPLETED",
  "result": "## AI Chip Market Comparison\n\n...",
  "metadata": {
    "workflow_type": "swarm",
    "total_agents": 3,
    "agents": [
      {
        "agent_id": "takao",
        "iterations": 8,
        "tokens": 14200,
        "success": true,
        "model": "gpt-5-mini-2025-08-07"
      },
      {
        "agent_id": "mitaka",
        "iterations": 6,
        "tokens": 9800,
        "success": true,
        "model": "gpt-5-mini-2025-08-07"
      },
      {
        "agent_id": "kichijoji",
        "iterations": 7,
        "tokens": 11200,
        "success": true,
        "model": "gpt-5-mini-2025-08-07"
      }
    ]
  },
  "usage": {
    "total_tokens": 35200,
    "estimated_cost": 0.042
  }
}
| Field | Type | Description |
| --- | --- | --- |
| metadata.workflow_type | string | Always "swarm" for swarm workflows |
| metadata.total_agents | integer | Total agents that participated (initial + dynamic) |
| metadata.agents | array | Per-agent execution summary |
| metadata.agents[].agent_id | string | Deterministic agent name (Japanese station name) |
| metadata.agents[].iterations | integer | Reason-act cycles completed |
| metadata.agents[].tokens | integer | Total tokens consumed |
| metadata.agents[].success | boolean | Whether the agent completed successfully |
| metadata.agents[].model | string | LLM model used |
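A small Python (httpx) sketch that polls the status endpoint and summarizes the per-agent metadata described above; the polling interval and the completion check against TASK_STATUS_COMPLETED are assumptions for illustration.
import time
import httpx

task_id = "task-abc123..."  # from the submit response
url = f"http://localhost:8080/api/v1/tasks/{task_id}"

# Poll until the task reports a completed status, then summarize the agents.
while True:
    status = httpx.get(url).json()
    if status["status"] == "TASK_STATUS_COMPLETED":
        break
    time.sleep(5)

meta = status.get("metadata", {})
if meta.get("workflow_type") == "swarm":
    for agent in meta.get("agents", []):
        print(
            f'{agent["agent_id"]}: {agent["iterations"]} iterations, '
            f'{agent["tokens"]} tokens, success={agent["success"]}'
        )
print("total tokens:", status.get("usage", {}).get("total_tokens"))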
Server Configuration
Swarm parameters are configured in config/features.yaml under workflows.swarm:
workflows:
  swarm:
    enabled: true                   # Enable/disable swarm routing
    max_agents: 10                  # Total agent cap (initial + dynamic)
    max_iterations_per_agent: 25    # Max reason-act loops per agent
    agent_timeout_seconds: 600      # Per-agent wall-clock timeout
    max_messages_per_agent: 20      # P2P message cap per agent
    workspace_snippet_chars: 800    # Max chars per workspace entry in prompt
    workspace_max_entries: 5        # Recent entries shown per topic
| Parameter | Default | Description |
| --- | --- | --- |
| enabled | true | Must be true for force_swarm to work |
| max_agents | 10 | Total cap including dynamically spawned helpers |
| max_iterations_per_agent | 25 | Per-agent iteration limit |
| agent_timeout_seconds | 600 | 10-minute per-agent timeout |
| max_messages_per_agent | 20 | Prevents P2P message flooding |
| workspace_snippet_chars | 800 | Controls token usage from workspace context |
| workspace_max_entries | 5 | Limits workspace entries per topic in agent prompt |
Error Handling and Fallback
Partial Failure
If some agents fail but at least one succeeds, the swarm workflow still produces a result using the successful agents’ outputs.
If all agents fail, the response includes an error:
{
  "task_id": "task-xyz...",
  "status": "TASK_STATUS_COMPLETED",
  "result": "",
  "error": "All 3 agents failed — no results to synthesize",
  "metadata": {
    "workflow_type": "swarm",
    "total_agents": 3,
    "agents": [
      { "agent_id": "takao", "success": false, "error": "consecutive tool errors" },
      { "agent_id": "mitaka", "success": false, "error": "LLM step failed at iteration 2" },
      { "agent_id": "kichijoji", "success": false, "error": "consecutive tool errors" }
    ]
  }
}
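In a client, the top-level error field and the per-agent success flags shown above can be used to distinguish total from partial failure. A minimal Python sketch, assuming the response shapes in this section:
# Given a parsed status response (see the examples above), report failures.
def summarize_swarm_result(status: dict) -> None:
    agents = status.get("metadata", {}).get("agents", [])
    failed = [a for a in agents if not a.get("success")]

    if status.get("error"):
        # Total failure: no agent produced output to synthesize.
        print("swarm failed:", status["error"])
    elif failed:
        # Partial failure: a result was still synthesized from the successful agents.
        names = ", ".join(a["agent_id"] for a in failed)
        print(f"completed with {len(failed)} failed agent(s): {names}")
    else:
        print("all agents succeeded")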
Automatic Fallback
If the entire swarm workflow fails (decomposition error, all agents fail, etc.), Shannon automatically falls back to standard workflow routing (DAG or Supervisor). The force_swarm flag is removed from context to prevent recursive failures.