Endpoint
POST http://localhost:8080/api/v1/tasks
Description
Submits a new task to Shannon for execution. The task is queued immediately and processed asynchronously by the Temporal workflow engine.
Authentication
Required: Yes
Include API key in header:
X-API-Key: sk_test_123456
Request
| Header | Required | Description | Example |
|---|---|---|---|
| X-API-Key | Yes | Authentication key | sk_test_123456 |
| Content-Type | Yes | Must be application/json | application/json |
| Idempotency-Key | No | Unique key for idempotency | 550e8400-e29b-41d4-a716-446655440000 |
| traceparent | No | W3C trace context | 00-4bf92f... |
Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | Natural language task description |
| session_id | string | No | Session identifier for multi-turn conversations |
| context | object | No | Additional context data as key-value pairs |
| mode | string | No | Workflow routing: simple, standard, complex, or supervisor |
| model_tier | string | No | Preferred tier: small, medium, or large |
| model_override | string | No | Specific model name (canonical; e.g., gpt-5, claude-sonnet-4-5-20250929) |
| provider_override | string | No | Force provider (e.g., openai, anthropic, google) |
| skill | string | No | Markdown-defined skill to drive single-agent execution (Skills System) |
| research_strategy | string | No | Research strategy preset: quick, standard, deep, or academic |
| max_iterations | integer | No | Maximum number of iterations for the task (1-50) |
| max_concurrent_agents | integer | No | Maximum number of concurrent agents (1-20) |
| enable_verification | boolean | No | Enable verification step after task completion |
Request Body Schema
Example 1: General AI-powered execution
{
  "query": "Analyze August website traffic trends",  // REQUIRED: Task to execute
  "session_id": "analytics-session-123",  // OPTIONAL: Session ID for multi-turn conversations (auto-generated if omitted)
  "mode": "supervisor",  // OPTIONAL: Workflow routing - "simple", "standard", "complex", or "supervisor" (default: auto-detect)
  "model_tier": "large",  // OPTIONAL: Model size - "small", "medium", or "large" (default: "small")
  "model_override": "gpt-5",  // OPTIONAL: Specific model (canonical id)
  "provider_override": "openai",  // OPTIONAL: Force specific provider
  "context": {  // OPTIONAL: Execution context object
    "role": "data_analytics",  // OPTIONAL: Role preset name (e.g., "analysis", "research", "writer")
    "system_prompt": "You are a data analyst specializing in website analytics.",  // OPTIONAL: Custom system prompt (overrides role preset)
    "prompt_params": {  // OPTIONAL: Arbitrary key-value pairs for prompts/tools/adapters
      "profile_id": "49598h6e",  // EXAMPLE: Custom parameter (passed to tools/adapters)
      "aid": "fcb1cd29-9104-47b1-b914-31db6ba30c1a",  // EXAMPLE: Custom parameter (application ID)
      "current_date": "2025-10-31"  // EXAMPLE: Custom parameter (current date)
    },
    "history_window_size": 75,  // OPTIONAL: Max conversation history messages (default: 50)
    "primers_count": 3,  // OPTIONAL: Number of early messages to keep (default: 5)
    "recents_count": 20,  // OPTIONAL: Number of recent messages to keep (default: 15)
    "compression_trigger_ratio": 0.75,  // OPTIONAL: Trigger compression at 75% of window (default: 0.8)
    "compression_target_ratio": 0.375  // OPTIONAL: Compress to 37.5% of window (default: 0.5)
  }
}
Example 2: Template-only execution (no AI)
{
  "query": "Generate weekly research briefing",  // REQUIRED: Task description
  "session_id": "research-session-456",  // OPTIONAL: Session ID
  "context": {  // OPTIONAL: Context object
    "template": "research_summary",  // OPTIONAL: Template name to use
    "template_version": "1.0.0",  // OPTIONAL: Template version (default: latest)
    "disable_ai": true,  // OPTIONAL: Template-only mode, no AI fallback (default: false)
    "prompt_params": {  // OPTIONAL: Parameters for template rendering
      "week": "2025-W44"  // EXAMPLE: Custom parameter for template
    }
  }
}
Parameter Conflicts to Avoid:
Don't use both template and template_name (they're aliases; use template only)
Don't combine disable_ai: true with model controls; the Gateway returns a 400 error when a conflict is detected:
disable_ai: true + model_tier → 400
disable_ai: true + model_override → 400
disable_ai: true + provider_override → 400
Top-level parameters override context equivalents:
Top-level model_tier overrides context.model_tier
Top-level model_override overrides context.model_override
Top-level provider_override overrides context.provider_override
Top-level skill overrides context.skill
Top-level research_strategy overrides context.research_strategy
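The override rules above can be sketched as a small helper that mirrors the documented precedence (an illustration of the rule, not the gateway's actual code; the field values are made up):

```python
def resolve_param(name, payload):
    """Mirror the documented rule: a top-level value wins; the
    context.* value is only a fallback when no top-level value is set."""
    if payload.get(name) is not None:
        return payload[name]
    return payload.get("context", {}).get(name)

request = {
    "query": "Summarize Q3 results",
    "model_tier": "large",               # top-level: takes effect
    "context": {"model_tier": "small"},  # context fallback: ignored here
}

print(resolve_param("model_tier", request))  # large
```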
Context Parameters (context.*)
Recognized keys:
role — role preset (e.g., analysis, research, writer, ads_research, financial_news, browser_use)
system_prompt — overrides role prompt; supports ${var} from prompt_params
prompt_params — arbitrary parameters for prompts/tools/adapters
model_tier — fallback when top‑level not provided
model_override — specific model name (canonical; e.g., gpt-5, claude-sonnet-4-5-20250929)
provider_override — force provider (e.g., openai, anthropic, google)
research_strategy — (deprecated: use top-level research_strategy instead; context value is ignored when top-level is set)
skill — (deprecated: use top-level skill instead; context value is ignored when top-level is set)
template — template name (alias: template_name)
template_version — template version
disable_ai — template-only mode (no AI fallback) - cannot be combined with model controls
Window controls: history_window_size, use_case_preset, primers_count, recents_count, compression_trigger_ratio, compression_target_ratio
Deep Research 2.0 controls (when force_research: true):
iterative_research_enabled — Enable/disable iterative coverage loop (default: true)
iterative_max_iterations — Max iterations 1-5 (strategy presets seed defaults; otherwise falls back to 3)
enable_fact_extraction — Extract structured facts into metadata (default: false)
Ads Research Platform Toggles (when role: "ads_research"):
platforms.google — Enable/disable Google Shopping Ads (default: true)
platforms.yahoo_jp — Enable/disable Yahoo Japan Ads (default: true)
platforms.meta — Enable/disable Meta Ad Library (default: true)
platforms.meta_platform — Meta platform filter: facebook, instagram, messenger, whatsapp, or all (default: all)
Rules:
Top-level parameters override context equivalents: model_tier, model_override, provider_override, skill, research_strategy
mode supports: simple|standard|complex|supervisor (default: auto-detect)
model_tier supports: small|medium|large
Conflict validation: disable_ai: true cannot be combined with model_tier, model_override, or provider_override (returns 400)
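The conflict rule can be caught client-side before submission; the sketch below mirrors the documented validation (the actual enforcement happens in the Gateway, which responds with HTTP 400):

```python
MODEL_CONTROLS = ("model_tier", "model_override", "provider_override")

def violates_disable_ai(payload):
    """True when disable_ai: true is combined with any model control,
    the combination the Gateway rejects with a 400 response."""
    ctx = payload.get("context", {})
    if not ctx.get("disable_ai"):
        return False
    return any(payload.get(k) or ctx.get(k) for k in MODEL_CONTROLS)

bad = {
    "query": "Generate weekly research briefing",
    "model_tier": "large",  # conflicts with disable_ai below
    "context": {"template": "research_summary", "disable_ai": True},
}

print(violates_disable_ai(bad))  # True
```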
Role Presets
Role presets provide specialized system prompts and tool allowlists for different task types. Set via context.role:
| Role | Description | Available Tools | Use Case |
|---|---|---|---|
| generalist | General-purpose assistant (default) | All tools | Simple queries, general chat |
| analysis | Analytical assistant with structured reasoning | web_search, file_read | Data analysis, structured reasoning |
| research | Research assistant for gathering and synthesizing information | web_search, web_fetch, web_subpage_fetch, web_crawl | Information gathering, fact-finding |
| writer | Technical writer for clear, organized prose | file_read | Documentation, content creation |
| critic | Critical reviewer for identifying flaws and risks | file_read | Code review, quality assurance |
| developer | Developer assistant with filesystem access | file_read, file_write, file_list, bash, python_executor | Coding, debugging, file operations |
| browser_use | Browser automation specialist | browser_* tools, web_search | Web scraping, UI automation, screenshot capture |
| ads_research (Shannon Cloud Only) | Multi-platform ads competitor analysis | ads_serp_extract, ads_transparency_search, meta_ad_library, yahoo_jp_ads | Marketing research, competitor ads analysis |
| financial_news (Shannon Cloud Only) | Financial news and sentiment analysis | news_aggregator, alpaca_news, sec_filings, twitter_sentiment | Stock news, market sentiment analysis |
| data_analytics | Data analytics with vendor integrations | Vendor-specific analytics tools | Business intelligence, analytics reporting |
Shannon Cloud Only: Roles marked as “Shannon Cloud Only” are enterprise features and require a Shannon Cloud deployment with vendor adapter configuration.
Response
Success Response
Status: 200 OK
Headers:
X-Workflow-ID: Temporal workflow identifier
X-Session-ID: Session identifier (auto-generated if not provided)
Body:
{
"task_id" : "string" ,
"status" : "string" ,
"message" : "string (optional)" ,
"created_at" : "timestamp"
}
Response Fields
| Field | Type | Description |
|---|---|---|
| task_id | string | Unique task identifier (also workflow ID) |
| status | string | Submission status (e.g., STATUS_CODE_OK) |
| message | string | Optional status message |
| created_at | timestamp | Task creation time (ISO 8601) |
Examples
Basic Task Submission
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "What is the capital of France?"
}'
Response:
{
  "task_id": "task_01HQZX3Y9K8M2P4N5S7T9W2V",
  "status": "STATUS_CODE_OK",
  "message": "Task submitted successfully",
  "created_at": "2025-10-22T10:30:00Z"
}
Task with Session ID (Multi-Turn)
# First turn
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "What is Python?",
"session_id": "user-123-chat"
}'
# Second turn (references previous context)
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "What are its main advantages?",
"session_id": "user-123-chat"
}'
Task with Context
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Summarize this user feedback",
"context": {
"user_id": "user_12345",
"feedback_type": "bug_report",
"severity": "high",
"product": "mobile_app",
"role": "analysis",
"model_override": "gpt-5"
}
}'
Force Tier (Top‑Level)
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Complex analysis",
"model_tier": "large"
}'
Template‑Only Execution
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Weekly research briefing",
"context": {"template": "research_summary", "template_version": "1.0.0", "disable_ai": true}
}'
Supervisor Mode
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Assess system reliability",
"mode": "supervisor"
}'
Ads Research (Shannon Cloud Only)
Multi-platform advertising competitor analysis with platform toggles.
# Analyze competitors across all platforms
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Analyze competitor ads for organic skincare products",
"context": {
"role": "ads_research"
}
}'
# Google Shopping Ads only
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Find competitor pricing strategies for wireless earbuds",
"context": {
"role": "ads_research",
"platforms": {
"google": true,
"yahoo_jp": false,
"meta": false
}
}
}'
# Meta (Instagram only) + Yahoo Japan
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Research fashion brand advertising on Instagram and Yahoo Japan",
"context": {
"role": "ads_research",
"platforms": {
"google": false,
"yahoo_jp": true,
"meta": true,
"meta_platform": "instagram"
}
}
}'
Platform Defaults: All platforms are enabled by default. Use the platforms object to selectively disable platforms or to filter Meta by platform (facebook, instagram, messenger, whatsapp, all).
Deep Research 2.0
Deep Research 2.0 provides iterative coverage improvement for comprehensive research tasks.
# Basic Deep Research (uses default settings)
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "AI trends in 2025",
"context": {
"force_research": true
}
}'
# Custom iterations (faster, less thorough)
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Compare major LLM providers",
"context": {
"force_research": true,
"iterative_max_iterations": 2
}
}'
# Disable iterative mode (use legacy research)
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Content-Type: application/json" \
-d '{
"query": "Explain machine learning basics",
"context": {
"force_research": true,
"iterative_research_enabled": false
}
}'
Deep Research 2.0 is enabled by default when force_research: true. It uses a multi-stage workflow with coverage evaluation to ensure comprehensive results. Use iterative_max_iterations to control depth (1-5, default: 3).
With Idempotency
# Generate idempotency key (use UUID)
IDEMPOTENCY_KEY=$(uuidgen)
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "Idempotency-Key: $IDEMPOTENCY_KEY" \
-H "Content-Type: application/json" \
-d '{
"query": "Analyze sales data for Q4"
}'
With Distributed Tracing
curl -X POST http://localhost:8080/api/v1/tasks \
-H "X-API-Key: sk_test_123456" \
-H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" \
-H "Content-Type: application/json" \
-d '{
"query": "Research latest AI trends"
}'
Error Responses
400 Bad Request
Missing Query:
{
  "error": "Query is required"
}
Invalid JSON:
{
  "error": "Invalid request body: unexpected EOF"
}
401 Unauthorized
Missing API Key:
{
  "error": "Unauthorized"
}
Invalid API Key:
{
  "error": "Unauthorized"
}
429 Too Many Requests
{
  "error": "Rate limit exceeded"
}
Headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1609459200
Retry-After: 60
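A client can read these headers to decide how long to wait before retrying. A minimal sketch (header names taken from the response above; the fallback default is an assumption):

```python
def retry_delay(headers, default=60):
    """Prefer the explicit Retry-After value; fall back to a default."""
    try:
        return int(headers["Retry-After"])
    except (KeyError, ValueError):
        return default

headers = {
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "0",
    "Retry-After": "60",
}

if headers.get("X-RateLimit-Remaining") == "0":
    print(f"Rate limited; retrying in {retry_delay(headers)}s")
```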
500 Internal Server Error
{
  "error": "Failed to submit task: database connection failed"
}
Code Examples
Python with httpx
import httpx

response = httpx.post(
    "http://localhost:8080/api/v1/tasks",
    headers={
        "X-API-Key": "sk_test_123456",
        "Content-Type": "application/json"
    },
    json={
        "query": "What is the capital of France?"
    }
)

if response.status_code == 200:
    data = response.json()
    print(f"Task ID: {data['task_id']}")
    print(f"Status: {data['status']}")
else:
    print(f"Error: {response.status_code}")
    print(response.json())
Python with requests
import requests

response = requests.post(
    "http://localhost:8080/api/v1/tasks",
    headers={
        "X-API-Key": "sk_test_123456"
    },
    json={
        "query": "Analyze customer sentiment",
        "context": {
            "source": "twitter",
            "date_range": "2025-10-01 to 2025-10-22"
        }
    }
)

task = response.json()
print(f"Task submitted: {task['task_id']}")
JavaScript/Node.js
const axios = require('axios');

async function submitTask(query) {
  try {
    const response = await axios.post(
      'http://localhost:8080/api/v1/tasks',
      { query: query },
      {
        headers: {
          'X-API-Key': 'sk_test_123456',
          'Content-Type': 'application/json'
        }
      }
    );
    console.log('Task ID:', response.data.task_id);
    console.log('Status:', response.data.status);
    return response.data;
  } catch (error) {
    console.error('Error:', error.response?.data || error.message);
    throw error;
  }
}

submitTask('What is quantum computing?');
cURL with Idempotency
#!/bin/bash
API_KEY="sk_test_123456"
IDEMPOTENCY_KEY=$(uuidgen)

# Submit task
RESPONSE=$(curl -s -X POST http://localhost:8080/api/v1/tasks \
  -H "X-API-Key: $API_KEY" \
  -H "Idempotency-Key: $IDEMPOTENCY_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Analyze Q4 revenue trends"
  }')

echo "$RESPONSE" | jq
TASK_ID=$(echo "$RESPONSE" | jq -r '.task_id')
echo "Track progress: http://localhost:8088/workflows/$TASK_ID"
Go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type TaskRequest struct {
	Query     string                 `json:"query"`
	SessionID string                 `json:"session_id,omitempty"`
	Context   map[string]interface{} `json:"context,omitempty"`
}

type TaskResponse struct {
	TaskID  string `json:"task_id"`
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func submitTask(query string) (*TaskResponse, error) {
	req := TaskRequest{Query: query}
	body, _ := json.Marshal(req)
	httpReq, _ := http.NewRequest(
		"POST",
		"http://localhost:8080/api/v1/tasks",
		bytes.NewBuffer(body),
	)
	httpReq.Header.Set("X-API-Key", "sk_test_123456")
	httpReq.Header.Set("Content-Type", "application/json")

	client := &http.Client{}
	resp, err := client.Do(httpReq)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var taskResp TaskResponse
	if err := json.NewDecoder(resp.Body).Decode(&taskResp); err != nil {
		return nil, err
	}
	return &taskResp, nil
}

func main() {
	task, err := submitTask("What is machine learning?")
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Printf("Task ID: %s\n", task.TaskID)
	fmt.Printf("Status: %s\n", task.Status)
}
Implementation Details
Workflow Creation
When you submit a task:
1. Gateway receives request → validates authentication and rate limits
2. Generates session ID → auto-generates a UUID if not provided
3. Calls Orchestrator gRPC → SubmitTask(metadata, query, context)
4. Orchestrator starts Temporal workflow → durable execution
5. Response returned → task ID and initial status
6. Task executes asynchronously → independent of the HTTP connection
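Because execution is asynchronous, a client that needs the final result typically submits and then polls the status endpoint. A sketch (pass `httpx` as the client in real use; the running-state values checked here are illustrative assumptions, not a documented list):

```python
import time

def wait_for_task(client, task_id, poll_interval=2.0, timeout=60.0):
    """Poll the task status endpoint until the task leaves an
    assumed running state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = client.get(f"http://localhost:8080/api/v1/tasks/{task_id}")
        body = resp.json()
        # Assumed running states; adjust to the statuses your deployment reports
        if body.get("status") not in ("QUEUED", "RUNNING"):
            return body
        time.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} still running after {timeout}s")
```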
Idempotency Behavior
Idempotency keys allow safe retries of task submissions without creating duplicate tasks.
How it works:
First request with an Idempotency-Key:
Shannon creates the task
Caches the response in Redis with 24-hour TTL
Returns task ID and status
Duplicate requests (same Idempotency-Key):
Shannon detects the cached response
Returns the same task ID without creating a new task
Response is identical to the first request
After 24 hours:
Cache expires
New request with same key creates a new task
Cache Details:
Storage: Redis
TTL: 24 hours (86400 seconds)
Key format: idempotency:<16-char-hash> (SHA-256 of the idempotency key plus user ID, path, and request body)
Scope: Per authenticated user (user ID is part of the hash; when auth is disabled the hash is based on the header, path, and body)
Cached responses: Only 2xx responses are stored; cached hits include X-Idempotency-Cached: true and X-Idempotency-Key: <your-key>
Body Behavior:
If the request body changes, the cache key changes too, so the gateway treats it as a brand-new request. Duplicate detection only triggers when the header, user, path, and body all match.
Best Practice: Generate a unique key per unique request body.
Example:
import uuid
import httpx

# Generate unique key
idempotency_key = str(uuid.uuid4())

# First request - creates task
response1 = httpx.post(
    "http://localhost:8080/api/v1/tasks",
    headers={"X-API-Key": "sk_test_123456", "Idempotency-Key": idempotency_key},
    json={"query": "Analyze Q4 sales"}
)
task_id_1 = response1.json()["task_id"]

# Retry with same key - returns same task ID
response2 = httpx.post(
    "http://localhost:8080/api/v1/tasks",
    headers={"X-API-Key": "sk_test_123456", "Idempotency-Key": idempotency_key},
    json={"query": "Analyze Q4 sales"}
)
task_id_2 = response2.json()["task_id"]

assert task_id_1 == task_id_2  # Same task, no duplicate
When to use:
Network retry logic (avoid duplicate tasks on timeout)
Webhook deliveries (handle duplicate webhook calls)
Critical operations (payments, data writes)
Background job queues (prevent duplicate scheduling)
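For the network-retry case, the pattern is: generate one key per logical request, reuse it on every retry, and let the cache collapse any duplicates. A sketch (pass `httpx.post` and catch `httpx.TimeoutException` in real use; a plain callable and `TimeoutError` are used here to keep it self-contained):

```python
import uuid

def submit_with_retry(post, payload, attempts=3):
    """Retry a submission with a single Idempotency-Key so that
    timeouts cannot produce duplicate tasks."""
    headers = {
        "X-API-Key": "sk_test_123456",
        "Idempotency-Key": str(uuid.uuid4()),  # one key per logical request
    }
    last_exc = None
    for _ in range(attempts):
        try:
            return post(
                "http://localhost:8080/api/v1/tasks",
                headers=headers,
                json=payload,
            )
        except TimeoutError as exc:
            last_exc = exc  # same key is reused on the next attempt
    raise last_exc
```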
Session Management
No session_id: Auto-generates UUID, fresh context
With session_id: Loads previous conversation history from Redis
Session persistence: 30 days default TTL
Multi-turn conversations: All tasks with same session_id share context
Context Object
The context object is stored as metadata and passed to:
Agent execution environment
Tool invocations (can access via ctx.get("key"))
Session memory (for reference in future turns)
Example use cases:
User preferences: {"language": "spanish", "format": "markdown"}
Business context: {"company_id": "acme", "department": "sales"}
Constraints: {"max_length": 500, "tone": "formal"}
Best Practices
1. Always Use Idempotency Keys for Critical Tasks
import uuid
import httpx

idempotency_key = str(uuid.uuid4())

response = httpx.post(
    "http://localhost:8080/api/v1/tasks",
    headers={
        "X-API-Key": "sk_test_123456",
        "Idempotency-Key": idempotency_key
    },
    json={"query": "Process payment for order #12345"}
)
2. Use Sessions for Conversations
session_id = "user-456-chat"

# Turn 1
httpx.post(..., json={
    "query": "Load sales data for Q4",
    "session_id": session_id
})

# Turn 2 (references Q4 data from Turn 1)
httpx.post(..., json={
    "query": "Compare it to Q3",
    "session_id": session_id
})
3. Provide Rich Context
httpx.post(..., json={
    "query": "Analyze this customer feedback",
    "context": {
        "customer_id": "cust_789",
        "subscription_tier": "enterprise",
        "account_age_days": 365,
        "previous_tickets": 3,
        "sentiment_history": ["positive", "neutral", "negative"]
    }
})
4. Handle Errors Gracefully
import time

try:
    response = httpx.post(..., timeout=30.0)
    response.raise_for_status()
    task = response.json()
except httpx.TimeoutException:
    print("Request timed out")
except httpx.HTTPStatusError as e:
    if e.response.status_code == 429:
        retry_after = int(e.response.headers.get("Retry-After", 60))
        time.sleep(retry_after)
        # Retry...
    else:
        print(f"Error: {e.response.json()}")
5. Store Task IDs for Tracking
from datetime import datetime

response = httpx.post(...)
task_id = response.json()["task_id"]
workflow_id = response.headers["X-Workflow-ID"]

# Save to database
db.tasks.insert({
    "task_id": task_id,
    "workflow_id": workflow_id,
    "user_id": "user_123",
    "query": "...",
    "created_at": datetime.now()
})

# Later: check status
status = httpx.get(f"http://localhost:8080/api/v1/tasks/{task_id}")
Submit + Stream in One Call
Need real-time updates? Use POST /api/v1/tasks/stream instead to submit a task and get a stream URL in one call. Perfect for frontend applications that need immediate progress updates. See Unified Submit + Stream for examples.
Related pages:
Submit + Stream: POST /api/v1/tasks/stream (recommended for UIs)
Get Task Status: GET /api/v1/tasks/
Stream Events: real-time task events
List Tasks: GET /api/v1/tasks
Python SDK: use the SDK instead