Endpoint
GET http://localhost:8080/api/v1/tasks/{id}
Description
Retrieves the current status, result, and metadata for a specific task. Use this endpoint to check task progress or retrieve final results.
Authentication
Required: Yes
Include the API key in the `X-API-Key` header:

```
X-API-Key: sk_test_123456
```
Request
Path Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | Task ID (also serves as workflow ID) |

Headers

| Header | Required | Description |
|--------|----------|-------------|
| `X-API-Key` | Yes | Authentication key |
Response
Success Response
Status: `200 OK`
Headers:
- `X-Workflow-ID`: Temporal workflow identifier (same as task ID)
Body:

```json
{
  "task_id": "string",
  "workflow_id": "string",
  "status": "string",
  "result": "string",
  "response": {},
  "metadata": {},
  "error": "string",
  "created_at": "timestamp",
  "updated_at": "timestamp",
  "query": "string",
  "session_id": "string",
  "context": {},
  "unified_response": {},
  "mode": "string",
  "model_used": "string",
  "provider": "string",
  "usage": {
    "total_tokens": 0,
    "input_tokens": 0,
    "output_tokens": 0,
    "estimated_cost": 0.0
  }
}
```
Response Fields
| Field | Type | Description |
|-------|------|-------------|
| `task_id` | string | Unique task identifier |
| `workflow_id` | string | Workflow identifier (same as `task_id`) |
| `status` | string | Current task status |
| `result` | string | Raw LLM output (plain text or JSON string) |
| `response` | object | Parsed JSON (only present if `result` contains valid JSON) |
| `metadata` | object | Task metadata (citations, verification, unified_response, extracted_facts, model_breakdown, etc.) |
| `error` | string | Error message (empty if no error) |
| `created_at` | timestamp | Task creation time |
| `updated_at` | timestamp | Last update time |
| `query` | string | Original task query |
| `session_id` | string | Session identifier |
| `context` | object | Task context (budget info, date, parameters) |
| `unified_response` | object | Structured unified response with parsed result, performance metrics, and metadata |
| `mode` | string | Execution mode used |
| `model_used` | string | Primary model used (e.g., `gpt-5-mini-2025-08-07`) |
| `provider` | string | Provider name (e.g., `openai`, `anthropic`) |
| `usage` | object | Token usage and cost: `{ total_tokens, input_tokens?, output_tokens?, estimated_cost? }` |
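Because `response` is populated only when `result` contains valid JSON, clients that expect structured output can fall back to parsing `result` themselves. A minimal sketch (the `extract_payload` helper is illustrative, not part of the API):

```python
import json

def extract_payload(task: dict):
    """Prefer the server-parsed `response` object; otherwise try to parse
    `result` as JSON; otherwise return the raw text as-is."""
    if task.get("response") is not None:
        return task["response"]
    raw = task.get("result", "")
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return raw  # plain-text result
```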
Status Values
TASK_STATUS_UNSPECIFIED - Status unknown
TASK_STATUS_QUEUED - Waiting to execute
TASK_STATUS_RUNNING - Currently executing
TASK_STATUS_COMPLETED - Successfully completed
TASK_STATUS_FAILED - Failed with error
TASK_STATUS_PAUSED - Paused by user or HITL review
TASK_STATUS_CANCELLED - Cancelled by user
TASK_STATUS_TIMEOUT - Exceeded timeout limit
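When polling, it helps to distinguish terminal statuses (the record will never change again) from in-flight ones. A small helper along these lines (names are illustrative; note that `TASK_STATUS_PAUSED` is treated as non-terminal here because a paused task can resume):

```python
# Statuses after which the task record will not change again.
# TASK_STATUS_PAUSED is deliberately excluded: a paused task can resume.
TERMINAL_STATUSES = {
    "TASK_STATUS_COMPLETED",
    "TASK_STATUS_FAILED",
    "TASK_STATUS_CANCELLED",
    "TASK_STATUS_TIMEOUT",
}

def is_terminal(status: str) -> bool:
    """True if polling can stop for this status."""
    return status in TERMINAL_STATUSES
```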
Execution Mode Values
EXECUTION_MODE_SIMPLE - Single LLM call, no tools
EXECUTION_MODE_STANDARD - Multi-step with tools
EXECUTION_MODE_COMPLEX - Advanced reasoning patterns
Examples
Check Task Status
```bash
curl -X GET "http://localhost:8080/api/v1/tasks/task_01HQZX3Y9K8M2P4N5S7T9W2V" \
  -H "X-API-Key: sk_test_123456"
```
Response (Queued):

```json
{
  "task_id": "task_01HQZX3Y9K8M2P4N5S7T9W2V",
  "status": "TASK_STATUS_QUEUED",
  "response": null,
  "error": "",
  "created_at": "2025-10-22T10:30:00Z",
  "updated_at": "2025-10-22T10:30:00Z",
  "query": "What is the capital of France?",
  "session_id": "user-123-session",
  "mode": "EXECUTION_MODE_SIMPLE"
}
```
Response (Running):

```json
{
  "task_id": "task_01HQZX3Y9K8M2P4N5S7T9W2V",
  "status": "TASK_STATUS_RUNNING",
  "response": null,
  "error": "",
  "created_at": "2025-10-22T10:30:00Z",
  "updated_at": "2025-10-22T10:30:02Z",
  "query": "What is the capital of France?",
  "session_id": "user-123-session",
  "mode": "EXECUTION_MODE_SIMPLE"
}
```
Response (Completed):

```json
{
  "task_id": "task_01HQZX3Y9K8M2P4N5S7T9W2V",
  "status": "TASK_STATUS_COMPLETED",
  "result": "The capital of France is Paris. Paris has been the capital since 987 AD and is located in the north-central part of the country.",
  "error": "",
  "created_at": "2025-10-22T10:30:00Z",
  "updated_at": "2025-10-22T10:30:05Z",
  "query": "What is the capital of France?",
  "session_id": "user-123-session",
  "mode": "EXECUTION_MODE_SIMPLE",
  "model_used": "gpt-5-mini-2025-08-07",
  "provider": "openai",
  "usage": {
    "total_tokens": 300,
    "input_tokens": 200,
    "output_tokens": 100,
    "estimated_cost": 0.006
  }
}
```
Response (Failed):

```json
{
  "task_id": "task_01HQZX3Y9K8M2P4N5S7T9W2V",
  "status": "TASK_STATUS_FAILED",
  "response": null,
  "error": "LLM service unavailable: connection timeout",
  "created_at": "2025-10-22T10:30:00Z",
  "updated_at": "2025-10-22T10:30:10Z",
  "query": "What is the capital of France?",
  "session_id": "user-123-session",
  "mode": "EXECUTION_MODE_SIMPLE"
}
```
Deep Research Response Payload
When a task is submitted with `force_research: true`, the completed response includes additional metadata fields containing structured research data.
For Deep Research tasks, the metadata object contains:
| Field | Type | Description |
|-------|------|-------------|
| `unified_response` | object | Consolidated response with all metadata |
| `citations` | array | Structured citation data from research |
| `verification` | object | Claim verification results (if `enable_verification: true`) |
| `extracted_facts` | array | Structured facts (if `enable_fact_extraction: true`) |
| `fact_summary` | object | Summary statistics for extracted facts |
Example: Deep Research Completed Response
```json
{
  "task_id": "task-abc123-research",
  "status": "TASK_STATUS_COMPLETED",
  "result": "# AI Trends in 2025\n\nThe artificial intelligence landscape in 2025 is characterized by...[1][2]\n\n## Sources\n[1] MIT Technology Review (https://...) - 2025-01-15\n[2] Nature AI (https://...) - 2025-01-10",
  "metadata": {
    "unified_response": {
      "task_id": "task-abc123-research",
      "session_id": "session-xyz",
      "status": "completed",
      "result": "The artificial intelligence landscape in 2025...",
      "metadata": {
        "model": "claude-sonnet-4-5-20250929",
        "execution_mode": "EXECUTION_MODE_COMPLEX",
        "complexity_score": 0.72,
        "agents_used": 3
      },
      "usage": {
        "input_tokens": 12100,
        "output_tokens": 3320,
        "total_tokens": 15420,
        "cost_usd": 0.0462
      },
      "performance": {
        "execution_time_ms": 45200
      },
      "stop_reason": "completed",
      "error": null,
      "timestamp": "2025-01-20T10:00:00Z"
    },
    "citations": [
      {
        "title": "AI Breakthrough Report 2025",
        "url": "https://www.technologyreview.com/2025/01/ai-report",
        "source": "MIT Technology Review",
        "credibility_score": 0.92,
        "quality_score": 0.88
      },
      {
        "title": "Nature AI Special Issue",
        "url": "https://www.nature.com/articles/ai-2025",
        "source": "Nature",
        "credibility_score": 0.95,
        "quality_score": 0.91
      }
    ],
    "verification": {
      "overall_confidence": 0.87,
      "total_claims": 12,
      "supported_claims": 10,
      "unsupported_claims": ["Claim about future predictions"],
      "conflicts": [],
      "claim_details": [
        {
          "claim": "AI models have achieved 95% accuracy on benchmark X",
          "supporting_citations": [1, 2],
          "conflicting_citations": [],
          "confidence": 0.92
        }
      ]
    }
  },
  "model_used": "claude-sonnet-4-5-20250929",
  "provider": "anthropic",
  "usage": {
    "total_tokens": 15420,
    "input_tokens": 12100,
    "output_tokens": 3320,
    "estimated_cost": 0.0462
  }
}
```
When `enable_fact_extraction: true` is set in the request context:

```json
{
  "metadata": {
    "extracted_facts": [
      {
        "statement": "GPT-5 achieved 95% accuracy on MMLU benchmark",
        "confidence": 0.92,
        "source_citation": [1, 3],
        "category": "performance",
        "entity_mentions": ["GPT-5", "MMLU"],
        "temporal_marker": "2025",
        "is_quantitative": true
      }
    ],
    "fact_summary": {
      "total_facts": 24,
      "high_confidence": 18,
      "categorized_facts": {
        "performance": 8,
        "market": 6,
        "research": 10
      },
      "contradiction_count": 1
    }
  }
}
```
Citation Object Schema
| Field | Type | Description |
|-------|------|-------------|
| `url` | string | Source URL |
| `title` | string | Article/page title |
| `source` | string | Publisher/domain name |
| `credibility_score` | float | Source credibility (0.0-1.0) |
| `quality_score` | float | Content quality (0.0-1.0) |
The position of each citation in the `metadata.citations` array corresponds to the `[n]` index used in inline references.
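Assuming the inline markers follow the 1-based `[n]` convention described above, mapping them back to citation objects can be sketched as:

```python
import re

def resolve_citations(result: str, citations: list) -> dict:
    """Map each inline [n] marker in the result text to the citation at
    index n-1 of metadata.citations (markers are 1-based)."""
    refs = {}
    for match in re.finditer(r"\[(\d+)\]", result):
        n = int(match.group(1))
        if 1 <= n <= len(citations):  # skip out-of-range markers
            refs[n] = citations[n - 1]
    return refs
```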
Verification Object Schema
| Field | Type | Description |
|-------|------|-------------|
| `overall_confidence` | float | Aggregate verification confidence (0.0-1.0) |
| `total_claims` | integer | Number of factual claims extracted |
| `supported_claims` | integer | Claims with supporting citations |
| `unsupported_claims` | array | List of claim texts without citation support |
| `conflicts` | array | Detected conflicting information |
| `claim_details` | array | Per-claim verification details |
Accessing Deep Research Data: The `metadata.citations` array and `metadata.verification` object are populated only for research workflows (`force_research: true`). For simple tasks, these fields are absent from the response.
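For dashboards or logs, the verification object can be reduced to a one-line summary. This helper is a sketch that assumes only the field names in the table above:

```python
def verification_summary(verification: dict) -> str:
    """Condense a verification object into a one-line report."""
    total = verification.get("total_claims", 0)
    supported = verification.get("supported_claims", 0)
    ratio = supported / total if total else 0.0
    confidence = verification.get("overall_confidence", 0.0)
    return f"{supported}/{total} claims supported ({ratio:.0%}), confidence {confidence:.2f}"
```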
Error Responses
401 Unauthorized
```json
{
  "error": "Unauthorized"
}
```
404 Not Found
```json
{
  "error": "Task not found"
}
```
500 Internal Server Error
```json
{
  "error": "Failed to get task status: database error"
}
```
Code Examples
Python - Simple Status Check
```python
import httpx

def get_task_status(task_id: str, api_key: str):
    """Get task status."""
    response = httpx.get(
        f"http://localhost:8080/api/v1/tasks/{task_id}",
        headers={"X-API-Key": api_key}
    )
    if response.status_code == 404:
        return None
    return response.json()

status = get_task_status("task_abc123", "sk_test_123456")
if status:
    print(f"Status: {status['status']}")
    if status["status"] == "TASK_STATUS_COMPLETED":
        print(f"Result: {status['result']}")
```
Python - Poll Until Completion
```python
import httpx
import time

def wait_for_completion(task_id: str, api_key: str, timeout: int = 300):
    """Poll task status until completion or timeout."""
    start_time = time.time()
    while True:
        response = httpx.get(
            f"http://localhost:8080/api/v1/tasks/{task_id}",
            headers={"X-API-Key": api_key}
        )
        status = response.json()
        current_status = status["status"]

        # Check if terminal state
        if current_status == "TASK_STATUS_COMPLETED":
            return status.get("result")
        elif current_status == "TASK_STATUS_FAILED":
            raise Exception(f"Task failed: {status['error']}")
        elif current_status == "TASK_STATUS_TIMEOUT":
            raise Exception("Task timed out")
        elif current_status == "TASK_STATUS_CANCELLED":
            raise Exception("Task was cancelled")

        # Check timeout
        if time.time() - start_time > timeout:
            raise TimeoutError(f"Polling timeout after {timeout}s")

        # Wait before next poll
        time.sleep(2)

# Usage
try:
    result = wait_for_completion("task_abc123", "sk_test_123456")
    print("Result:", result)
except Exception as e:
    print("Error:", e)
```
JavaScript/Node.js
```javascript
const axios = require('axios');

async function getTaskStatus(taskId) {
  try {
    const response = await axios.get(
      `http://localhost:8080/api/v1/tasks/${taskId}`,
      {
        headers: {
          'X-API-Key': 'sk_test_123456'
        }
      }
    );

    const status = response.data;
    console.log('Status:', status.status);

    if (status.status === 'TASK_STATUS_COMPLETED') {
      console.log('Result:', status.result);
    } else if (status.status === 'TASK_STATUS_FAILED') {
      console.error('Error:', status.error);
    }

    return status;
  } catch (error) {
    if (error.response?.status === 404) {
      console.error('Task not found');
    } else {
      console.error('Error:', error.response?.data || error.message);
    }
    throw error;
  }
}

getTaskStatus('task_abc123');
```
JavaScript - Poll with Async/Await
```javascript
async function waitForCompletion(taskId, timeout = 300000) {
  const startTime = Date.now();
  const pollInterval = 2000; // 2 seconds

  while (true) {
    const response = await axios.get(
      `http://localhost:8080/api/v1/tasks/${taskId}`,
      { headers: { 'X-API-Key': 'sk_test_123456' } }
    );

    const { status, result, error } = response.data;

    if (status === 'TASK_STATUS_COMPLETED') {
      return result;
    } else if (status === 'TASK_STATUS_FAILED') {
      throw new Error(`Task failed: ${error}`);
    } else if (status === 'TASK_STATUS_TIMEOUT') {
      throw new Error('Task timed out');
    }

    // Check timeout
    if (Date.now() - startTime > timeout) {
      throw new Error(`Polling timeout after ${timeout}ms`);
    }

    // Wait before next poll
    await new Promise(resolve => setTimeout(resolve, pollInterval));
  }
}

// Usage
waitForCompletion('task_abc123')
  .then(result => console.log('Result:', result))
  .catch(error => console.error('Error:', error.message));
```
Go

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type TaskStatusResponse struct {
	TaskID          string                 `json:"task_id"`
	WorkflowID      string                 `json:"workflow_id"`
	Status          string                 `json:"status"`
	Result          string                 `json:"result"`
	Response        map[string]interface{} `json:"response"`
	Error           string                 `json:"error"`
	Query           string                 `json:"query"`
	SessionID       string                 `json:"session_id"`
	Mode            string                 `json:"mode"`
	Context         map[string]interface{} `json:"context"`
	ModelUsed       string                 `json:"model_used"`
	Provider        string                 `json:"provider"`
	CreatedAt       string                 `json:"created_at"`
	UpdatedAt       string                 `json:"updated_at"`
	Usage           map[string]interface{} `json:"usage"`
	Metadata        map[string]interface{} `json:"metadata"`
	UnifiedResponse map[string]interface{} `json:"unified_response"`
}

func getTaskStatus(taskID string) (*TaskStatusResponse, error) {
	url := fmt.Sprintf("http://localhost:8080/api/v1/tasks/%s", taskID)
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-API-Key", "sk_test_123456")

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == 404 {
		return nil, fmt.Errorf("task not found")
	}

	var status TaskStatusResponse
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		return nil, err
	}
	return &status, nil
}

func waitForCompletion(taskID string, timeout time.Duration) (map[string]interface{}, error) {
	start := time.Now()
	for {
		status, err := getTaskStatus(taskID)
		if err != nil {
			return nil, err
		}
		switch status.Status {
		case "TASK_STATUS_COMPLETED":
			return status.Response, nil
		case "TASK_STATUS_FAILED":
			return nil, fmt.Errorf("task failed: %s", status.Error)
		case "TASK_STATUS_TIMEOUT":
			return nil, fmt.Errorf("task timed out")
		}
		if time.Since(start) > timeout {
			return nil, fmt.Errorf("polling timeout")
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	result, err := waitForCompletion("task_abc123", 5*time.Minute)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Result:", result)
}
```
Bash - Monitor Task Progress
```bash
#!/bin/bash
API_KEY="sk_test_123456"
TASK_ID="$1"

if [ -z "$TASK_ID" ]; then
  echo "Usage: $0 <task_id>"
  exit 1
fi

echo "Monitoring task: $TASK_ID"
echo ""

while true; do
  RESPONSE=$(curl -s "http://localhost:8080/api/v1/tasks/$TASK_ID" \
    -H "X-API-Key: $API_KEY")
  STATUS=$(echo "$RESPONSE" | jq -r '.status')

  echo "[$(date +%T)] Status: $STATUS"

  case "$STATUS" in
    "TASK_STATUS_COMPLETED")
      echo ""
      echo "✓ Task completed!"
      echo ""
      echo "$RESPONSE" | jq -r '.result'
      exit 0
      ;;
    "TASK_STATUS_FAILED")
      echo ""
      echo "✗ Task failed!"
      ERROR=$(echo "$RESPONSE" | jq -r '.error')
      echo "Error: $ERROR"
      exit 1
      ;;
    "TASK_STATUS_TIMEOUT")
      echo ""
      echo "✗ Task timed out!"
      exit 1
      ;;
    "TASK_STATUS_CANCELLED")
      echo ""
      echo "✗ Task cancelled!"
      exit 1
      ;;
  esac

  sleep 2
done
```
Use Cases
1. Submit and Wait Pattern
```python
import httpx
import time

def submit_and_wait(query: str, api_key: str):
    """Submit task and wait for result."""
    # Submit
    submit_response = httpx.post(
        "http://localhost:8080/api/v1/tasks",
        headers={"X-API-Key": api_key},
        json={"query": query}
    )
    task_id = submit_response.json()["task_id"]
    print(f"Task submitted: {task_id}")

    # Wait
    while True:
        status_response = httpx.get(
            f"http://localhost:8080/api/v1/tasks/{task_id}",
            headers={"X-API-Key": api_key}
        )
        status = status_response.json()

        if status["status"] == "TASK_STATUS_COMPLETED":
            return status["result"]
        elif status["status"] == "TASK_STATUS_FAILED":
            raise Exception(status["error"])

        time.sleep(2)

result = submit_and_wait("What is Python?", "sk_test_123456")
print(result)
```
2. Dashboard Status Display

```python
def get_task_summary(task_id: str, api_key: str):
    """Get task summary for dashboard."""
    response = httpx.get(
        f"http://localhost:8080/api/v1/tasks/{task_id}",
        headers={"X-API-Key": api_key}
    )
    status = response.json()
    return {
        "id": task_id,
        "query": status["query"][:50] + "...",
        "status": status["status"].replace("TASK_STATUS_", ""),
        "mode": status["mode"].replace("EXECUTION_MODE_", ""),
        "created": status["created_at"]
    }

# Display in UI
summary = get_task_summary("task_abc123", "sk_test_123456")
print(f"{summary['status']}: {summary['query']}")
```
3. Batch Status Check
```python
def check_multiple_tasks(task_ids: list, api_key: str):
    """Check status of multiple tasks."""
    results = {}
    for task_id in task_ids:
        try:
            response = httpx.get(
                f"http://localhost:8080/api/v1/tasks/{task_id}",
                headers={"X-API-Key": api_key},
                timeout=5.0
            )
            results[task_id] = response.json()["status"]
        except Exception as e:
            results[task_id] = f"ERROR: {e}"
    return results

# Check 5 tasks
task_ids = ["task_1", "task_2", "task_3", "task_4", "task_5"]
statuses = check_multiple_tasks(task_ids, "sk_test_123456")
for task_id, status in statuses.items():
    print(f"{task_id}: {status}")
```
Best Practices
1. Use Streaming Instead of Polling
For long-running tasks, use SSE streaming instead of polling:
```python
# ❌ Bad - Polls every 2 seconds
while True:
    status = httpx.get(f".../{task_id}").json()
    if status["status"] == "COMPLETED":
        break
    time.sleep(2)

# ✅ Good - Use streaming
for event in client.stream(task_id):
    print(event.type, event.message)
    if event.type == "WORKFLOW_COMPLETED":
        break
```
2. Handle All Status States
```python
status = get_task_status(task_id, api_key)

match status["status"]:
    case "TASK_STATUS_QUEUED":
        print("Task is queued...")
    case "TASK_STATUS_RUNNING":
        print("Task is running...")
    case "TASK_STATUS_COMPLETED":
        result = status["result"]
        print(f"Result: {result}")
    case "TASK_STATUS_FAILED":
        print(f"Failed: {status['error']}")
    case "TASK_STATUS_TIMEOUT":
        print("Task timed out")
    case "TASK_STATUS_CANCELLED":
        print("Task was cancelled")
```
3. Implement Exponential Backoff
```python
import time

def poll_with_backoff(task_id, api_key, max_wait=60):
    """Poll with exponential backoff."""
    wait_time = 1
    terminal = {"TASK_STATUS_COMPLETED", "TASK_STATUS_FAILED",
                "TASK_STATUS_TIMEOUT", "TASK_STATUS_CANCELLED"}
    while True:
        status = get_task_status(task_id, api_key)
        if status["status"] in terminal:
            return status
        time.sleep(wait_time)
        wait_time = min(wait_time * 2, max_wait)  # Cap at max_wait (60s default)
```
4. Cache Status Responses
```python
from functools import lru_cache
import time

@lru_cache(maxsize=1000)
def get_cached_status(task_id: str, api_key: str, timestamp: int):
    """Cache status for 5 seconds."""
    return get_task_status(task_id, api_key)

# Usage
current_time = int(time.time() / 5)  # 5-second buckets
status = get_cached_status("task_abc123", "sk_test_123456", current_time)
```
5. Extract Task Metadata

```python
def extract_task_info(task_id: str, api_key: str):
    """Extract useful metadata."""
    status = get_task_status(task_id, api_key)
    return {
        "task_id": status["task_id"],
        "query": status["query"],
        "status": status["status"],
        "mode": status["mode"],
        "session_id": status["session_id"],
        "has_result": status["response"] is not None,
        "has_error": bool(status["error"]),
        "workflow_url": f"http://localhost:8088/workflows/{task_id}"
    }
```
- Submit Task - `POST /api/v1/tasks`
- Stream Events - Real-time monitoring
- Python SDK - Use `client.get_status()`
Notes
Don't Poll in Production: For long-running tasks, use streaming endpoints instead of polling status. Polling creates unnecessary load and adds latency.
Session Tracking: The `session_id` field allows you to track which session a task belongs to, useful for multi-turn conversations and cost attribution.
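Because the status payload carries both `session_id` and `usage`, per-session cost attribution needs only the task IDs the client submitted. This sketch assumes the status payloads have already been fetched (no list endpoint is documented here), and that `usage.estimated_cost` is present for completed tasks:

```python
def cost_by_session(tasks: list) -> dict:
    """Sum usage.estimated_cost per session_id across fetched task payloads."""
    costs = {}
    for task in tasks:
        session = task.get("session_id", "unknown")
        usage = task.get("usage") or {}  # queued/running tasks may lack usage
        costs[session] = costs.get(session, 0.0) + usage.get("estimated_cost", 0.0)
    return costs
```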