
Overview

Shannon emits real-time events via Server-Sent Events (SSE) to provide visibility into task execution. This document catalogs the 33 event types actually emitted by the platform, their structure, and when they occur. Events provide:
  • Real-time progress - Track task execution as it happens
  • Debugging insights - LLM prompts, tool invocations, agent reasoning
  • Cost monitoring - Track token usage and costs in real-time
  • Multi-agent coordination - Observe team formation and collaboration
  • Error recovery - Monitor error handling and recovery attempts

Event Structure

All events follow this base structure:
{
  "workflow_id": "wf-550e8400-e29b-41d4-a716-446655440000",
  "type": "AGENT_THINKING",
  "agent_id": "agent-001",
  "message": "Analyzing task complexity...",
  "timestamp": "2024-10-27T10:00:00Z",
  "seq": 42,
  "stream_id": "stream-abc123",
  "payload": {}
}

Base Fields

| Field | Type | Description |
| --- | --- | --- |
| workflow_id | string | Unique task/workflow identifier |
| type | string | Event type (see catalog below) |
| agent_id | string | Agent that emitted the event |
| message | string | Human-readable event description |
| timestamp | string (ISO 8601) | When the event occurred |
| seq | integer | Sequence number for ordering |
| stream_id | string | Stream identifier for reconnection |
| payload | object | Event-specific payload (optional; serialized as payload in JSON) |
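A consumer only needs these base fields to route, order, and display events. The following is a minimal illustrative parser (not part of Shannon's SDK), assuming each SSE data payload is a JSON object with the fields from the table:

```python
import json

def parse_event(raw: str) -> dict:
    """Parse one SSE data payload into an event dict and check base fields."""
    event = json.loads(raw)
    # workflow_id, type, and seq are required for routing, dispatch, and ordering
    for field in ("workflow_id", "type", "seq"):
        if field not in event:
            raise ValueError(f"missing base field: {field}")
    return event

raw = ('{"workflow_id": "wf-550e8400", "type": "AGENT_THINKING", '
       '"agent_id": "agent-001", "message": "Analyzing task complexity...", '
       '"timestamp": "2024-10-27T10:00:00Z", "seq": 42, "stream_id": "stream-abc123"}')
event = parse_event(raw)
# event["type"] is "AGENT_THINKING"; event["seq"] is 42
```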

Event Categories

Events are organized into logical categories:
  1. Workflow Events - Task lifecycle
  2. Agent Events - Agent execution
  3. Tool Events - Tool invocations
  4. Pattern Events - Cognitive pattern execution
  5. Team Events - Multi-agent coordination
  6. LLM Events - Language model interactions
  7. Progress Events - Task progress and status
  8. System Events - Errors and system state

Event Type Quick Reference

Authoritative list of event types emitted by Shannon:
| Event Type | Category | Description |
| --- | --- | --- |
| WORKFLOW_STARTED | Workflow | Task begins execution |
| WORKFLOW_COMPLETED | Workflow | Task completes successfully |
| WORKFLOW_PAUSING | Workflow control | Pause requested; workflow preparing to pause |
| WORKFLOW_PAUSED | Workflow control | Workflow paused at a checkpoint |
| WORKFLOW_RESUMED | Workflow control | Workflow resumed after pause |
| WORKFLOW_CANCELLING | Workflow control | Cancel requested; workflow preparing to terminate |
| WORKFLOW_CANCELLED | Workflow control | Workflow cancelled at a checkpoint |
| AGENT_STARTED | Agent | Agent begins processing |
| AGENT_THINKING | Agent | Agent reasoning/planning |
| AGENT_COMPLETED | Agent | Agent finishes successfully |
| TOOL_INVOKED | Tool | Tool is called |
| TOOL_OBSERVATION | Tool | Agent observes tool result |
| TEAM_RECRUITED | Team | Multi-agent team assembled |
| TEAM_RETIRED | Team | Team disbanded |
| TEAM_STATUS | Team | Team coordination update |
| DEPENDENCY_SATISFIED | Team | Dependencies resolved |
| MESSAGE_SENT | Message | Agent sends message |
| MESSAGE_RECEIVED | Message | Agent receives message |
| LLM_PROMPT | LLM | Prompt sent to LLM |
| LLM_PARTIAL | LLM | Incremental LLM output |
| LLM_OUTPUT | LLM | Final LLM output |
| PROGRESS | Progress | General progress update |
| DATA_PROCESSING | Progress | Processing/analyzing data |
| WAITING | Progress | Waiting for resources |
| ERROR_OCCURRED | System | Error occurred |
| ERROR_RECOVERY | System | Error recovery attempt |
| APPROVAL_REQUESTED | System | Human approval needed |
| APPROVAL_DECISION | System | Approval decision made |
| WORKSPACE_UPDATED | System | Memory/context updated |
| ROLE_ASSIGNED | System | Agent role assigned |
| DELEGATION | System | Task delegated |
| BUDGET_THRESHOLD | System/Progress | Budget warning threshold reached for a task |
| STREAM_END | Stream | Explicit end-of-stream signal (no more events for this workflow) |
Total: 33 event types

Note: Events such as WORKFLOW_FAILED, TASK_COMPLETED, TOOL_COMPLETED, TOOL_FAILED, and BUDGET_UPDATE are not emitted by the streaming API. Failures are surfaced via ERROR_OCCURRED, and successful completion via WORKFLOW_COMPLETED. STREAM_END is emitted as a final lifecycle signal when streaming ends after completion or termination.

Workflow Events

Events related to the overall task workflow.

WORKFLOW_STARTED

Emitted: When a task begins execution
Data:
{
  "type": "WORKFLOW_STARTED",
  "message": "Workflow started",
  "data": {
    "query": "Analyze Q4 sales data",
    "mode": "STANDARD",
    "session_id": "sess-123",
    "estimated_complexity": 0.75
  }
}
Fields:
  • query: Original task query
  • mode: Execution mode (SIMPLE, STANDARD, COMPLEX)
  • session_id: Session identifier
  • estimated_complexity: Complexity score (0.0-1.0)

WORKFLOW_COMPLETED

Emitted: When a task completes successfully
Data:
{
  "type": "WORKFLOW_COMPLETED",
  "message": "Workflow completed successfully",
  "data": {
    "result": "Analysis complete. Key findings: ...",
    "duration_ms": 45000,
    "total_tokens": 5420,
    "total_cost_usd": 0.0814,
    "agents_used": 3,
    "tools_invoked": 7
  }
}
Fields:
  • result: Final task result
  • duration_ms: Total execution time
  • total_tokens: Cumulative token usage
  • total_cost_usd: Total cost
  • agents_used: Number of agents invoked
  • tools_invoked: Number of tool calls

Selected Examples

AGENT_THINKING

{
  "type": "AGENT_THINKING",
  "agent_id": "researcher",
  "message": "Analyzing data structure...",
  "timestamp": "2025-01-20T10:00:02Z"
}

TOOL_INVOKED / TOOL_OBSERVATION

{ "type": "TOOL_INVOKED", "message": "web_search: q=\"LLM caching\"" }
{ "type": "TOOL_OBSERVATION", "message": "web_search: 12 results" }

LLM_OUTPUT

{
  "type": "LLM_OUTPUT",
  "message": "Final summary of findings..."
}

ERROR_OCCURRED

{
  "type": "ERROR_OCCURRED",
  "message": "Provider rate limit exceeded",
  "data": { "error": "RATE_LIMIT", "retry_after": 30 }
}

APPROVAL_REQUESTED

{
  "type": "APPROVAL_REQUESTED",
  "message": "Delete temporary files older than 30 days?",
  "data": { "approval_id": "appr-456" }
}
Common Error Types (reported in ERROR_OCCURRED events):
  • BUDGET_EXCEEDED - Cost/token limit reached
  • TIMEOUT - Execution timeout
  • TOOL_EXECUTION_FAILED - Tool error
  • LLM_ERROR - LLM provider error
  • INVALID_INPUT - Malformed request
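A client-side handler for ERROR_OCCURRED can branch on these codes. A hedged sketch (the examples in this document use both error and error_type as the payload key, so the handler checks both; the returned action strings are illustrative, not a Shannon API):

```python
def handle_error_event(event: dict) -> str:
    """Map an ERROR_OCCURRED event to a client-side action string."""
    data = event.get("data", {})
    # examples in this catalog use both "error" and "error_type" as the key
    code = data.get("error") or data.get("error_type")
    if code == "RATE_LIMIT":
        # the payload may carry a retry_after hint in seconds
        return f"retry_after_{data.get('retry_after', 30)}s"
    if code in ("TIMEOUT", "TOOL_EXECUTION_FAILED", "LLM_ERROR"):
        return "retry" if data.get("recoverable", True) else "abort"
    # BUDGET_EXCEEDED, INVALID_INPUT, and unknown codes are not retryable
    return "abort"
```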

Agent Events

Events related to individual agent execution.

AGENT_STARTED

Emitted: When an agent begins processing
Data:
{
  "type": "AGENT_STARTED",
  "agent_id": "data-analyst",
  "message": "Agent started",
  "data": {
    "role": "data-analyst",
    "subtask": "Analyze revenue trends",
    "tools_available": ["csv_loader", "pandas", "matplotlib"]
  }
}

AGENT_THINKING

Emitted: Agent is reasoning/processing (most frequent event)
Data:
{
  "type": "AGENT_THINKING",
  "agent_id": "data-analyst",
  "message": "Analyzing data structure...",
  "data": {
    "thought": "I need to first load the CSV and examine column types",
    "next_action": "invoke_tool",
    "confidence": 0.85
  }
}
Usage: Display as a progress indicator to the user

AGENT_COMPLETED

Emitted: Agent finished its subtask
Data:
{
  "type": "AGENT_COMPLETED",
  "agent_id": "data-analyst",
  "message": "Agent completed successfully",
  "data": {
    "result": "Revenue increased 15% YoY",
    "tokens_used": 1200,
    "cost_usd": 0.018,
    "duration_ms": 8000
  }
}

AGENT_FAILED

Emitted: Agent encountered an error
Data:
{
  "type": "AGENT_FAILED",
  "agent_id": "data-analyst",
  "message": "Agent failed: Tool execution error",
  "data": {
    "error": "TOOL_EXECUTION_FAILED",
    "error_message": "CSV file not found",
    "recoverable": true
  }
}

Tool Events

Events related to tool invocations.

TOOL_INVOKED

Emitted: When a tool is called
Data:
{
  "type": "TOOL_INVOKED",
  "message": "Invoking tool: csv_loader",
  "data": {
    "tool_name": "csv_loader",
    "tool_args": {
      "file_path": "sales_q4.csv"
    },
    "timeout_seconds": 30
  }
}

TOOL_OBSERVATION

Emitted: Agent observes a tool result
Data:
{
  "type": "TOOL_OBSERVATION",
  "message": "Tool result: csv_loader",
  "data": {
    "tool_name": "csv_loader",
    "result": {
      "rows": 15000,
      "columns": 12,
      "sample": ["date", "product", "revenue", "..."]
    },
    "duration_ms": 450,
    "truncated": false
  }
}
Fields:
  • tool_name: Name of the tool that was invoked
  • result: Tool output (structured data or text)
  • duration_ms: Tool execution time
  • truncated: Whether result was truncated (true if > 2000 chars)
Note: Large tool results are automatically truncated to 2000 characters with UTF-8 safety to prevent overwhelming the streaming connection. The truncated field indicates if this occurred. Full results are always available in the task completion response.
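Clients can use the truncated flag to decide whether the streamed result is complete. A minimal illustrative check (fetching the full result from the completion response is left to the caller):

```python
def streamed_tool_result(event: dict):
    """Return the tool result from a TOOL_OBSERVATION event, or None when the
    result was truncated for streaming and the full result should be read
    from the task completion response instead."""
    data = event.get("data", {})
    if data.get("truncated"):
        return None  # streamed copy is incomplete (>2000 chars server-side)
    return data.get("result")
```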

Pattern Events

Pattern selection and decomposition events are not part of the public streaming schema and are omitted for brevity.

Team Events

Multi-agent team coordination and management.

TEAM_RECRUITED

Emitted: When a team of agents is assembled for execution
Data:
{
  "type": "TEAM_RECRUITED",
  "message": "Team recruited: 3 agents",
  "data": {
    "team_size": 3,
    "agents": [
      {
        "agent_id": "data-analyst",
        "role": "analyst",
        "capabilities": ["data_loading", "analysis"]
      },
      {
        "agent_id": "visualizer",
        "role": "visualization",
        "capabilities": ["charts", "graphs"]
      },
      {
        "agent_id": "writer",
        "role": "report_generation",
        "capabilities": ["summarization", "writing"]
      }
    ]
  }
}

TEAM_RETIRED

Emitted: When a team is disbanded after task completion
Data:
{
  "type": "TEAM_RETIRED",
  "message": "Team retired after task completion",
  "data": {
    "team_size": 3,
    "duration_ms": 45000,
    "total_tokens": 8000,
    "reason": "task_completed"
  }
}

TEAM_STATUS

Emitted: Periodic updates on multi-agent team coordination
Data:
{
  "type": "TEAM_STATUS",
  "message": "Team progress update",
  "data": {
    "active_agents": 2,
    "idle_agents": 1,
    "tasks_completed": 5,
    "tasks_remaining": 3,
    "coordination_mode": "parallel"
  }
}

DEPENDENCY_SATISFIED

Emitted: When task dependencies are resolved and execution can proceed
Data:
{
  "type": "DEPENDENCY_SATISFIED",
  "message": "Dependencies satisfied for subtask-3",
  "data": {
    "subtask_id": "subtask-3",
    "satisfied_dependencies": ["subtask-1", "subtask-2"],
    "can_proceed": true
  }
}

Message Events

Agent-to-agent communication.

MESSAGE_SENT

Emitted: Agent sends message to another agent
Data:
{
  "type": "MESSAGE_SENT",
  "agent_id": "supervisor",
  "message": "Message sent to data-analyst",
  "data": {
    "to": "data-analyst",
    "content": "Please analyze revenue trends",
    "message_type": "DELEGATION"
  }
}

MESSAGE_RECEIVED

Emitted: Agent receives message
Data:
{
  "type": "MESSAGE_RECEIVED",
  "agent_id": "data-analyst",
  "message": "Message received from supervisor",
  "data": {
    "from": "supervisor",
    "content": "Please analyze revenue trends",
    "acknowledged": true
  }
}

LLM Events

Language model interaction events for debugging and monitoring.

LLM_PROMPT

Emitted: When a prompt is sent to the LLM (sanitized for privacy)
Data:
{
  "type": "LLM_PROMPT",
  "message": "Sending prompt to LLM",
  "data": {
    "model": "gpt-5",
    "prompt_length": 1200,
    "max_tokens": 2000,
    "temperature": 0.7,
    "sanitized_prompt": "Analyze the provided data..."
  }
}

LLM_PARTIAL

Emitted: Incremental LLM output chunk during streaming
Data:
{
  "type": "LLM_PARTIAL",
  "message": "Received partial LLM output",
  "data": {
    "chunk": "Based on the analysis",
    "chunk_index": 5,
    "total_tokens_so_far": 50
  }
}

LLM_OUTPUT

Emitted: Final LLM output for a step
Data:
{
  "type": "LLM_OUTPUT",
  "message": "LLM output complete",
  "data": {
    "output": "Analysis complete. Revenue increased 15% YoY...",
    "model": "gpt-5",
    "provider": "openai",
    "usage": {
      "total_tokens": 350,
      "input_tokens": 200,
      "output_tokens": 150
    },
    "cost_usd": 0.0105,
    "duration_ms": 2000
  }
}
Fields:
  • output: Complete LLM response text
  • model: Model used (canonical name)
  • provider: LLM provider (openai, anthropic, google, xai, etc.)
  • usage: OpenAI-compatible usage object containing:
    • total_tokens: Total tokens (input + output)
    • input_tokens: Input/prompt tokens
    • output_tokens: Generated tokens
  • cost_usd: Estimated cost in USD
  • duration_ms: Request duration in milliseconds
Note: Usage metadata follows OpenAI's standard format for all providers, including OpenAI, Anthropic, Google, Groq, xAI, and OpenAI-compatible endpoints, so the usage object can be consumed identically regardless of provider.
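Because every LLM_OUTPUT event carries a usage object and cost_usd, a client can keep running totals per workflow. An illustrative sketch using the field names from the example above:

```python
def aggregate_llm_usage(events) -> dict:
    """Sum token usage and cost across LLM_OUTPUT events in a stream."""
    totals = {"total_tokens": 0, "input_tokens": 0, "output_tokens": 0, "cost_usd": 0.0}
    for event in events:
        if event.get("type") != "LLM_OUTPUT":
            continue  # ignore non-LLM events
        data = event.get("data", {})
        usage = data.get("usage", {})
        totals["total_tokens"] += usage.get("total_tokens", 0)
        totals["input_tokens"] += usage.get("input_tokens", 0)
        totals["output_tokens"] += usage.get("output_tokens", 0)
        totals["cost_usd"] += data.get("cost_usd", 0.0)
    return totals
```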


Progress Events

Task progress and status updates for user feedback.

PROGRESS

Emitted: General progress update during execution
Data:
{
  "type": "PROGRESS",
  "message": "Progress: 60% complete",
  "data": {
    "percentage": 60,
    "current_step": 3,
    "total_steps": 5,
    "current_task": "Generating visualizations"
  }
}

DATA_PROCESSING

Emitted: Agent is processing or analyzing data
Data:
{
  "type": "DATA_PROCESSING",
  "message": "Processing sales data",
  "data": {
    "operation": "data_analysis",
    "records_processed": 15000,
    "total_records": 15000,
    "processing_stage": "aggregation"
  }
}

WAITING

Emitted: Agent is waiting for resources or responses
Data:
{
  "type": "WAITING",
  "message": "Waiting for dependencies",
  "data": {
    "waiting_for": "subtask-2",
    "wait_reason": "dependency_not_satisfied",
    "estimated_wait_seconds": 10
  }
}

System Events

System-level events and errors.

ERROR_OCCURRED

Emitted: System error during execution
Data:
{
  "type": "ERROR_OCCURRED",
  "message": "Database connection failed",
  "data": {
    "error_type": "DATABASE_ERROR",
    "error_message": "Connection timeout",
    "recoverable": true,
    "retry_count": 2,
    "max_retries": 3
  }
}

ERROR_RECOVERY

Emitted: System is recovering from an error
Data:
{
  "type": "ERROR_RECOVERY",
  "message": "Recovering from database connection error",
  "data": {
    "error_type": "DATABASE_ERROR",
    "recovery_action": "retry_connection",
    "attempt": 2,
    "max_attempts": 3,
    "success": true
  }
}

APPROVAL_REQUESTED

Emitted: Human approval needed to proceed
Data:
{
  "type": "APPROVAL_REQUESTED",
  "message": "Approval requested for file system access",
  "data": {
    "approval_id": "appr-123",
    "tool_name": "file_system",
    "operation": "write",
    "risk_level": "HIGH",
    "timeout_seconds": 7200,
    "details": {
      "file_path": "/data/critical.db",
      "action": "delete"
    }
  }
}

APPROVAL_DECISION

Emitted: Human has made an approval decision
Data:
{
  "type": "APPROVAL_DECISION",
  "message": "Approval granted",
  "data": {
    "approval_id": "appr-123",
    "decision": "approved",
    "approved_by": "user-456",
    "timestamp": "2024-10-27T10:05:00Z",
    "comments": "Verified action is necessary"
  }
}
Decision Values:
  • approved - Action allowed to proceed
  • denied - Action blocked
  • timeout - No decision within timeout period

WORKSPACE_UPDATED

Emitted: Working memory/context updated
Data:
{
  "type": "WORKSPACE_UPDATED",
  "message": "Workspace updated",
  "data": {
    "key": "loaded_datasets",
    "value": ["sales_q4.csv"],
    "action": "ADD"
  }
}

ROLE_ASSIGNED

Emitted: Agent role assigned during execution
Data:
{
  "type": "ROLE_ASSIGNED",
  "agent_id": "agent-002",
  "message": "Role assigned: data-analyst",
  "data": {
    "role": "data-analyst",
    "capabilities": ["data_loading", "analysis", "visualization"],
    "tools": ["csv_loader", "pandas", "matplotlib"]
  }
}

DELEGATION

Emitted: Task delegated to another agent
Data:
{
  "type": "DELEGATION",
  "agent_id": "supervisor",
  "message": "Delegated to data-analyst",
  "data": {
    "to_agent": "data-analyst",
    "task": "Analyze revenue trends",
    "priority": "HIGH"
  }
}

BUDGET_THRESHOLD

Emitted: Token budget reaches a warning threshold (typically 80% of limit)
Data:
{
  "type": "BUDGET_THRESHOLD",
  "message": "Budget threshold reached: 80%",
  "data": {
    "threshold_percentage": 80,
    "tokens_used": 8000,
    "tokens_limit": 10000,
    "tokens_remaining": 2000,
    "cost_usd": 0.24,
    "estimated_cost_at_limit": 0.30
  }
}
Fields:
  • threshold_percentage: Warning threshold (e.g., 80)
  • tokens_used: Cumulative tokens consumed so far
  • tokens_limit: Maximum allowed tokens for task
  • tokens_remaining: Tokens left before limit
  • cost_usd: Current cost in USD
  • estimated_cost_at_limit: Projected cost if limit is reached
Usage: Monitor this event to warn users before hitting hard budget limits, allowing graceful degradation or early termination decisions.

Event Ordering

Events are strictly ordered by sequence number (seq):
{"seq": 1, "type": "WORKFLOW_STARTED"}
{"seq": 2, "type": "AGENT_STARTED"}
{"seq": 3, "type": "AGENT_THINKING"}
{"seq": 4, "type": "TOOL_INVOKED"}
{"seq": 5, "type": "LLM_OUTPUT"}
{"seq": 6, "type": "WORKFLOW_COMPLETED"}
Properties:
  • Sequence numbers are monotonically increasing
  • No gaps in sequence (every number from 1 to N)
  • Events from same workflow always ordered correctly

Typical Event Flow (Simplified)

1. WORKFLOW_STARTED
2. AGENT_STARTED
3. AGENT_THINKING
4. TOOL_INVOKED (optional)
5. LLM_OUTPUT (final answer)
6. WORKFLOW_COMPLETED

Event Persistence

Events are stored in:
  • PostgreSQL: Permanent event log
  • Redis: Recent events (hot cache)
  • Real-time: SSE stream
Retrieving Historical Events:
# Get all events for a task
GET /api/v1/tasks/{task_id}/events?limit=1000

Event Reliability and Guarantees

Ordering Guarantees

Shannon provides strict ordering within a single workflow:
  • Events are numbered sequentially (seq field)
  • No gaps in sequence numbers (1, 2, 3, …)
  • Events from the same workflow always arrive in order
  • Events from different workflows may be interleaved

Delivery Guarantees

  • At-least-once delivery: Events may be delivered multiple times (use seq for deduplication)
  • Event persistence: All events stored in PostgreSQL event_logs table
  • Hot cache: Recent events cached in Redis for fast retrieval
  • Historical access: Query past events via REST API
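With at-least-once delivery, consumers should deduplicate on seq before processing. A minimal generator-based sketch:

```python
def deduplicate(events):
    """Yield each event once, dropping redundant deliveries by seq."""
    seen = set()
    for event in events:
        if event["seq"] in seen:
            continue  # duplicate delivery of an already-processed event
        seen.add(event["seq"])
        yield event
```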

Stream Reconnection

If SSE connection drops:
# Reconnect and resume from the last received sequence number
last_seq = 42  # highest seq seen before the connection dropped
for event in client.stream_events(workflow_id, from_seq=last_seq + 1):
    process_event(event)
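A fuller reconnection loop adds backoff between attempts. A sketch, assuming the same hypothetical client.stream_events interface as the snippet above and that a dropped stream raises ConnectionError:

```python
import itertools
import time

def backoff_delays(base=1.0, factor=2.0, cap=30.0):
    """Yield exponentially increasing reconnect delays, capped at `cap` seconds."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

def stream_with_resume(client, workflow_id, process_event):
    """Consume events, resuming from the last seen seq after each drop."""
    last_seq = 0
    for delay in backoff_delays():
        try:
            for event in client.stream_events(workflow_id, from_seq=last_seq + 1):
                last_seq = event["seq"]
                process_event(event)
            return  # stream ended normally (e.g. after STREAM_END)
        except ConnectionError:
            time.sleep(delay)  # simple backoff; no reset on success
```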

Event Retention

| Storage | Retention Period | Purpose | Event Types |
| --- | --- | --- | --- |
| Redis | 24 hours | Real-time streaming | All events |
| PostgreSQL | 90 days (default) | Historical queries | Critical events only* |
| Archival | 1+ years (optional) | Long-term audit | Configurable |

*PostgreSQL Selective Persistence: To optimize database performance, only critical events are persisted to PostgreSQL, including WORKFLOW_COMPLETED, AGENT_COMPLETED, TOOL_INVOKED, LLM_OUTPUT, and ERROR_OCCURRED. Ephemeral events like LLM_PARTIAL, HEARTBEAT, and AGENT_THINKING are excluded from database writes (reducing write load by ~92%) but remain fully available via real-time SSE streaming and the Redis cache. See Database Schema for event storage details.