AI APIs (Spotter Agent and Spotter 3)

ThoughtSpot's Spotter Agent APIs allow users to start a conversation session with Spotter Agent, send queries to explore data, and receive responses synchronously or as a real-time Server-Sent Events (SSE) stream.

Overview

Spotter Agent APIs support conversation sessions with natural language query strings, provide context-aware and guided data analysis, and allow integration with other agentic systems.

The key capabilities of the Spotter APIs include the following:

  • Initiating and managing conversational sessions

  • Processing natural-language queries

  • Generating analytical responses, insights, and visualizations

  • Recommending relevant datasets or data sources

  • Decomposing complex user queries

API endpoints

The AI REST API endpoints listed in the following table provide all the functionality necessary to implement a Spotter 3 conversational experience in your application, from data source discovery through to streaming query responses. The API endpoints introduced for Spotter 2 also support Spotter 3 capabilities as of version 26.2.0.cl. Some of these API endpoints are deprecated in 26.5.0.cl; ThoughtSpot recommends using the new API endpoints instead.

Initialize session

Call the create agent conversation API (/api/rest/2.0/ai/agent/conversation/create) with a data source ID to establish the session context. When auto mode is enabled and no data source ID is specified in the API request, Spotter automatically identifies the appropriate data source.

Execute queries

To execute queries and generate a standard response synchronously, use the Send agent conversation message API (/api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send). The send agent message API (/api/rest/2.0/ai/agent/{conversation_identifier}/converse) is deprecated in 26.5.0.cl and later versions.

Real-time output (streaming)

To stream responses to the application UI in real time, use the POST /api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send/stream API endpoint. The legacy streaming API (/api/rest/2.0/ai/agent/converse/sse) is deprecated in 26.5.0.cl and later versions.

Supported API endpoints

POST /api/rest/2.0/ai/agent/conversation/create
Creates a conversation session with the Spotter agent to generate Answers for the specified data context. Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards. Breaking changes introduced in 26.5.0.cl.

POST /api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send New
Sends natural language messages to an existing Spotter agent conversation and returns the complete response synchronously. Replaces /api/rest/2.0/ai/agent/{conversation_identifier}/converse.

POST /api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send/stream New
Sends one or more natural language messages to an existing Spotter agent conversation and returns the response as a real-time Server-Sent Events (SSE) stream. Replaces /api/rest/2.0/ai/agent/{conversation_identifier}/converse.

POST /api/rest/2.0/ai/data-source-suggestions Beta
Returns a list of relevant data sources, such as Models, based on a query, helping users and agents choose the most appropriate data source for analytics.
Available on ThoughtSpot Cloud instances from 10.15.0.cl onwards.

POST /api/rest/2.0/ai/relevant-questions/ Beta
Decomposes a user query into relevant sub-questions. Guides users to explore data more deeply for a comprehensive analysis.
Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards.

POST /api/rest/2.0/ai/agent/converse/sse Deprecated
Legacy API endpoint for streaming responses, including tokens and visualizations, for a specific conversation context. Deprecated in 26.5.0.cl.

POST /api/rest/2.0/ai/agent/{conversation_identifier}/converse Deprecated
Legacy API endpoint to send natural language queries to a conversation session with Spotter agent.
Deprecated in 26.5.0.cl.

Create a conversation session with Spotter Agent

The /api/rest/2.0/ai/agent/conversation/create API endpoint creates a new conversation session with Spotter Agent for a single data source or a multi-data-source context and returns a conversation ID.

Request parameters

The request body must include the metadata_context. REST API clients must have at least view access to the data source objects specified in the API request to create a conversation session and use it for subsequent queries.

metadata_context

Defines the data context for the conversation.

  • type
    Metadata context type. The context type is mandatory. Select one of the following values:

    • AUTO_MODE to allow Spotter Agent to automatically discover and select the most relevant datasets for the user's queries.

    • DATA_SOURCE to set a specific data source as the data context. You must specify data_source_context and data source IDs.
      To set a specific data source object, use data_source_identifier.
      To set multi-data context, use data_source_identifiers.

    • data_source Deprecated
      This option is deprecated in 26.5.0.cl. ThoughtSpot recommends using DATA_SOURCE with data_source_context and data source IDs instead.

conversation_settings

Optional. Defines additional parameters for the conversation context. You can set any of the following attributes as needed:

  • enable_contextual_change_analysis
    Boolean. When enabled, Spotter analyzes how context changes over time; that is, it compares results across different queries. Enabled by default in 26.2.0.cl and later versions.

  • enable_natural_language_answer_generation
    Boolean. Allows sending natural language queries to the conversation session. Enabled by default in 26.2.0.cl and later versions.

  • enable_reasoning
    Boolean. Allows Spotter to use reasoning for deep analysis and precise responses. Enabled by default in 26.2.0.cl and later versions.

  • enable_save_chat
    When set to true, adds the conversation to chat history.

Example request

With AUTO_MODE for metadata context
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "type": "AUTO_MODE"
  },
  "conversation_settings": {
    "enable_save_chat": true
  }
}'
For a single data source as the data context
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "type": "DATA_SOURCE",
    "data_source_context": {
      "data_source_identifier": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
    }
  },
  "conversation_settings": {}
}'
For multi-data source context
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "type": "DATA_SOURCE",
    "data_source_context": {
      "data_source_identifiers": [
        "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
        "b2c3d4e5-f6a7-8901-bcde-f12345678901"
      ]
    }
  },
  "conversation_settings": {
    "enable_save_chat": true
  }
}'

API response

If the API request is successful, the API returns the conversation ID and identifier in the response body.

{
  "conversation_id": "wwHQ5j8O8dQC",
  "conversation_identifier": "wwHQ5j8O8dQC"
}
  • conversation_identifier
    Use this for all subsequent message calls.

  • conversation_id Deprecated
    Returns the same value as conversation_identifier.
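In Python, a client might capture the identifier defensively, preferring conversation_identifier and falling back to the deprecated field when handling older responses; a minimal sketch (the helper name is illustrative):

```python
def get_conversation_identifier(response_body: dict) -> str:
    """Return the identifier to use for subsequent message calls.

    Prefers conversation_identifier; falls back to the deprecated
    conversation_id field returned by older responses.
    """
    identifier = response_body.get("conversation_identifier") or response_body.get("conversation_id")
    if not identifier:
        raise ValueError("response contains no conversation identifier")
    return identifier

# Example payload from the create-conversation response shown above
body = {"conversation_id": "wwHQ5j8O8dQC", "conversation_identifier": "wwHQ5j8O8dQC"}
print(get_conversation_identifier(body))  # wwHQ5j8O8dQC
```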

Send queries to a conversation session

To send queries to an ongoing conversation session with the Spotter agent and receive a response synchronously, use the /api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send API endpoint.

This API operation requires the conversation ID obtained from the conversation creation API endpoint (/api/rest/2.0/ai/agent/conversation/create). The user making the API request must have access to the conversation session. The API request body must include at least one message in natural language format.

Request parameters

conversation_identifier

Path parameter

String. Required. Specify the conversation ID received from the POST /api/rest/2.0/ai/agent/conversation/create API call.

messages

Form parameter

Array of strings. Required. Specify at least one query in natural language. For example, total sales of jackets last month.

Request and response examples

The following example sends a data comparison query to a conversation session. The conversation ID is specified in the request URL as a path parameter.

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}'  \
  --data-raw '{
  "messages": [
    "Sales in 2025 vs 2024"
  ]
}'

If the request is successful, the API returns an array of objects in the response. The messages in the API response include the following parts:

[
  {
    "type": "text",
    "text": "\n\nI'll compare sales between 2025 and 2024. First, let me get the dataset context.",
    "metadata": {},
    "internal": {},
    "agent_context": ""
  },
  {
    "type": "text",
    "text": "```json\n{\"dataset_name\":\"(Sample) Retail - Apparel\",\"columns\":[{\"name\":\"sales\",\"type\":\"MEASURE\"},{\"name\":\"date\",\"type\":\"ATTRIBUTE\"}]}\n```",
    "metadata": {},
    "internal": {},
    "agent_context": ""
  },
  {
    "type": "answer",
    "title": "Compare total sales for 2025 vs 2024",
    "description": "",
    "session_id": "842bb67a-e08e-4861-97e8-8db9538db51d",
    "gen_no": 2,
    "sage_query": "[sales] [date] = '2025' vs [date] = '2024'",
    "tml_tokens": ["[sales]", "[date] = '2025' vs [date] = '2024'"],
    "formulas": [],
    "parameters": [],
    "subqueries": [],
    "viz_suggestion": "CAEQIBomEiQ2NjE5NzI0Yy1kMjVlLTU4MDItOWNjOC1jNDA3MWY3OWY5MzAoATIA",
    "metadata": {
      "output": "<base64-encoded-protobuf-output>",
      "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "chart_type": "KPI",
      "interrupted": false,
      "data_awareness_enabled": true
    },
    "internal": {}
  },
  {
    "type": "text",
    "text": "\n\nThe visualization shows year-over-year comparison. You can identify growth or decline trends.",
    "metadata": {},
    "internal": {},
    "agent_context": ""
  }
]

The following example sends a follow-up question to the same conversation session.

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}'  \
  --data-raw '{
  "messages": [
    "Now break that down by product category"
  ]
}'

If the request is successful, the agent returns the response for the follow-up question:

[{
    "type": "text",
    "text": "I'll add product category to the comparison.",
    "metadata": {},
    "internal": {},
    "agent_context": ""
  },
  {
    "type": "answer",
    "title": "Sales by Product Category: 2025 vs 2024",
    "session_id": "9abc1234-0000-0000-0000-000000000005",
    "gen_no": 3,
    "sage_query": "[sales] [product category] [date] = '2025' vs [date] = '2024'",
    "tml_tokens": ["[sales]", "[product category]", "[date] = '2025' vs [date] = '2024'"],
    "formulas": [],
    "parameters": [],
    "subqueries": [],
    "viz_suggestion": "",
    "metadata": {
      "chart_type": "BAR",
      "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"
    },
    "internal": {}
  }]

In each response, the agent returns the following information:

  • type
    Type of the message, such as text, answer, or error.

  • text
    Response message generated for the query.

  • metadata
    Additional information based on the message type. For example, answer metadata, chart type, or the data source ID.

  • tml_tokens
    Query string broken down as TML tokens.
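A client can dispatch on the type field to separate narrative text from Answer objects; a minimal Python sketch using only field names shown in the example responses above (the function name is illustrative):

```python
def split_messages(messages: list[dict]) -> tuple[list[str], list[dict]]:
    """Separate a synchronous send response into text parts and answer objects.

    Raises on messages of type "error" so callers can react, for example
    by creating a new conversation session.
    """
    texts, answers = [], []
    for msg in messages:
        kind = msg.get("type")
        if kind == "text":
            texts.append(msg.get("text", ""))
        elif kind == "answer":
            answers.append(msg)
        elif kind == "error":
            raise RuntimeError(f"{msg.get('code')}: {msg.get('message')}")
    return texts, answers

# Shapes taken from the example response above
messages = [
    {"type": "text", "text": "I'll compare sales between 2025 and 2024.", "metadata": {}},
    {"type": "answer", "title": "Compare total sales for 2025 vs 2024", "metadata": {"chart_type": "KPI"}},
]
texts, answers = split_messages(messages)
print(len(texts), len(answers))  # 1 1
```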

In case of errors, the response returns the error details:

[{
    "type": "error",
    "message": "The conversation session has expired. Please create a new conversation.",
    "code": "SESSION_EXPIRED"
}]

Send a query to agent and get streaming responses

To send queries to an ongoing conversation session with Spotter agent and receive streaming responses, use the /api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send/stream API endpoint. This API endpoint uses the SSE protocol to deliver data incrementally in real time, rather than waiting for the entire response to be generated before sending it to the client.

The /api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send/stream API can be used as an integrated tool for real-time streaming of conversational interactions between agents and the ThoughtSpot backend.

Request parameters

conversation_identifier

String. Specify the conversation ID received from the POST /api/rest/2.0/ai/agent/conversation/create API call.

messages

Array of strings. Include at least one natural language query. For example, Sales data for Jackets, Top performing products in the west coast.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send/stream'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "conversation_identifier": "h2I_pTGaRQof",
  "messages": [
    "Net sales of Jackets"
  ]
}'

API response

If the API request is successful, the response includes a stream of events, each containing a partial or complete message from the AI agent, rather than a single JSON object.

Each event is a simple text-based message in the format data: <your_data>\n\n: each message sent from the server to the client is prefixed with the data: keyword, followed by the actual payload (<your_data>), and terminated by two newline characters (\n\n).

The API uses this format so that the clients can reconstruct the AI-generated response as it streams in, chunk by chunk, and show the responses in real-time. In agentic workflows, the receiving client or agent listens to the SSE stream, parses each event, and assembles the full response for its users.
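A client reading the raw stream can recover the event objects by filtering data: lines and parsing each payload; a minimal Python sketch (the parser name is illustrative, and real clients reading from a socket should also buffer partial lines):

```python
import json

def parse_sse_events(raw: str) -> list[dict]:
    """Parse a Server-Sent Events payload into a flat list of event objects.

    Each event line is "data: <payload>", where the payload is a JSON
    array of event objects; events are separated by blank lines.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separator lines and comments
        payload = json.loads(line[len("data:"):].strip())
        # Payloads are JSON arrays of event objects; tolerate single objects
        events.extend(payload if isinstance(payload, list) else [payload])
    return events

# Lines taken from the example response below
stream = (
    'data: [{"type":"ack","node_id":"aGxzcFVrtom8"}]\n'
    '\n'
    'data: [{"type":"conv_title","title":"Sales 2025 vs 2024","conv_id":"-XIi04l5rrof"}]\n'
)
print([e["type"] for e in parse_sse_events(stream)])  # ['ack', 'conv_title']
```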

Example response

If the request is valid, the API returns an SSE stream. Each line has the form data: [{"type": "...", ...}], a JSON array of event objects.

data: [{"type":"ack","node_id":"aGxzcFVrtom8"}]

data: [{"type":"conv_title","title":"Sales 2025 vs 2024","conv_id":"-XIi04l5rrof"}]

data: [{"type":"notification","group_id":"cDEsAQbSnd3J","metadata":{"type":"thinking","tool_title":"Analyzing Sales Performance: 2025 vs 2024"},"code":"TOOL_CALL_NOTIFICATION"}]

data: [{"id":"mNAdvy-NK2l6","type":"text-chunk","group_id":"cDEsAQbSnd3J","metadata":{"format":"markdown","type":"thinking"},"content":"\n\nI need to compare sales performance between 2025 and 2024."}]

data: [{"type":"notification","group_id":"m1MTvttEUa7o","code":"nls_start"}]

data: [{"id":"hxWMDP-pgR3B","type":"answer","group_id":"m1MTvttEUa7o","metadata":{"sage_query":"[sales] [date] = '2025' vs [date] = '2024'","session_id":"431adcf9-1328-4d8c-81a1-0faa7fa37ba6","title":"Compare sales for 2025 vs 2024"},"title":"Compare sales for 2025 vs 2024"}]

data: [{"type":"notification","code":"FINAL_RESPONSE_NOTIFICATION"}]

To receive the complete response in a single payload, use the synchronous send API (/api/rest/2.0/ai/agent/conversation/{conversation_identifier}/send) instead.

SSE event types

The SSE event types streamed in the API response include:

  • ack
    Confirms receipt of the request. For example, data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}] indicates that the server has received and acknowledged the client's request.

  • conv_title
    Conversation title (title, conv_id).

  • notification
    Progress or status update (group_id, metadata, code). For example, TOOL_CALL_NOTIFICATION, nls_start, FINAL_RESPONSE_NOTIFICATION.

  • text
    Complete text block, typically in markdown format, with the same structure as text-chunk.

  • text-chunk
    Text fragments streamed incrementally, often in markdown (id, group_id, metadata with format). The content field carries the text sent incrementally; for example, "I", "understand", "you're", "interested", "in", "the", "net", "sales", and so on.

  • answer
    Structured Answer with metadata (id, group_id, metadata with sage_query, session_id, title, and more).

  • error
    Returned in case of failures.

  • *-interrupt
    Sent if the generation was stopped mid-stream.

Every event also carries a group_id that groups related events together, and a metadata.type field that can be thinking or text.

For more information and examples, see SSE event payload reference.

Thinking versus output events

Spotter responses have two phases:

  • A thinking phase, where the AI reasons through the query and calls internal tools.

  • An output phase containing the final response delivered to the user.

Events in the thinking phase carry "metadata": { "type": "thinking" }. All other events are final output.

Every event includes a group_id. Events sharing the same group_id belong together. During the thinking phase, each tool call gets its own group_id. A FINAL_RESPONSE_NOTIFICATION notification marks the boundary between the thinking and output phases.

THINKING PHASE
───────────────────────────────────────────────────────────
ack

β”Œβ”€ group_id: g1 ── Tool Call 1 ("Searching data") ─────────┐
β”‚  notification  (thinking, TOOL_CALL_NOTIFICATION)        β”‚
β”‚  text-chunk    (thinking)                                β”‚
β”‚  answer        (thinking)                                β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

β”Œβ”€ group_id: g2 ── Tool Call 2 ("Running code") ───────────┐
β”‚  notification  (thinking, TOOL_CALL_NOTIFICATION)        β”‚
β”‚  text-chunk    (thinking)                                β”‚
β”‚  text-chunk    (thinking)                                β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

notification (FINAL_RESPONSE_NOTIFICATION)  ←── boundary
────────────────────────────────────────────────────────────

OUTPUT PHASE
────────────────────────────────────────────────────────────
β”Œβ”€ group_id: g3 ────────────────────────────────────────────┐
β”‚  text      "Here are the results:"                        β”‚
β”‚  answer    (final visualization)                          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
[stream closes]
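The phase split above can be reproduced client-side from the metadata.type marker and the boundary notification; a minimal Python sketch over already-parsed event dictionaries (the function name is illustrative):

```python
def split_phases(events: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split parsed SSE events into thinking-phase and output-phase lists.

    Thinking events carry metadata.type == "thinking"; a
    FINAL_RESPONSE_NOTIFICATION marks the start of the output phase.
    """
    thinking, output = [], []
    seen_boundary = False
    for event in events:
        if event.get("code") == "FINAL_RESPONSE_NOTIFICATION":
            seen_boundary = True
            continue  # the boundary notification itself carries no content
        is_thinking = event.get("metadata", {}).get("type") == "thinking"
        if is_thinking and not seen_boundary:
            thinking.append(event)
        else:
            output.append(event)
    return thinking, output
```

A UI can render the thinking list as a collapsible progress panel and the output list as the final response.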

Notification codes reference

  • QH
    Query handling started.

  • TML_GEN / TML_GEN_RETRY
    Generating or retrying TML.

  • ANSWER_GEN
    Generating an answer.

  • IDENTIFYING_ATTRIBUTES
    Identifying data attributes.

  • PERFORMING_CHANGE_ANALYSIS
    Running change analysis.

  • PERFORMING_FORECASTING_ANALYSIS
    Running forecasting.

  • SUMMARIZING_RESULTS
    Summarizing results.

  • TOOL_CALL_NOTIFICATION
    Tool invocation (during the thinking phase).

  • FINAL_RESPONSE_NOTIFICATION
    Marks the transition from thinking to output.

  • search_datasets_start / search_datasets_end
    Data source discovery in progress or complete.

  • approval_required
    An external tool requires user permission before proceeding.

SSE event payload reference

ack

data: {
  "type": "ack",
  "group_id": "a1b2c3",
  "id": "evt-001",
  "node_id": "resp-node-abc"
}

notification (thinking - tool call)

data: {
  "type": "notification",
  "group_id": "g1",
  "id": "evt-002",
  "code": "TOOL_CALL_NOTIFICATION",
  "message": "Searching for relevant data",
  "metadata": {
    "type": "thinking",
    "tool_title": "Searching sales data",
    "tool_code": "RUNNING_CODE_EXECUTION",
    "tool_name": "code_interpreter"
  }
}

notification (thinking - external tool with MCP integration)

data: {
  "type": "notification",
  "group_id": "g2",
  "id": "evt-003",
  "code": "TOOL_CALL_NOTIFICATION",
  "message": "Querying Salesforce",
  "metadata": {
    "type": "thinking",
    "tool_title": "Salesforce: Get Opportunities",
    "tool_name": "get_opportunities",
    "integration_id": "int-sf-123",
    "integration_name": "Salesforce"
  }
}

notification (approval required)

Sent when an external MCP tool requires explicit user permission before proceeding. Your application should prompt the user to approve or deny the action before continuing.

data: {
  "type": "notification",
  "group_id": "g2",
  "id": "evt-005",
  "code": "approval_required",
  "metadata": {
    "request_id": "perm-req-789",
    "integration_id": "int-sf-123",
    "integration_name": "Salesforce",
    "tool_name": "get_opportunities",
    "annotated_title": "Access Salesforce Opportunities"
  }
}

notification (FINAL_RESPONSE_NOTIFICATION)

data: {
  "type": "notification",
  "group_id": "g1",
  "id": "evt-004",
  "code": "FINAL_RESPONSE_NOTIFICATION",
  "message": ""
}

text

data: {
  "type": "text",
  "group_id": "g3",
  "id": "evt-007",
  "content": "Here is the total revenue breakdown by region for Q4 2025:\n\n- **North America:** $4.2M\n- **EMEA:** $2.8M\n- **APAC:** $1.5M"
}

text-chunk

Multiple chunks sharing the same id should be appended together to reconstruct the full text item.

data: { "type": "text-chunk", "group_id": "g3", "id": "evt-009", "content": "Based on the analysis, " }
data: { "type": "text-chunk", "group_id": "g3", "id": "evt-009", "content": "revenue grew 12% quarter-over-quarter." }
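A minimal Python sketch of this reassembly, grouping parsed chunk events by id in arrival order (the function name is illustrative):

```python
def reassemble_chunks(events: list[dict]) -> dict[str, str]:
    """Concatenate text-chunk events that share the same id, in arrival order."""
    parts: dict[str, list[str]] = {}
    for event in events:
        if event.get("type") == "text-chunk":
            parts.setdefault(event["id"], []).append(event.get("content", ""))
    return {chunk_id: "".join(pieces) for chunk_id, pieces in parts.items()}

# The two chunk events from the example above
chunks = [
    {"type": "text-chunk", "group_id": "g3", "id": "evt-009", "content": "Based on the analysis, "},
    {"type": "text-chunk", "group_id": "g3", "id": "evt-009", "content": "revenue grew 12% quarter-over-quarter."},
]
print(reassemble_chunks(chunks)["evt-009"])
# Based on the analysis, revenue grew 12% quarter-over-quarter.
```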

answer

When an answer event is received, the session_id and gen_no fields are returned. You can export the visualization data using the Export Answer Report API to process the results. This allows users to download the answer as a PDF, PNG, CSV, or XLSX file.

data: {
  "type": "answer",
  "group_id": "g3",
  "id": "evt-010",
  "title": "Revenue by Region Q4 2025",
  "metadata": {
    "session_id": "sess-abc-123",
    "gen_no": 1,
    "transaction_id": "txn-456",
    "worksheet_id": "ws-def-789",
    "cached": false,
    "is_hidden": false
  }
}

search_datasets

Emitted as a start/end pair during Auto mode data source discovery.

data: { "type": "search_datasets", "group_id": "g0", "id": "evt-012", "code": "search_datasets_start", "metadata": {} }

data: {
  "type": "search_datasets",
  "group_id": "g0",
  "id": "evt-013",
  "code": "search_datasets_end",
  "metadata": {
    "data_sources": [
      { "worksheet_id": "ws-1", "worksheet_name": "Sales Data", "confidence": "high", "reasoning": "Contains revenue columns" },
      { "worksheet_id": "ws-2", "worksheet_name": "Marketing Data", "confidence": "low", "reasoning": "No revenue columns" }
    ],
    "auto_selected": { "worksheet_id": "ws-1", "worksheet_name": "Sales Data", "confidence": "high", "reasoning": "Best match" }
  }
}

file

data: {
  "type": "file",
  "group_id": "g3",
  "id": "evt-014",
  "files": [
    { "ts_file_id": "file-abc-001", "display_name": "quarterly_report.csv", "file_type": "csv", "created_at": "2025-11-15T10:30:00Z" },
    { "ts_file_id": "file-abc-002", "display_name": "chart.png", "file_type": "png", "created_at": "2025-11-15T10:30:01Z" }
  ],
  "metadata": { "conv_id": "conv-123" }
}

conv_title

data: {
  "type": "conv_title",
  "group_id": "g0",
  "id": "evt-015",
  "title": "Revenue Analysis Q4 2025",
  "conv_id": "conv-123"
}

error

data: {
  "type": "error",
  "group_id": "g3",
  "id": "evt-016",
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Too many requests",
  "display_message": "You've exceeded the rate limit. Please try again in a few minutes."
}

agent-interrupt

Sent when generation is stopped mid-stream.

data: {
  "type": "notification",
  "group_id": "g3",
  "id": "evt-017",
  "code": "agent-interrupt",
  "message": "Generation stopped"
}

Process results generated from a conversation session

To export or download the Answer data generated by the Spotter APIs, use the Answer report API.

The session_id and gen_no values from the answer event metadata are required to identify the answer to export.

Note
Requires at least view access to the Answer.
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/report/answer' \
  -H 'Authorization: Bearer {Bearer_token}' \
  -H 'Accept: application/octet-stream' \
  -H 'Content-Type: application/json' \
  --data-raw '{
  "session_identifier": "sess-abc-123",
  "generation_number": 1,
  "file_format": "CSV"
}'

The file_format parameter accepts PDF, PNG, CSV, or XLSX.
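The wiring between an SSE answer event and this export call can be sketched as a small payload builder in Python (the function name and validation are illustrative; field names come from the examples above):

```python
VALID_FORMATS = {"PDF", "PNG", "CSV", "XLSX"}

def build_report_request(answer_event: dict, file_format: str = "CSV") -> dict:
    """Build the Answer report API request body from an SSE answer event.

    Pulls session_id and gen_no out of the event metadata and validates
    the requested file format.
    """
    fmt = file_format.upper()
    if fmt not in VALID_FORMATS:
        raise ValueError(f"unsupported file_format: {file_format}")
    meta = answer_event["metadata"]
    return {
        "session_identifier": meta["session_id"],
        "generation_number": meta["gen_no"],
        "file_format": fmt,
    }
```

The returned dictionary can be serialized as the JSON body of the POST /api/rest/2.0/report/answer call shown above.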

Note

Using tokens generated by the Spotter API in a Search Data API request can return invalid column errors, because these tokens may reference formulas or columns not present in the data model. Instead, use the Answer report API and include the session ID and generation number obtained from the Spotter API in your API request to retrieve the data.

Data literacy and query assistance

The query assistance APIs help users find the appropriate dataset for a given query string, suggest what questions can be asked, and return example questions. These APIs are specifically designed to improve data literacy for users who may not be familiar with the underlying data, making it easier for them to explore and analyze data effectively.

Get data source suggestions

The POST /api/rest/2.0/ai/data-source-suggestions API provides relevant data source recommendations for a user-submitted natural language query. To use this API, you must have at least view access to the underlying metadata object referenced in the response.

Request parameters

metadata_context

Required. Specify one of the following attributes to set the metadata context:

  • data_source_identifiers
    Array of strings. IDs of the data source object such as Models.

  • answer_identifiers
    Array of strings. GUIDs of the Answer objects that you want to use as metadata.

  • conversation_identifier
    String. ID of the conversation session.

  • liveboard_identifiers
    Array of strings. GUIDs of the Liveboards that you want to use as metadata.

query

String. Required. Specify the natural language query for which you want data source recommendations.

limit_relevant_questions
Optional

Integer. Sets a limit on the number of sub-questions to return in the response. Default is 5.

bypass_cache
Optional

Boolean. When set to true, disables cache and forces fresh computation.

ai_context
Optional

Additional context to guide the response. Define the following attributes as needed:

  • instructions
    Array of strings. Custom user instructions to influence how the AI interprets and processes the query.

  • content
    Array of strings. Additional input such as raw text or CSV-formatted data to enhance context and answer quality.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/data-source-suggestions'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "data_source_identifiers": [
      "cd252e5c-b552-49a8-821d-3eadaa049cca"
    ]
  },
  "query": "Net sales of Jackets in west coast",
  "limit_relevant_questions": 3
}'

API response

If the API request is successful, ThoughtSpot returns a ranked list of data sources, each annotated with relevant reasoning.

{
  "relevant_questions": [
    {
      "query": "What is the trend of sales by type over time?",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales by item",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales across regions",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    }
  ]
}

The returned results include metadata such as:

  • confidence
    A float indicating the Model’s confidence in the relevance of each recommendation.

  • details
    The data source ID, name, and description for each recommended data source.

  • reasoning
    Reason provided by the LLM to explain why each data source was recommended.

Get relevant questions

The /api/rest/2.0/ai/relevant-questions/ API endpoint breaks down a user-submitted query into relevant sub-questions. It accepts the original query and optional additional context, then generates a set of related questions to help users explore their data comprehensively.

During agentic interactions, this API can be used as an integrated tool to decompose user queries and suggest relevant questions for a specific data context. REST clients can also call this API directly to fetch relevant questions via a POST request.

Request parameters

metadata_context

Required. Specify one of the following attributes to set the metadata context:

  • data_source_identifiers
    Array of strings. IDs of the data source object such as Models.

  • answer_identifiers
    Array of strings. GUIDs of the Answer objects that you want to use as metadata.

  • conversation_identifier
    String. ID of the conversation session.

  • liveboard_identifiers
    Array of strings. GUIDs of the Liveboards that you want to use as metadata.

query

String. Required parameter. Specify the query string that needs to be decomposed into smaller, analytical sub-questions.

limit_relevant_questions
Optional

Integer. Sets a limit on the number of sub-questions to return in the response. Default is 5.

bypass_cache
Optional

Boolean. When set to true, disables cache and forces fresh computation.

ai_context
Optional.

Additional context to guide the response. Define the following attributes as needed:

  • instructions
    Array of strings. Custom user instructions to influence how the AI interprets and processes the query.

  • content
    Array of strings. Additional input such as raw text or CSV-formatted data to enhance context and answer quality.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/relevant-questions/'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "data_source_identifiers": [
      "cd252e5c-b552-49a8-821d-3eadaa049cca"
    ]
  },
  "query": "Net sales of Jackets in west coast",
  "limit_relevant_questions": 3
}'

Example response

If the request is successful, the API returns a set of questions related to the query and metadata context in the relevant_questions array. Each object in the relevant_questions array contains the following fields:

  • query
    A string containing the natural language (NL) sub-question.

  • data_source_identifier
    GUID of the data source object.

  • data_source_name
    Name of the associated data source object.

{
  "relevant_questions": [
    {
      "query": "What is the trend of sales by type over time?",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales by item",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales across regions",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    }
  ]
}

Additional resources

© 2026 ThoughtSpot Inc. All Rights Reserved.