Spotter AI APIs

ThoughtSpot provides a set of Spotter AI APIs [Beta] to create a conversation session with Spotter, ask follow-up questions, and generate Answers for analytic queries. These APIs collectively enable natural language interaction, context-aware analytics, and guided data analysis.

Note

The Spotter AI APIs are in beta and disabled by default on ThoughtSpot instances. To enable these APIs on your instance, contact ThoughtSpot Support.

Overview

The AI APIs Beta enable agentic conversational analytics by allowing users and systems to interact with data using natural language. Each of these APIs serves a specific function:

Purpose | API endpoints

Create a conversation session

  • POST /api/rest/2.0/ai/agent/conversation/create
    Creates a new conversation session with the Spotter AI agent from a data context such as an Answer, Liveboard, or Model.

  • POST /api/rest/2.0/ai/conversation/create
    Creates a conversation session for a given data source. This is a legacy API endpoint and will be deprecated in an upcoming release version.

Get relevant questions

  • POST /api/rest/2.0/ai/relevant-questions/
    Breaks down a user-submitted query into a series of analytical sub-questions using relevant contextual metadata. Provides a list of recommended or relevant questions for a given data source and conversation context so that users can explore their data further.
    Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards.

Send queries to a conversation session

  • POST /api/rest/2.0/ai/agent/converse/sse (Recommended for agentic workflows)
    Allows sending a natural language query or a follow-up question to an ongoing conversation session and returns the AI agent’s response, including answers, tokens, and visualization details.
    Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards.

  • POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse
    Allows sending a follow-up message to an ongoing conversation within the context of the metadata model. This is a legacy API and will be deprecated in an upcoming release version.

Generate a single answer

  • POST /api/rest/2.0/ai/answer/create
    Allows users to submit a natural language search query and fetch an AI-generated response.

Get data source suggestions

  • POST /api/rest/2.0/ai/data-source-suggestions
    Returns a list of relevant data sources, such as Models, based on a query, helping users and agents choose the most appropriate data source for analytics.
    Limited availability on ThoughtSpot Cloud instances from 10.13.0.cl onwards. Contact ThoughtSpot Support to enable this feature on your instance.

Create a conversation session

A conversation session acts as a container for maintaining continuity across user inputs, system responses, and agent-driven clarifications. Once created, users can send queries or ask follow-up questions to the conversation session to explore data and get further insights.

The following AI API endpoints allow you to initiate a conversation session with Spotter:

Create a conversation session with Spotter agent

The /api/rest/2.0/ai/agent/conversation/create API endpoint allows you to initiate a new conversation session with ThoughtSpot’s AI agent. Developers and system integrators embedding Spotter into agentic workflows, custom applications, or internal MCP (Model Context Protocol) servers can use this API endpoint to create a conversation session from different data contexts such as Answers, Liveboards, or Models.

Note

Clients must have at least view access to the objects specified in the API request to create a conversation context and use it for subsequent queries.

Request parameters

To set the context for the conversation session, you must specify the metadata type and context in the POST request body. Optionally, you can also define additional parameters to refine the data context and generate accurate and precise responses.

Form parameter | Description

metadata_context

Defines the data context for the conversation. Specify the following values:

  • type
    Metadata type. Valid values are:

    • answer - To use an existing Spotter-generated Answer as the data object

    • liveboard - To use an existing Liveboard as the data object

    • data_source - To create a new conversation session using a data source object such as a Model

  • answer_context
    If the metadata type is set as answer, specify the following attributes:

    • session_identifier: String. Unique ID representing the answer session.

    • generation_number: Integer. Specific generation/version number of the answer within a conversation session.

      The session identifier and generation numbers are assigned to the Answers generated from the Spotter AI REST APIs. These properties serve as the ID of the AI-generated Answer within the ThoughtSpot system.

  • liveboard_context
    If the metadata type is set as liveboard, specify the GUID of the Liveboard and visualization.

  • data_source_context
    If the metadata type is set as data_source, specify the GUID of the data source object.

conversation_settings

Optional. Defines additional parameters for the conversation context. You can set any of the following attributes as needed:

  • enable_contextual_change_analysis
    Boolean. When enabled, Spotter analyzes how context changes over time, that is, comparing results from different queries.

  • enable_natural_language_answer_generation
    Boolean. Allows sending natural language queries to the conversation session.

  • enable_reasoning
    Boolean. Allows Spotter to use reasoning for deep analysis and precise responses.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "type": "data_source",
    "data_source_context": {
      "guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"
    }
  },
  "conversation_settings": {
    "enable_contextual_change_analysis": false,
    "enable_natural_language_answer_generation": true,
    "enable_reasoning": false
  }
}'

API response

If the API request is successful, the API returns the conversation ID. You can use this ID to send follow-up questions to the conversation session.

{"conversation_id":"q9tZYf_6WnFC"}

Note the conversation ID for further agentic interactions and API calls.
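As a sketch, the request body from the curl example above can also be assembled programmatically before sending it with any HTTP client. The helper name below is illustrative (not part of the API), and the GUID is a placeholder for your Model's GUID.

```python
import json

# Illustrative helper (not part of the API): builds the JSON body for
# POST /api/rest/2.0/ai/agent/conversation/create with a data_source context.
def build_create_body(data_source_guid, enable_reasoning=False):
    return json.dumps({
        "metadata_context": {
            "type": "data_source",
            "data_source_context": {"guid": data_source_guid},
        },
        "conversation_settings": {
            "enable_contextual_change_analysis": False,
            "enable_natural_language_answer_generation": True,
            "enable_reasoning": enable_reasoning,
        },
    })

# Placeholder GUID; substitute the GUID of your data source object.
body = build_create_body("cd252e5c-b552-49a8-821d-3eadaa049cca")
```

Send the resulting string as the request payload with the `Authorization: Bearer {AUTH_TOKEN}` header, exactly as in the curl example above.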

Create a conversation session (legacy API endpoint)

To create a conversation session, send a POST request body with the data source ID and search token string to the /api/rest/2.0/ai/conversation/create API endpoint.

Request parameters

Form parameter | Description

metadata_identifier

String. Required. Specify the GUID of a data source object, such as a ThoughtSpot Model. The metadata object specified in the API request is used as the data source for the conversation.

tokens
Optional

String. To set the context for the conversation, you can specify a set of keywords as a token string. For example, [sales],[item type],[state].

Example requests

With tokens
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
  "tokens": "[sales],[item type],[Jackets]"
}'
Without tokens
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'

API response

If the API request is successful, a conversation identifier is created. Note the GUID of the conversation and use it when sending follow-up queries.

{"conversation_identifier":"98f9b8b0-6224-4f9d-b61c-f41307bb6a89"}
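The optional tokens parameter shown above is just each keyword wrapped in brackets and joined with commas; a hypothetical one-line helper (not part of the API) can build it:

```python
# Hypothetical helper: format keywords as the bracketed token string
# expected by the "tokens" parameter, e.g. "[sales],[item type],[Jackets]".
def format_tokens(keywords):
    return ",".join("[%s]" % kw for kw in keywords)

format_tokens(["sales", "item type", "Jackets"])  # "[sales],[item type],[Jackets]"
```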

Get relevant questions

To discover follow-up or related questions that can be asked of a data model, ThoughtSpot provides the /api/rest/2.0/ai/relevant-questions/ REST API endpoint. This API endpoint supports both agentic workflows and direct user interaction, and generates contextually relevant questions for a given data context and user query.

The /api/rest/2.0/ai/relevant-questions/ API is exposed as the getRelevantQuestions tool in ThoughtSpot’s MCP server implementation. The MCP server can call this API directly to fetch relevant questions, which can then be used to generate reports or for further analysis and interactions. For more information, see MCP server integration.

You can also call this API directly from your REST client to fetch relevant questions by making a POST request. The API breaks the user-submitted query into a structured set of analytical sub-questions and returns these in the API response.

Request parameters

Parameter | Description

metadata_context

Required. Specify one of the following attributes to set the metadata context:

  • data_source_identifiers
    Array of strings. GUIDs of data source objects such as Models.

  • answer_identifiers
    Array of strings. GUIDs of the Answer objects that you want to use as metadata.

  • conversation_identifier
    String. ID of the conversation session.

  • liveboard_identifiers
    Array of strings. GUIDs of the Liveboards that you want to use as metadata.

query

String. Required. The query string to decompose into smaller analytical sub-questions.

limit_relevant_questions
Optional

Integer. Sets a limit on the number of sub-questions to return in the response. Default is 5.

bypass_cache
Optional

Boolean. When set to true, disables cache and forces fresh computation.

ai_context
Optional

Additional context to guide the response. Define the following attributes as needed:

  • instructions
    Array of strings. Custom user instructions to influence how the AI interprets and processes the query.

  • content
    Array of strings. Additional input such as raw text or CSV-formatted data to enhance context and answer quality.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/relevant-questions/'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "data_source_identifiers": [
      "cd252e5c-b552-49a8-821d-3eadaa049cca"
    ]
  },
  "query": "Net sales of Jackets in west coast",
  "limit_relevant_questions": 3
}'

Example response

If the request is successful, the API returns a set of questions related to the query and metadata context in the relevant_questions array. Each object in the relevant_questions array contains the following fields:

  • query
    A string containing the natural language (NL) sub-question.

  • data_source_identifier
    GUID of the data source object that can be used as data context for the sub-question.

  • data_source_name
    Name of the associated data source object.

{
  "relevant_questions": [
    {
      "query": "What is the trend of sales by type over time?",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales by item",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales across regions",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    }
  ]
}
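To act on these suggestions, a client often needs only the question strings. A minimal sketch (the function name is assumed, not part of the API) for pulling them out of the response body:

```python
import json

# Sketch: collect the suggested question strings from a
# /api/rest/2.0/ai/relevant-questions/ response body.
def extract_questions(response_body):
    return [q["query"] for q in json.loads(response_body)["relevant_questions"]]

sample = json.dumps({"relevant_questions": [
    {"query": "Sales across regions",
     "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
     "data_source_name": "(Sample) Retail - Apparel"},
]})
extract_questions(sample)  # ["Sales across regions"]
```

Each extracted query can then be sent to a conversation session as a follow-up question.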

Send a question to a conversation session

The following AI API endpoints allow you to send a follow-up query to an ongoing conversation:

  • POST /api/rest/2.0/ai/agent/converse/sse
    Allows a client to send queries to an ongoing conversation session with the AI agent (Spotter) and streams responses over the Server-Sent Events (SSE) protocol for a real-time conversational experience. Clients receive incremental updates as the AI agent processes the query and generates its response, including answers, tokens, and visualization details.
    The POST /api/rest/2.0/ai/agent/converse/sse API call supports only the agent sessions created via /api/rest/2.0/ai/agent/conversation/create API call.

  • POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse
    Sends a query to an ongoing conversation session and generates an Answer.
    The POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse API call supports only the conversation sessions created using the POST /api/rest/2.0/ai/conversation/create API call.
    This is a legacy API endpoint and will be deprecated in an upcoming release version.

Send a question and generate streaming responses

To send queries to an ongoing conversation session and receive streaming responses, ThoughtSpot provides the /api/rest/2.0/ai/agent/converse/sse API endpoint. This API endpoint uses the SSE protocol to deliver data incrementally as it becomes available, rather than waiting for the entire response to be generated before sending it to the client. This enables immediate feedback and a more interactive user experience for AI-generated responses.

This API can be called directly, either through the Model Context Protocol (MCP) server or by integrating it into your own agentic workflow. In the MCP context, the /api/rest/2.0/ai/agent/converse/sse API is used as a "tool" for real-time streaming of conversational interactions between agents and the ThoughtSpot backend. It enables AI agents to send user queries and receive incremental, streamed responses, which can be processed and displayed to users.

REST clients can also send a POST request with a conversation ID and query string to fetch streaming responses.

Request parameters

Parameter | Description

conversation_identifier

String. Specify the ID of the conversation received from the create conversation API call.

messages

Array of strings. Specify the query text in natural language format. For example, Sales data for Jackets, Top performing products in the west coast.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/converse/sse'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "conversation_identifier": "h2I_pTGaRQof",
  "messages": [
    "Net sales of Jackets"
  ]
}'
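The same request body can be built programmatically. The helper below is an illustrative sketch (not part of the API), and the conversation ID is the placeholder value returned by the earlier create call.

```python
import json

# Illustrative helper: builds the JSON body for
# POST /api/rest/2.0/ai/agent/converse/sse.
def build_converse_body(conversation_id, messages):
    return json.dumps({
        "conversation_identifier": conversation_id,
        "messages": list(messages),
    })

# Placeholder conversation ID from a prior create-conversation call.
body = build_converse_body("h2I_pTGaRQof", ["Net sales of Jackets"])
```

Note that messages is an array, so a single call can carry one or more natural language queries, matching the curl example above.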

API response

If the API request is successful, the response includes a stream of events, each containing a partial or complete message from the AI agent, rather than a single JSON object.

Each event is a simple text-based message in the format data: <your_data>\n\n: each message sent from the server to the client is prefixed with the data: keyword, followed by the actual payload (<your_data>), and terminated by two newline characters (\n\n).

The API uses this format so that clients can reconstruct the AI-generated response as it streams in, chunk by chunk, and show the responses in real-time. In agentic workflows and the MCP server context, the API response is processed by the MCP host or AI agent. The agent listens to the SSE stream, parses each event, and assembles the full response for the user.
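As a sketch of the client-side handling (assuming only the event shapes shown in the example response that follows), a consumer can split the stream on blank lines, strip the data: prefix, parse each JSON payload, and concatenate the text-chunk contents:

```python
import json

# Minimal SSE reassembly sketch: events of other types seen in the
# example stream (ack, notification, answer) are skipped here; a real
# client would handle them as well.
def assemble_text(stream):
    parts = []
    for event in stream.split("\n\n"):
        event = event.strip()
        if not event.startswith("data: "):
            continue
        for item in json.loads(event[len("data: "):]):
            if item.get("type") == "text-chunk":
                parts.append(item["content"])
    return "".join(parts)

raw = ('data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}]\n\n'
       'data: [{"id": "t1", "type": "text-chunk", "content": "Net"}]\n\n'
       'data: [{"id": "t1", "type": "text-chunk", "content": " sales"}]\n\n')
assemble_text(raw)  # "Net sales"
```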

Example response
data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "I"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " understand"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you're"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " interested"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " in"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " net"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " of"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " Jackets"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " I'll"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " retrieve"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " relevant"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " data"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "metadata": {"title": "Net sales of Jackets"}, "code": "nls_start"}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "QH", "message": "Fetching Worksheet Data"}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "TML_GEN", "message": "Translating your query with the Reasoning Engine"}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "ANSWER_GEN", "message": "Verifying results with the Trust Layer"}]

data: [{"id": "r24X7D99SROD", "type": "answer", "group_id": "o8dQ9SAWdtrL", "metadata": {"sage_query": "[sales] [item type] = [item type].'jackets'", "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89", "gen_no": 1, "transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850", "generation_number": 1, "warning_details": null, "ambiguous_phrases": null, "query_intent": null, "assumptions": "You want to see the total sales amount for jackets item type.", "tml_phrases": ["[sales]", "[item type] = [item type].'jackets'"], "cached": false, "sub_queries": null, "title": "Net sales of Jackets", "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"}, "title": "Net sales of Jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "The"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " have"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " been"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visual"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ized"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "."}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " analysis"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " specifically"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " filtered"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "\""}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " and"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculated"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amount"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " associated"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " with"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " those"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " products"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "Summary"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " &"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Insights"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ":"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visualization"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " shows"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " all"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " transactions"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " in"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " apparel"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " dataset"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculation"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " uses"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " only"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amounts"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " where"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " \""}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "J"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\"\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " information"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " useful"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " understanding"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " revenue"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " contribution"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " of"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " within"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " mix"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "If"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you'd"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " like"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " see"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " a"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " breakdown"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " by"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " region"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " state"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " time"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " period"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " or"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " compare"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " other"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " types"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " please"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " let"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " me"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " know"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "!"}]

The messages in the API response include the following parts:

  • id
    A unique identifier for the message group

  • type
    Type of the message. Valid types are:

    • ack
      Confirms receipt of the request. For example, the first message data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}] indicates that the server has received the client's request and is acknowledging it.

    • text / text-chunk
      Content chunks, optionally formatted.

    • answer
      The final structured response with metadata and analytics.

    • error
      Indicates a failure.

    • notification
      Notification messages.

  • group_id
    Groups related chunks together.

  • metadata
    Indicates the content format, for example, markdown.

  • content
    The actual text content sent incrementally. For example, "I", "understand", "you’re", "interested", "in", "the", "net", "sales", and so on.
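A client consuming this stream needs to stitch the text-chunk events back into readable text. The following Python sketch is one way to do this client-side (it is an illustration, not part of the API): it groups chunks by group_id and concatenates their content in arrival order.

```python
import json

def assemble_text_chunks(sse_lines):
    """Concatenate streamed text-chunk events into full text, keyed by group_id.

    Each SSE line has the shape:
    data: [{"id": ..., "type": "text-chunk", "group_id": ..., "metadata": {...}, "content": "..."}]
    """
    groups = {}
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        for event in json.loads(line[len("data: "):]):
            if event.get("type") == "text-chunk":
                groups.setdefault(event["group_id"], []).append(event["content"])
    return {gid: "".join(chunks) for gid, chunks in groups.items()}

# A few chunks from the stream shown above:
lines = [
    'data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "If"}]',
    'data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you\'d"}]',
    'data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " like"}]',
]
print(assemble_text_chunks(lines))  # {'_ARJXDKbFhHF': "If you'd like"}
```

Because the content is flagged as markdown in metadata, the assembled string can be rendered directly by a markdown renderer.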

The following example shows the response content for the answer message type.

[
  {
    "id": "r24X7D99SROD",
    "type": "answer",
    "group_id": "o8dQ9SAWdtrL",
    "metadata": {
      "sage_query": "[sales] [item type] = [item type].'jackets'",
      "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89",
      "gen_no": 1,
      "transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850",
      "generation_number": 1,
      "warning_details": null,
      "ambiguous_phrases": null,
      "query_intent": null,
      "assumptions": "You want to see the total sales amount for jackets item type.",
      "tml_phrases": [
        "[sales]",
        "[item type] = [item type].'jackets'"
      ],
      "cached": false,
      "sub_queries": null,
      "title": "Net sales of Jackets",
      "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"
    },
    "title": "Net sales of Jackets"
  }
]
  • The session ID and generation number serve as the context data for the Answer. You can use this information to create a new conversation session via /api/rest/2.0/ai/agent/conversation/create or download the Answer via the /api/rest/2.0/report/answer endpoint.

  • The tokens and TML phrases returned in the response can be used as inputs for the search data API call to get an Answer.
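For example, a client could lift the session ID and generation number out of the answer event's metadata to build an export request. The payload field names below (session_identifier, generation_number, file_format) are assumptions for illustration; verify them against the /api/rest/2.0/report/answer reference in the REST API v2 Playground.

```python
# Sketch: extract Answer context from the "answer" event shown above and
# assemble a request body for the /api/rest/2.0/report/answer endpoint.
answer_event = {
    "type": "answer",
    "metadata": {
        "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89",
        "generation_number": 1,
    },
}

export_payload = {
    # Field names are assumed for illustration; check the Report API reference.
    "session_identifier": answer_event["metadata"]["session_id"],
    "generation_number": answer_event["metadata"]["generation_number"],
    "file_format": "CSV",  # assumed parameter name and value
}
```

POST this payload with your bearer token to download the Answer data.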

Send a question to generate an AnswerπŸ”—

To send a question to an ongoing conversation session or ask follow-up questions, send a POST request with the conversation ID and query text to the /api/rest/2.0/ai/conversation/{conversation_identifier}/converse API endpoint.

Request parametersπŸ”—

  • conversation_identifier (path parameter)
    String. Required. Specify the GUID of the conversation received from the create conversation API call.

  • metadata_identifier (form parameter)
    String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request is used as the data source for the follow-up conversation.

  • message (form parameter)
    String. Required. Specify a natural language query string. For example, Sales data for Jackets.

Example requestπŸ”—

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/03f48527-b973-4efa-81fd-a8568a4f9e78/converse'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
  "message": "Top performing products in the west coast"
}'

API responseπŸ”—

If the API request is successful, the following data is sent in the API response:

  • session_identifier
    GUID of the Answer session.

  • generation_number
    Number assigned to the Answer session.

  • message_type
    Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).

  • visualization_type
    The data format of the generated Answer, for example, chart or table. When you download this Answer, the data will be exported in the format indicated by the visualization_type.

  • tokens
    Tokens generated from the natural language search query string specified in the API request. You can use these tokens as input for query_string in your API request to /api/rest/2.0/searchdata and export the raw data of the query, or as input to POST /api/rest/2.0/ai/conversation/create to initiate a new conversation with a new context.

Note

Note the session ID and generation number. To export the Answer generated from this conversation, send these attributes in the POST request body to the /api/rest/2.0/report/answer endpoint.

[
  {
    "session_identifier": "1290f8bc-415a-4ecb-ae3b-e1daa593eb24",
    "generation_number": 3,
    "message_type": "TSAnswer",
    "visualization_type": "Chart",
    "tokens": "[sales], [state], [item type], [region] = [region].'west', sort by [sales] descending"
  }
]
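The tokens string from the response above can be reused as the query string for a search data request. The searchdata parameter names shown here are assumptions for illustration; verify them in the REST API v2 Playground. A minimal Python sketch:

```python
# Sketch: build a /api/rest/2.0/searchdata request body from the tokens
# returned by the converse call shown above.
converse_response = [{
    "session_identifier": "1290f8bc-415a-4ecb-ae3b-e1daa593eb24",
    "generation_number": 3,
    "message_type": "TSAnswer",
    "visualization_type": "Chart",
    "tokens": "[sales], [state], [item type], [region] = [region].'west', sort by [sales] descending",
}]

searchdata_payload = {
    # Parameter names assumed for illustration; confirm in the Playground.
    "query_string": converse_response[0]["tokens"],
    # Model GUID used as the data source in the original request:
    "logical_table_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
}
```

POST this payload to /api/rest/2.0/searchdata to export the raw data behind the Answer.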

Ask follow-up questionsπŸ”—

The API retains the context of previous queries when you send follow-up questions. To verify this, you can send another API request with a follow-up question to drill down into the data.

Generate a single AnswerπŸ”—

To generate an Answer from a natural language search query, send a POST request to the /api/rest/2.0/ai/answer/create API endpoint. In the request body, include the query and the data source ID.

Request parametersπŸ”—

  • query
    String. Required. Specify the string as a natural language query. For example, Top performing products in the west coast.

  • metadata_identifier
    String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request is used as the data source for the query.

Example requestπŸ”—

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/answer/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "query": "Top performing products in the west coast",
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'

API responseπŸ”—

If the API request is successful, the following data is sent in the API response:

  • session_identifier
    GUID of the Answer session.

  • generation_number
    Number assigned to the Answer session.

  • message_type
    Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).

  • visualization_type
    The data format of the generated Answer; for example, chart or table. When you download this Answer, the data will be exported in the format indicated by the visualization_type.

  • tokens
    Tokens generated from the natural language search query string specified in the API request. You can use these tokens as input for query_string in your API request to /api/rest/2.0/searchdata and export the raw data of the query, or as input to POST /api/rest/2.0/ai/conversation/create to initiate a new conversation with a new context.

Note

Note the session ID and generation number. To export the result generated from this API call, send these attributes in the POST request body to the /api/rest/2.0/report/answer endpoint.

[{
  "session_identifier": "57784fa1-10fa-431d-8d82-a1657d627bbe",
  "generation_number": 2,
  "message_type": "TSAnswer",
  "visualization_type": "Undefined",
  "tokens": "[product], [region] = [region].'west', sort by [sales] descending"
}]

Process results generated from Spotter APIsπŸ”—

To generate an Answer using the data returned from the Spotter APIs, use the following options:

  • Download the generated Answer using the session ID and generation number via the /api/rest/2.0/report/answer API endpoint.

  • Use the tokens returned by Spotter API requests as the query string and generate an Answer via the /api/rest/2.0/searchdata API endpoint.

Get data source suggestionsπŸ”—

The POST /api/rest/2.0/ai/data-source-suggestions API provides relevant data source recommendations for a user-submitted natural language query. To use this API, you must have at least view access to the underlying metadata source referenced in the response.

Note

The Get data source suggestions feature is not enabled by default on ThoughtSpot instances. To enable this API on your instance, contact ThoughtSpot Support.

Request parametersπŸ”—

  • query
    String. Required. Specify a natural language query string. For example, Sales data for Jackets.

Example requestπŸ”—

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/data-source-suggestions'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "query": "Sales data for Jackets"
}'

API responseπŸ”—

If the API request is successful, ThoughtSpot returns a ranked list of data sources, each annotated with relevant reasoning.

{
  "data_sources": [
    {
      "confidence": 0.97,
      "details": {
        "description": "",
        "data_source_name": "(Sample) Retail - Apparel",
        "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
      },
      "reasoning": "Following similar NL queries were asked earlier on this worksheet - \"show sales of jackets quarter on quarter\", \"show sales of jackets last quarter in east\", \"jacket sales for february. (ignore previous context\""
    },
    {
      "confidence": 0.62,
      "details": {
        "description": "",
        "data_source_name": "Dunder Mifflin Sales",
        "data_source_identifier": "0e4406c7-d978-4be7-abd7-c34e8f7da835"
      },
      "reasoning": ""
    },
    {
      "confidence": 0.45,
      "details": {
        "description": "",
        "data_source_name": "Copy of Dunder Mifflin Sales-SSD",
        "data_source_identifier": "c8305843-d31f-468a-ab1b-2636f64c83e5"
      },
      "reasoning": "Columns include 'Product', 'Category', 'Quantity', and 'Amount', which could support sales analysis for jackets if present, but no direct NLQ or answer matches."
    }
  ]
}

The returned results include metadata such as:

  • confidence
    A float indicating the model's confidence in the relevance of each recommendation.

  • details
    The data source ID, name, and description for each recommended data source.

  • reasoning
    The reason provided by the LLM explaining why each data source was recommended.
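As a sketch, a client could select the highest-confidence suggestion and pass its identifier as the metadata_identifier in subsequent Spotter API calls. This example assumes only the response shape shown above:

```python
# Sketch: pick the data source with the highest confidence score from the
# data-source-suggestions response.
response = {
    "data_sources": [
        {"confidence": 0.97, "details": {
            "data_source_name": "(Sample) Retail - Apparel",
            "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"}},
        {"confidence": 0.62, "details": {
            "data_source_name": "Dunder Mifflin Sales",
            "data_source_identifier": "0e4406c7-d978-4be7-abd7-c34e8f7da835"}},
    ]
}

best = max(response["data_sources"], key=lambda ds: ds["confidence"])
# best["details"]["data_source_identifier"] can then be used as the
# metadata_identifier in a create conversation or answer/create request.
```

In a real application you might also apply a minimum-confidence threshold before auto-selecting, and fall back to asking the user when no suggestion clears it.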

Additional resourcesπŸ”—

  • See REST API v2 Playground to verify the request and response workflows

  • For information about MCP tools, see MCP server integration

Β© 2025 ThoughtSpot Inc. All Rights Reserved.