Spotter AI APIs

ThoughtSpot’s Spotter AI APIs (Beta) allow users to query and explore data through conversational interactions.

Note

The Spotter AI APIs are in Beta and disabled by default on ThoughtSpot instances. To enable these APIs on your instance, contact ThoughtSpot Support.

Overview

Spotter AI APIs collectively support natural-language-driven analytics, context-aware and guided data analysis, and integration with agentic systems.

The key capabilities of the Spotter APIs include the following:

  • Initiating and managing conversational sessions

  • Processing natural-language queries and interpreting user intent

  • Generating analytical responses, insights, and visualizations

  • Decomposing complex user queries

Spotter manages conversation sessions, context tracking, and response generation for user-submitted queries. The Spotter APIs are designed for use in Spotter-driven analytics and also for agentic interactions within an orchestrated agent framework.

Locale settings for API requests

When using the Single Answer and Send message APIs, the locale used for API requests depends on your application’s locale settings:

  • If your application is set to "Use browser language," the API will not apply the default locale. In this case, you must explicitly include the desired locale code in the Accept-Language header of your API request. If you do not specify the locale, the API may not return responses in the expected language or regional format.

  • If you have set a specific locale in your ThoughtSpot instance or user profile, the API will use this locale to generate responses, overriding the browser or OS locale.

To ensure consistent localization, set the Accept-Language header in your API requests when relying on browser language detection, or configure the locale explicitly in the user profile settings in ThoughtSpot.
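
For example, the following request to the Single Answer endpoint pins the response locale by setting the Accept-Language header explicitly. This is a minimal sketch: the host, token, and Model GUID are placeholders, and ja-JP is used only to illustrate a non-default locale that is assumed to be enabled on your instance.

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/answer/create' \
  -H 'Accept: application/json' \
  -H 'Accept-Language: ja-JP' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "query": "Top performing products in the west coast",
  "metadata_identifier": "{MODEL_GUID}"
}'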

API endpoints

Each of the Spotter AI APIs serves a specific function. The following endpoints are grouped by category:

Conversational analytics with Spotter (Classic)

  • POST /api/rest/2.0/ai/conversation/create
    Creates a conversation session with Spotter to generate Answers from a specific data Model. The resulting session sets the context for subsequent queries and responses.

  • POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse
    Sends a message or follow-up query to an ongoing conversation session.

  • POST /api/rest/2.0/ai/answer/create
    Generates an answer for a natural language query specified in the API request.

Advanced analytics and agentic interaction

  • POST /api/rest/2.0/ai/agent/conversation/create
    Creates a conversation session with the Spotter agent to generate Answers for the specified data context. This API endpoint is designed for agentic or orchestrated frameworks that leverage Spotter agent’s capabilities for advanced analytics, context-aware conversations, and data literacy.
    Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards.

  • POST /api/rest/2.0/ai/agent/converse/sse
    Streams responses, including tokens and visualizations, for a specific conversation context. This API endpoint can be used for real-time agentic interactions and orchestrated experiences.
    Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards.

Guided analysis

  • POST /api/rest/2.0/ai/relevant-questions/
    Decomposes a user query into relevant sub-questions. Guides users to explore data more deeply for a comprehensive analysis.
    Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards.

Conversational analytics with Spotter (Classic)

In Spotter classic mode, the conversation session and context are managed by Spotter. These APIs let users interact directly with Spotter, without agentic capabilities or an orchestration framework.

Create a conversation session

To create a conversation session with Spotter, send a POST request to the /api/rest/2.0/ai/conversation/create API endpoint. The resulting conversation session maintains the context and can be used to send queries and follow-up questions to generate answers.

Request parameters

Include the following parameters in the request body:

metadata_identifier

String. Required. Specify the GUID of the data source object, such as a ThoughtSpot Model. The metadata object specified in the API request is used as the data source for the conversation.

tokens

String. Optional. To set the context for the conversation, you can specify a set of keywords as a token string. For example, [sales],[item type],[state].

Example requests

With tokens
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
  "tokens": "[sales],[item type],[Jackets]"
}'
Without tokens
curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'

API response

If the API request is successful, a conversation identifier is created. Note the GUID of the conversation and use it when sending follow-up queries.

{"conversation_identifier":"98f9b8b0-6224-4f9d-b61c-f41307bb6a89"}

Send a query to a conversation session

To send a question to an ongoing conversation session or ask follow-up questions, send a POST request with the conversation ID and query text to the POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse API endpoint.

This API endpoint supports only the conversation sessions created using the POST /api/rest/2.0/ai/conversation/create API call.

Request parameters

conversation_identifier

Path parameter. String. Required. Specify the GUID of the conversation received from the create conversation API call.

metadata_identifier

Form parameter. String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request is used as the data source for the follow-up conversation.

message

Form parameter. String. Required. Specify a natural language query string. For example, Sales data for Jackets.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/03f48527-b973-4efa-81fd-a8568a4f9e78/converse'  \
  -H 'Accept: application/json' \
  -H 'Accept-Language: en-US' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
  "message": "Top performing products in the west coast"
}'

API response

If the API request is successful, the following data is sent in the API response:

  • session_identifier
    GUID of the Answer session.

  • generation_number
    Number assigned to the Answer session.

  • message_type
    Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).

  • visualization_type
    The data format of the generated Answer, for example, a chart or table. When you download this Answer, the data will be exported in the format indicated by the visualization_type.

  • tokens
    Tokens generated from the natural language search query specified in the API request. These tokens can be used as input to the /api/rest/2.0/ai/conversation/create API endpoint to set the context for a conversation session.

Note

Note the session ID and generation number. To export the Answer generated from this conversation, send these attributes in the POST request body to the /api/rest/2.0/report/answer endpoint.

[
  {
    "session_identifier": "1290f8bc-415a-4ecb-ae3b-e1daa593eb24",
    "generation_number": 3,
    "message_type": "TSAnswer",
    "visualization_type": "Chart",
    "tokens": "[sales], [state], [item type], [region] = [region].'west', sort by [sales] descending"
  }
]

Ask follow-up questions

The API retains the context of previous queries when you send follow-up questions. To verify this, you can send another API request with a follow-up question to drill down into the data.

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/03f48527-b973-4efa-81fd-a8568a4f9e78/converse'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
  "message": "which city has the better sales of jackets here?"
}'

The API retains the context of the initial question and returns a response:

[
  {
    "session_identifier": "ee077665-08e1-4a9d-bfdf-7b2fe0ca5c79",
    "generation_number": 3,
    "message_type": "TSAnswer",
    "visualization_type": "Table",
    "tokens": "[sales], by [city], [state], [item type] = [item type].'jackets', [region] = [region].'west', sort by [sales] descending"
  }
]

Generate a single Answer

To generate an Answer from a natural language search query, send a POST request to the /api/rest/2.0/ai/answer/create API endpoint. In the request body, include the query and the data source ID.

Request parameters

query

String. Required. Specify a natural language query string. For example, Top performing products in the west coast.

metadata_identifier

String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request is used as the data source for the query.

Example request

In the following example, a query string and the model ID are included in the request body to set the context of the conversation.

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/answer/create'  \
  -H 'Accept: application/json' \
  -H 'Accept-Language: en-US' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "query": "Top performing products in the west coast",
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'

API response

If the API request is successful, the following data is sent in the API response:

  • session_identifier
    GUID of the Answer session.

  • generation_number
    Number assigned to the Answer session.

  • message_type
    Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).

  • visualization_type
    The data format of the generated Answer; for example, a chart or table. When you download this Answer, the data will be exported in the format indicated by the visualization_type.

  • tokens
    Tokens generated from the natural language search query specified in the API request. These tokens can be used as input to the /api/rest/2.0/ai/conversation/create endpoint to set the context for a conversation session.

Note

Note the session ID and generation number. To export the result generated from this API call, send these attributes in the POST request body to the /api/rest/2.0/report/answer endpoint.

[{
  "session_identifier": "57784fa1-10fa-431d-8d82-a1657d627bbe",
  "generation_number": 2,
  "message_type": "TSAnswer",
  "visualization_type": "Undefined",
  "tokens": "[product], [region] = [region].'west', sort by [sales] descending"
}]

Conversational analytics with Spotter agent

Spotter agent is an advanced, agentic version of Spotter, which supports context-aware interactions, data literacy features, and follow-up conversations for deeper analytics. Spotter agent can be used for complex reasoning and agentic interactions in an orchestrated framework.

Create a conversation session with Spotter agent

The /api/rest/2.0/ai/agent/conversation/create API endpoint allows you to initiate a new conversation session with Spotter Agent for different data contexts, such as Answers, Liveboards, or Models.

Note

Clients must have at least view access to the objects specified in the API request to create a conversation context and use it for subsequent queries.

Request parameters

To set the context for the conversation session, you must specify the metadata type and context in the POST request body. Optionally, you can also define additional parameters to refine the data context and generate more precise responses.

metadata_context

Defines the data context for the conversation.

  • type
    Metadata context type. The context type is mandatory. Select one of the following values:

    • data_source - To set a data source context for the conversation session.
      If the context type is data_source, you must define the data_source_context in the request payload.

    • answer - To use an existing Spotter-generated Answer as the object.
      If the context type is answer, you must define both data_source_context and answer_context in the request payload.

    • liveboard - To use an existing Liveboard as context.
      If the context type is liveboard, you must define data_source_context, liveboard_context, and answer_context in the request payload.

  • answer_context
    If the metadata type is set as answer or liveboard, specify the following attributes:

    • session_identifier: String. Unique ID representing the Answer session.

    • generation_number: Integer. Specific generation/version number of the answer within a conversation session.

      The session identifier and generation numbers are assigned to the Answers generated from the Spotter AI REST APIs. These properties serve as the ID of the AI-generated Answer within the ThoughtSpot system.

  • liveboard_context
    If the metadata type is set as liveboard, specify the GUIDs of the Liveboard and visualization.

  • data_source_context
    Specify the GUID of the data source object. Required parameter for all types of metadata context.

conversation_settings

Optional. Defines additional parameters for the conversation context. You can set any of the following attributes as needed:

  • enable_contextual_change_analysis
    Boolean. When enabled, Spotter analyzes how the context changes over time, that is, it compares results across queries.

  • enable_natural_language_answer_generation
    Boolean. Allows sending natural language queries to the conversation session.

  • enable_reasoning
    Boolean. Allows Spotter to use reasoning for deep analysis and precise responses.

Example request

The following example shows the request payload for the data_source context type:

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "type": "data_source",
    "data_source_context": {
      "guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"
    }
  },
  "conversation_settings": {
    "enable_contextual_change_analysis": false,
    "enable_natural_language_answer_generation": true,
    "enable_reasoning": false
  }
}'

The following example shows the request payload for the liveboard context type:

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "type": "liveboard",
    "answer_context": {
      "session_identifier": "c3a00fa7-fd01-4d58-8c84-0704df986d9d",
      "generation_number": 2
    },
    "liveboard_context": {
      "liveboard_identifier": "cffdc614-0214-42ba-9f57-cb6e8312fe5a",
      "visualization_identifier": "da0ed3da-ce1f-4071-8876-74d551b05faf"
    },
    "data_source_context": {
      "guid": "54beb173-d755-42e0-8f73-4d4ec768114f"
    }
  },
  "conversation_settings": {
    "enable_contextual_change_analysis": false,
    "enable_natural_language_answer_generation": true,
    "enable_reasoning": false
  }
}'

The following example shows the request payload for the answer context type:

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "type": "answer",
    "answer_context": {
      "session_identifier": "f131ca07-47e9-4f56-9e21-454120912ae1",
      "generation_number": 1
    },
    "data_source_context": {
      "guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"
    }
  },
  "conversation_settings": {
    "enable_contextual_change_analysis": false,
    "enable_natural_language_answer_generation": true,
    "enable_reasoning": false
  }
}'

API response

If the API request is successful, the API returns the conversation ID. You can use this ID to send follow-up questions to the conversation session.

{"conversation_id":"q9tZYf_6WnFC"}

Note the conversation ID for subsequent agentic interactions and API calls.

Send a question and generate streaming responses

To send queries to an ongoing conversation session with Spotter agent and receive streaming responses, use the /api/rest/2.0/ai/agent/converse/sse API endpoint. This API endpoint uses the Server-Sent Events (SSE) protocol to deliver data incrementally as it becomes available, rather than waiting for the entire response to be generated before sending it to the client.

The /api/rest/2.0/ai/agent/converse/sse API can be used as an integrated tool for real-time streaming of conversational interactions between agents and the ThoughtSpot backend. It enables AI agents to send user queries and receive incremental, streamed responses that can be processed and sent to users. REST clients can also send a POST request with a conversation ID and query string to fetch streaming responses.

Request parameters

Include the following parameters in the request body:

conversation_identifier

String. Specify the conversation ID received from the POST /api/rest/2.0/ai/agent/conversation/create API call.

messages

Array of Strings. Include at least one natural language query. For example, Sales data for Jackets, Top performing products in the west coast.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/converse/sse'  \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "conversation_identifier": "h2I_pTGaRQof",
  "messages": [
    "Net sales of Jackets"
  ]
}'

API response

If the API request is successful, the response includes a stream of events, each containing a partial or complete message from the AI agent, rather than a single JSON object.

Each event is a simple text-based message in the format data: <your_data>\n\n: each message sent from the server to the client is prefixed with the data: keyword, followed by the actual payload (<your_data>), and terminated by two newline characters (\n\n).

The API uses this format so that the clients can reconstruct the AI-generated response as it streams in, chunk by chunk, and show the responses in real-time. In agentic workflows, the receiving client or agent listens to the SSE stream, parses each event, and assembles the full response for its users.
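
As a minimal sketch of this pattern, the following shell pipeline streams the events with curl and keeps only the JSON payload of each data: line. It reuses the conversation ID and query from the example request above; the -N flag disables output buffering so events are printed as they arrive.

curl -N -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/converse/sse' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "conversation_identifier": "h2I_pTGaRQof",
  "messages": [
    "Net sales of Jackets"
  ]
}' \
| sed -n 's/^data: //p'   # print only lines that begin with "data: ", with the prefix removed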

Example response
data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "I"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " understand"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you're"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " interested"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " in"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " net"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " of"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " Jackets"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " I'll"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " retrieve"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " relevant"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " data"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you"}]

data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "metadata": {"title": "Net sales of Jackets"}, "code": "nls_start"}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "QH", "message": "Fetching Worksheet Data"}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "TML_GEN", "message": "Translating your query with the Reasoning Engine"}]

data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "ANSWER_GEN", "message": "Verifying results with the Trust Layer"}]

data: [{"id": "r24X7D99SROD", "type": "answer", "group_id": "o8dQ9SAWdtrL", "metadata": {"sage_query": "[sales] [item type] = [item type].'jackets'", "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89", "gen_no": 1, "transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850", "generation_number": 1, "warning_details": null, "ambiguous_phrases": null, "query_intent": null, "assumptions": "You want to see the total sales amount for jackets item type.", "tml_phrases": ["[sales]", "[item type] = [item type].'jackets'"], "cached": false, "sub_queries": null, "title": "Net sales of Jackets", "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"}, "title": "Net sales of Jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "The"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " have"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " been"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visual"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ized"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "."}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " analysis"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " specifically"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " filtered"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "\""}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " and"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculated"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amount"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " associated"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " with"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " those"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " products"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "Summary"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " &"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Insights"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ":"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visualization"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " shows"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " all"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " transactions"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " in"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " apparel"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " dataset"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculation"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " uses"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " only"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amounts"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " where"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " \""}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "J"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\"\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " information"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " useful"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " understanding"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " revenue"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " contribution"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " of"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jackets"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " within"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " mix"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "If"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you'd"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " like"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " see"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " a"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " breakdown"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " by"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " region"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " state"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " time"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " period"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " or"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " compare"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " other"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " types"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " please"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " let"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " me"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " know"}]

data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "!"}]

The messages in the API response include the following parts:

  • id
    A unique identifier for the message group

  • type
    Type of the message. Valid types are:

    • ack
      Confirms receipt of the request. For example, the first message data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}] indicates that the server has received the client’s request and is acknowledging it.

    • text / text-chunk
      Content chunks, optionally formatted.

    • answer
      The final structured response with metadata and analytics

    • error
      Indicates a failure.

    • notification
      Notification messages.

  • group_id
    Groups related chunks together.

  • metadata
    Indicates the content format, for example, markdown.

  • content
    The actual text content sent incrementally. For example, "I", "understand", "you’re", "interested", "in", "the", "net", "sales", and so on.

The following example shows the response text contents for the answer message type.

[
  {
    "id": "r24X7D99SROD",
    "type": "answer",
    "group_id": "o8dQ9SAWdtrL",
    "metadata": {
      "sage_query": "[sales] [item type] = [item type].'jackets'",
      "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89",
      "gen_no": 1,
      "transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850",
      "generation_number": 1,
      "warning_details": null,
      "ambiguous_phrases": null,
      "query_intent": null,
      "assumptions": "You want to see the total sales amount for jackets item type.",
      "tml_phrases": [
        "[sales]",
        "[item type] = [item type].'jackets'"
      ],
      "cached": false,
      "sub_queries": null,
      "title": "Net sales of Jackets",
      "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"
    },
    "title": "Net sales of Jackets"
  }
]

The session ID and generation number serve as the data context for the Answer. You can use this information to create a new conversation session using /api/rest/2.0/ai/agent/conversation/create, or download the answer via the /api/rest/2.0/report/answer API endpoint.
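
For example, if the raw SSE stream was saved to a file, a small jq filter can pull these two values out of the answer event. This is a sketch under the assumption that jq is installed and that events.txt holds the stream captured from the converse endpoint; the field names match the answer event shown above.

# events.txt is assumed to contain the raw SSE stream from /api/rest/2.0/ai/agent/converse/sse
sed -n 's/^data: //p' events.txt \
  | jq -r '.[] | select(.type == "answer") | "\(.metadata.session_id) \(.metadata.gen_no)"'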

Process results generated from Spotter APIs

To export or download the Answer data generated by the Spotter APIs, use the Answer report API.

Note

Using tokens generated by the Spotter API in a Search Data API request can return invalid column errors, because these tokens may reference formulas or columns not present in the data model. Instead, use the Answer report API and include the session ID and generation number obtained from the Spotter API in your API request to retrieve the data.
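
The following request sketches this export flow using the session ID and generation number returned by the Single Answer example earlier. Note that the file_format value, the Accept header, and the output file name shown here are illustrative assumptions; refer to the Answer report API documentation for the full set of supported request attributes.

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/report/answer' \
  -H 'Accept: application/octet-stream' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --output answer.csv \
  --data-raw '{
  "session_identifier": "57784fa1-10fa-431d-8d82-a1657d627bbe",
  "generation_number": 2,
  "file_format": "CSV"
}'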

Data literacy and query assistance

The query assistance APIs help users explore and analyze data more effectively.

Get relevant questions

The /api/rest/2.0/ai/relevant-questions/ API endpoint breaks down a user-submitted query into relevant sub-questions. It accepts the original query and optional additional context, then generates a set of related questions to help users explore their data comprehensively.

During agentic interactions, this API can be used as an integrated tool to decompose user queries and suggest relevant questions for a specific data context. REST clients can also call this API directly to fetch relevant questions via a POST request.

Request parameters

Include the following parameters in the request body:

metadata_context

Required. Specify one of the following attributes to set the metadata context:

  • data_source_identifiers
    Array of strings. GUIDs of the data source objects, such as Models.

  • answer_identifiers
    Array of strings. GUIDs of the Answer objects that you want to use as metadata.

  • conversation_identifier
    String. ID of the conversation session.

  • liveboard_identifiers
    Array of strings. GUIDs of the Liveboards that you want to use as metadata.

query

String. Required parameter. Specify the query string that needs to be decomposed into smaller, analytical sub-questions.

limit_relevant_questions
Optional

Integer. Sets a limit on the number of sub-questions to return in the response. Default is 5.

bypass_cache
Optional

Boolean. When set to true, disables cache and forces fresh computation.

ai_context
Optional.

Additional context to guide the response. Define the following attributes as needed:

  • instructions
    Array of strings. Custom user instructions to influence how the AI interprets and processes the query.

  • content
    Array of strings. Additional input such as raw text or CSV-formatted data to enhance context and answer quality.

Example request

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/relevant-questions/' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "metadata_context": {
    "data_source_identifiers": [
      "cd252e5c-b552-49a8-821d-3eadaa049cca"
    ]
  },
  "query": "Net sales of Jackets in west coast",
  "limit_relevant_questions": 3
}'

Example response

If the request is successful, the API returns a set of questions related to the query and metadata context in the relevant_questions array. Each object in the relevant_questions array contains the following fields:

  • query
    A string containing the natural language (NL) sub-question.

  • data_source_identifier
    GUID of the data source object.

  • data_source_name
    Name of the associated data source object.

{
  "relevant_questions": [
    {
      "query": "What is the trend of sales by type over time?",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales by item",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    },
    {
      "query": "Sales across regions",
      "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
      "data_source_name": "(Sample) Retail - Apparel"
    }
  ]
}
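
Each returned query can be fed back into Spotter for analysis. As a sketch, the first suggested question above could be sent to the Single Answer endpoint described earlier, reusing the same Model GUID:

curl -X POST \
  --url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/answer/create' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {AUTH_TOKEN}' \
  --data-raw '{
  "query": "What is the trend of sales by type over time?",
  "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'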
