-
POST /api/rest/2.0/ai/agent/conversation/create
Creates a new AI-driven conversation session based on a specified data source. The resulting session sets the context for subsequent queries and responses. Available on ThoughtSpot Cloud instances from 10.13.0.cl onwards.
-
POST /api/rest/2.0/ai/conversation/create
Creates a conversation session. This is a legacy API and will be deprecated in an upcoming release version.
Spotter AI APIs
ThoughtSpot provides a set of Spotter AI APIs [Beta] to create a conversation session with Spotter, ask follow-up questions, and generate Answers for analytic queries. These APIs collectively enable natural language interaction, context-aware analytics, and guided data analysis.
Note
The Spotter AI APIs are in beta and disabled by default on ThoughtSpot instances. To enable these APIs on your instance, contact ThoughtSpot Support.
Overview
The AI APIs [Beta] enable agentic conversational analytics by allowing users and systems to interact with data using natural language. Each of these APIs serves a specific function:

| Purpose | API endpoints |
|---|---|
| Create a conversation session | POST /api/rest/2.0/ai/agent/conversation/create (Spotter agent), POST /api/rest/2.0/ai/conversation/create (legacy) |
| Discover relevant questions for a data context | POST /api/rest/2.0/ai/relevant-questions/ |
| Send a question to a conversation session | POST /api/rest/2.0/ai/agent/converse/sse (streaming), POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse (legacy) |
Locale settings for API requests
When using the Single Answer and Send message APIs, the locale used for API requests depends on your application's locale settings:
-
If your application is set to "Use browser language," the API does not apply the default locale. In this case, you must explicitly include the desired locale code in the Accept-Language header of your API request. If you do not specify the locale, the API may not return responses in the expected language or format.
-
If you have set a specific locale in your ThoughtSpot instance or user profile, the API uses this locale to generate responses, overriding the browser or OS locale.
To ensure consistent localization, set the Accept-Language header in your API requests when relying on browser language detection, or configure the locale explicitly in the user profile settings in ThoughtSpot.
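For instance, a client that pins the locale explicitly might build its request headers as in the following sketch. The `build_headers` helper and the `ja-JP` locale value are illustrative, not part of the API:

```python
# Minimal sketch: build request headers that pin the response locale.
# build_headers and the "ja-JP" locale below are illustrative examples.
def build_headers(auth_token, locale=None):
    """Return headers for a Spotter AI API call, optionally pinning a locale."""
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": "Bearer " + auth_token,
    }
    if locale:
        # An explicit Accept-Language overrides browser-language detection.
        headers["Accept-Language"] = locale
    return headers

headers = build_headers("{AUTH_TOKEN}", locale="ja-JP")
```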
Create a conversation session
A conversation session acts as a container for maintaining continuity across user inputs, system responses, and agent-driven clarifications. Once created, users can send queries or ask follow-up questions to the conversation session to explore data and get further insights.
The following AI API endpoints allow you to initiate a conversation session with Spotter:
-
POST /api/rest/2.0/ai/agent/conversation/create
Creates a conversation session with the Spotter agent.
-
POST /api/rest/2.0/ai/conversation/create
This is a legacy API endpoint and will be deprecated in an upcoming release version.
Create a conversation session with Spotter agent
The /api/rest/2.0/ai/agent/conversation/create API endpoint allows you to initiate a new conversation session with ThoughtSpot's AI Agent. Developers and system integrators embedding Spotter into agentic workflows, custom applications, or internal Model Context Protocol (MCP) servers can use this API endpoint to create a conversation session from different data contexts such as Answers, Liveboards, or Models.
Note
Clients must have at least view access to the objects specified in the API request to create a conversation context and use it for subsequent queries.
Request parameters
To set the context for the conversation session, you must specify the metadata type and context in the POST request body. Optionally, you can also define additional parameters to refine the data context and generate accurate and precise responses.

| Form parameter | Description |
|---|---|
| metadata_context | Defines the data context for the conversation. Specify the context type (data_source, liveboard, or answer) and the corresponding context object, such as data_source_context, liveboard_context, or answer_context, as shown in the example requests. |
| conversation_settings | Optional. Defines additional parameters for the conversation context. You can set any of the following attributes as needed: enable_contextual_change_analysis, enable_natural_language_answer_generation, enable_reasoning. |
Example request
The following example shows the request payload for the data_source context type:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"type": "data_source",
"data_source_context": {
"guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}
},
"conversation_settings": {
"enable_contextual_change_analysis": false,
"enable_natural_language_answer_generation": true,
"enable_reasoning": false
}
}'
The following example shows the request payload for the liveboard context type:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"type": "liveboard",
"answer_context": {
"session_identifier": "c3a00fa7-fd01-4d58-8c84-0704df986d9d",
"generation_number": 2
},
"liveboard_context": {
"liveboard_identifier": "cffdc614-0214-42ba-9f57-cb6e8312fe5a",
"visualization_identifier": "da0ed3da-ce1f-4071-8876-74d551b05faf"
},
"data_source_context": {
"guid": "54beb173-d755-42e0-8f73-4d4ec768114f"
}
},
"conversation_settings": {
"enable_contextual_change_analysis": false,
"enable_natural_language_answer_generation": true,
"enable_reasoning": false
}
}'
The following example shows the request payload for the answer context type:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"type": "answer",
"answer_context": {
"session_identifier": "f131ca07-47e9-4f56-9e21-454120912ae1",
"generation_number": 1
},
"data_source_context": {
"guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}
},
"conversation_settings": {
"enable_contextual_change_analysis": false,
"enable_natural_language_answer_generation": true,
"enable_reasoning": false
}
}'
API response
If the API request is successful, the API returns the conversation ID. You can use this ID to send follow-up questions to the conversation session.
{"conversation_id":"q9tZYf_6WnFC"}
Note the conversation ID for subsequent agentic interactions and API calls.
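The request payloads shown above can also be assembled programmatically. The following sketch builds the data_source payload; `create_conversation_payload` is an illustrative helper, not an SDK function, and the field names are taken from the example requests:

```python
import json

# Sketch: assemble the create-conversation request body for the
# data_source context type, mirroring the example payloads above.
# create_conversation_payload is an illustrative helper name.
def create_conversation_payload(data_source_guid,
                                enable_reasoning=False,
                                enable_nl_answers=True):
    return {
        "metadata_context": {
            "type": "data_source",
            "data_source_context": {"guid": data_source_guid},
        },
        "conversation_settings": {
            "enable_contextual_change_analysis": False,
            "enable_natural_language_answer_generation": enable_nl_answers,
            "enable_reasoning": enable_reasoning,
        },
    }

payload = create_conversation_payload("cd252e5c-b552-49a8-821d-3eadaa049cca")
body = json.dumps(payload)  # pass as the POST request body (--data-raw)
```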
Create a conversation session (legacy API endpoint)
To create a conversation session, send a POST request body with the data source ID and search token string to the /api/rest/2.0/ai/conversation/create API endpoint.
Request parameters

| Form parameter | Description |
|---|---|
| metadata_identifier | String. Required. Specify the GUID of the data source object, such as a ThoughtSpot Model. The metadata object specified in the API request will be used as a data source for the conversation. |
| tokens | String. Optional. To set the context for the conversation, you can specify a set of keywords as a token string. For example, [sales],[item type],[Jackets]. |
Example requests
With tokens
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"tokens": "[sales],[item type],[Jackets]"
}'
Without tokens
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'
API response
If the API request is successful, a conversation identifier is created. Note the GUID of the conversation and use it when sending follow-up queries.
{"conversation_identifier":"98f9b8b0-6224-4f9d-b61c-f41307bb6a89"}
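As a sketch, the token string can be built from a list of keywords by wrapping each in square brackets and joining with commas, matching the format in the example above. The `legacy_conversation_body` helper name is illustrative:

```python
# Sketch: build the legacy create-conversation request body. Keywords
# are formatted into the "[sales],[item type]" token string shown above.
# legacy_conversation_body is an illustrative helper name.
def legacy_conversation_body(metadata_identifier, keywords=None):
    body = {"metadata_identifier": metadata_identifier}
    if keywords:
        body["tokens"] = ",".join("[{}]".format(k) for k in keywords)
    return body

body = legacy_conversation_body(
    "cd252e5c-b552-49a8-821d-3eadaa049cca",
    keywords=["sales", "item type", "Jackets"],
)
```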
Get relevant questions
To discover follow-up or related questions that can be asked of a data model, ThoughtSpot provides the /api/rest/2.0/ai/relevant-questions/ REST API endpoint. This API endpoint supports both agentic workflows and direct user interaction, and generates contextually relevant questions for a given data context and user query.
The /api/rest/2.0/ai/relevant-questions/ API is exposed as the getRelevantQuestions tool in ThoughtSpot's MCP server implementation. The MCP server can call this API directly to fetch relevant questions, which can then be used to generate reports or for further analysis and interactions. For more information, see MCP server integration.
You can also call this API directly from your REST client to fetch relevant questions by making a POST request. The API breaks the user-submitted query into a structured set of analytical sub-questions and returns these in the API response.
Request parameters

| Parameter | Description |
|---|---|
| metadata_context | Required. Specify one of the following attributes to set the metadata context: data_source_identifiers (an array of data source GUIDs, as shown in the example request). |
| query | String. Required. Specify the query string that needs to be decomposed into smaller, analytical sub-questions. |
| limit_relevant_questions | Integer. Sets a limit on the number of sub-questions to return in the response. Default is 5. |
|  | Boolean. When set to |
|  | Additional context to guide the response. Define the following attributes as needed: |
Example request
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/relevant-questions/' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"data_source_identifiers": [
"cd252e5c-b552-49a8-821d-3eadaa049cca"
]
},
"query": "Net sales of Jackets in west coast",
"limit_relevant_questions": 3
}'
Example response
If the request is successful, the API returns a set of questions related to the query and metadata context in the relevant_questions array. Each object in the relevant_questions array contains the following fields:
-
query
A string containing the natural language (NL) sub-question.
-
data_source_identifier
GUID of the data source object that can be used as data context for the sub-question.
-
data_source_name
Name of the associated data source object.
{
"relevant_questions": [
{
"query": "What is the trend of sales by type over time?",
"data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"data_source_name": "(Sample) Retail - Apparel"
},
{
"query": "Sales by item",
"data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"data_source_name": "(Sample) Retail - Apparel"
},
{
"query": "Sales across regions",
"data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"data_source_name": "(Sample) Retail - Apparel"
}
]
}
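A client or agent consuming this response might extract the sub-questions and their data sources as follows. This is a sketch; `extract_questions` is an illustrative helper, and the sample response is abbreviated from the example above:

```python
import json

# Sketch: pull (query, data_source_identifier) pairs out of a
# relevant-questions API response. extract_questions is illustrative.
def extract_questions(response_text):
    payload = json.loads(response_text)
    return [(q["query"], q["data_source_identifier"])
            for q in payload.get("relevant_questions", [])]

sample = '''{"relevant_questions": [
  {"query": "Sales by item",
   "data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
   "data_source_name": "(Sample) Retail - Apparel"}]}'''
questions = extract_questions(sample)
```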
Send a question to a conversation session
The following AI API endpoints allow you to send a follow-up query to an ongoing conversation:
-
POST /api/rest/2.0/ai/agent/converse/sse
Allows a client to send queries to an ongoing conversation session with the AI agent (Spotter) and uses the Server-Sent Events (SSE) protocol to stream responses for a real-time conversational experience. It returns a streaming response with the AI agent's replies, allowing clients to receive incremental updates as the AI agent processes and generates its response.
The POST /api/rest/2.0/ai/agent/converse/sse API call supports only the agent sessions created via the /api/rest/2.0/ai/agent/conversation/create API call.
-
POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse
Sends a query to an ongoing conversation session and generates an Answer.
The POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse API call supports only the conversation sessions created using the POST /api/rest/2.0/ai/conversation/create API call.
This is a legacy API endpoint and will be deprecated in an upcoming release version.
Send a question and generate streaming responses
To send queries to an ongoing conversation session and receive streaming responses, ThoughtSpot provides the /api/rest/2.0/ai/agent/converse/sse API endpoint. This API endpoint uses the SSE protocol to deliver data incrementally as it becomes available, rather than waiting for the entire response to be generated before sending it to the client. This enables immediate feedback and a more interactive user experience for AI-generated responses.
This API can be called directly, either through the Model Context Protocol (MCP) server or by integrating it into your own agentic workflow. In the MCP context, the /api/rest/2.0/ai/agent/converse/sse API is used as a "tool" for real-time streaming of conversational interactions between agents and the ThoughtSpot backend. It enables AI agents to send user queries and receive incremental, streamed responses, which can be processed and displayed to users.
REST clients can also send a POST request with a conversation ID and query string to fetch streaming responses.
Request parameters

| Parameter | Description |
|---|---|
| conversation_identifier | String. Specify the GUID of the conversation received from the create conversation API call. |
| messages | Array of strings. Specify the query text in natural language format. For example, "Net sales of Jackets". |
Example request
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/converse/sse' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"conversation_identifier": "h2I_pTGaRQof",
"messages": [
"Net sales of Jackets"
]
}'
API response
If the API request is successful, the response includes a stream of events, each containing a partial or complete message from the AI agent, rather than a single JSON object.
Each event is a simple text-based message in the format data: <your_data>\n\n: each message sent from the server to the client is prefixed with the data: keyword, followed by the actual payload (<your_data>), and ends with two newline characters (\n\n).
The API uses this format so that clients can reconstruct the AI-generated response as it streams in, chunk by chunk, and show the responses in real time. In agentic workflows and the MCP server context, the API response is processed by the MCP host or AI agent. The agent listens to the SSE stream, parses each event, and assembles the full response for the user.
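A minimal event parser for this format might look like the following sketch. The `parse_sse_events` helper is illustrative, and the sample events are abbreviated; a production client would read the stream incrementally rather than from a complete string:

```python
import json

# Sketch: split a raw SSE text stream into events. Each event is a
# "data: <payload>" message terminated by a blank line, as described above.
# parse_sse_events is an illustrative helper, not part of the API.
def parse_sse_events(raw_stream):
    for block in raw_stream.split("\n\n"):
        block = block.strip()
        if block.startswith("data:"):
            # Each payload is a JSON array of event objects.
            yield json.loads(block[len("data:"):].strip())

raw = ('data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}]\n\n'
       'data: [{"id": "OJ0", "type": "text-chunk", "content": "Hi"}]\n\n')
events = list(parse_sse_events(raw))
```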
Example response
data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "I"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " understand"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you're"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " interested"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " in"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " net"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " of"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " Jackets"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " I'll"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " retrieve"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " relevant"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " data"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "metadata": {"title": "Net sales of Jackets"}, "code": "nls_start"}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "QH", "message": "Fetching Worksheet Data"}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "TML_GEN", "message": "Translating your query with the Reasoning Engine"}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "ANSWER_GEN", "message": "Verifying results with the Trust Layer"}]
data: [{"id": "r24X7D99SROD", "type": "answer", "group_id": "o8dQ9SAWdtrL", "metadata": {"sage_query": "[sales] [item type] = [item type].'jackets'", "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89", "gen_no": 1, "transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850", "generation_number": 1, "warning_details": null, "ambiguous_phrases": null, "query_intent": null, "assumptions": "You want to see the total sales amount for jackets item type.", "tml_phrases": ["[sales]", "[item type] = [item type].'jackets'"], "cached": false, "sub_queries": null, "title": "Net sales of Jackets", "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"}, "title": "Net sales of Jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "The"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " have"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " been"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visual"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ized"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "."}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " analysis"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " specifically"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " filtered"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "\""}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " and"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculated"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amount"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " associated"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " with"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " those"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " products"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "Summary"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " &"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Insights"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ":"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visualization"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " shows"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " all"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " transactions"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " in"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " apparel"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " dataset"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculation"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " uses"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " only"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amounts"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " where"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " \""}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "J"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\"\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " information"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " useful"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " understanding"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " revenue"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " contribution"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " of"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " within"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " mix"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "If"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you'd"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " like"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " see"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " a"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " breakdown"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " by"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " region"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " state"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " time"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " period"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " or"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " compare"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " other"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " types"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " please"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " let"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " me"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " know"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "!"}]
The messages in the API response include the following parts:
- id: A unique identifier for the message group.
- type: Type of the message. Valid types are:
  - ack: Confirms receipt of the request. For example, the type in the first message, data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}], indicates that the server has received the client's request and is acknowledging it.
  - text / text-chunk: Content chunks, optionally formatted.
  - answer: The final structured response with metadata and analytics.
  - error: Indicates a failure.
  - notification: Notification messages.
- group_id: Groups related chunks together.
- metadata: Indicates the content format, for example, markdown.
- content: The actual text content, sent incrementally. For example, "I", "understand", "you're", "interested", "in", "the", "net", "sales", and so on.
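Client code typically buffers these chunks and joins them per group. The following is a minimal sketch in Python; the field names come from the sample payloads above, and the line framing is assumed to follow standard server-sent events:

```python
import json

def assemble_text(sse_lines):
    """Reassemble streamed text-chunk messages into full text, keyed by group_id."""
    groups = {}
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip non-data SSE lines (comments, blank keep-alives)
        for msg in json.loads(line[len("data: "):]):
            if msg.get("type") == "text-chunk":
                groups.setdefault(msg["group_id"], []).append(msg["content"])
    return {gid: "".join(parts) for gid, parts in groups.items()}

# Sample chunks from the stream shown above:
stream = [
    'data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "If"}]',
    'data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you\'d"}]',
    'data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " like"}]',
]
print(assemble_text(stream))  # prints {'_ARJXDKbFhHF': "If you'd like"}
```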
The following example shows the response text contents for the answer message type.
[
{
"id": "r24X7D99SROD",
"type": "answer",
"group_id": "o8dQ9SAWdtrL",
"metadata": {
"sage_query": "[sales] [item type] = [item type].'jackets'",
"session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89",
"gen_no": 1,
"transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850",
"generation_number": 1,
"warning_details": null,
"ambiguous_phrases": null,
"query_intent": null,
"assumptions": "You want to see the total sales amount for jackets item type.",
"tml_phrases": [
"[sales]",
"[item type] = [item type].'jackets'"
],
"cached": false,
"sub_queries": null,
"title": "Net sales of Jackets",
"worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"
},
"title": "Net sales of Jackets"
}
]
- The session ID and generation number serve as the context data for the Answer. You can use this information to create a new conversation session via /api/rest/2.0/ai/agent/conversation/create, or download the Answer via the /api/rest/2.0/report/answer endpoint.
- The tokens and TML phrases returned in the response can be used as inputs for the search data API call to get an Answer.
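For example, the metadata of the answer message above carries the identifiers needed for the export call. The following is a minimal sketch; the report/answer request body shape shown here is an assumption, so confirm the exact fields in the REST API v2 Playground:

```python
def report_answer_payload(answer_msg):
    # session_id and generation_number come from the answer message metadata
    # shown above; the body shape is an assumption, not the documented schema.
    meta = answer_msg["metadata"]
    return {
        "session_identifier": meta["session_id"],
        "generation_number": meta["generation_number"],
    }

# Trimmed answer message from the example response:
answer_msg = {
    "id": "r24X7D99SROD",
    "type": "answer",
    "metadata": {
        "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89",
        "generation_number": 1,
    },
}
payload = report_answer_payload(answer_msg)
```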
Send a question to generate an Answer
To send a question or a follow-up question to an ongoing conversation session, send a POST request to the /api/rest/2.0/ai/conversation/{conversation_identifier}/converse API endpoint, with the conversation ID in the path and the query text in the request body.
Request parameters
| Parameter | Type | Description |
|---|---|---|
| conversation_identifier | Path parameter | String. Required. Specify the GUID of the conversation received from the create conversation API call. |
| metadata_identifier | Form parameter | String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request is used as the data source for the follow-up conversation. |
| message | Form parameter | String. Required. Specify a natural language query string. For example, Top performing products in the west coast. |
Example request
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/03f48527-b973-4efa-81fd-a8568a4f9e78/converse' \
-H 'Accept: application/json' \
-H 'Accept-Language: en-US' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"message": "Top performing products in the west coast"
}'
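The same request can be built in Python with the standard library. This sketch only constructs the request object without sending it; the host and token values are placeholders:

```python
import json
import urllib.request

host = "https://{ThoughtSpot-Host}"  # placeholder; replace with your instance URL
conversation_id = "03f48527-b973-4efa-81fd-a8568a4f9e78"

req = urllib.request.Request(
    url=f"{host}/api/rest/2.0/ai/conversation/{conversation_id}/converse",
    method="POST",
    headers={
        "Accept": "application/json",
        "Accept-Language": "en-US",  # set explicitly when relying on browser locale
        "Content-Type": "application/json",
        "Authorization": "Bearer {AUTH_TOKEN}",  # placeholder
    },
    data=json.dumps({
        "metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
        "message": "Top performing products in the west coast",
    }).encode(),
)
# urllib.request.urlopen(req) would send the request; omitted here.
```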
API response
If the API request is successful, the following data is sent in the API response:
- session_identifier: GUID of the Answer session.
- generation_number: Number assigned to the Answer session.
- message_type: Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).
- visualization_type: The data format of the generated Answer, for example, chart or table. When you download this Answer, the data is exported in the format indicated by the visualization_type.
- tokens: Tokens generated from the natural language search query string specified in the API request. You can use these tokens as input for query_string in your API request to /api/rest/2.0/searchdata and export the raw data of the query, or as input to POST /api/rest/2.0/ai/conversation/create to initiate a new conversation with a new context.
|
Note
|
Note the session ID and generation number. To export the Answer generated from this conversation, send these attributes in the /api/rest/2.0/report/answer API request. |
[
{
"session_identifier": "1290f8bc-415a-4ecb-ae3b-e1daa593eb24",
"generation_number": 3,
"message_type": "TSAnswer",
"visualization_type": "Chart",
"tokens": "[sales], [state], [item type], [region] = [region].'west', sort by [sales] descending"
}
]
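The tokens string can feed the search data API directly. The following is a minimal sketch; the searchdata parameter names query_string and logical_table_identifier are assumptions, so verify them in the REST API v2 Playground:

```python
def searchdata_payload(converse_response, data_source_id):
    # Reuses the tokens from the first Answer in the converse response as the
    # search query string. Parameter names are assumptions, not confirmed schema.
    answer = converse_response[0]
    return {
        "query_string": answer["tokens"],
        "logical_table_identifier": data_source_id,
    }

# Example converse response from above:
response = [{
    "session_identifier": "1290f8bc-415a-4ecb-ae3b-e1daa593eb24",
    "generation_number": 3,
    "message_type": "TSAnswer",
    "visualization_type": "Chart",
    "tokens": "[sales], [state], [item type], [region] = [region].'west', sort by [sales] descending",
}]
payload = searchdata_payload(response, "cd252e5c-b552-49a8-821d-3eadaa049cca")
```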
Ask follow-up questions
The API retains the context of previous queries when you send follow-up questions. To verify this, you can send another API request with a follow-up question to drill down into the data.
Generate a single Answer
To generate an Answer from a natural language search query, send a POST request to the /api/rest/2.0/ai/answer/create API endpoint. In the request body, include the query and the data source ID.
Request parameters
| Form parameter | Description |
|---|---|
| query | String. Required. Specify a natural language query string. For example, Top performing products in the west coast. |
| metadata_identifier | String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request is used as the data source for the query. |
Example request
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/answer/create' \
-H 'Accept: application/json' \
-H 'Accept-Language: en-US' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"query": "Top performing products in the west coast",
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'
API response
If the API request is successful, the following data is sent in the API response:
- session_identifier: GUID of the Answer session.
- generation_number: Number assigned to the Answer session.
- message_type: Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).
- visualization_type: The data format of the generated Answer, for example, chart or table. When you download this Answer, the data is exported in the format indicated by the visualization_type.
- tokens: Tokens generated from the natural language search query string specified in the API request. You can use these tokens as input for query_string in your API request to /api/rest/2.0/searchdata and export the raw data of the query, or as input to POST /api/rest/2.0/ai/conversation/create to initiate a new conversation with a new context.
|
Note
|
Note the session ID and generation number. To export the result generated from this API call, send these attributes in the /api/rest/2.0/report/answer API request. |
[{
"session_identifier": "57784fa1-10fa-431d-8d82-a1657d627bbe",
"generation_number": 2,
"message_type": "TSAnswer",
"visualization_type": "Undefined",
"tokens": "[product], [region] = [region].'west', sort by [sales] descending"
}]
Process results generated from Spotter APIs
To generate an Answer using the data returned from the Spotter APIs, use the following options:
- Download the generated Answer using the session ID and generation number via the /api/rest/2.0/report/answer API endpoint.
- Use the tokens generated from Spotter API requests as query strings and generate an Answer via the /api/rest/2.0/searchdata API endpoint.
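The choice between the two options above can be expressed as a small helper. This sketch is illustrative only; the endpoint paths come from this page, while the selection logic itself is an assumption:

```python
def export_endpoint(result, prefer_raw_data=False):
    # Prefer searchdata when the caller wants raw query data and tokens exist;
    # otherwise fall back to downloading the rendered Answer.
    if prefer_raw_data and result.get("tokens"):
        return "/api/rest/2.0/searchdata"
    return "/api/rest/2.0/report/answer"

# Example Spotter API result from above:
result = {
    "session_identifier": "57784fa1-10fa-431d-8d82-a1657d627bbe",
    "generation_number": 2,
    "tokens": "[product], [region] = [region].'west', sort by [sales] descending",
}
```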
Additional resources
- See the REST API v2 Playground to verify the request and response workflows.
- For information about MCP tools, see MCP server integration.