Legacy MCP Server architecture and tools

The legacy MCP server setup uses a synchronous architecture. It does not support advanced analytics, session context retention, automatic data source selection, or streaming responses; the full answer is always returned inline. All context and data source management must be handled by the client on every request.

Functional capabilities

The legacy server provides only limited capabilities for complex analysis and context integration.

When to use

It is recommended only for maintaining existing integrations, not for new development or advanced use cases.

Integration pattern and session model

The integration pattern is synchronous and stateless. Each tool call is independent and there is no persistent session. Any required context must be manually injected with every follow-up call, as the server does not retain context between requests.
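The stateless pattern can be sketched as follows. This is a minimal illustration, not the server's actual client API: `call_tool` is a hypothetical stand-in for whatever MCP client transport is in use, and the `model` parameter and `sales-model-id` identifier are assumed for the example; only the tool names come from this guide.

```python
# Minimal sketch of the stateless call pattern: every request carries
# everything the server needs, because nothing persists between calls.

def call_tool(name, params):
    # Stub: a real client would send this over the MCP transport.
    return {"tool": name, "params": params}

# No session handshake, no token: the data source travels with each call.
datasource = "sales-model-id"  # hypothetical identifier

q1 = call_tool("getRelevantQuestions",
               {"query": "Total Jacket sales last year", "model": datasource})
q2 = call_tool("getAnswer",
               {"question": "Total Jacket sales last year", "model": datasource})

# Both requests are fully self-contained; omitting `model` from either
# would leave the server with no way to recover it.
```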

Data source selection

Data source suggestions are not built-in. A separate tool call (getDataSourceSuggestions) is required to retrieve possible data sources for each query.
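As a sketch of that extra round-trip, the exchange might look like the following. The response shape is illustrative only, modeled on the confidence scores and reasoning described later on this page; the exact field names are assumptions.

```python
# Illustrative shape of a getDataSourceSuggestions exchange. The stub
# stands in for the real server round-trip; field names are hypothetical.

def get_data_source_suggestions(query):
    return {
        "suggestions": [
            {"model": "Retail Sales", "confidence": 0.92,
             "reasoning": "Contains product and region columns"},
            {"model": "Finance Ledger", "confidence": 0.41,
             "reasoning": "Has revenue but no product hierarchy"},
        ]
    }

resp = get_data_source_suggestions(
    "Total sales of Jackets and Bags in the Northeast last year")

# The agent picks the highest-confidence candidate for subsequent calls.
best = max(resp["suggestions"], key=lambda s: s["confidence"])
```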

Response delivery

The server returns the full response in a single, synchronous call. There is no support for streaming or incremental updates.

Follow-up questions

Every follow-up call requires the prior context to be manually included, as the server does not maintain conversational state.

Legacy MCP architecture

Tool calls and workflow processing

The workflow in the legacy MCP Server setup typically includes the following stages:

  1. User asks a question
    A user sends a query in the chat interface to get data. For example, "What were the total sales of Jackets and Bags in the Northeast last year?"
    Optionally, the user can specify the data context to generate a response.

  2. Agent calls getDataSourceSuggestions (optional)
    If the user’s question doesn’t specify a data source, the agent can call getDataSourceSuggestions to retrieve a list of relevant ThoughtSpot data sources. ThoughtSpot returns candidate data sources (models) with confidence scores and reasoning.

  3. User’s query is decomposed into sub-questions
    To break the user’s query into sub-questions, the agent calls getRelevantQuestions. ThoughtSpot returns AI-suggested, schema-aware questions that are easier to answer analytically.

  4. The query is processed to generate answers
    For each suggested or chosen question, the agent calls getAnswer. ThoughtSpot returns the following:

    • Preview data for LLM reasoning.

    • Visualization metadata, including an embeddable frame_url.

    • session_identifier and generation_number for each chart, which are used as inputs for creating a Liveboard.

  5. A Liveboard is generated from the results (optional)
    To save answers from the conversation sessions in a ThoughtSpot Liveboard, the agent extracts the question, session_identifier, and generation_number from each getAnswer response and calls createLiveboard.
    ThoughtSpot creates a persistent Liveboard from the session’s answers and returns identifiers and a frame_url for the Liveboard.
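The five stages above can be sketched end to end. This is a stubbed illustration, not a real client: `call_tool` and every payload field except frame_url, session_identifier, and generation_number (which come from the descriptions above) are hypothetical.

```python
# End-to-end sketch of the legacy workflow with canned tool responses.

def call_tool(name, params):
    # Stub server: returns payloads shaped like the guide describes.
    if name == "getDataSourceSuggestions":
        return {"suggestions": [{"model": "Retail Sales", "confidence": 0.9}]}
    if name == "getRelevantQuestions":
        return {"questions": ["Total Jacket sales in Northeast last year",
                              "Total Bag sales in Northeast last year"]}
    if name == "getAnswer":
        return {"preview_data": [["Jackets", 120000]],
                "frame_url": "https://example.invalid/chart",
                "session_identifier": "sess-123",
                "generation_number": 1}
    if name == "createLiveboard":
        return {"liveboard_id": "lb-1",
                "frame_url": "https://example.invalid/liveboard"}

# 1-2. Pick a data source for the user's question.
query = ("What were the total sales of Jackets and Bags "
         "in the Northeast last year?")
source = call_tool("getDataSourceSuggestions",
                   {"query": query})["suggestions"][0]

# 3. Decompose the query into schema-aware sub-questions.
questions = call_tool("getRelevantQuestions",
                      {"query": query, "model": source["model"]})["questions"]

# 4. Answer each sub-question, collecting chart identifiers.
answers = [call_tool("getAnswer", {"question": q}) for q in questions]

# 5. Pin the session's answers to a persistent Liveboard.
liveboard = call_tool("createLiveboard", {"answers": [
    {"question": q,
     "session_identifier": a["session_identifier"],
     "generation_number": a["generation_number"]}
    for q, a in zip(questions, answers)]})
```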

In the legacy MCP Server setup, to ask a follow-up question the agent calls getRelevantQuestions again, because the server doesn’t retain context. The prior context must be passed explicitly via additionalContext.
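A follow-up round-trip might be sketched like this. The additionalContext parameter is named in this guide; the `call_tool` helper and the shape of the context string are assumptions for illustration.

```python
# Sketch of a follow-up call: the agent re-calls getRelevantQuestions and
# packs the prior exchange into additionalContext, since the server
# remembers nothing between requests.

def call_tool(name, params):
    # Stub standing in for the real MCP transport.
    return {"tool": name, "params": params}

prior = "Q: Total Jacket sales in the Northeast last year? A: 120,000"

follow_up = call_tool("getRelevantQuestions", {
    "query": "How does that compare to Bags?",
    "additionalContext": prior,  # manually carried conversational state
})
```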

For more information about the tool calls, input parameters, and response output, see MCP tool reference guide.

Additional resources

© 2026 ThoughtSpot Inc. All Rights Reserved.