MCPs
The Model Context Protocol (MCP) is an open standard developed by Anthropic that enables large language models (LLMs) to interact with external tools, systems, and data sources in a standardized way. Instead of writing custom logic to expose internal APIs to different AI assistants, you can define tools like `getCustomerById` using MCP. Any AI assistant that supports MCP can then discover and use this functionality without custom integrations.
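To make this concrete, here is a minimal sketch of what exposing such a tool can look like, assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`); the `customer-api` server name and the `fetchCustomer` helper are hypothetical stand-ins for your own API logic:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical stand-in for your existing internal API call.
async function fetchCustomer(customerId: string) {
  return { id: customerId, name: "Ada Lovelace" };
}

// Server metadata that clients see during capability negotiation.
const server = new McpServer({ name: "customer-api", version: "1.0.0" });

// Expose getCustomerById as an MCP tool. Any MCP-capable assistant can
// now discover and call it without a custom integration.
server.tool(
  "getCustomerById",
  "Look up a customer record by its ID",
  { customerId: z.string() },
  async ({ customerId }) => ({
    content: [{ type: "text", text: JSON.stringify(await fetchCustomer(customerId)) }],
  })
);

// Serve over stdio so a local host application can connect.
await server.connect(new StdioServerTransport());
```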
How MCP Works
MCP operates on a client-server architecture composed of the following components:
- Host Application (MCP Host): The application or environment where the user interacts with an AI. Examples include chat interfaces (Claude Desktop, ChatGPT), AI-enhanced IDEs, or a custom AI assistant app. The Host initiates connections to one or more MCP servers to extend the capabilities of the AI.
- MCP Client: A connector inside the Host app that manages the connection to MCP servers. It handles session setup, capability negotiation, request sending, and response handling.
- MCP Server: External programs or services that wrap access to tools or data sources and expose them through the MCP protocol. They listen for requests from the MCP client and respond with actions or data, integrating anything from cloud APIs to internal business systems.
liblab can automatically generate an MCP Server using the same OpenAPI spec it uses to automatically generate SDKs. LLM providers handle the responsibility of being the Host Application and MCP Client.
MCP User Flow
The following Mermaid diagram visually represents the flow between the Host Application, MCP Client, and MCP Server. In this diagram, a User sends a message to the Host Application. The Host Application determines whether it should use any of its predefined MCP Servers in its response to the User. If so, it sends a request through its MCP Client to the MCP Server. The MCP Server then performs the requested operation and sends a response with the result back to the MCP Client. Lastly, the Host Application takes that response and uses it when generating its reply to the user.
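```mermaid
sequenceDiagram
    participant User
    participant Host as Host Application
    participant Client as MCP Client
    participant Server as MCP Server

    User->>Host: Sends a message
    Host->>Host: Decides whether an MCP Server is needed
    Host->>Client: Requests the operation
    Client->>Server: Forwards the request
    Server->>Server: Performs the operation
    Server-->>Client: Returns the result
    Client-->>Host: Delivers the result
    Host-->>User: Responds using the result
```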
Protocol Basics
Model Context Protocol defines three core constructs that govern how AI systems interact with external functionality:
- Tools
- Resources
- Prompts
Tools (Model-controlled)
Tools are functions or operations that the AI can choose to call during a conversation. They often perform actions or retrieve specific information based on the user's query. Tools typically have side effects (e.g., updating a database) or require computation (e.g., retrieving and filtering information).
Tools are described to the AI in a standardized way, usually including the name, purpose, and input parameters. The MCP client translates these descriptions into the format expected by the AI model (e.g., JSON schema for OpenAI's function calling).
Examples of tools include:
- `searchDocuments(query)` – Send a `query` to a database and retrieve matching documents
- `sendEmail(recipient, content)` – Trigger an outbound email to a `recipient` that contains the provided `content`
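For instance, the `searchDocuments` tool above could be advertised to clients during capability discovery with a description shaped roughly like the following, a sketch of MCP's tool description format, which pairs a `name` and `description` with a JSON Schema `inputSchema`:

```typescript
// Roughly what an MCP client receives for searchDocuments when it lists
// the server's tools; the client translates this into the format the
// model expects (e.g., a function-calling schema).
const searchDocumentsDescription = {
  name: "searchDocuments",
  description: "Send a query to a database and retrieve matching documents",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Text to match against documents" },
    },
    required: ["query"],
  },
} as const;
```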
When you generate an MCP server using liblab, the available tools are automatically associated with the methods from the generated SDK.
Resources (Application-controlled)
Resources are structured pieces of information that the application (not the AI) decides to provide as context. They are read-only and are used to enhance the AI's understanding.
Resources do not have side effects. They are not called directly by the model but are injected into the conversation when relevant. For example, the Host might fetch a user's profile at the beginning of a chat session in order to have additional context about that user.
The following are examples of resources:
- Customer profile from a CRM
- List of support tickets
- Document content or knowledge base entries
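As a sketch of how the first example might be exposed, assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`); the `crm://` URI scheme, resource name, and profile data are illustrative:

```typescript
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "crm", version: "1.0.0" });

// A read-only resource: the host application (not the model) decides when
// to read a customer profile and inject it into the conversation as context.
server.resource(
  "customer-profile",
  new ResourceTemplate("crm://customers/{customerId}", { list: undefined }),
  async (uri, { customerId }) => ({
    contents: [
      {
        uri: uri.href,
        mimeType: "application/json",
        // Hypothetical lookup; replace with your CRM call.
        text: JSON.stringify({ id: customerId, name: "Ada Lovelace" }),
      },
    ],
  })
);
```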
Prompts (User-controlled)
Prompts are predefined templates or workflows that guide the AI's behavior. They are often selected by users or developers to establish a particular interaction style or task flow. Prompts may specify how to combine resources and tools in a repeatable way.
Prompts make it easier to reuse structured instructions. For example, a prompt like "summarize this document" can wrap the logic for using a document resource and calling a summarization tool, so the user doesn’t have to instruct the AI from scratch every time.
The following are examples of prompts:
- A Q&A workflow using a selected document
- A meeting note summarizer
- A guided task planner
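A minimal sketch of the "summarize this document" idea as a registered prompt, again assuming the official TypeScript SDK; the prompt name and wording are illustrative:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "docs-assistant", version: "1.0.0" });

// A user-selectable prompt that wraps a repeatable summarization workflow,
// so users don't have to instruct the AI from scratch each time.
server.prompt(
  "summarize-document",
  { docUri: z.string() },
  ({ docUri }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Read the document at ${docUri} and produce a concise summary.`,
        },
      },
    ],
  })
);
```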
MCP interaction flow
The following sequence diagram illustrates the interaction flow between the user, host application, MCP client, MCP server, SDK/business logic, and external API/data source:
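```mermaid
sequenceDiagram
    participant User
    participant Host as Host Application
    participant Client as MCP Client
    participant Server as MCP Server
    participant SDK as SDK / Business Logic
    participant API as External API / Data Source

    User->>Host: 1. User query
    Host->>Client: 2. Initialize connection
    Client->>Server: 3. Discover capabilities
    Server-->>Client: 4. List tools, resources, and prompts
    Client-->>Host: 5. Provide capabilities
    Host->>Client: 6. Request tool invocation
    Client->>Server: 7. Invoke tool with parameters
    Server->>SDK: 8. Call SDK method
    SDK->>API: 9. Send external API request
    API-->>SDK: 10. Return data
    SDK-->>Server: 11. Return result
    Server-->>Client: 12. Send response
    Client-->>Host: 13. Deliver result
    Host-->>User: 14. Present final output
```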
This diagram shows how AI models interact with external systems using MCP, from the initial user query through to external API calls and back. The sequence is described below:
- User Query: The user inputs a query into the host application (e.g., an IDE or chatbot) that requires external data or action.
- Connection Initialization: The host application initializes a connection with the MCP client.
- Capability Discovery: The MCP client queries the MCP server to discover available tools, resources, and prompts.
- Capability Listing: The MCP server responds with a list of capabilities.
- Capabilities Provided: The MCP client relays the available capabilities back to the host application.
- Tool Request: The host application requests the MCP client to invoke a specific tool.
- Tool Invocation: The MCP client sends the tool invocation request with parameters to the MCP server.
- SDK Method Call: The MCP server processes the request by calling the appropriate method in the SDK or business logic layer.
- External API Request: The SDK interacts with the external API or data source to perform the requested operation.
- Data Retrieval: The external API returns the requested data or results to the SDK.
- Result Returned: The SDK passes the data or result back to the MCP server.
- Server Response: The MCP server sends the response back to the MCP client.
- Client Delivery: The MCP client delivers the result to the host application.
- Final Output: The host application presents the final response to the user.
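Steps 2 through 7 look roughly like the following from the host application's side. This is a sketch assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`); the server command and tool name are illustrative:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "host-app", version: "1.0.0" });

// Step 2: initialize a connection, here to a local server spawned over stdio.
await client.connect(
  new StdioClientTransport({ command: "node", args: ["server.js"] })
);

// Steps 3-5: capability discovery; the server answers with its tool list.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Steps 6-7: invoke a specific tool with parameters.
const result = await client.callTool({
  name: "searchDocuments",
  arguments: { query: "refund policy" },
});
console.log(result.content);
```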
Benefits for Developers and Companies
Implementing MCP unlocks powerful new capabilities across a wide variety of AI use cases, offering both technical and strategic advantages:
- Standardized Integration: Developers no longer need to write plugins or integration logic for each AI tool and API. MCP provides a universal protocol that works across AI platforms.
- Enhanced AI Capabilities: AI assistants can interact with dynamic business data, such as documents, user profiles, codebases, or tickets, in real time. This leads to more accurate, helpful, and context-aware responses.
- Increased Developer Velocity: Teams can use off-the-shelf MCP servers for popular tools (Slack, GitHub, Jira, Google Drive) or create custom ones. Developers can focus on core business logic rather than reinventing integration layers.
- Cross-System Orchestration: A single AI can use multiple tools together. For example, a coding assistant might retrieve test results, reference the style guide, and refactor code in a single flow.
- Security and Control: MCP clients support permissioned access to tools and resources. Admins can limit scope, define granular permissions, and enforce user approvals for sensitive actions.
- Future-Proofed AI Stack: Because MCP is model-agnostic, the same integration can power multiple AI models. A company can easily switch between Claude, ChatGPT, Gemini, or others without changing how the AI talks to tools.
liblab's solution for MCP
liblab helps companies expose their APIs in a developer-friendly way by automatically generating fully documented SDKs from OpenAPI specifications. These SDKs make it easier for developers to integrate APIs without manual boilerplate or custom tooling.
Now, liblab extends this value by also generating MCP servers alongside the SDKs. Each method in the generated SDK becomes a corresponding MCP tool, allowing AI models to access and invoke those API methods using the MCP protocol. This means companies can go from OpenAPI spec to AI-ready interface in a single step.
By combining SDK generation with MCP support, liblab simplifies integration for developers and makes it easier for companies to expose their services to the new ecosystem of AI tools. It's a plug-and-play path to building intelligent, AI-connected APIs.
Next steps
Learn how to generate your own MCP server using liblab:
Once your server is ready, explore how to connect it to different AI code agents: