As AI moves forward each day, we hear about new concepts and innovations all the time. One of the major ones is the Model Context Protocol (MCP), launched by Anthropic in November 2024 and officially adopted by OpenAI in March 2025.
MCP Definition
You might be wondering, what exactly is MCP, and what is its role in the broader AI landscape?
Well, according to AWS, the Model Context Protocol is an open standard that creates a universal language for AI systems to communicate with external data sources, tools, and services. You can think of it as a USB-C port for AI applications, as it provides a standardized way to connect AI models to different tools and data resources.
Parts of MCP
When talking about MCP’s architecture, we can break it down into three main parts:
- MCP Hosts
- MCP Clients
- MCP Servers
These parts work together to provide seamless communication between AI models and the tools or data available to them. They follow a hierarchical structure and exchange JSON-RPC 2.0 messages.
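As a rough illustration of what those JSON-RPC 2.0 messages look like, here is a minimal Python sketch. The method name follows the protocol's `tools/call` convention, but treat the exact payload shape (and the `search_files` tool) as an assumption made for illustration rather than a definitive wire format.

```python
import json

# A minimal JSON-RPC 2.0 request, roughly as an MCP client might send it.
# `search_files` and its arguments are hypothetical; the point here is the
# envelope: a jsonrpc version, an id, a method, and params.
request = {
    "jsonrpc": "2.0",        # protocol version, always "2.0"
    "id": 1,                 # lets the client match the response to this request
    "method": "tools/call",  # ask the server to execute a declared tool
    "params": {
        "name": "search_files",
        "arguments": {"query": "quarterly report"},
    },
}

print(json.dumps(request, indent=2))
```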
MCP Hosts
MCP Hosts are applications that integrate AI and need access to external data. This is essentially any application that integrates a Large Language Model (LLM), such as Claude Desktop or Cursor. The host component contains the orchestration logic and can connect each client to a server.
MCP Clients
MCP Clients reside within the hosts and act as the links that maintain secure connections between hosts and servers. To achieve isolation, each client is dedicated to a specific server (a 1:1 connection).
MCP Servers
MCP Servers are the external programs that provide specific functionality and connect to various sources like Google Drive, Slack, GitHub, or databases. They are versatile and can expose both internal and external resources and tools.
The MCP Communication Flow
The MCP communication flow has four defined stages (a minimal server-side sketch follows the list):
- Discovery Phase: The host identifies which MCP servers are available in its environment.
- Declaration: The MCP servers declare their available functionalities—tools, resources, and prompts.
- Request: The AI requests to use a specific tool. This happens when the user asks a question that requires external data.
- Execution and Return: The server executes the requested action (e.g., web searching) and returns the results to the AI, which then proceeds to give the end-user a final response.
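To make the declaration and execution stages more concrete, here is a minimal sketch of an MCP server that declares a single tool, assuming the FastMCP helper from the official MCP Python SDK (the `mcp` package); the server name and the tool are hypothetical, and import paths or decorator signatures may differ between SDK versions.

```python
# A minimal MCP server sketch, assuming the FastMCP helper from the
# official MCP Python SDK; the server name and tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short weather summary for a city (dummy data for this sketch)."""
    return f"Forecast for {city}: sunny, 24°C"

if __name__ == "__main__":
    # Serve over stdio so a host such as Claude Desktop can connect a client to it.
    mcp.run()
```

Once this server is running, a host can discover it, see `get_forecast` in its declaration, and request it whenever a prompt needs weather data.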
Usage Example: Business Analysis
MCP can be powerfully used in a business or data analysis context, connecting to internal company data.
Let's imagine you are a business analyst using a Claude Desktop application to query sales data from your company's internal database. Your goal is to find the total sales revenue for the last quarter without writing any SQL code.
| MCP Component | Example in Scenario | Role |
|---|---|---|
| MCP Host | Claude Desktop application | The application where the user interacts. |
| MCP Client | Dedicated "Company Analytics Client" | Acts as the link to the specific server. |
| MCP Server | IT department server with read-only access to the sales database | Provides the external functionality and data access. |
The Flow:
- Prompt: You write, "What was our total sales revenue for Q1 2025?".
- Host Action: The chat application (the Host) recognizes that this question requires internal data, identifies the "Company Analytics" tool as the right one for the job, and activates its corresponding MCP Client.
- Client Request: The Client then sends a request to the MCP Server, specifying the tool (`query_sales_data`) and the parameters it has extracted from the prompt (timeframe: "Q1 2025").
- Server Action: The Server receives the request and translates it into a precise SQL query, for example:

  ```sql
  SELECT SUM(revenue) FROM sales WHERE date BETWEEN '2025-01-01' AND '2025-03-31';
  ```

  It then runs this query against the company's sales database (a hypothetical server-side sketch follows this flow).
- Return & Response: Finally, in the execution and return stage, the database returns a single value (e.g., $5,450,000). The Server packages this result and sends it back to the Client, and the Host uses it to formulate a natural language response for the user: "The total sales revenue for Q1 2025 was $5,450,000."
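To make the server side of this scenario concrete, here is a hedged sketch of the tool the IT department's server might expose. The function name `query_sales_data`, the table schema, and the dummy figures are assumptions chosen to match the example; a real server would register this function as an MCP tool over read-only access to the actual sales database.

```python
# Hypothetical sketch of the `query_sales_data` tool from the scenario above.
# The schema and dummy data are assumptions for illustration only.
import sqlite3

# Map a human-friendly timeframe (as extracted from the prompt) to date bounds.
QUARTERS = {
    "Q1 2025": ("2025-01-01", "2025-03-31"),
    "Q2 2025": ("2025-04-01", "2025-06-30"),
}

def query_sales_data(conn: sqlite3.Connection, timeframe: str) -> float:
    """Translate a timeframe like 'Q1 2025' into SQL and return total revenue."""
    start, end = QUARTERS[timeframe]
    row = conn.execute(
        "SELECT SUM(revenue) FROM sales WHERE date BETWEEN ? AND ?",
        (start, end),
    ).fetchone()
    return float(row[0] or 0.0)

if __name__ == "__main__":
    # In-memory stand-in for the company's sales database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (date TEXT, revenue REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("2025-01-15", 2000000.0), ("2025-02-20", 1950000.0), ("2025-03-10", 1500000.0)],
    )
    print(query_sales_data(conn, "Q1 2025"))  # -> 5450000.0
```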
Who is MCP for?
While anyone can use MCP, certain professional groups benefit significantly from it:
- Enterprise Developers: Those building AI assistants that need to connect with internal software, databases, and CRMs. MCP saves time by replacing custom integrations for every tool with a single standard approach.
- AI Application Builders: Developers of AI code editors and similar applications (the Host part of MCP). MCP provides a much faster solution for integrating many different tools than building each one individually.
- SaaS Companies and Tool Providers (e.g., Slack, GitHub): These companies want their services available to AI agents. By building an MCP server, they make their platform's functionalities available to any MCP-compliant application, keeping them relevant in the new age of AI agents.
Pros and Cons
Pros: Solving the M x N Problem
The biggest pro is that MCP solves the M x N problem. Before MCP, developers had to write a custom integration for every combination of an AI model ($\mathbf{M}$) and an external tool ($\mathbf{N}$), i.e., $\mathbf{M} \times \mathbf{N}$ integrations in total.
MCP transforms this into an $\mathbf{M} + \mathbf{N}$ problem: you only have to build one implementation per client and one per server. For example, 5 models and 8 tools would require 40 custom integrations, but only 13 MCP implementations (5 clients + 8 servers). This leads to:
- Significantly less time spent on development.
- Better context for LLMs, as tools can be called dynamically and access more relevant, up-to-date external data instead of only static training data.
Cons: Security Risks
As a new protocol, MCP raises several security questions:
- Credential Exposure: MCP servers connect to external systems, often via API keys. Even when a server runs locally, this adds another place where credentials are stored and can potentially be exposed.
- Prompt Injection: A malicious MCP server could contain prompts that make a coding agent write insecure code or, worse, make an agent perform modifications on a database without a user's permission.
Mitigation Steps
To protect yourself, it is strongly recommended to:
- Log all application prompts and monitor the keys exposed in MCP configuration files (a small scanning sketch follows this list).
- Establish governance procedures for new MCP servers, including an approval process, security reviews, and source verification, while maintaining an inventory of approved servers.
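As a sketch of that key-monitoring step, here is a small scan that flags configuration entries whose names suggest a credential stored in plaintext. The example configuration and the key patterns are hypothetical; in practice you would load your real MCP configuration file instead.

```python
# Flag MCP configuration entries that look like plaintext credentials.
# The example config and the patterns below are assumptions for illustration.
import re

SECRET_PATTERN = re.compile(r"(api[_-]?key|token|secret|password)", re.IGNORECASE)

def find_exposed_keys(config: dict, prefix: str = "") -> list[str]:
    """Return the paths of config entries whose names suggest plaintext credentials."""
    hits = []
    for key, value in config.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            hits.extend(find_exposed_keys(value, path))
        elif isinstance(value, str) and SECRET_PATTERN.search(key):
            hits.append(path)
    return hits

if __name__ == "__main__":
    example_config = {  # hypothetical MCP server configuration
        "mcpServers": {
            "github": {"command": "github-mcp", "api_key": "ghp_plaintext_value"},
            "slack": {"command": "slack-mcp", "token": "xoxb_plaintext_value"},
        }
    }
    for hit in find_exposed_keys(example_config):
        print(f"Possible plaintext credential at: {hit}")
```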
Conclusion
While the risks we've mentioned must be kept in mind, the rise of AI agents and their need to perform complex, multi-step actions with up-to-date data means we will be seeing much more of MCP. AI agents need a standardized way to work with tools and data, and the Model Context Protocol is a strong new standard that will likely play an important role in the next wave of AI.