Published: April 12, 2026
As Large Language Models (LLMs) like Claude, GPT-4, and others continue to evolve, the need for standardized ways to connect these models with external data sources and tools has become increasingly apparent. When LLMs need to access specific information or perform actions outside their training data, developers have traditionally had to create custom integrations for each use case.
The Model Context Protocol (MCP) emerged as a solution to this challenge. An open standard initially developed by Anthropic and now supported by major players across the AI industry, MCP provides a universal way to connect LLMs to data sources and tools, enabling more powerful and flexible AI applications.
Key Takeaway:
MCP standardizes how LLMs interact with external data and functionality, just as USB-C standardizes how devices connect to peripherals.
The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
MCP helps you build agents and complex workflows on top of LLMs by providing pre-built integrations, the flexibility to switch between model providers, and patterns for keeping data secure within your own infrastructure.
With MCP, developers can create server implementations that expose data and functionality to LLM applications in a secure, standardized way. These servers act as bridges between LLMs and specific data sources or toolsets, similar to how APIs connect different software systems.
At its core, MCP follows a client-server architecture with several key components working together:
- MCP hosts: applications like Claude Desktop, IDEs, or AI tools that want to access data through MCP
- MCP clients: protocol clients that maintain 1:1 connections with servers
- MCP servers: lightweight programs that expose specific capabilities through the standardized Model Context Protocol
- Data sources: local files, databases, and services, as well as remote APIs, that MCP servers can securely access
MCP's architecture consists of two main layers: a protocol layer that handles message framing, request/response linking, and high-level communication patterns, and a transport layer that carries the actual messages between clients and servers (for example, stdio for local processes or HTTP-based transports for remote connections).
All transports use JSON-RPC 2.0 to exchange messages. There are four main message types: requests, which expect a response; results, the successful replies to requests; errors, which signal that a request failed; and notifications, one-way messages that expect no reply.
The MCP connection follows a defined lifecycle. The client opens the handshake with an initialize request carrying its protocol version and capabilities; the server responds with its own version and capabilities; and the client acknowledges with an initialized notification. Normal message exchange then proceeds until either side terminates the connection.
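For illustration, the opening initialize request looks roughly like the following on the wire (the field values here are examples; consult the current MCP specification for the exact schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```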
MCP provides numerous advantages for AI developers and users:
- Standardization: provides a consistent way to specify tools (functions) across any AI system, creating a universal, plug-and-play format.
- Data control: data stays within your infrastructure, giving you complete control over security and access patterns.
- Provider flexibility: switch between different LLM providers without changing your data integration code.
- Pre-built integrations: access a growing library of ready-made integrations for common data sources and tools.
- Reusability: use existing MCP servers instead of building custom integrations for each data source.
- Future compatibility: as the protocol evolves, your existing integrations will continue to work with newer models.
MCP defines three main types of capabilities that servers can expose to clients:
Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects.
Resources are identified by URIs that follow a specific scheme, such as `file://path/to/document.txt` or `database://table/record`, and they represent data that can be loaded into the LLM's context.
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

@mcp.resource("config://app")
def get_config() -> str:
    """Static configuration data"""
    return "App configuration here"

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Dynamic user data"""
    return f"Profile data for user {user_id}"
```
When an LLM needs information, it can request these resources through the MCP client. The server handles retrieving the data and returning it in a format the LLM can understand.
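To make this concrete, here is a minimal client-side sketch using the `mcp` Python SDK to launch the server above over stdio and read one of its resources (the file name `server.py` is an assumption for this example):

```python
# Minimal client sketch: spawn the server over stdio and read a resource.
# Assumes the resource server above is saved as server.py.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # perform the MCP handshake
            result = await session.read_resource("config://app")
            print(result)

asyncio.run(main())
```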
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects: the LLM invokes them to calculate values, fetch real-time data, or modify external systems. They are similar to POST endpoints in a REST API.
```python
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate BMI given weight in kg and height in meters"""
    return weight_kg / (height_m**2)

@mcp.tool()
async def fetch_weather(city: str) -> str:
    """Fetch current weather for a city"""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.weather.com/{city}")
        return response.text
```
Tools allow the LLM to extend its capabilities beyond simply processing and generating text, enabling it to interact with external systems and perform dynamic calculations.
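From the client side, invoking a tool is a single call on an initialized session. A brief sketch, reusing the `ClientSession` setup from the resource example above (argument values are illustrative):

```python
# Call the calculate_bmi tool from an initialized ClientSession.
result = await session.call_tool(
    "calculate_bmi",
    arguments={"weight_kg": 70.0, "height_m": 1.75},
)
print(result)
```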
Prompts are reusable, predefined templates that help LLMs interact with your server effectively. They structure interactions and ensure consistency in how the LLM handles certain types of requests.
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base

mcp = FastMCP("My App")

@mcp.prompt()
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"

@mcp.prompt()
def debug_error(error: str) -> list[base.Message]:
    return [
        base.UserMessage("I'm seeing this error:"),
        base.UserMessage(error),
        base.AssistantMessage("I'll help debug that. What have you tried so far?"),
    ]
```
Prompts can be simple strings or complex message sequences that guide the LLM in specific tasks, ensuring consistent and effective responses.
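Clients retrieve prompts by name, optionally supplying arguments. A short sketch under the same `ClientSession` setup as above (the error text is just an example):

```python
# Fetch the debug_error prompt with an example argument.
prompt = await session.get_prompt(
    "debug_error",
    arguments={"error": "TypeError: 'NoneType' object is not iterable"},
)
print(prompt.messages)
```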
Let's explore some practical examples of implementing MCP servers for different use cases.
Here's a simple example of a basic MCP server that provides both a resource and a tool:
```python
# server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")

# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()
```
This simple server provides:
- a tool named `add` that adds two numbers
- a dynamic greeting resource available at `greeting://NAME`
When an LLM connected to this server needs to add numbers or retrieve a greeting, it can use these capabilities through the MCP protocol.
Here's a more practical example of an MCP server that provides weather information:
```python
# weather.py
import httpx
from typing import Any, Dict

from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Weather")

# Helper functions for National Weather Service API
async def fetch_points(lat: float, lon: float) -> Dict[str, Any]:
    """Fetch grid points from NWS API"""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.weather.gov/points/{lat},{lon}",
            headers={"User-Agent": "MCP Weather Server Example"},
        )
        return response.json()

async def fetch_forecast(grid_id: str, grid_x: int, grid_y: int) -> Dict[str, Any]:
    """Fetch forecast data from NWS API"""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.weather.gov/gridpoints/{grid_id}/{grid_x},{grid_y}/forecast",
            headers={"User-Agent": "MCP Weather Server Example"},
        )
        return response.json()

async def fetch_alerts(state: str) -> Dict[str, Any]:
    """Fetch weather alerts for a state"""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.weather.gov/alerts/active?area={state}",
            headers={"User-Agent": "MCP Weather Server Example"},
        )
        return response.json()

# Tool implementations
@mcp.tool()
async def get_forecast(city: str, state: str) -> str:
    """Get the weather forecast for a location"""
    # In a real app, we would use a geocoding service here
    # For simplicity, using hardcoded values for Sacramento
    lat, lon = 38.5816, -121.4944

    # Get grid points
    points_data = await fetch_points(lat, lon)
    grid_id = points_data["properties"]["gridId"]
    grid_x = points_data["properties"]["gridX"]
    grid_y = points_data["properties"]["gridY"]

    # Get forecast
    forecast_data = await fetch_forecast(grid_id, grid_x, grid_y)
    periods = forecast_data["properties"]["periods"]

    # Format response
    result = f"Weather forecast for {city}, {state}:\n\n"
    for period in periods[:3]:
        result += f"{period['name']}: {period['temperature']}°{period['temperatureUnit']} - {period['shortForecast']}\n"
        result += f"Wind: {period['windSpeed']} {period['windDirection']}\n\n"
    return result

@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get active weather alerts for a state"""
    alerts_data = await fetch_alerts(state)
    features = alerts_data.get("features", [])
    if not features:
        return f"No active weather alerts for {state}."

    result = f"Active weather alerts for {state}:\n\n"
    for feature in features:
        props = feature["properties"]
        result += f"- {props['headline']}\n"
        result += f"  Severity: {props['severity']}\n"
        result += f"  Urgency: {props['urgency']}\n"
        result += f"  Areas: {props['areaDesc']}\n\n"
    return result

if __name__ == "__main__":
    mcp.run()
```
This weather server provides tools to:
- get the weather forecast for a location (`get_forecast`)
- get active weather alerts for a US state (`get_alerts`)
An LLM connected to this server can now answer questions about weather without needing direct API access itself.
Here's an example of an MCP server that provides access to a SQLite database:
```python
# database.py
import sqlite3

from mcp.server.fastmcp import FastMCP, Context

# Create an MCP server
mcp = FastMCP("SQLite Explorer")

# Helper function to ensure safe queries
def is_safe_query(query: str) -> bool:
    """Simple check to prevent dangerous queries"""
    query = query.lower()
    return not any(keyword in query for keyword in [
        "drop", "delete", "update", "insert", "alter", "attach"
    ])

# Expose the database schema as a resource
@mcp.resource("schema://main")
def get_schema() -> str:
    """Provide the database schema as a resource"""
    conn = sqlite3.connect("example.db")
    try:
        cursor = conn.cursor()
        tables = cursor.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()

        schema = []
        for table in tables:
            table_name = table[0]
            columns = cursor.execute(f"PRAGMA table_info({table_name})").fetchall()
            schema.append(f"Table: {table_name}")
            for col in columns:
                schema.append(f"  - {col[1]} ({col[2]})")
            schema.append("")
        return "\n".join(schema)
    finally:
        conn.close()

# Provide a tool to execute read-only queries
@mcp.tool()
def query_data(sql: str, ctx: Context) -> str:
    """Execute read-only SQL queries"""
    if not is_safe_query(sql):
        return "Error: Only SELECT queries are allowed for security reasons."

    conn = sqlite3.connect("example.db")
    try:
        cursor = conn.cursor()
        ctx.info(f"Executing query: {sql}")
        result = cursor.execute(sql).fetchall()
        if not result:
            return "Query executed successfully but returned no results."

        # Get column names
        column_names = [description[0] for description in cursor.description]

        # Format as text table
        output = [" | ".join(column_names)]
        output.append("-" * len(output[0]))
        for row in result:
            output.append(" | ".join(str(cell) for cell in row))
        return "\n".join(output)
    except Exception as e:
        return f"Error executing query: {str(e)}"
    finally:
        conn.close()

if __name__ == "__main__":
    # Create a sample database if it doesn't exist
    conn = sqlite3.connect("example.db")
    cursor = conn.cursor()
    cursor.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    cursor.execute("CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

    # Add sample data
    cursor.execute("INSERT OR IGNORE INTO users VALUES (1, 'Alice', 'alice@example.com')")
    cursor.execute("INSERT OR IGNORE INTO users VALUES (2, 'Bob', 'bob@example.com')")
    cursor.execute("INSERT OR IGNORE INTO products VALUES (1, 'Laptop', 999.99)")
    cursor.execute("INSERT OR IGNORE INTO products VALUES (2, 'Phone', 699.99)")
    conn.commit()
    conn.close()

    # Run the MCP server
    mcp.run()
```
This database server provides:
- a `schema://main` resource that exposes the database schema
- a `query_data` tool that executes read-only SQL queries, with a simple keyword filter to block destructive statements
With this server, an LLM can now analyze data in a SQLite database without direct database access.
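For instance, a client could combine the schema resource and the query tool in a single session. A hedged sketch, reusing the `ClientSession` pattern shown earlier:

```python
# Read the schema, then run a read-only query through the tool.
schema = await session.read_resource("schema://main")
rows = await session.call_tool(
    "query_data",
    arguments={"sql": "SELECT name, price FROM products"},
)
```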
Ready to get started with MCP? Here's how:
Install the MCP Python SDK using uv (recommended) or pip:
```bash
# Using uv
uv pip install mcp

# Using pip
pip install mcp
```
Build a simple echo server to test your setup:
```python
# echo.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Echo")

@mcp.tool()
def echo(message: str) -> str:
    """Echo back the provided message"""
    return f"Echo: {message}"

if __name__ == "__main__":
    mcp.run()
```
Test your server using the built-in MCP Inspector:
```bash
mcp dev echo.py
```
To use your server with Claude Desktop, add an entry for it under the `mcpServers` key of the configuration file:
```json
{
  "mcpServers": {
    "echo": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/your/project", "echo.py"]
    }
  }
}
```
Note: on macOS, the configuration file is typically located at `~/Library/Application Support/Claude/claude_desktop_config.json`.
When implementing MCP servers, security should be a top priority.
Important Security Warning:
When implementing MCP servers that access sensitive data or systems, always apply the principle of least privilege. Only expose the minimum necessary functionality and validate all inputs thoroughly.
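As a concrete example, a tool that touches the filesystem can validate every request against a sandbox directory. This is an illustrative sketch, not a hardened implementation; the server name and sandbox path are hypothetical:

```python
# Illustrative sketch: validate tool input against an allowed sandbox directory.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Files")
ALLOWED_ROOT = Path("/srv/shared").resolve()  # hypothetical sandbox root

@mcp.tool()
def read_file(relative_path: str) -> str:
    """Read a text file, but only from inside the allowed directory"""
    target = (ALLOWED_ROOT / relative_path).resolve()
    # Reject paths that escape the sandbox via ".." or absolute components
    if not target.is_relative_to(ALLOWED_ROOT):
        return "Error: path escapes the allowed directory."
    return target.read_text()
```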
The MCP ecosystem is growing rapidly, with many ready-to-use servers available for common use cases:
Official reference servers demonstrate core MCP features, including filesystem access, web content fetching, and persistent memory.
Many companies also publish official MCP servers for their own platforms, and a growing ecosystem of community-developed servers extends MCP's capabilities even further.
Additional resources: the official documentation at https://modelcontextprotocol.io and the open-source SDKs and example servers at https://github.com/modelcontextprotocol.
The Model Context Protocol represents a significant advancement in how LLMs interact with external data and tools. By providing a standardized way for AI models to access information and functionality, MCP enables more powerful, flexible, and secure AI applications.
Key takeaways from this article:
- MCP is an open standard that gives LLMs a uniform, secure way to reach external data and tools through a client-server architecture
- Servers expose three kinds of capabilities: resources for data, tools for actions, and prompts for reusable templates
- The Python SDK's FastMCP makes building a server a matter of a few decorated functions, testable with the MCP Inspector
- Because data stays within your infrastructure, you retain control over security and access
As the MCP ecosystem continues to grow, we can expect even more powerful integrations that extend what's possible with LLMs. By embracing this open standard, developers can create more capable AI applications while maintaining control over their data and infrastructure.
Ready to get started with MCP? Check out the official documentation and join the community discussions to learn more.