Model Context Protocol (MCP)
In November 2024, Anthropic open-sourced something quietly revolutionary: a protocol that promised to do for AI tools what USB did for peripherals. Within months, MCP had been adopted by Claude Desktop, VS Code Copilot, Cursor, and dozens of other AI applications. Google's ADK added native support. The fragmented world of AI tooling was finally getting a common language.
This chapter explores MCP (Model Context Protocol), the emerging standard that lets you write a tool once and use it everywhere.
1. The Tool Portability Problem
As you build more tools for your agents, a painful pattern emerges:
# For Gemini (Google ADK)
def get_weather(city: str) -> dict:
    """Gets the weather for a city."""
    return {"temp": 25, "condition": "sunny"}

# For OpenAI
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Gets the weather for a city",
        "parameters": {...}
    }
}

# For Anthropic Claude
weather_tool = {
    "name": "get_weather",
    "description": "Gets the weather for a city",
    "input_schema": {...}  # Slightly different schema format!
}
You're writing the same tool three times. And when you want to use that database connector you built for your Gemini agent in Claude Desktop? You rewrite it again.
MCP solves this. Write a tool once as an MCP server, and any MCP-compatible client can use it.
2. What is MCP?
MCP (Model Context Protocol) is an open standard that defines how AI applications communicate with external tools and data sources. Think of it as:
- USB-C for AI tools: One connector, universal compatibility
- A client-server architecture: Tools are "servers," AI apps are "clients"
- Transport-agnostic: Works over stdio, HTTP, WebSocket
The Three Primitives
MCP servers can expose three types of capabilities:
| Primitive | Description | Example |
|---|---|---|
| Tools | Functions the AI can call | search_database, send_email, create_issue |
| Resources | Data the AI can read | Files, database records, API responses |
| Prompts | Reusable prompt templates | "Summarize this document", "Code review template" |
For agent engineering, Tools are the most important. They're what give your agent hands.
3. MCP Architecture
Server and Client
- MCP Server: A program that exposes tools, resources, or prompts. It runs as a separate process.
- MCP Client: An AI application (like Claude Desktop or your ADK agent) that connects to servers and uses their capabilities.
Transport Layers
MCP supports multiple ways for clients and servers to communicate:
| Transport | How It Works | Best For |
|---|---|---|
| Stdio | Communication over stdin/stdout | Local development, desktop apps |
| HTTP + SSE | HTTP requests, Server-Sent Events for streaming | Remote servers, cloud deployment |
| WebSocket | Bidirectional real-time connection | Low-latency applications |
For local development, stdio is simplestβthe client spawns the server as a subprocess and they communicate through pipes.
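Concretely, what flows through those pipes is newline-delimited JSON-RPC 2.0. The sketch below fakes the mechanics with a hypothetical one-request "server" spawned via `python -c`. It is not a real MCP implementation (the real protocol adds an initialization handshake and capability negotiation, which the SDK handles for you), but it shows the client-spawns-server, request-in, response-out shape.

```python
import json
import subprocess
import sys

# A toy stand-in "server": reads one JSON-RPC request from stdin and
# answers on stdout. Real MCP servers use the same newline-delimited
# JSON-RPC framing, with the full protocol layered on top.
SERVER_CODE = r"""
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"],
        "result": {"content": [{"type": "text", "text": "pong"}]}}
sys.stdout.write(json.dumps(resp) + "\n")
"""

def call_over_stdio(request: dict) -> dict:
    """Spawn the toy server as a subprocess and exchange one message over pipes."""
    proc = subprocess.Popen(
        [sys.executable, "-c", SERVER_CODE],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    out, _ = proc.communicate(json.dumps(request) + "\n")
    return json.loads(out)

reply = call_over_stdio({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "ping", "arguments": {}},
})
print(reply["result"]["content"][0]["text"])  # pong
```

The request shape here (method `tools/call` carrying a tool `name` and `arguments`) mirrors how an MCP client actually invokes a tool.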
4. Building Your First MCP Server
Let's build a practical MCP server: a Notes Manager that any AI client can use to create, list, and retrieve notes.
4.1 Setup
First, install the MCP Python SDK:
pip install mcp
4.2 The Server Code
# notes_server.py
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import json
from pathlib import Path
from datetime import datetime

# Persistent storage
NOTES_FILE = Path("notes.json")

def load_notes() -> dict:
    """Load notes from disk."""
    if NOTES_FILE.exists():
        return json.loads(NOTES_FILE.read_text())
    return {}

def save_notes(notes: dict):
    """Save notes to disk."""
    NOTES_FILE.write_text(json.dumps(notes, indent=2))

# Create the MCP server instance
server = Server("notes-server")

@server.list_tools()
async def list_tools() -> list[Tool]:
    """Declare the tools this server provides."""
    return [
        Tool(
            name="create_note",
            description="Creates a new note with a title and content. Use this when the user wants to save information for later.",
            inputSchema={
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "A short, descriptive title for the note"
                    },
                    "content": {
                        "type": "string",
                        "description": "The full content/body of the note"
                    }
                },
                "required": ["title", "content"]
            }
        ),
        Tool(
            name="list_notes",
            description="Lists all saved notes by title. Use this to see what notes exist.",
            inputSchema={
                "type": "object",
                "properties": {},
                "required": []
            }
        ),
        Tool(
            name="get_note",
            description="Retrieves the full content of a specific note by its title.",
            inputSchema={
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "The exact title of the note to retrieve"
                    }
                },
                "required": ["title"]
            }
        ),
        Tool(
            name="delete_note",
            description="Permanently deletes a note by title.",
            inputSchema={
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "The title of the note to delete"
                    }
                },
                "required": ["title"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    """Handle incoming tool calls."""
    notes = load_notes()

    if name == "create_note":
        title = arguments["title"]
        content = arguments["content"]
        notes[title] = {
            "content": content,
            "created_at": datetime.now().isoformat()
        }
        save_notes(notes)
        return [TextContent(
            type="text",
            text=f"✅ Note '{title}' created successfully."
        )]

    elif name == "list_notes":
        if not notes:
            return [TextContent(type="text", text="No notes found. Create one first!")]
        note_list = "\n".join(f"• {title}" for title in sorted(notes.keys()))
        return [TextContent(
            type="text",
            text=f"📝 Your notes ({len(notes)} total):\n{note_list}"
        )]

    elif name == "get_note":
        title = arguments["title"]
        if title in notes:
            note = notes[title]
            return [TextContent(
                type="text",
                text=f"# {title}\n\n{note['content']}\n\n---\n_Created: {note.get('created_at', 'unknown')}_"
            )]
        return [TextContent(
            type="text",
            text=f"❌ Note '{title}' not found. Use list_notes to see available notes."
        )]

    elif name == "delete_note":
        title = arguments["title"]
        if title in notes:
            del notes[title]
            save_notes(notes)
            return [TextContent(type="text", text=f"🗑️ Note '{title}' deleted.")]
        return [TextContent(type="text", text=f"❌ Note '{title}' not found.")]

    return [TextContent(type="text", text=f"Unknown tool: {name}")]

async def main():
    """Run the MCP server over stdio."""
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
4.3 Testing with Claude Desktop
The easiest way to test your MCP server is with Claude Desktop:
- Open Claude Desktop settings → Developer → Edit Config
- Add your server to claude_desktop_config.json:
{
  "mcpServers": {
    "notes": {
      "command": "python",
      "args": ["/absolute/path/to/notes_server.py"]
    }
  }
}
- Restart Claude Desktop. You'll see a tool icon indicating MCP tools are available.
- Try it out:
  - "Create a note titled 'Project Ideas' with my brainstorming"
  - "List all my notes"
  - "What's in the 'Project Ideas' note?"
When using stdio transport, the MCP client (Claude Desktop, your ADK agent) spawns the server as a subprocess. The server runs as long as the client needs it, then shuts down automatically.
5. Using MCP with Google ADK
Now let's connect our MCP server to a Google ADK agent using McpToolset.
5.1 Basic Integration
# agent.py
import os
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool import McpToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StdioConnectionParams
from mcp import StdioServerParameters

# Create an agent that uses our Notes MCP server
root_agent = LlmAgent(
    model="gemini-2.0-flash",
    name="notes_assistant",
    description="An assistant that can manage your notes.",
    instruction="""You are a helpful assistant that manages the user's notes.
Use the available tools to create, list, and retrieve notes.
When creating notes, use clear, descriptive titles.
When the user asks about their notes, list them first to see what's available.""",
    tools=[
        McpToolset(
            connection_params=StdioConnectionParams(
                server_params=StdioServerParameters(
                    command="python",
                    args=["/path/to/notes_server.py"]
                )
            ),
            # Optional: only expose specific tools
            # tool_filter=["create_note", "list_notes", "get_note"]
        )
    ]
)
5.2 Running with adk web
The simplest way to run your ADK agent:
- Create the project structure:
my_agent/
    __init__.py
    agent.py
    .env
- Add the init file:
# my_agent/__init__.py
from . import agent
- Launch the dev UI:
cd parent_of_my_agent
adk web
- Select your agent and start chatting!
5.3 Running Programmatically
For production use or testing:
import asyncio
import os
from dotenv import load_dotenv
from google.genai import types
from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.tools.mcp_tool import McpToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StdioConnectionParams
from mcp import StdioServerParameters

load_dotenv()

async def main():
    # Create the MCP toolset
    notes_toolset = McpToolset(
        connection_params=StdioConnectionParams(
            server_params=StdioServerParameters(
                command="python",
                args=["notes_server.py"]
            )
        )
    )

    # Create the agent
    agent = LlmAgent(
        model="gemini-2.0-flash",
        name="notes_assistant",
        instruction="You help users manage their notes.",
        tools=[notes_toolset]
    )

    # Set up session management
    session_service = InMemorySessionService()
    session = await session_service.create_session(
        app_name="notes_app",
        user_id="user_1"
    )

    # Create runner
    runner = Runner(
        agent=agent,
        app_name="notes_app",
        session_service=session_service
    )

    # Interact with the agent
    queries = [
        "Create a note titled 'MCP Tutorial' with content about what I learned today",
        "List all my notes",
        "What's in the MCP Tutorial note?"
    ]

    for query in queries:
        print(f"\n👤 User: {query}")
        content = types.Content(
            role="user",
            parts=[types.Part(text=query)]
        )

        # Use async for with run_async
        async for event in runner.run_async(
            user_id="user_1",
            session_id=session.id,
            new_message=content
        ):
            if event.content and event.content.parts:
                final_text = event.content.parts[0].text
                if final_text:
                    print(f"🤖 Agent: {final_text}")

    # Clean up
    await notes_toolset.close()

if __name__ == "__main__":
    asyncio.run(main())
6. Advanced MCP Patterns
6.1 Remote MCP Servers
For production, you often want MCP servers running on remote infrastructure. Use SSE (Server-Sent Events) transport:
from google.adk.tools.mcp_tool import McpToolset
from google.adk.tools.mcp_tool.mcp_session_manager import SseConnectionParams

# For HTTP+SSE remote servers
remote_toolset = McpToolset(
    connection_params=SseConnectionParams(
        url="https://your-mcp-server.com/sse",
        headers={"Authorization": "Bearer your-token"}
    )
)
6.2 Combining Multiple MCP Servers
One agent can connect to multiple MCP servers:
import os
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool import McpToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StdioConnectionParams
from mcp import StdioServerParameters

# Create an agent with multiple MCP servers
super_agent = LlmAgent(
    model="gemini-2.0-flash",
    name="super_assistant",
    instruction="You can manage notes, files, and GitHub repos.",
    tools=[
        # Notes server
        McpToolset(
            connection_params=StdioConnectionParams(
                server_params=StdioServerParameters(
                    command="python",
                    args=["notes_server.py"]
                )
            )
        ),
        # Filesystem server (community)
        McpToolset(
            connection_params=StdioConnectionParams(
                server_params=StdioServerParameters(
                    command="npx",
                    args=["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"]
                )
            ),
            tool_filter=["read_file", "list_directory"]  # Security: limit tools
        ),
        # GitHub server (community); it expects the token under
        # GITHUB_PERSONAL_ACCESS_TOKEN in its environment
        McpToolset(
            connection_params=StdioConnectionParams(
                server_params=StdioServerParameters(
                    command="npx",
                    args=["-y", "@modelcontextprotocol/server-github"],
                    env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ["GITHUB_TOKEN"]}
                )
            )
        )
    ]
)
6.3 Tool Filtering for Security
Always filter tools in production to limit what the agent can do:
McpToolset(
    connection_params=StdioConnectionParams(
        server_params=StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "/safe/directory"]
        )
    ),
    # Only allow read operations, not write/delete
    tool_filter=["read_file", "read_multiple_files", "list_directory", "search_files"]
)
MCP servers can execute arbitrary code and access sensitive resources. Before using any MCP server:
- Audit the source code of third-party servers before running them
- Use tool_filter to expose only the tools you actually need
- Restrict file paths to specific directories, never the entire filesystem
- Rotate credentials passed via environment variables regularly
- Run in sandboxes (containers, VMs) for untrusted servers
7. When NOT to Use MCP
MCP is powerful, but it's not always the right choice. Here's when to skip it:
7.1 Single-Framework Projects
If you're only building for one framework and have no plans to share tools:
# Just use native ADK tools - simpler, faster, less overhead
def get_weather(city: str) -> dict:
    """Gets the weather for a city."""
    return fetch_weather_api(city)

agent = LlmAgent(
    model="gemini-2.0-flash",
    tools=[get_weather]  # Direct function, no MCP needed
)
Why: MCP adds a process boundary, serialization overhead, and complexity. Native tools are faster and easier to debug.
7.2 High-Performance, Low-Latency Scenarios
MCP's stdio/HTTP transport adds latency:
| Approach | Typical Latency |
|---|---|
| Native function call | < 1ms |
| Stdio MCP (local) | 10-50ms |
| HTTP MCP (remote) | 50-200ms |
For real-time voice agents or high-frequency tool calls, this overhead matters.
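To get a rough feel for that gap on your own machine, compare an in-process call against a one-shot subprocess round-trip. This is an illustrative sketch only: a long-lived stdio server amortizes its spawn cost, so the subprocess number approximates launch overhead rather than steady-state per-call latency.

```python
import subprocess
import sys
import time

def native_tool(city: str) -> dict:
    """A plain in-process tool call."""
    return {"temp": 25, "condition": "sunny"}

def time_native(n: int = 1000) -> float:
    """Average milliseconds per native call."""
    start = time.perf_counter()
    for _ in range(n):
        native_tool("Paris")
    return (time.perf_counter() - start) * 1000 / n

def time_subprocess() -> float:
    """Milliseconds to spawn a fresh Python process and read one line back:
    a rough proxy for what a stdio process boundary costs at startup."""
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", "print('ok')"], capture_output=True)
    return (time.perf_counter() - start) * 1000

print(f"native:     {time_native():.4f} ms/call")
print(f"subprocess: {time_subprocess():.1f} ms")
```

On most machines the subprocess round-trip lands orders of magnitude above the native call, which is the whole argument for keeping hot-path tools in-process.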
7.3 Stateful Tools That Need Agent Context
MCP servers run as separate processes and don't have access to your agent's internal state:
# This WON'T work well with MCP - needs agent context
@server.call_tool()
async def tool_that_needs_context(name: str, args: dict):
    # ❌ Can't access: agent.session, agent.state, agent.memory
    # ❌ Can't access: user authentication, conversation history
    pass
If your tool needs deep integration with session state, memory, or auth context, keep it as a native ADK tool.
7.4 Simple Scripts or One-Off Tasks
For quick prototypes or scripts you'll run once:
# Don't build an MCP server for this
# Just call the API directly
import requests

def quick_lookup():
    response = requests.get("https://api.example.com/data")
    return response.json()
7.5 Decision Framework
Rule of thumb: Start with native tools. Graduate to MCP when you have a proven tool that multiple applications need to share.
8. The MCP Ecosystem
You don't have to build everything from scratch. The MCP ecosystem is growing rapidly.
8.1 Official & Community Servers
| Category | Servers |
|---|---|
| Databases | PostgreSQL, SQLite, MongoDB, Redis |
| SaaS | Slack, GitHub, Linear, Notion, Google Drive |
| Dev Tools | Git, Docker, Kubernetes, AWS |
| Utilities | Filesystem, Memory, Fetch (HTTP) |
| Browsers | Puppeteer, Playwright |
Browse available servers:
- MCP.run - Server registry
- GitHub: modelcontextprotocol - Official repos
8.2 Example: Using the Filesystem Server
# Install the filesystem MCP server
npx -y @modelcontextprotocol/server-filesystem /path/to/allowed/directory
Then in your ADK agent:
McpToolset(
    connection_params=StdioConnectionParams(
        server_params=StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", os.getcwd()]
        )
    )
)
Your agent can now read files, list directories, and search within the allowed path.
9. Project: Research Assistant with MCP
Let's build a research assistant that combines MCP tools for notes and web browsing:
import os
import asyncio
from dotenv import load_dotenv
from google.genai import types
from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.tools.google_search_tool import google_search
from google.adk.tools.mcp_tool import McpToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StdioConnectionParams
from mcp import StdioServerParameters

load_dotenv()

# Research assistant that can search and save findings
research_agent = LlmAgent(
    model="gemini-2.0-flash",
    name="research_assistant",
    description="A research assistant that can search the web and save notes.",
    instruction="""You are a research assistant. Your workflow:
1. When the user asks about a topic, search for information
2. Summarize your findings clearly
3. Offer to save important findings as notes for later reference
Be thorough in research but concise in summaries.""",
    tools=[
        google_search,  # Built-in web search
        McpToolset(
            connection_params=StdioConnectionParams(
                server_params=StdioServerParameters(
                    command="python",
                    args=["notes_server.py"]
                )
            )
        )
    ]
)

async def run_research_session():
    session_service = InMemorySessionService()
    session = await session_service.create_session(
        app_name="research_app",
        user_id="researcher"
    )

    runner = Runner(
        agent=research_agent,
        app_name="research_app",
        session_service=session_service
    )

    # Research session
    queries = [
        "Research the current state of fusion energy technology",
        "Save the key points as a note titled 'Fusion Energy 2026'",
        "List my research notes"
    ]

    for query in queries:
        print(f"\n👤 {query}")
        content = types.Content(role="user", parts=[types.Part(text=query)])

        async for event in runner.run_async(
            user_id="researcher",
            session_id=session.id,
            new_message=content
        ):
            if event.content and event.content.parts:
                text = event.content.parts[0].text
                if text:
                    print(f"💬 {text}")

if __name__ == "__main__":
    asyncio.run(run_research_session())
This project demonstrates:
- Hybrid tooling: Combining ADK built-in tools (google_search) with MCP servers
- Practical workflow: Research → Summarize → Save pattern
- MCP integration: Using McpToolset for persistent note storage
Summary
MCP is becoming the standard for AI tool portability. You learned:
- The Problem: Every AI platform has its own tool format; MCP provides universal compatibility
- The Architecture: Servers expose tools, clients consume them, connected via stdio/HTTP/WebSocket
- Building Servers: Use the mcp Python SDK with @server.list_tools() and @server.call_tool() decorators
- ADK Integration: Use McpToolset with StdioConnectionParams or SseConnectionParams
- The Ecosystem: Dozens of pre-built servers for databases, SaaS, dev tools, and more
- Security: Always use tool_filter to limit exposed capabilities in production
MCP is still young, but adoption is accelerating. Tools you build today will work with tomorrow's AI clients.
References
- Model Context Protocol β Official Documentation
- MCP Specification
- MCP Python SDK
- MCP Servers Registry
- Google ADK β MCP Tools
- Anthropic β Introducing MCP
Next Chapter: We'll explore RAG (Retrieval-Augmented Generation) and how to give your agent access to your private documents and databases, solving the knowledge cutoff problem.