Overview

mcp-agent is a Python framework for building AI agents using the Model Context Protocol (MCP). It implements patterns from Anthropic’s Building Effective Agents guide and provides integration with any MCP-compatible server. The framework handles:
  • MCP server lifecycle management and connections
  • Agent workflow patterns (parallel, routing, orchestration, etc.)
  • Multi-agent coordination
  • Integration with multiple LLM providers
  • Deployment as MCP servers

Key Features

MCP Protocol Support

  • Connect to any MCP server via stdio, SSE, WebSocket, or HTTP (transport is configured per server, as sketched below)
  • Access tools, resources, prompts, and file system roots
  • Automatic tool discovery and integration
  • Sampling, elicitation, and notifications
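
Transport selection happens per server in mcp_agent.config.yaml. A minimal sketch: the fetch entry (stdio) matches the Configuration section below, while the remote entry's transport and url keys are assumptions about mcp-agent's server settings and may differ by version.

mcp:
  servers:
    fetch:
      command: "uvx"              # stdio: spawn a local server process
      args: ["mcp-server-fetch"]
    remote_tools:
      transport: "sse"            # SSE/HTTP: connect to an already-running server
      url: "http://localhost:8000/sse"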

Agent Patterns

Implementations of the patterns from Anthropic's research, including parallel fan-out/fan-in, routing, orchestrator-workers, and evaluator-optimizer; a parallel workflow is sketched below.
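
For example, a parallel fan-out/fan-in workflow is composed from plain Agents. A minimal sketch, assuming the ParallelLLM class from mcp_agent.workflows.parallel (names follow the project's examples; treat exact import paths as assumptions):

from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.parallel.parallel_llm import ParallelLLM

# Fan-out agents work on the task independently; the fan-in agent merges their outputs.
proofreader = Agent(name="proofreader", instruction="Check grammar and spelling.")
fact_checker = Agent(name="fact_checker", instruction="Verify factual claims.")
editor = Agent(name="editor", instruction="Merge both reviews into one report.")

parallel = ParallelLLM(
    fan_in_agent=editor,
    fan_out_agents=[proofreader, fact_checker],
    llm_factory=OpenAIAugmentedLLM,
)

# Inside an async context (e.g. within app.run()):
# result = await parallel.generate_str("Review this article: ...")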

Execution Engines

  • asyncio - In-memory execution for development and simple deployments
  • Temporal - Durable execution with automatic retries, pause/resume, and workflow history (see the configuration sketch after this list)
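
Switching engines is a configuration change rather than a code change. A minimal sketch, assuming a temporal settings block with standard Temporal connection fields (host, namespace, task_queue); treat the exact keys as assumptions:

execution_engine: temporal

temporal:
  host: "localhost:7233"
  namespace: "default"
  task_queue: "mcp-agent"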

LLM Support

Works with the following providers; the provider is chosen by the AugmentedLLM class you attach to an agent (sketched after this list):
  • OpenAI (GPT-4, GPT-4o)
  • Anthropic (Claude 3, Claude 3.5)
  • Google (Gemini)
  • Azure OpenAI
  • AWS Bedrock
  • Local models via Ollama
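
Because the provider is selected by the attached AugmentedLLM class, switching providers is a one-line change. A minimal, self-contained sketch, assuming an AnthropicAugmentedLLM counterpart to the OpenAI class used in the Quick Example below (treat the import path as an assumption):

import asyncio

from mcp_agent.agents.agent import Agent
from mcp_agent.app import MCPApp
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM

app = MCPApp(name="provider_switch")

async def main() -> None:
    async with app.run():
        agent = Agent(
            name="assistant",
            instruction="Answer questions using available tools",
            server_names=["fetch"],
        )
        async with agent:
            # Only the attached class changes; the agent definition stays the same.
            llm = await agent.attach_llm(AnthropicAugmentedLLM)
            print(await llm.generate_str("Summarize the Model Context Protocol"))

if __name__ == "__main__":
    asyncio.run(main())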

Quick Example

main.py
import asyncio
from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="example_agent")

@app.tool
async def research() -> str:
    """Research quantum computing using the agent's MCP server tools."""
    result = ""
    async with app.run() as mcp_agent_app:

        # Create agent with MCP server access
        agent = Agent(
            name="researcher",
            instruction="Research topics using available tools",
            server_names=["fetch", "filesystem"]
        )
        
        async with agent:
            # List available tools from MCP servers
            tools = await agent.list_tools()
            print(f"Available tools: {[t.name for t in tools.tools]}")
            
            # Attach LLM for interaction
            llm = await agent.attach_llm(OpenAIAugmentedLLM)
            
            # Agent can now use MCP server tools
            result = await llm.generate_str("Research quantum computing")
            print(result)
    return result

if __name__ == "__main__":
    asyncio.run(research())
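
Run the example with uv run main.py (or python main.py). It expects the fetch and filesystem MCP servers and an OpenAI API key to be configured as shown in the Configuration section below.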

Installation

Using uv (recommended):

uv add mcp-agent

Using pip:

pip install mcp-agent

Install with a specific LLM provider:

# OpenAI
uv add "mcp-agent[openai]"

# Anthropic
uv add "mcp-agent[anthropic]"

# All providers
uv add "mcp-agent[openai,anthropic,azure,bedrock,google]"

Configuration

mcp-agent uses two configuration files.

mcp_agent.config.yaml - Application configuration:
mcp_agent.config.yaml
execution_engine: asyncio  # or temporal
logger:
  transports: [console]
  level: info

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

openai:
  default_model: gpt-4o

mcp_agent.secrets.yaml - API keys and secrets:
mcp_agent.secrets.yaml
openai:
  api_key: "sk-..."

Project Structure

your-project/
├── agent.py               # Your agent code
├── mcp_agent.config.yaml  # Configuration
├── mcp_agent.secrets.yaml # API keys (gitignored)
└── logs/                  # Execution logs

Deployment Options

Local Development

Run agents locally with the asyncio execution engine for rapid development.

Production with Temporal

Use Temporal for durable execution, automatic retries, and workflow management.
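
With the Temporal engine, long-running agent logic is declared as a workflow that the framework can replay and resume. A minimal sketch, assuming the app.workflow / app.workflow_run decorators and Workflow / WorkflowResult base classes used in the project's Temporal examples (treat exact names and import paths as assumptions):

from mcp_agent.app import MCPApp
from mcp_agent.executor.workflow import Workflow, WorkflowResult

app = MCPApp(name="durable_agent")

@app.workflow
class ResearchWorkflow(Workflow[str]):
    @app.workflow_run
    async def run(self, topic: str) -> WorkflowResult[str]:
        # Steps executed here are recorded in Temporal's workflow history,
        # so a crashed or paused run can resume instead of starting over.
        return WorkflowResult(value=f"research notes on {topic}")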

As an MCP Server

Expose your agents as MCP servers that can be used by Claude Desktop, VS Code, or other MCP clients.
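
A sketch of serving an app over MCP, assuming the create_mcp_server_for_app helper used in the project's server examples (the module path, helper name, and run method are assumptions and may differ by version):

import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.server.app_server import create_mcp_server_for_app

app = MCPApp(name="example_agent")  # @app.tool functions become MCP tools

async def main() -> None:
    async with app.run():
        mcp_server = create_mcp_server_for_app(app)
        await mcp_server.run_stdio_async()  # serve over stdio for clients like Claude Desktop

if __name__ == "__main__":
    asyncio.run(main())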

mcp-agent cloud

Deploy agents to managed cloud infrastructure with one command.

Examples

The examples directory contains 30+ working examples:
  • Basic agents - Simple patterns and MCP server usage
  • Workflow patterns - All patterns from Anthropic’s guide
  • Integrations - Claude Desktop, Streamlit, Jupyter
  • MCP servers - Agents exposed as MCP servers
  • Temporal - Durable execution examples
