
What is mcp-agent?

mcp-agent is a Python framework for building AI agents using the Model Context Protocol (MCP). It provides a simple, composable way to build effective agents by combining standardized MCP servers with proven workflow patterns.

Anatomy of an MCP Agent

The quickest way to internalise the stack is to walk through the basic finder agent. Each step maps directly to a core SDK concept:

1. Configure servers and models

mcp_agent.config.yaml
execution_engine: asyncio

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem"]

openai:
  default_model: gpt-4o-mini
This defines the MCP servers the agent can launch and call, and the default model it should use.

2. Bootstrap the application

main.py
from mcp_agent.app import MCPApp

app = MCPApp(name="finder_app")
MCPApp loads the config/secrets, prepares logging and tracing, and manages server connections.

3. Describe the agent

finder_agent.py
from mcp_agent.agents.agent import Agent

finder = Agent(
    name="finder",
    instruction="Fetch web pages or read files to answer questions.",
    server_names=["fetch", "filesystem"],
)
The agent couples instructions with the set of MCP servers it is allowed to use. When async with finder: runs, the agent initialises those connections via the app’s server registry.
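
You can verify that discovery step directly. A minimal sketch, assuming list_tools returns a standard MCP ListToolsResult:

async with finder:
    # Connections to "fetch" and "filesystem" are initialised here,
    # and their tools are aggregated onto the agent.
    tools = await finder.list_tools()
    print([tool.name for tool in tools.tools])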

4. Attach an augmented LLM

from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

async with finder:
    llm = await finder.attach_llm(OpenAIAugmentedLLM)
    response = await llm.generate_str("Summarise README.md")
The augmented LLM automatically surfaces the agent’s tools (fetch, read_text_file, etc.) during generation.

5. Run inside the app context

import asyncio

async def main():
    async with app.run():
        async with finder:
            llm = await finder.attach_llm(OpenAIAugmentedLLM)
            result = await llm.generate_str("List key files in this repo")
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
You gain uniform logging, token accounting, and graceful shutdown by executing inside app.run(). With these building blocks in place you can mix and match: swap models, add workflow decorators, run inside Temporal, or expose the whole app as an MCP server.
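
Swapping models, for example, is a one-line change. A minimal sketch, assuming an anthropic section (with its API key) exists in your config alongside openai:

from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM

async with finder:
    # Same agent and servers; only the LLM provider changes.
    llm = await finder.attach_llm(AnthropicAugmentedLLM)
    result = await llm.generate_str("List key files in this repo")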

Core Architecture

mcp-agent consists of four main layers:

MCP Integration

Connect to any MCP server and automatically discover tools, resources, and prompts

Agent Layer

Agents that combine instructions with MCP server capabilities

LLM Integration

Augmented LLMs that can use tools and maintain conversation context

Workflow Patterns

Composable patterns for orchestrating agents and tasks

Key Components

MCPApp

The MCPApp is the central application context that manages configuration, logging, and server connections:
from mcp_agent.app import MCPApp

app = MCPApp(name="my_agent_app")

# Use as context manager
async with app.run() as mcp_agent_app:
    logger = mcp_agent_app.logger
    # Your agent code here
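
The app handle also exposes the loaded configuration through its context, which is useful for adjusting server settings at runtime. A short sketch following the pattern in mcp-agent's README; treat the exact attribute path as an assumption if your config differs:

import os

async with app.run() as mcp_agent_app:
    context = mcp_agent_app.context
    # Grant the filesystem server access to the current working directory.
    context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])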
Learn more about MCPApp →

Agents

Agents are entities with specific instructions and access to MCP servers:
from mcp_agent.agents.agent import Agent

agent = Agent(
    name="researcher",
    instruction="Research topics using web and filesystem access",
    server_names=["fetch", "filesystem"]
)

async with agent:
    # Agent automatically connects to servers and discovers tools
    tools = await agent.list_tools()
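
Beyond listing tools, an agent can invoke them by name. A hedged sketch: call_tool is inherited from the agent's MCP aggregator, and the tool name and arguments below are illustrative, depending on which servers you configured:

async with agent:
    # Arguments must match the tool's declared input schema.
    result = await agent.call_tool(
        name="fetch",
        arguments={"url": "https://modelcontextprotocol.io"},
    )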
Learn more about Agents →

AugmentedLLM

AugmentedLLMs are LLMs enhanced with tools from MCP servers:
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

async with agent:
    llm = await agent.attach_llm(OpenAIAugmentedLLM)

    # LLM can now use tools from connected MCP servers
    result = await llm.generate_str("Research quantum computing")
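
Because the augmented LLM maintains conversation context, successive calls on the same instance build on earlier turns. A minimal sketch using only the generate_str API shown above:

async with agent:
    llm = await agent.attach_llm(OpenAIAugmentedLLM)

    # The first turn gathers material via tools; the follow-up reuses it.
    await llm.generate_str("Research quantum computing")
    summary = await llm.generate_str("Summarise your findings in three bullet points")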
Learn more about AugmentedLLM →

MCP Servers

MCP servers provide tools, resources, and other capabilities to agents:
mcp_agent.config.yaml
mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
Learn more about MCP Servers →

Workflows

Workflows are composable patterns for orchestrating agents:
from mcp_agent.workflows.parallel.parallel_llm import ParallelLLM

# grader, proofreader, fact_checker, and style_enforcer are Agent
# instances defined as in the Agents section above.
# Fan out to multiple agents in parallel, then fan results back in.
parallel = ParallelLLM(
    fan_in_agent=grader,
    fan_out_agents=[proofreader, fact_checker, style_enforcer],
    llm_factory=OpenAIAugmentedLLM,
)

result = await parallel.generate_str("Review this essay...")
Learn more about Workflows →

Execution Engines

Execution engines determine how workflows run:
  • asyncio: In-memory execution for development
  • Temporal: Durable execution with pause/resume capabilities
mcp_agent.config.yaml
execution_engine: temporal  # or asyncio
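
Under Temporal, work is typically wrapped in a workflow class so it can be paused, resumed, and retried durably. A hedged sketch: the @app.workflow and @app.workflow_run decorators and the Workflow/WorkflowResult types follow mcp-agent's Temporal examples, but treat the exact signatures as assumptions:

from mcp_agent.executor.workflow import Workflow, WorkflowResult

@app.workflow
class FinderWorkflow(Workflow[str]):
    @app.workflow_run
    async def run(self, input: str) -> WorkflowResult[str]:
        async with finder:
            llm = await finder.attach_llm(OpenAIAugmentedLLM)
            result = await llm.generate_str(input)
        return WorkflowResult(value=result)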
Learn more about Execution Engines →

Workflow Patterns

mcp-agent implements all patterns from Anthropic’s Building Effective Agents: prompt chaining, routing, parallelization, orchestrator-workers, and evaluator-optimizer.

Model Context Protocol

mcp-agent provides full support for MCP capabilities:

Tools

Execute functions and produce side effects

Resources

Access data and load context

Prompts

Reusable templates for interactions

Sampling

Request LLM completions from clients
Learn more about MCP Support →

Next Steps
