Configuration

Learn how to configure mcp-agent using configuration files to control logging, execution, model providers, and MCP server connections.

Configuration Files

mcp-agent uses two configuration files:

mcp_agent.config.yaml: application settings, logging, and MCP server configurations.

mcp_agent.secrets.yaml: API keys and other sensitive values (this file should be gitignored).
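A minimal secrets file pairs provider sections with their API keys. The key names below mirror the provider sections later on this page; the exact fields depend on which providers you use, so treat this as a sketch:

```yaml
# mcp_agent.secrets.yaml -- keep this file out of version control
openai:
  api_key: "sk-..."

anthropic:
  api_key: "sk-ant-..."
```

mcp-agent merges these values with mcp_agent.config.yaml at load time, so the non-secret settings stay in the committed config file.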

Basic Configuration
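A minimal mcp_agent.config.yaml, assembled from the settings covered in the sections below:

```yaml
# mcp_agent.config.yaml
execution_engine: asyncio

logger:
  transports: [console]
  level: info

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]

openai:
  default_model: gpt-4o
```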

Configuration Reference

Execution Engine

Controls how mcp-agent executes workflows:
execution_engine: asyncio
Standard async execution for most use cases.

Logging Configuration
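The logger block controls where log output goes and how verbose it is. The fields shown here are the ones used throughout the examples on this page; the inline value lists are indicative, so check the configuration schema for the full set:

```yaml
logger:
  transports: [console, file]    # one or both of console and file
  level: info                    # e.g. debug, info, warning, error
  path: "logs/mcp-agent.jsonl"   # destination for the file transport
```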

MCP Server Configuration

Define MCP servers your agents can connect to:
mcp:
  servers:
    server_name:
      command: "command_to_run"
      args: ["arg1", "arg2"]
      description: "Optional description"

Common MCP Servers

Fetch Server

fetch:
  command: "uvx"
  args: ["mcp-server-fetch"]

Filesystem Server

filesystem:
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

SQLite Server

sqlite:
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-sqlite", "database.db"]

Git Server

git:
  command: "uvx"
  args: ["mcp-server-git", "--repository", "."]

Model Provider Configuration

OpenAI

openai:
  default_model: gpt-4o
  max_tokens: 4096
  temperature: 0.7

Anthropic

anthropic:
  default_model: claude-3-5-sonnet-20241022
  max_tokens: 4096
  temperature: 0.7

Azure OpenAI

Configure Azure OpenAI:
azure:
  default_model: gpt-4o-mini
  api_version: "2025-01-01-preview"
Use api_version: "2025-01-01-preview" for structured outputs support.

AWS Bedrock

bedrock:
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  region: us-east-1

Google Gemini

Configure Google Gemini with different authentication methods:
google:
  default_model: gemini-2.0-flash
  temperature: 0.7
  vertexai: false
Use the Gemini Developer API with your API key. Set vertexai: false (default).
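To authenticate through Vertex AI instead, set vertexai: true. The sketch below assumes the settings also accept Google Cloud project and location fields (the values shown are hypothetical; verify the field names against the configuration schema):

```yaml
google:
  default_model: gemini-2.0-flash
  vertexai: true
  project: "my-gcp-project"   # hypothetical project ID
  location: "us-central1"     # hypothetical region
```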

Advanced Configuration

mcp-agent validates configuration against a schema. Check the configuration schema for all available options.
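If your editor runs a YAML language server, you can point it at the schema to get inline validation and autocompletion while editing the config. The URL below assumes the schema published in the mcp-agent GitHub repository; adjust the path if it differs:

```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/lastmile-ai/mcp-agent/main/schema/mcp-agent.config.schema.json
execution_engine: asyncio
```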

Configuration Examples

Basic Web Agent

execution_engine: asyncio
logger:
  transports: [console]
  level: info

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]

openai:
  default_model: gpt-4o

File Processing Agent

execution_engine: asyncio
logger:
  transports: [file]
  level: info
  path: "logs/file-processor.jsonl"

mcp:
  servers:
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/data"]
    
    sqlite:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-sqlite", "results.db"]

anthropic:
  default_model: claude-3-5-sonnet-20241022

Multi-Provider Agent

execution_engine: asyncio
logger:
  transports: [console, file]
  level: info
  path: "logs/multi-provider.jsonl"

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

# Configure multiple providers
openai:
  default_model: gpt-4o

anthropic:
  default_model: claude-3-5-sonnet-20241022

azure:
  default_model: gpt-4o
  api_version: "2024-02-15-preview"

Troubleshooting

Next Steps