Configuration
Learn how to configure mcp-agent using configuration files to control logging, execution, model providers, and MCP server connections.
Configuration Files
mcp-agent uses two configuration files:

- `mcp_agent.config.yaml`: application settings, logging, and server configurations
- `mcp_agent.secrets.yaml`: API keys and sensitive information (should be gitignored)
Basic Configuration
Create config file

Create `mcp_agent.config.yaml` in your project root:

```yaml
execution_engine: asyncio
logger:
  transports: [console]
  level: info
mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
openai:
  default_model: gpt-4o
```
Create secrets file

Create `mcp_agent.secrets.yaml` for sensitive data:

```yaml
openai:
  api_key: "your-openai-api-key"
```

Add `mcp_agent.secrets.yaml` to your `.gitignore` file to avoid committing API keys.
Load configuration

mcp-agent automatically loads these files when you create an `MCPApp`:

```python
from mcp_agent.app import MCPApp

# Configuration is loaded automatically
app = MCPApp(name="my_agent")
```
Configuration Reference
Execution Engine
Controls how mcp-agent executes workflows:
```yaml
execution_engine: asyncio
```

Standard async execution for most use cases.
Logging Configuration
Console logging:

```yaml
logger:
  transports: [console]
  level: debug  # debug, info, warning, error
```

File logging:

```yaml
logger:
  transports: [file]
  level: info
  path: "logs/mcp-agent.jsonl"
```

Console and file logging:

```yaml
logger:
  transports: [console, file]
  level: info
  path: "logs/mcp-agent.jsonl"
```

Timestamped log files:

```yaml
logger:
  transports: [file]
  level: info
  path_settings:
    path_pattern: "logs/mcp-agent-{unique_id}.jsonl"
    unique_id: "timestamp"  # or "session_id"
    timestamp_format: "%Y%m%d_%H%M%S"
```
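With `unique_id: "timestamp"`, the `{unique_id}` placeholder in `path_pattern` is filled with the current time formatted by `timestamp_format`. A minimal sketch of how that resolution could work (an illustration, not the framework's actual implementation):

```python
from datetime import datetime

# Resolve a path_pattern the way the timestamp option suggests:
# {unique_id} is replaced with the current time in timestamp_format.
pattern = "logs/mcp-agent-{unique_id}.jsonl"
timestamp_format = "%Y%m%d_%H%M%S"

unique_id = datetime.now().strftime(timestamp_format)
log_path = pattern.format(unique_id=unique_id)
print(log_path)  # e.g. logs/mcp-agent-20250115_093042.jsonl
```

With `unique_id: "session_id"` the placeholder would instead be filled with a per-session identifier, giving one log file per run either way.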
MCP Server Configuration
Define MCP servers your agents can connect to:
```yaml
mcp:
  servers:
    server_name:
      command: "command_to_run"
      args: ["arg1", "arg2"]
      description: "Optional description"
```
Common MCP Servers
Fetch Server
```yaml
fetch:
  command: "uvx"
  args: ["mcp-server-fetch"]
```

Filesystem Server

```yaml
filesystem:
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
```

SQLite Server

```yaml
sqlite:
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-sqlite", "database.db"]
```

Git Server

```yaml
git:
  command: "uvx"
  args: ["mcp-server-git", "--repository", "."]
```
Model Provider Configuration
OpenAI
```yaml
openai:
  default_model: gpt-4o
  max_tokens: 4096
  temperature: 0.7
```
Anthropic
```yaml
anthropic:
  default_model: claude-3-5-sonnet-20241022
  max_tokens: 4096
  temperature: 0.7
```
Azure OpenAI
Configure Azure OpenAI with different endpoint types:
```yaml
azure:
  default_model: gpt-4o-mini
  api_version: "2025-01-01-preview"
```

Use `api_version: "2025-01-01-preview"` for structured outputs support.
AWS Bedrock
```yaml
bedrock:
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  region: us-east-1
```
Google Gemini
Configure Google Gemini with different authentication methods:
```yaml
google:
  default_model: gemini-2.0-flash
  temperature: 0.7
  vertexai: false
```

Use the Gemini Developer API with your API key. Set `vertexai: false` (the default).
Groq
Groq provides fast inference for open-source models through an OpenAI-compatible API:
```yaml
openai:
  base_url: "https://api.groq.com/openai/v1"
  default_model: llama-3.3-70b-versatile
```

Groq uses OpenAI-compatible endpoints. Popular models include `llama-3.3-70b-versatile`, `llama-4-maverick-17b-128e-instruct`, and `kimi-k2-instruct`.
Together AI
Together AI provides access to various open-source models through an OpenAI-compatible API:
```yaml
openai:
  base_url: "https://api.together.xyz/v1"
  default_model: meta-llama/Llama-3.3-70B-Instruct-Turbo
```
Ollama
Ollama provides local model inference with OpenAI-compatible endpoints:
```yaml
openai:
  base_url: "http://localhost:11434/v1"
  api_key: "ollama"  # Required but can be any value
  default_model: llama3.2:3b
```

Ollama runs locally and doesn't require a real API key. The framework includes a specialized `OllamaAugmentedLLM` for better integration.
Advanced Configuration
Temporal Configuration
Configure Temporal for durable workflow execution:
```yaml
execution_engine: temporal
temporal:
  host: "localhost:7233"  # Temporal server host
  namespace: "default"
  task_queue: "mcp-agent"
  max_concurrent_activities: 20
  timeout_seconds: 60
  tls: false  # Enable TLS for production
  api_key: "${TEMPORAL_API_KEY}"  # Optional API key
  id_reuse_policy: "allow_duplicate"  # Options: allow_duplicate, allow_duplicate_failed_only, reject_duplicate, terminate_if_running
  rpc_metadata:  # Optional metadata for RPC calls
    custom-header: "value"
```
Observability Configuration
Enable tracing with OpenTelemetry:
```yaml
otel:
  enabled: true
  service_name: "mcp-agent"
  service_version: "1.0.0"
  service_instance_id: "instance-1"
  sample_rate: 1.0  # Sample all traces
  # Multiple exporters can be configured
  exporters:
    - type: "console"  # Print to console
    - type: "file"
      path: "traces/mcp-agent.jsonl"
      path_settings:
        path_pattern: "traces/mcp-agent-{unique_id}.jsonl"
        unique_id: "timestamp"  # or "session_id"
        timestamp_format: "%Y%m%d_%H%M%S"
    - type: "otlp"
      endpoint: "http://localhost:4317"
      headers:
        Authorization: "Bearer ${OTEL_TOKEN}"
```
MCP Server Transport Options
mcp-agent supports multiple MCP server transport mechanisms:
```yaml
mcp:
  servers:
    filesystem:
      transport: stdio  # Default, can be omitted
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
      env:
        DEBUG: "true"
```
Standard input/output transport for local server processes.
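Conceptually, a stdio transport spawns the configured `command` as a child process and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. The sketch below illustrates that plumbing with a tiny Python echo loop standing in for a real MCP server (the echo child and the `ping` method are placeholders for illustration only):

```python
import json
import subprocess
import sys

# A trivial child that echoes each stdin line back on stdout,
# standing in for a real MCP server binary.
echo_server = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write(line)\n"
    "    sys.stdout.flush()\n"
)
child = subprocess.Popen(
    [sys.executable, "-c", echo_server],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Send one newline-delimited JSON message and read the reply,
# the same request/response shape a stdio transport carries.
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
child.stdin.write(json.dumps(request) + "\n")
child.stdin.flush()
response = json.loads(child.stdout.readline())

child.stdin.close()
child.wait()
```

Because the channel is just the child's pipes, the `env` block in the config is simply the environment the child process is started with.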
MCP Server Advanced Configuration
Complete configuration options for MCP servers:
```yaml
mcp:
  servers:
    advanced_server:
      # Basic configuration
      name: "Advanced Server"
      description: "Server with all options configured"

      # Transport configuration
      transport: "streamable_http"  # Options: stdio, sse, streamable_http, websocket
      url: "https://api.example.com/mcp"

      # Authentication (simple structure)
      auth:
        api_key: "${API_KEY}"

      # Timeout settings
      http_timeout_seconds: 30  # HTTP request timeout
      read_timeout_seconds: 120  # Event read timeout
      terminate_on_close: true  # For streamable HTTP

      # Headers for HTTP-based transports
      headers:
        Authorization: "Bearer ${API_TOKEN}"
        User-Agent: "mcp-agent/1.0"

      # Roots configuration (file system access)
      roots:
        - uri: "file:///workspace"
          name: "Workspace"
          server_uri_alias: "file:///data"  # Optional alias
        - uri: "file:///shared/resources"
          name: "Shared Resources"

      # Environment variables for stdio transport
      env:
        DEBUG: "true"
        LOG_LEVEL: "info"
```
Environment Variable Substitution
mcp-agent supports environment variable substitution using `${VARIABLE_NAME}` syntax:

```yaml
# Config file
openai:
  api_key: "${OPENAI_API_KEY}"  # Resolved from environment
  base_url: "${OPENAI_BASE_URL:-https://api.openai.com/v1}"  # With default
mcp:
  servers:
    database:
      url: "${DATABASE_URL}"
      headers:
        Authorization: "Bearer ${DB_TOKEN}"
temporal:
  host: "${TEMPORAL_SERVER_URL:-localhost:7233}"
  api_key: "${TEMPORAL_API_KEY}"
```

Use the `${VAR:-default}` syntax to provide fallback values when environment variables are not set.
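The expansion rules can be sketched in a few lines of Python. This is an illustrative reimplementation of the `${VAR}` / `${VAR:-default}` behavior described above, not the framework's actual resolver:

```python
import os
import re

# ${NAME} -> value of NAME; ${NAME:-default} -> value of NAME, or the
# default when NAME is unset. Unresolvable references are left intact.
_VAR = re.compile(r"\$\{(?P<name>[A-Za-z_][A-Za-z0-9_]*)(?::-(?P<default>[^}]*))?\}")

def substitute(value: str) -> str:
    def repl(match: re.Match) -> str:
        name, default = match.group("name"), match.group("default")
        if name in os.environ:
            return os.environ[name]
        return default if default is not None else match.group(0)
    return _VAR.sub(repl, value)

os.environ["DB_TOKEN"] = "abc123"
os.environ.pop("MISSING_HOST", None)
print(substitute("Bearer ${DB_TOKEN}"))               # Bearer abc123
print(substitute("${MISSING_HOST:-localhost:7233}"))  # localhost:7233
```

Note that an unset variable without a default is left as the literal `${...}` text here; the real resolver may instead raise an error or substitute an empty string.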
Secrets Management
Keep sensitive configuration in separate secrets files:
Create `mcp_agent.secrets.yaml` alongside your config:

```yaml
# mcp_agent.secrets.yaml
openai:
  api_key: "sk-..."
anthropic:
  api_key: "sk-ant-..."
temporal:
  api_key: "..."
```

Always add `mcp_agent.secrets.yaml` to your `.gitignore` file.
Subagent Configuration
Load subagents from Claude Code format or other sources:
```yaml
agents:
  enabled: true
  search_paths:
    - ".claude/agents"  # Project-level agents
    - "~/.claude/agents"  # User-level agents
    - ".mcp-agent/agents"  # MCP Agent specific
    - "~/.mcp-agent/agents"
  pattern: "**/*.*"  # Glob pattern for agent files
  definitions:  # Inline agent definitions
    - name: "code-reviewer"
      description: "Reviews code for best practices"
      instruction: "Review code and provide feedback"
```
Schema Validation
mcp-agent validates configuration against a schema. Check the configuration schema for all available options.
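For a quick programmatic sanity check before full schema validation, a lightweight sketch might verify a few structural rules from this page. The function and its rules are illustrative assumptions, a small subset of what the published schema enforces:

```python
# Illustrative pre-flight check over a parsed config dict (e.g. the
# result of yaml.safe_load on mcp_agent.config.yaml).
ALLOWED_ENGINES = {"asyncio", "temporal"}

def check_config(config: dict) -> list[str]:
    errors = []
    if config.get("execution_engine", "asyncio") not in ALLOWED_ENGINES:
        errors.append("execution_engine must be 'asyncio' or 'temporal'")
    transports = config.get("logger", {}).get("transports", [])
    if not isinstance(transports, list):
        errors.append("logger.transports must be a list")
    for name, spec in config.get("mcp", {}).get("servers", {}).items():
        if spec.get("transport", "stdio") == "stdio" and "command" not in spec:
            errors.append(f"server '{name}': stdio transport requires a command")
    return errors

config = {
    "execution_engine": "asyncio",
    "logger": {"transports": ["console"], "level": "info"},
    "mcp": {"servers": {"fetch": {"command": "uvx", "args": ["mcp-server-fetch"]}}},
}
print(check_config(config))  # []
```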
Deployment Scenarios
Development Environment
Local development configuration:
```yaml
# mcp_agent.config.yaml
execution_engine: asyncio
logger:
  transports: [console]
  level: debug
  progress_display: true  # Show progress indicators
mcp:
  servers:
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
      env:
        DEBUG: "true"
openai:
  default_model: gpt-4o-mini  # Use cheaper model for dev
  api_key: "${OPENAI_API_KEY}"

# Enable telemetry for debugging
usage_telemetry:
  enabled: true
  enable_detailed_telemetry: false  # Set to true for detailed debugging
```
Production Environment
Production deployment with Temporal and monitoring:
```yaml
# mcp_agent.config.yaml
execution_engine: temporal
temporal:
  host: "${TEMPORAL_SERVER_URL}"
  namespace: "production"
  task_queue: "mcp-agent-prod"
  max_concurrent_activities: 50
  timeout_seconds: 300
  tls: true
  api_key: "${TEMPORAL_API_KEY}"
logger:
  transports: [file]
  level: info
  path: "/var/log/mcp-agent/app.jsonl"
  path_settings:
    path_pattern: "logs/mcp-agent-{unique_id}.jsonl"
    unique_id: "timestamp"
    timestamp_format: "%Y%m%d"
otel:
  enabled: true
  service_name: "mcp-agent-prod"
  exporters:
    - type: "otlp"
      endpoint: "${OTEL_ENDPOINT}"
      headers:
        Authorization: "Bearer ${OTEL_TOKEN}"
mcp:
  servers:
    database:
      transport: "streamable_http"
      url: "${DATABASE_SERVER_URL}"
      headers:
        Authorization: "Bearer ${DATABASE_API_TOKEN}"
      http_timeout_seconds: 30
      read_timeout_seconds: 120
anthropic:
  default_model: claude-3-5-sonnet-20241022
  api_key: "${ANTHROPIC_API_KEY}"
```
Testing Environment
Configuration for automated testing:
```yaml
# mcp_agent.config.test.yaml
execution_engine: asyncio
logger:
  transports: [file]
  level: debug
  path: "test-logs/test-run.jsonl"

# Mock servers for testing
mcp:
  servers:
    mock_fetch:
      command: "python"
      args: ["-m", "tests.mock_servers.fetch"]
    mock_filesystem:
      command: "python"
      args: ["-m", "tests.mock_servers.filesystem"]

# Use test models with deterministic outputs
openai:
  default_model: gpt-3.5-turbo
  temperature: 0  # Deterministic outputs
  seed: 42
```
Configuration Examples
Basic Web Agent
```yaml
execution_engine: asyncio
logger:
  transports: [console]
  level: info
mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
openai:
  default_model: gpt-4o
  reasoning_effort: "medium"  # For o-series models: low, medium, high
  api_key: "${OPENAI_API_KEY}"
```
File Processing Agent
```yaml
execution_engine: asyncio
logger:
  transports: [file]
  level: info
  path: "logs/file-processor.jsonl"
mcp:
  servers:
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/data"]
    sqlite:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-sqlite", "results.db"]
anthropic:
  default_model: claude-3-5-sonnet-20241022
```
Multi-Provider Agent
```yaml
execution_engine: asyncio
logger:
  transports: [console, file]
  level: info
  path: "logs/multi-provider.jsonl"
mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

# Configure multiple providers
openai:
  default_model: gpt-4o
  api_key: "${OPENAI_API_KEY}"
anthropic:
  default_model: claude-3-5-sonnet-20241022
  api_key: "${ANTHROPIC_API_KEY}"
  provider: "anthropic"  # Options: anthropic, bedrock, vertexai
azure:
  endpoint: "${AZURE_OPENAI_ENDPOINT}"
  api_key: "${AZURE_OPENAI_API_KEY}"
  credential_scopes:
    - "https://cognitiveservices.azure.com/.default"
google:
  api_key: "${GOOGLE_API_KEY}"  # For Gemini API
  vertexai: false  # Set to true for Vertex AI
  project: "${GOOGLE_CLOUD_PROJECT}"  # For Vertex AI
  location: "us-central1"  # For Vertex AI
bedrock:
  aws_access_key_id: "${AWS_ACCESS_KEY_ID}"
  aws_secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
  aws_region: "us-east-1"
  profile: "default"  # AWS profile to use
cohere:
  api_key: "${COHERE_API_KEY}"
```
Enterprise Configuration with Authentication
```yaml
execution_engine: temporal
temporal:
  host: "${TEMPORAL_SERVER_URL}"
  namespace: "enterprise"
  tls: true
  task_queue: "enterprise-agents"
  max_concurrent_activities: 50
  api_key: "${TEMPORAL_API_KEY}"
  rpc_metadata:
    tenant-id: "${TENANT_ID}"
logger:
  transports: [file]
  level: info
  path: "/var/log/mcp-agent/enterprise.jsonl"
  path_settings:
    path_pattern: "/var/log/mcp-agent/mcp-agent-{unique_id}.jsonl"
    unique_id: "timestamp"
    timestamp_format: "%Y%m%d"

# Enterprise observability
otel:
  enabled: true
  service_name: "mcp-agent-enterprise"
  service_version: "${APP_VERSION}"
  service_instance_id: "${INSTANCE_ID}"
  exporters:
    - type: "otlp"
      endpoint: "${OTEL_ENDPOINT}"
      headers:
        Authorization: "Bearer ${OTEL_TOKEN}"
mcp:
  servers:
    corporate_db:
      transport: "streamable_http"
      url: "${CORPORATE_DB_URL}"
      headers:
        X-Tenant-ID: "${TENANT_ID}"
        Authorization: "Bearer ${API_TOKEN}"
      http_timeout_seconds: 30
      read_timeout_seconds: 120
    secure_files:
      transport: "stdio"
      command: "corporate-file-server"
      args: ["--tenant", "${TENANT_ID}"]
      env:
        CORPORATE_AUTH_TOKEN: "${CORPORATE_AUTH_TOKEN}"

# Use organization's Azure OpenAI deployment
azure:
  endpoint: "${AZURE_OPENAI_ENDPOINT}"
  api_key: "${AZURE_OPENAI_KEY}"
  credential_scopes:
    - "https://cognitiveservices.azure.com/.default"

# Disable telemetry in enterprise environments
usage_telemetry:
  enabled: false
```
Multi-Environment Configuration
Use different configurations for different environments:
```yaml
# mcp_agent.config.dev.yaml
execution_engine: asyncio
logger:
  transports: [console]
  level: debug
  progress_display: true
mcp:
  servers:
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
openai:
  default_model: gpt-4o-mini  # Cheaper for dev
```
Configuration Schema Reference
The complete configuration schema is available at `mcp-agent.config.schema.json`.
Core Settings Structure
```yaml
# Top-level configuration options
execution_engine: "asyncio" | "temporal"  # Default: asyncio

# Provider configurations (all optional)
openai: OpenAISettings
anthropic: AnthropicSettings
azure: AzureSettings
google: GoogleSettings
bedrock: BedrockSettings
cohere: CohereSettings

# Infrastructure
mcp: MCPSettings
temporal: TemporalSettings
logger: LoggerSettings
otel: OpenTelemetrySettings

# Application
agents: SubagentSettings
usage_telemetry: UsageTelemetrySettings
```
Provider Settings Reference
```yaml
openai:
  api_key: string  # API key
  base_url: string  # Custom base URL
  default_model: string  # Default model name
  reasoning_effort: "low" | "medium" | "high"  # For o1 models
  user: string  # User identifier
  default_headers: dict  # Custom headers

anthropic:
  api_key: string  # API key
  default_model: string  # Default model name
  provider: "anthropic" | "bedrock" | "vertexai"  # Provider type
  # For bedrock provider
  aws_access_key_id: string
  aws_secret_access_key: string
  aws_session_token: string
  aws_region: string
  profile: string
  # For vertexai provider
  project: string
  location: string

azure:
  api_key: string  # API key
  endpoint: string  # Azure endpoint URL
  credential_scopes: list[string]  # OAuth scopes

google:
  api_key: string  # For Gemini API
  vertexai: boolean  # Use Vertex AI (default: false)
  project: string  # GCP project (for Vertex AI)
  location: string  # GCP location (for Vertex AI)
```
Troubleshooting
Issue: `mcp_agent.config.yaml` not found

Solutions:
- Ensure configuration files are in your project directory
- Check the search paths: the current directory, the `.mcp-agent/` subdirectory, and the home directory `~/.mcp-agent/`
- Use an absolute path with `MCPApp(config_path="/path/to/config.yaml")`
Issue: YAML validation or parsing errors

Solutions:
- Validate YAML syntax using online validators
- Check indentation (use spaces, not tabs)
- Verify all required fields are present
- Check the configuration schema
- Use quotes around string values with special characters
Environment Variable Substitution Errors
Issue: Variables like `${API_KEY}` are not resolved

Solutions:
- Verify environment variables are set: `echo $API_KEY`
- Use defaults: `${API_KEY:-default_value}`
- Check that variable names match exactly (case-sensitive)
- Escape a literal `$` with `$$` if needed
MCP Server Connection Errors
Issue: Cannot connect to MCP servers

Solutions:
- Verify server commands are installed and accessible
- Check command arguments and paths
- Test the server manually: `npx @modelcontextprotocol/server-filesystem .`
- Verify environment variables for servers are set
- Check file permissions for stdio transport
- For HTTP transports, verify the URL is accessible
Model Provider Authentication
Issue: API key or authentication errors

Solutions:
- Verify API keys are correct and active
- Check rate limits and quotas
- For Azure: ensure the endpoint URL format is correct
- For Bedrock: verify AWS credentials and permissions
- For Google: check the authentication method (API key vs. service account)
Temporal Connection Issues
Issue: Cannot connect to the Temporal server

Solutions:
- Verify the Temporal server is running: `temporal server start-dev`
- Check the host and port configuration
- For production: verify TLS settings and certificates
- Check that the namespace exists and is accessible
- Verify the API key if using Temporal Cloud
Next Steps