mcp-agent ships production-ready implementations of every pattern in Anthropic’s Building Effective Agents plus complementary flows inspired by OpenAI Swarm. Each helper in workflows/factory.py returns an AugmentedLLM that can be treated like any other LLM in the framework—compose it, expose it as a tool, or wrap it with additional logic.
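
For instance, a parallel pattern built by the factory is called exactly like a single model. The sketch below is illustrative only: the import path and the keyword arguments passed to create_parallel_llm are assumptions about the factory signature, not confirmed API.

```python
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.factory import create_parallel_llm  # assumed import path

app = MCPApp(name="pattern_demo")

async def main() -> None:
    async with app.run() as running_app:  # initialises the shared Context
        # Two specialists that will see the same request concurrently.
        proofreader = Agent(name="proofreader", instruction="Fix grammar and spelling.")
        fact_checker = Agent(name="fact_checker", instruction="Flag dubious claims.")

        # The helper returns an AugmentedLLM; these keyword names are assumptions.
        parallel = create_parallel_llm(
            fan_out_agents=[proofreader, fact_checker],
            provider="anthropic",
        )

        # ...which you then call like any other LLM in the framework.
        print(await parallel.generate_str("Review this press release: ..."))

if __name__ == "__main__":
    asyncio.run(main())
```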

Patterns at a glance

| Pattern | Reach for it when… | Factory helper(s) | Highlights | Runnable example |
| --- | --- | --- | --- | --- |
| Parallel (Map-Reduce) | You need multiple specialists to look at the same request concurrently | create_parallel_llm(...) | Fan-out/fan-in via FanOut + FanIn, accepts agents and plain callables | workflow_parallel |
| Router | Requests must be dispatched to the best skill, server, or function | create_router_llm(...), create_router_embedding(...) | Confidence-scored results, route_to_{agent,server,function} helpers, optional embedding routing | workflow_router |
| Intent Classifier | You need lightweight intent buckets before routing or automation | create_intent_classifier_llm(...), create_intent_classifier_embedding(...) | Returns structured IntentClassificationResult with entities and metadata | workflow_intent_classifier |
| Planner (Orchestrator) | A goal requires multi-step planning and coordination across agents | create_orchestrator(...) | Switch between full and iterative planning, override planner/synthesizer roles | workflow_orchestrator_worker |
| Deep Research | Long-horizon investigations with budgets, memory, and policy checks | create_deep_orchestrator(...) | Knowledge extraction, policy engine, Temporal-friendly execution | workflow_deep_orchestrator |
| Evaluator-Optimizer | You want an automated reviewer to approve or iterate on drafts | create_evaluator_optimizer_llm(...) | QualityRating thresholds, detailed feedback loop, refinement_history | workflow_evaluator_optimizer |
| Build Your Own | You need a bespoke pattern stitched from the primitives above | Mix helpers, native agents, and @app.tool decorators | Compose routers, parallel fan-outs, evaluators, or custom callables | See all workflows + create_swarm(...) |
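
To make one row concrete, here is a hedged sketch of the intent-classifier helper. Only the helper name and the IntentClassificationResult type come from the table above; the keyword arguments, the classify method, and the result attributes shown are assumptions.

```python
from mcp_agent.workflows.factory import create_intent_classifier_llm  # assumed import path

async def triage(message: str) -> str:
    classifier = create_intent_classifier_llm(
        intents=["billing", "bug_report", "feature_request"],  # assumed keyword
        provider="anthropic",
    )

    # Returns a structured IntentClassificationResult; the method name and
    # the attributes used below (intent, confidence) are assumptions.
    result = await classifier.classify(message)
    return f"{result.intent} ({result.confidence:.2f})"
```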

Before you start

  • Model your specialists as AgentSpec or instantiate Agent/AugmentedLLM objects up front. The factory helpers accept any combination.
  • Run everything inside async with app.run() as running_app: so the shared Context is initialised (server registry, executor, tracing, secrets).
  • Tune behaviour with RequestParams (temperature, max tokens, strict schema mode) and provider-specific options (provider="anthropic", Azure/OpenAI models, etc.).
  • Expose the returned AugmentedLLM directly (await llm.generate_str(...)) or wrap it with @app.tool / @app.async_tool to make it callable over MCP.
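
Putting that checklist together, the sketch below defines two specialists, builds a router inside app.run(), and exposes it over MCP with @app.tool. The factory keyword names and the RequestParams fields shown are assumptions; consult the API reference for exact signatures.

```python
from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.factory import create_router_llm  # assumed import path
from mcp_agent.workflows.llm.augmented_llm import RequestParams  # assumed import path

app = MCPApp(name="support_router")

finder = Agent(
    name="finder",
    instruction="Locate files and URLs relevant to the request.",
    server_names=["fetch", "filesystem"],
)
writer = Agent(name="writer", instruction="Draft and edit prose.")

@app.tool
async def route_request(query: str) -> str:
    """Dispatch a request to the best specialist and return its answer."""
    async with app.run() as running_app:  # initialises the shared Context
        router = create_router_llm(
            agents=[finder, writer],  # keyword names here are assumptions
            provider="anthropic",
            request_params=RequestParams(temperature=0.2, maxTokens=1024),
        )
        return await router.generate_str(query)
```

Callers over MCP now see route_request as a tool, while local code can still compose the router like any other AugmentedLLM.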

Composable building blocks

  • Patterns are just AugmentedLLMs, so you can nest them—e.g. route to an orchestrator, run parallel fan-outs inside a planner step, or wrap the output of any pattern with an evaluator-optimizer loop.
  • Mix LLM-powered steps with deterministic functions. Routers accept plain Python callables; parallel workflows blend AgentSpec with helpers like fan_out_functions.
  • Share state via the Context: reuse secrets, telemetry, the executor, and the token counter across nested patterns without additional wiring.
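
As an illustration of that nesting, the sketch below puts an orchestrator and a deterministic function behind one router, then wraps the router in an evaluator-optimizer loop. The keyword names on the helpers are assumptions, not confirmed signatures.

```python
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.factory import (  # assumed import path
    create_evaluator_optimizer_llm,
    create_orchestrator,
    create_router_llm,
)

def lookup_price(query: str) -> str:
    # Deterministic step: routers accept plain Python callables.
    return "SKU-42 is in stock at $19.99"

async def answer(query: str) -> str:
    researcher = Agent(name="researcher", instruction="Gather and cite facts.")
    analyst = Agent(name="analyst", instruction="Synthesise findings.")

    # A planner is itself an AugmentedLLM, so it can be a routing target.
    planner = create_orchestrator(available_agents=[researcher, analyst])  # assumed keyword

    router = create_router_llm(
        agents=[planner],          # nested pattern as a routing target
        functions=[lookup_price],  # deterministic callable alongside it
    )

    # Wrap the whole stack in a draft-review-refine loop.
    reviewed = create_evaluator_optimizer_llm(
        optimizer=router,  # assumed keyword names
        evaluator="Grade the answer for accuracy and completeness.",
    )
    return await reviewed.generate_str(query)
```

Because each layer is just an AugmentedLLM, the evaluator sees only the router's final output, never the internals of the nested patterns.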

Observability and control

  • Every pattern reports token usage through the global TokenCounter. Call await llm.get_token_node() to inspect fan-out costs, planner iterations, or evaluation loops.
  • Adjust concurrency and retries centrally in mcp_agent.config.yaml (executor.max_concurrent_activities, retry policy) instead of per-pattern plumbing.
  • Enable tracing (otel.enabled: true) to see spans for planner steps, router decisions, evaluator iterations, and MCP tool calls in Jaeger or any OTLP backend.
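
As a concrete example, here is a sketch that walks the token tree after a run. The await llm.get_token_node() entry point is documented above; the node attributes used (name, children, aggregate_usage) are assumptions about the tree's shape.

```python
async def report_costs(llm) -> None:
    node = await llm.get_token_node()  # documented entry point

    # Attribute names below (name, children, aggregate_usage) are assumptions
    # about TokenNode's shape; adjust to the actual TokenCounter API.
    def walk(n, depth: int = 0) -> None:
        usage = n.aggregate_usage()
        print(f"{'  ' * depth}{n.name}: {usage.total_tokens} tokens")
        for child in n.children:
            walk(child, depth + 1)

    walk(node)
```

Running this after a parallel or orchestrator run surfaces per-branch costs, which is handy for spotting an over-eager fan-out.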