
How to set up and run Claude Code agent teams in your workflow
Learn to configure multiple AI agents working together in parallel on your codebase, from setup to advanced coordination patterns that actually save time.
Building complex features usually means watching one AI agent work through tasks one by one while you wait. Claude Code's new agent teams feature changes that: you can now run multiple AI agents in parallel, each handling a different part of your project while coordinating with the others.
Think of it like managing a real development team, except these team members never get tired, each keeps a focused context of its own, and they can work through your entire codebase simultaneously.
Claude Code's agent teams let you coordinate multiple Claude Code instances working together. One session acts as the team lead, coordinating work, assigning tasks, and synthesizing results. Teammates work independently, each in its own context window, and communicate directly with each other.
What makes agent teams different from single AI sessions?
The key difference isn't just parallel processing. Teammates work independently and communicate directly with each other, unlike subagents, which run within a single session and can only report back to the main agent. You can also interact with individual teammates directly without going through the lead.
The core insight behind agent teams is that LLMs perform worse as context expands. This isn't just about hitting token limits: the more information in the context window, the harder it is for the model to focus on what matters right now. Adding a project manager's strategic notes to a context that's trying to fix a CSS bug actively hurts performance.
This is where agent teams shine: each teammate gets their own focused context window, just like how human teams don't have backend engineers sitting in on frontend code reviews.
How do you enable agent teams?
Agent teams are experimental and disabled by default. Here's how to get them running:
Step 1: Enable the feature
Add CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS to your settings.json or environment. You have two options:
Via environment variable:
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
Or add it to your Claude Code settings.json file:
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
The settings.json approach is recommended since it persists between sessions.
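Not sure which settings.json to edit? Claude Code typically reads a user-level file at ~/.claude/settings.json and a project-level file at .claude/settings.json in the repository root, with project settings taking precedence. Putting the variable in the project file keeps the experiment scoped to a single codebase.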
Step 2: Choose your display mode
In-process (default): All teammates run inside your terminal. Use Shift+Up/Down to select a teammate, Enter to view their session, Escape to interrupt. Works everywhere—VS Code terminal, iTerm, any shell.
Split panes: Each teammate gets its own terminal pane. See everyone's output simultaneously. Requires tmux or iTerm2. Does not work in VS Code integrated terminal.
Start with in-process mode—it works everywhere and you can switch to split panes once you're comfortable.
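If you decide to try split panes later and don't already have tmux, it's a quick install. For example:
brew install tmux          # macOS, via Homebrew
sudo apt-get install tmux  # Debian/Ubuntu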
What tasks actually benefit from agent teams?
Not every task needs multiple agents. Agent teams are most effective for tasks where parallel exploration adds real value. The strongest use case is research and review: multiple teammates can investigate different aspects of a problem simultaneously, then share and challenge each other's findings.
Here are the scenarios where agent teams save real time:
Code reviews with multiple perspectives: A single reviewer tends to gravitate toward one type of issue at a time. Splitting review criteria into independent domains means security, performance, and test coverage all get thorough attention simultaneously.
Debugging with competing theories: A single agent finds one plausible explanation and stops. Multiple agents arguing over the evidence are far more likely to land on the right one. When you have an intermittent bug, spawn agents with different hypotheses and let them argue it out (see the example prompt after this list).
Feature development across layers: When building a feature that touches frontend, backend, and tests, assign each teammate ownership of specific files or modules to avoid conflicts.
Documentation sprints: One agent audits code for missing docs, another generates API references, and a third creates usage examples and updates the README. A reviewer ensures consistency across all outputs.
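To make the debugging scenario concrete, here is the shape of a prompt that sets up competing hypotheses (the bug and component names are hypothetical):
Create an agent team to investigate the intermittent checkout timeout.
Spawn three investigators with different hypotheses:
- One assuming a race condition in the payment worker
- One assuming connection pool exhaustion in the database layer
- One assuming a misconfigured retry policy in the API client
Have each gather evidence from the code and logs, then debate the findings
and agree on the most likely root cause before proposing a fix.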
How do you create and manage agent teams?
Creating teams is simple: there are no YAML configs or complex setup. Tell Claude to create an agent team, describing the task and the team structure you want in natural language. Claude creates the team, spawns teammates, and coordinates work based on your prompt.
Basic team creation example:
Create an agent team to review PR #142.
Spawn three reviewers:
- One focused on security implications
- One checking performance impact
- One validating test coverage
Have them each review and report findings.
Claude creates a team with a shared task list, spawns teammates for each perspective, has them explore the problem, synthesizes findings, and attempts to clean up the team when finished.
Advanced coordination patterns:
For complex work, use delegate mode to keep the team lead focused on coordination. Press Shift+Tab to restrict the lead to coordination-only tools (spawning, messaging, and shutting down teammates, plus managing tasks); the lead doesn't touch code itself.
You can also require plan approval before implementation: "Spawn an architect teammate to refactor the authentication module. Require plan approval before they make any changes." The teammate then works in read-only plan mode until the lead approves its plan; if the plan is rejected, the teammate revises and resubmits.
What are the best practices for avoiding chaos?
Agent teams can become expensive and unproductive if not managed properly. Here's what actually works:
Start with clear boundaries: The best fit is any task you can split into subtasks with clearly delineated boundaries and without heavy bidirectional dependencies. Agents work well in parallel when each can operate on separate files or modules.
Provide rich context in spawn prompts: Teammates start with a blank conversation, so tell them what the project is, which files matter, what conventions to follow, and what their specific goal is.
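For instance, a spawn prompt for a backend teammate might look like the following (the project name, endpoint, and paths are hypothetical):
You are working on acme-shop, a TypeScript/Express API with a React frontend.
Your goal: add rate limiting to the public /api/search endpoint.
Relevant files: src/middleware/, src/routes/search.ts, and tests under tests/api/.
Follow the existing middleware pattern and the repo's ESLint config.
Do not touch frontend code. Report back with the files you changed and the
tests you added.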
Monitor and guide actively: Check in on teammates' progress, redirect approaches that aren't working, and synthesize findings as they come in. Letting a team run unattended for too long increases the risk of wasted effort.
Use the two-step approach: The most effective pattern isn't jumping straight into a team. Plan first in plan mode, then hand the plan to a team for parallel execution; the plan gives you a checkpoint before committing tokens.
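In practice, the handoff can be a single prompt once the plan is approved (the plan.md file name is hypothetical):
The approved implementation plan is in plan.md.
Create an agent team to execute it: spawn one teammate per numbered
workstream in the plan. Have the lead track progress against the plan
and check with me before marking any workstream complete.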
What are the real costs and limitations?
Agent teams consume significantly more tokens than single sessions; expect roughly 5x the usage per teammate. Each teammate consumes tokens independently, so running several in parallel multiplies usage, and the productivity gains need to justify the spend. Reserve agent teams for work that genuinely benefits from multiple perspectives working in parallel; for simpler tasks, stick with subagents or single sessions.
File conflict management: Two teammates editing the same file will overwrite each other's work. Break the work up so each teammate owns a different set of files. The system includes file locking to prevent conflicts, but clear ownership boundaries work better than relying on the tooling.
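One way to make ownership explicit is to spell it out in the team prompt (the paths are hypothetical):
Teammate A owns src/api/ and must not edit anything outside it.
Teammate B owns src/ui/ and tests/ui/.
Changes to shared code in src/shared/ go through the lead only.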
When to use subagents instead: Use subagents when you need quick, focused workers that report back. Use agent teams when workers need to share findings, challenge each other, and coordinate autonomously.
Where is this technology heading?
The introduction of agent teams represents a shift from single-agent to multi-agent coordination in development workflows: multiple Claude instances work in parallel on a shared codebase without active human intervention. In one reported experiment, an agent team working across nearly 2,000 Claude Code sessions and roughly $20,000 in API costs produced a 100,000-line compiler capable of building Linux 6.9 on x86, ARM, and RISC-V.
This isn't just about making development faster—it's about enabling projects that were previously impossible for individual developers to tackle. The coordination patterns you learn with agent teams today will become the foundation for how development teams (both human and AI) work together tomorrow.
Getting started is straightforward: enable agent teams in your Claude Code settings, try a simple code review with three perspectives, and see how the coordination feels. Once you understand the patterns, you can scale up to parallel implementation on features that actually justify the token cost.
The future of development isn't just AI assistance—it's AI teams that can tackle complexity at a scale that matches your ambition.