This week’s AI Bite: Multi-agent workflow in Claude Code
Weekly AI Bites is a series that gives you a direct look into our day-to-day AI work. Every post shares insights, experiments, and experiences straight from our team’s meetings and Slack, highlighting what models we’re testing, which challenges we’re tackling, and what’s really working in real products. If you want to know what’s buzzing in AI, check Boldare’s channels every Monday for the latest bite.
As a Software Engineer, I wanted to share something I’ve been testing recently: running multiple Claude Code agents in parallel, each in its own Git worktree. This is a practical, real-world use case.

Background
I had an app that technically worked, but after a few manual tests I decided I wanted a completely different UX and architecture. I prepared a new product vision, a set of required changes, and a technical plan with stages. Then, instead of working through it alone, I split the work across agents.
One important thing: I didn’t read any documentation on how to do this. I simply asked the agent itself — “can you work in parallel on different branches?” — and it explained the possibilities, proposed a workflow, and organized the entire structure on its own.
How it worked in practice
The core idea: instead of one long context, multiple agents, each with a fresh context window and its own isolated branch.
main (API contract updated FIRST)
│
├── Agent 1 (worktree, in parallel)
│     Backend: new fields, DB migration, integration tests
│
├── Agent 2 (worktree, in parallel)
│     New AI prompt + new types (independent of Agent 1)
│
└── Agent 3 (after Agent 1 completes)
      New endpoint (branched from Agent 1's branch, due to the dependency!)
Each agent gets isolation via a worktree: Claude Code automatically creates a temporary worktree, and the agent works there on a separate branch. When it finishes, the branch is automatically merged into a test branch, the tests run, and I verify the result through the UI.
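As a sketch, the worktree isolation above boils down to plain `git worktree` commands. The repo, directory, and branch names here are illustrative, not from the actual project:

```shell
#!/usr/bin/env bash
set -e

# Throwaway repo so the sketch is self-contained
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"

# One worktree per agent: a separate directory and branch,
# sharing the same object store -- this is the isolation each agent gets
git worktree add -q ../agent1-backend -b agent1/backend
git worktree add -q ../agent2-prompts -b agent2/prompts

# Each agent can now work in its own directory without touching the others
git worktree list
```

Removing a finished worktree is `git worktree remove ../agent1-backend`; the branch itself survives for merging.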
In later stages (independent frontend and backend work) I managed to run three agents simultaneously, since there were no dependencies between them.
Models: Opus for planning and dependency analysis, Sonnet for implementation (faster, cheaper, good enough for coding).
Synchronous vs. asynchronous agent mode
There’s also an option to launch an agent with run_in_background: true — the agent runs in the background and you get a notification when it’s done, instead of waiting in place. In theory you can do something else in the main conversation while agents are working.
In my case I deliberately didn’t use this — agents ran synchronously, because each phase (merge, test verification, decision on next step) required my review before launching the next ones. With this kind of flow, the “run → wait → evaluate → proceed” sequence made more sense than “fire in the background and check when done.” I will be testing run_in_background in scenarios where agents are truly independent and don’t block each other.
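The difference between the two modes maps loosely onto ordinary shell job control. In this sketch a `sleep` stands in for an agent run; the real invocation would of course differ:

```shell
#!/usr/bin/env bash
# 'run_agent' is a stand-in for launching an agent; sleep simulates the work
run_agent() { sleep 1; echo "agent $1 done"; }

# Synchronous flow: run -> wait -> evaluate -> proceed
run_agent 1                      # blocks until the agent finishes
echo "reviewing agent 1 before the next phase"

# Background flow (the run_in_background-style mode): fire independent
# agents and collect them all at once
run_agent 2 &
run_agent 3 &
wait                             # returns only when every job has finished
echo "all background agents finished"
```

The synchronous version enforces a review gate after every run; the background version only makes sense when nothing downstream depends on an unreviewed result.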
Advantages
- Real parallelism — you wait for the slowest agent, not the sum of all times
- Context isolation — each agent starts fresh, doesn’t “pollute” the main conversation
- Model selection per agent — Opus for thinking, Sonnet for doing
- Safety — nothing reaches main without your approval, test branch for verification
- Agents write tests — each agent gets an instruction to verify its own work
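The safety point above (nothing reaches main without approval, verification on a test branch) can be sketched as a plain git flow. Branch names and the stub test script are hypothetical:

```shell
#!/usr/bin/env bash
set -e
cd "$(mktemp -d)"
git init -q -b main .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"
printf '#!/bin/sh\necho "tests pass"\n' > run_tests.sh && chmod +x run_tests.sh

# An agent finishes its work on an isolated branch
git checkout -q -b agent1/backend
echo "new field" > model.txt
git add model.txt
git -c user.email=a@b -c user.name=demo commit -q -m "agent 1: backend change"

# Merge into a dedicated test branch and verify there; main stays untouched
git checkout -q -b test main
git merge -q --no-edit agent1/backend
./run_tests.sh                   # stand-in for the real suite

# Only after manual review would anything reach main:
#   git checkout main && git merge test
```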
Limitations
- Agents don’t know about each other — you have to manually manage dependencies and ordering
- Dependency ordering is critical — if Agent B needs the output of Agent A, you can’t run them in parallel. Dependency analysis before starting is mandatory
- No real-time visibility — you see results only when the agent finishes (noticeable for 12+ min operations)
- Prompts must be very precise — the agent doesn’t have your conversation context. Vague prompt = wrong implementation
- Merge conflicts — if two agents touched the same file, you have to resolve manually
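Part of the dependency analysis can be mechanized: compare which files each candidate branch touches relative to main before deciding to run agents in parallel. A sketch with two illustrative branches:

```shell
#!/usr/bin/env bash
set -e
cd "$(mktemp -d)"
git init -q -b main .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"

# Agent 1's branch: touches a shared file and one of its own
git checkout -q -b agent1
echo a > shared.txt && echo a > backend.txt
git add . && git -c user.email=a@b -c user.name=demo commit -q -m "agent 1"

# Agent 2's branch: touches the same shared file
git checkout -q main && git checkout -q -b agent2
echo b > shared.txt && echo b > prompt.txt
git add . && git -c user.email=a@b -c user.name=demo commit -q -m "agent 2"

# Files modified by both branches relative to main = likely conflicts
comm -12 <(git diff --name-only main agent1 | sort) \
         <(git diff --name-only main agent2 | sort)
```

Here this prints `shared.txt`, flagging a file both agents would fight over; an empty result suggests the branches are safe to run in parallel.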
What could be configured better (a plan, not yet fully verified)
- CLAUDE.md with a parallel work section — so the agent knows upfront which files not to touch when working alongside others.
- A dedicated /parallel-analyze skill — a skill that reads the technical plan itself, analyzes dependencies, and proposes how to split work across agents. Currently I do this manually in conversation with the agent.
- Agent Teams (experimental feature) — agents can communicate with each other and share a common task list, which could eliminate manual dependency management entirely.
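For the CLAUDE.md idea, a section like the following is roughly what I have in mind. The format is free-form instructions; the paths and rules below are hypothetical examples, not a documented schema:

```markdown
## Parallel work rules

When running as one of several parallel agents:

- Stay inside the scope given in your task prompt.
- Do not edit shared files (e.g. `src/api/contract.ts`, `schema.prisma`)
  unless the task explicitly says so -- another agent may own them.
- Write or update tests for everything you change.
- Commit only to your own branch; never merge into `main` yourself.
```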