Git Worktrees + AI Agents: Running Multiple Feature Iterations in Parallel

Over the past few days, I've been experimenting with a new-to-me workflow: using git worktrees to let AI agents try multiple parallel implementations of the same feature. It's been interesting enough to share.

What Are Git Worktrees?

Most developers know git branches. You're working on something, need to switch context, so you git checkout a different branch. Maybe you stash your changes first. It's...fine.

But git worktrees are different. They let you check out multiple branches simultaneously in different directories. Instead of constantly switching contexts, you just have several versions of your repo living side-by-side.

Here's the basic idea:

# You're in ~/project on the main branch
git worktree add ../project-feature-a -b feature/approach-a
git worktree add ../project-feature-b -b feature/approach-b

# Now you have:
# ~/project/          (main branch)
# ~/project-feature-a/ (feature/approach-a)
# ~/project-feature-b/ (feature/approach-b)

Each directory is a full working tree with its own checked-out branch. But they all share the same underlying .git repository, so commits and branches are immediately visible everywhere.
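You can see that sharing in action with a quick throwaway repo (the paths and names below are purely illustrative):

```shell
# Throwaway repo, just for illustration
cd "$(mktemp -d)"
git init -q project && cd project
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "init"

# Add a second worktree on a new branch
git worktree add -q ../project-feature-a -b feature/approach-a

# Both worktrees show up, from either directory
git worktree list

# A commit made in the other worktree is immediately visible here,
# because both directories share one object database
git -C ../project-feature-a -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "experiment"
git log --oneline feature/approach-a
```

`git worktree list` is also the quickest way to remind yourself what experiments you have lying around.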

Why This Matters for AI Development

Here's where it gets interesting. When you're working with AI agents—whether it's Claude Code, Warp AI, or any other coding assistant—you're often not sure which approach will work best until you try it.

The traditional workflow is sequential trial-and-error:

  1. Ask AI to implement approach A
  2. Test it
  3. If it doesn't work well, revert and try approach B
  4. Repeat until something works

With worktrees, you can run multiple experiments in parallel:

  1. Spin up multiple worktrees
  2. Point AI agents at different worktrees
  3. Let them all work simultaneously
  4. Compare the results and pick the winner

This transforms AI coding from sequential iteration into parallel exploration.
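As a sketch, steps 2 and 3 might look like the script below. Here `run_agent` is a stub standing in for whatever headless invocation your tool provides (for Claude Code, something like `claude -p "<prompt>"`), and the worktree directories are faked with a temp directory so the sketch runs as-is:

```shell
# Sketch: one agent per worktree, all running at once.
# run_agent is a stub; the "worktrees" are empty temp directories
# so this runs as-is.
set -u
work=$(mktemp -d)
mkdir "$work/project-feature-a" "$work/project-feature-b"

run_agent() {
    # Replace with a real headless agent invocation
    echo "agent finished in $(basename "$PWD")"
}

for wt in "$work"/project-feature-*; do
    (cd "$wt" && run_agent) &     # each agent gets its own directory
done
wait    # block until every agent is done, then compare the results
```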

A Practical Example

Let's say I need to implement user authentication. I'm not sure whether to use JWT tokens, session-based auth, or OAuth. Instead of trying each approach one by one, I do this:

# Create three experimental branches
git worktree add ../auth-jwt -b feature/auth-jwt
git worktree add ../auth-session -b feature/auth-session  
git worktree add ../auth-oauth -b feature/auth-oauth

# Now I can either:
# - Run three Claude Code instances in parallel
# - Ask one agent to work on each implementation sequentially
# - Mix and match with my own code

Each implementation stays isolated in its own directory. I can run tests across all three at once:

for dir in ../auth-*; do
    echo "Testing $dir..."
    (cd "$dir" && npm test) &
done
wait
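One refinement on that loop: a bare `wait` throws away the exit codes, so you can't tell from the output which experiment failed. The sketch below remembers each background PID and reports pass/fail per worktree. The demo setup fakes the worktrees with trivial test scripts so it runs as-is; in the real workflow the glob would be `../auth-*` and the command would be `npm test`:

```shell
# Parallel test run that reports which experiment failed.
# Demo setup fakes the worktrees with trivial test scripts.
set -u
root=$(mktemp -d)
for name in auth-jwt auth-session; do
    mkdir "$root/$name"
    printf '#!/bin/sh\nexit 0\n' > "$root/$name/test.sh"
    chmod +x "$root/$name/test.sh"
done
printf '#!/bin/sh\nexit 1\n' > "$root/auth-session/test.sh"  # one failing approach

# Launch every suite at once, remembering each PID...
pids=(); dirs=()
for dir in "$root"/auth-*; do
    (cd "$dir" && ./test.sh) &
    pids+=($!); dirs+=("$dir")
done

# ...then collect exit codes, so a failure names its worktree
for i in "${!pids[@]}"; do
    if wait "${pids[$i]}"; then
        echo "PASS: $(basename "${dirs[$i]}")"
    else
        echo "FAIL: $(basename "${dirs[$i]}")"
    fi
done
# Prints:
# PASS: auth-jwt
# FAIL: auth-session
```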

The Real Win: Fail Faster, Learn More

Here's what I've found makes this workflow valuable:

Isolated experiments: If one AI-generated approach completely breaks something, it's contained to that worktree. My other experiments keep running.

Easy comparison: I can open multiple IDE windows and literally see the different implementations side-by-side. Sometimes approach A has better error handling while approach B has cleaner architecture—and I can steal the best parts from each.

Learning from variations: When Claude Code generates three different solutions to the same problem, I learn why certain approaches work better. This makes me better at prompting AI in the future.

No context switching penalty: My IDE doesn't need to re-index. My terminal doesn't lose state. I just cd between directories.

Best Practices I've Learned

After using this workflow for a bit, here's what's been working for me:

Use worktrees for exploration, not for branches you'll keep forever. Create them, experiment, pick a winner, then clean them up. Don't end up with 20 stale worktrees you forgot about.

Name them clearly. I use a pattern like ../project-experiment-description so I can tell at a glance what each worktree is testing.

Leverage AI for comparison. After generating three implementations, I'll ask Claude: "I have three approaches to this feature in these three directories. Can you analyze the trade-offs?" The AI can actually read all three codebases and give you architectural insights. I've also taken to creating specialized agents for design, code, and architecture reviews, and having them report on each approach before picking the winner.

Clean up religiously:

# When you're done
git worktree remove ../auth-jwt
git worktree remove ../auth-session
# Keep the winner
git checkout main
git merge feature/auth-oauth
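One gotcha worth knowing: `git worktree remove` deletes the directory and its bookkeeping, but not the branch. The sketch below (with a throwaway repo so it runs end-to-end) shows the full cleanup, including `git worktree prune` for the case where a worktree directory was deleted by hand:

```shell
# Throwaway repo so the cleanup can be shown end-to-end
cd "$(mktemp -d)"
git init -q project && cd project
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "init"
git worktree add -q ../auth-jwt -b feature/auth-jwt

# Removing a worktree deletes the directory and its bookkeeping...
git worktree remove ../auth-jwt
# ...but not the branch; delete that separately once you're sure
git branch -D feature/auth-jwt

# If a worktree directory was ever deleted by hand (rm -rf),
# clear the stale record too
git worktree prune
```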

Use it with Claude Code's best practices: The research → plan → implement → commit workflow that Anthropic recommends works even better when you're running multiple implementations. You can research once, then branch into parallel implementations.

Integration with Modern AI Tools

This workflow pairs naturally with how modern AI coding tools work:

Claude Code: You can run multiple claude instances in separate terminals, each working in a different worktree. The --dangerously-skip-permissions flag becomes especially useful here since you're deliberately running parallel experiments.

Warp AI: Warp's agentic development features work great with worktrees. You can use Warp's WARP.md files (similar to CLAUDE.md) in each worktree to give different instructions to the agent depending on which approach you're testing.

The key insight: these tools are getting good enough that the bottleneck isn't "can the AI write code?" but rather "which architectural approach should we take?" Worktrees let you explore that question in parallel rather than sequentially.

When NOT to Use This

This isn't for everything. I'm not using worktrees for:

  • Simple bug fixes (just fix it in one place)
  • Changes where you already know the approach
  • Small changes that don't justify the setup overhead
  • Team projects (so far I've only used this solo, so I haven't explored how a team working on a shared repo would adopt the technique)

But when you're exploring genuinely different architectural approaches, or when you want an AI agent to try multiple solutions to a hard problem? Worktrees + AI agents is surprisingly powerful.

Getting Started

If you want to try this:

  1. Read the official git worktree documentation to understand the basics
  2. Try creating just two worktrees for your next feature
  3. Ask your AI assistant to implement different approaches in each
  4. Compare, learn, merge the winner

The cost of trying multiple approaches drops significantly—not because AI makes each approach cheaper (though it does), but because worktrees make parallel exploration practical.

The Jimmy Kimmel Affair

Like many, I watched the whole thing with Jimmy Kimmel Live! happen mostly in disbelief. It was a terrible attack on the First Amendment, and I'm glad that it appears to be mostly unwinding itself, at least in this case.

In an article on The Verge detailing Sinclair's reinstatement of Kimmel's show starting tonight, there was this quote from a Sinclair representative:

"Our decision to preempt this program was independent of any government interaction or influence," Sinclair said. "Free speech provides broadcasters with the right to exercise judgment as to the content on their local stations. While we understand that not everyone will agree with our decisions about programming, it is simply inconsistent to champion free speech while demanding that broadcasters air specific content."

That's a tricky statement in my mind, even if it might not be one legally. While I'm sure that broadcasters, and the many middlemen like Sinclair in the chain of television broadcasting, have the right to exercise judgment in what they air, as a viewer I am not happy to have those middlemen exercise that right. If I want to watch Kimmel, and he's on ABC, I expect to be able to turn to my local ABC affiliate and watch him. If the local affiliate's ownership holds politics counter to Kimmel's, I expect them to keep their politics to themselves (or to their own editorial channels), or to decide not to be an ABC affiliate.

Superman (2025)

I saw the premiere of Superman on Thursday night, and was happy to see that it was a colorful, positive movie with a solid version of the character that the new James Gunn-led DCU can build on. I think that it is a great movie for kids, normies, and comic book fans alike.

I really like the new interpretations of Superman and Lois Lane, and think they’re perhaps the best movie versions of those characters so far—especially Lois. That’s an enormous win for the movie and the DCU, and is enough to make this a success in my book. Unfortunately, I really don’t like this new version of Lex Luthor, who is portrayed as a one-dimensional maniac. It’s hard to see the world-class genius he needs to be when he’s just unhinged in every scene.

I liked a lot about Man of Steel (★★☆☆☆), but was less happy with most of the rest of Zack Snyder’s colorless, joyless take on the DC Universe, and am happy to see that James Gunn’s version here is nearly a 180° from that. It’s approachable, fun, and it evokes the best parts of the comic book universe. It might go just a bit too far in that direction in that it can be a little goofy, but I’m happy enough with the balance it strikes.

My main criticism, other than Luthor, is that it just crams too much into its running time. Sure, it has a lot to set up—and I’m happy to forgo the back story of all that we’re introduced to—but the movie is so busy that it just doesn’t allow any time to breathe.

Overall, this is a promising start to the new DCU, and I’m looking forward to what comes next.

"Want More Bike Commuters? Build Protected Bike Lanes, Says New Study"

"Amex Centurion Lounge SFO Terminal 3 has closed"