The Agentic Coding Workflow Nobody Talks About: Managing the Agents Themselves
We’ve settled the “can AI write code” debate. It can. Claude, GPT, Gemini — they all produce working code from natural-language prompts, and the ceiling keeps rising.
The new bottleneck isn’t the AI’s capability. It’s yours.
Specifically, your ability to manage what multiple AI coding agents are doing at the same time, across multiple projects, without losing track of context, approvals, and decisions.
The capability gap has shifted
With tools like Claude Code running in agentic mode — reading files, writing code, running tests, iterating on failures — the AI handles much of the implementation work autonomously. The human job has shifted from “fix what the AI wrote” to “supervise what the AI is doing.”
And that supervision problem is genuinely hard at scale.
What managing agents actually looks like
If you’re using Claude Code seriously, your typical day might involve:
- 3-5 active projects, each with at least one running Claude session
- Approval prompts appearing unpredictably across those sessions
- Context switching between sessions that are at different stages of different tasks
- Decision-making about whether to accept, reject, or redirect what an agent is doing
- Tracking which sessions are waiting for you, which are running autonomously, and which have finished
Do this in a standard terminal and you’re essentially air traffic control with no radar. You’re cycling through tabs, trying to remember which session is doing what, hoping you don’t miss an approval prompt that’s been blocking progress for twenty minutes.
The tools haven’t caught up to the workflow
Think about how much tooling exists for writing code: IDEs, linters, formatters, type checkers, CI/CD pipelines. Decades of investment in making human coding more productive.
Now think about what tooling exists for managing AI coding agents. The answer, mostly, is “a terminal and good luck.”
This is a tooling gap, and it matters because the management layer is becoming the rate limiter. You can have the most capable AI in the world, but if you can’t efficiently supervise three instances of it working in parallel, you’re bottlenecked at one serial stream of work.
What an agent management workflow actually needs
After working with Claude Code daily for months, I’ve identified the capabilities that matter most for managing agentic coding workflows:
Project-level organization. Agents need to be grouped by project, not just by terminal tab. When you switch context, you need to see all the sessions for that project — their status, their history, their configuration — in one place.
Non-disruptive approvals. Approval prompts shouldn’t require you to be staring at the terminal where they appear. You need to see them wherever you are and act on them without losing your current focus.
Safe parallel execution. Running two agents on the same codebase is dangerous without isolation. Git worktrees or similar mechanisms are needed to prevent agents from overwriting each other’s work.
Persistent context. When a session ends or you step away, the conversation and reasoning should be preserved. The context an agent built up over an hour of work is valuable, and it shouldn’t evaporate when you close a tab.
Status visibility. At a glance, you should know which agents are working, which are blocked, and which are done. Notifications should be push, not pull.
Per-project configuration. Different projects need different API keys, different MCP servers, different approval thresholds. The management layer should handle this automatically, not require you to remember.
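Until tooling handles this automatically, per-project credentials can be approximated with an environment manager like direnv, which loads a project-local `.envrc` whenever you `cd` into the directory. The sketch below is illustrative: the key value is a placeholder, and `CLAUDE_PROJECT` is a hypothetical variable for your own scripts, not something any tool reads.

```shell
# .envrc in ~/projects/acme-api (names are illustrative).
# direnv loads this automatically when you cd into the project,
# so agents launched here see this project's credentials, not another's.
export ANTHROPIC_API_KEY="sk-ant-placeholder-for-acme"  # per-project key (placeholder value)
export CLAUDE_PROJECT="acme-api"                        # hypothetical tag for your own tooling
```

Run `direnv allow` once per project to approve the file; direnv then swaps the variables in and out as you move between project directories.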
How I solved this for my own workflow
I built Crystl because I hit every one of these problems myself. It’s a macOS terminal designed specifically for Claude Code, and it treats agent management as a first-class concern.
- Gems organize sessions by project.
- Shards are named terminal sessions within a gem.
- The Crystal Rail gives at-a-glance visibility across all projects and sessions.
- Isolated shards use git worktrees for conflict-free parallel work.
- Floating approval panels surface prompts without disrupting your flow.
- Conversation history persists automatically.
- Notifications tell you when any session needs attention.
This isn’t about making the terminal prettier. It’s about building the management layer that agentic coding workflows require.
The category is emerging
I don’t think Crystl will be the only tool in this space for long. As agentic coding becomes mainstream — and it is becoming mainstream fast — the need for agent management tooling will become obvious to everyone.
The teams and individuals who figure out agent management first will have a compounding advantage. Not because they have better AI, but because they can run more of it, in parallel, without things falling apart.
What you can do today
If you’re feeling the management bottleneck, here are practical steps:
- Acknowledge the shift. The hard problem is no longer “can the AI code.” It’s “can I manage what the AI is doing.” Treating it as a real workflow challenge — not just a terminal inconvenience — is step one.
- Stop running agents in generic terminals. General-purpose terminals don’t know what an agent is, what an approval prompt is, or what a project is. You’re fighting the tool.
- Invest in parallel execution. If you’re still working one serial Claude session at a time, you’re underutilizing the technology. Set up isolated environments so you can safely run multiple agents.
- Preserve context. Every conversation with an AI agent contains decisions, reasoning, and institutional knowledge. Treat it like documentation, not like disposable terminal output.
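The isolation step above needs nothing fancier than git worktrees. A minimal sketch, run from inside the repository; branch and directory names are illustrative:

```shell
# Give each agent its own working directory on its own branch,
# so two sessions can edit the same repo without clobbering each other.
git worktree add ../agent-auth -b agent/auth-refactor
git worktree add ../agent-docs -b agent/docs-update

# Point one Claude session at ../agent-auth and another at ../agent-docs.
git worktree list   # see every active sandbox at a glance

# Once a branch is merged, retire its sandbox:
git worktree remove ../agent-auth
git branch -d agent/auth-refactor
```

Each worktree shares the same object database, so this is cheap: no extra clones, and every agent’s commits land in the one repository you already have.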
Crystl handles all of this out of the box and it’s free — sign up at crystl.dev/login. But regardless of what tool you use, the management problem is real and it’s worth solving intentionally.