AI Pair Programming: How It Works and How to Do It Well
Pair programming used to mean two developers at one keyboard. AI has changed the equation — your pair partner is now an AI agent that can read your entire codebase, suggest approaches, and implement changes while you review and steer.
Here’s how to make it work.
What AI pair programming looks like
Traditional pair programming has a “driver” (typing) and a “navigator” (thinking, reviewing). AI pair programming flips this:
- You navigate. You decide what to build, describe the approach, and review the results.
- The AI drives. It reads the code, plans the implementation, makes edits, and runs commands.
- You approve. Each action requires your sign-off before it’s executed.
This is fundamentally different from autocomplete. You’re not filling in the next line — you’re directing an agent that works across your entire project.
The workflow
1. Set context
Start by telling the AI what you’re working on and what you want to accomplish. Be specific:
Vague: “Make the app better”
Better: “Add rate limiting to the /api/users endpoint. Use a sliding window algorithm. Store counts in Redis.”
The more context you provide upfront, the better the result. Include constraints, preferences, and patterns you want followed.
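To make the contrast concrete, here is roughly the shape of result the specific prompt is asking for: a minimal sliding-window limiter. It keeps counts in memory rather than Redis so it stays self-contained, and the class name and parameters are illustrative, not from any real library.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per key within a rolling `window` seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        """Record a request at time `now`; return True if it is within the limit."""
        if now is None:
            now = time.monotonic()
        q = self.hits[key]
        # Evict timestamps that have slid out of the window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=3, window=60.0)
print([limiter.allow("user-1", now=t) for t in (0, 1, 2, 3)])
# prints [True, True, True, False]
```

A Redis-backed version would follow the same logic with a sorted set per key, but the algorithm is the part a specific prompt pins down.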
2. Let it explore
A good AI coding agent reads your codebase before writing code. When you give Claude Code a task, it uses tools like Read, Grep, and Glob to understand your project structure, find existing patterns, and identify the right files to modify.
Don’t rush this step. The exploration phase is where the AI builds context that leads to better code.
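You can mimic the same exploration at the shell to build your own mental model before prompting. The project layout, file name, and search term below are invented for illustration:

```shell
set -e
# Throwaway project so the commands below have something to find
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
printf 'export function rateLimit() {}\n' > "$tmp/src/limit.middleware.ts"

# Rough shell analogues of an agent's Grep and Glob tools
grep -rn "rateLimit" "$tmp/src"           # search content for existing patterns
find "$tmp/src" -name '*.middleware.*'    # locate files matching a glob
```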
3. Review proposals
When the AI proposes a file edit or command, read it before approving:
- Does the approach match your expectations?
- Is it following your project’s patterns?
- Are there edge cases it’s missing?
- Is it overcomplicating things?
This review step is what makes AI pair programming effective. You catch issues before they’re committed, not after.
4. Iterate
AI rarely gets everything perfect on the first try. After approving the initial changes, review the result and provide feedback:
- “The auth middleware should also check for expired tokens”
- “Use the existing `validateEmail` function instead of a regex”
- “This is too complex — simplify it”
Each round of feedback improves the result. This iterative conversation is where the real value of pair programming happens.
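Here is a sketch of what a single feedback round changes, using a hypothetical signup check (the `validate_email` helper is invented for illustration, in Python naming):

```python
import re

# Hypothetical existing project helper the feedback points the AI at
def validate_email(addr):
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", addr))

# First draft: the AI reached for an ad-hoc regex
def is_signup_allowed_v1(email):
    return bool(re.match(r".+@.+", email))

# After the feedback round: reuse the existing helper
def is_signup_allowed_v2(email):
    return validate_email(email)

print(is_signup_allowed_v1("not an@email"))  # True: the loose regex lets this through
print(is_signup_allowed_v2("not an@email"))  # False
```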
5. Verify
Ask the AI to run your test suite, or run it yourself. Verify the changes work end-to-end before committing.
Getting better results
Write good prompts
Think of each prompt as a brief for a capable developer who is new to your codebase:
- Be specific about requirements. “Add pagination with 20 items per page, cursor-based, using the `id` field.”
- Mention constraints. “Don’t add new dependencies.” “Keep the existing API contract.”
- Reference existing code. “Follow the same pattern as the `OrderService`.”
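As a concrete target for the pagination prompt above, here is a minimal sketch of cursor-based pagination over the `id` field. In-memory records stand in for a database query, and `Item` and `paginate` are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: int
    name: str

PAGE_SIZE = 20

def paginate(items, cursor=None):
    """Return (page, next_cursor): up to PAGE_SIZE items with id > cursor,
    ordered by id; next_cursor is None once the collection is exhausted."""
    rows = sorted(items, key=lambda item: item.id)
    if cursor is not None:
        rows = [item for item in rows if item.id > cursor]
    page = rows[:PAGE_SIZE]
    next_cursor = page[-1].id if len(rows) > PAGE_SIZE else None
    return page, next_cursor

data = [Item(id=i, name=f"item-{i}") for i in range(1, 46)]  # 45 records
page1, c1 = paginate(data)              # ids 1..20, next cursor 20
page2, c2 = paginate(data, cursor=c1)   # ids 21..40, next cursor 40
page3, c3 = paginate(data, cursor=c2)   # ids 41..45, next cursor None
```

Cursor-based paging stays stable when rows are inserted between requests, which is why the prompt names it rather than offset-based paging.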
Start small
Break large tasks into smaller pieces. Instead of “refactor the entire auth system,” try:
- “Extract the token validation logic into a separate module”
- “Add refresh token support”
- “Update the tests for the new module”
Each piece is easier for the AI to get right, and you can course-correct between steps.
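The first step in that breakdown might land as something like this extracted validation function. The claim names follow common JWT conventions, everything else is hypothetical, and a real version would also verify signatures:

```python
import time

# Token-validation logic extracted into its own module, now testable in isolation.
# Signature verification is deliberately omitted from this sketch.

def validate_token(claims, now=None):
    """Return True if the decoded token has a subject and has not expired."""
    if now is None:
        now = time.time()
    if not claims.get("sub"):
        return False
    return claims.get("exp", 0) > now

print(validate_token({"sub": "user-1", "exp": 2000}, now=1000))  # True
print(validate_token({"sub": "user-1", "exp": 500}, now=1000))   # False: expired
```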
Trust but verify
AI coding agents usually produce working code, but they can still:
- Misunderstand requirements
- Over-engineer solutions
- Miss edge cases
- Break existing behavior
The approval step is your safety net. Use it.
Use the right tool for the right task
AI pair programming works best for:
- Building features — Describe what you want, let the AI implement it
- Fixing bugs — Point the AI at the problem, let it investigate and fix
- Refactoring — Describe the target state, let the AI get there
- Writing tests — AI is excellent at generating test cases
- Code exploration — Ask the AI to explain unfamiliar code
It works less well for:
- Deeply creative design decisions — AI can implement your vision, but shouldn’t define it
- Performance-critical optimization — AI may not understand your specific performance constraints
- Security-sensitive code — Always have human review for auth, encryption, and access control
Scaling pair programming
One limitation of pair programming with another human is that it doesn’t scale. You can only pair with one person at a time.
With AI, you can run multiple pair programming sessions simultaneously — one agent working on auth, another writing tests, a third refactoring a module. The bottleneck shifts from implementation to review.
This is where session management becomes critical. If you’re running multiple AI pair sessions across multiple projects, you need a way to track what each one is doing and handle approval requests without constant context switching.
Crystl was built for exactly this. It organizes AI sessions into project workspaces, surfaces approval requests as floating panels that don’t interrupt your current work, and lets you run parallel sessions on the same repo using git worktree isolation. It’s pair programming at scale.
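The worktree isolation that makes this possible is plain git. A minimal sketch of setting up two parallel session checkouts, using a throwaway repo and illustrative branch names:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q main && cd main
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "init"

# One worktree per parallel AI session: same repo, separate checkouts,
# so agents on different branches never step on each other's files.
git worktree add -q -b feature/auth ../auth-session
git worktree add -q -b feature/tests ../tests-session
git worktree list
```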