Using AI for Code Review and Quality
Code review is one of the most time-consuming parts of development. AI coding assistants can help — not by replacing human review, but by catching issues before they reach a reviewer and helping you understand unfamiliar code faster.
AI-assisted review techniques
Ask for a review before committing
Before you open a PR, ask your AI coding assistant to review the changes:
Review the changes I've made on this branch. Look for bugs,
edge cases, security issues, and anything that doesn't follow
the patterns in this codebase.
The assistant will read the diff, examine the affected files, and point out potential issues. It catches things like:
- Null reference errors
- Missing error handling
- Inconsistent naming
- Unused imports or variables
- Security concerns (SQL injection, XSS, exposed secrets)
This pre-review step catches obvious issues before a human reviewer sees them, saving everyone time.
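To make this concrete, here is a small, hypothetical TypeScript snippet showing the kind of issue such a pre-review tends to flag. The Database interface and function names are illustrative, not taken from any particular codebase:

```typescript
// Illustrative only: a minimal query interface standing in for your DB client.
interface Database {
  query(sql: string, params?: unknown[]): Promise<Array<Record<string, unknown>>>;
}

// Flagged: user input is concatenated into SQL (injection risk),
// and the result is never checked before use.
async function getUser(db: Database, email: string) {
  return db.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Suggested fix: parameterized query plus an explicit not-found check.
async function getUserSafe(db: Database, email: string) {
  const rows = await db.query("SELECT * FROM users WHERE email = $1", [email]);
  if (rows.length === 0) {
    throw new Error(`No user found for ${email}`);
  }
  return rows[0];
}
```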
Understand unfamiliar code
When you’re reviewing a PR that touches code you don’t know well:
Explain what the code in src/payments/processor.ts does
and how these changes affect its behavior.
The AI reads the file, traces the logic, and explains it in plain English. This is faster than reading through unfamiliar code yourself and helps you give more informed review feedback.
Check for test coverage
After making changes, ask the AI to evaluate test coverage:
Are there any edge cases in the auth middleware that aren't
covered by tests? Write tests for any gaps you find.
The AI reads both the implementation and the test files, identifies gaps, and generates tests to fill them.
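As an illustration, here is a sketch of the kind of gap-filling test the AI might produce, assuming a Jest-style runner and an Express-style middleware signature. The import path, handler names, and expected behavior are assumptions made for the example:

```typescript
import { authMiddleware } from "../src/middleware/auth"; // hypothetical path

describe("authMiddleware edge cases", () => {
  it("rejects a request with no Authorization header", () => {
    const req = { headers: {} } as any;
    const res = { status: jest.fn().mockReturnThis(), json: jest.fn() } as any;
    const next = jest.fn();

    authMiddleware(req, res, next);

    // The request should be rejected cleanly, not passed through or crashed.
    expect(res.status).toHaveBeenCalledWith(401);
    expect(next).not.toHaveBeenCalled();
  });

  it("rejects a malformed Authorization header", () => {
    const req = { headers: { authorization: "NotBearer abc123" } } as any;
    const res = { status: jest.fn().mockReturnThis(), json: jest.fn() } as any;
    const next = jest.fn();

    authMiddleware(req, res, next);

    expect(res.status).toHaveBeenCalledWith(401);
  });
});
```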
Refactor with confidence
When you want to refactor but worry about breaking things:
Refactor the UserService class to use dependency injection
instead of importing the database module directly. Make sure
all existing tests still pass.
The AI makes the changes and runs the test suite, iterating until everything passes.
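A minimal sketch of what such a refactor could look like, assuming the service previously imported a database module directly. The UserStore interface and method names are illustrative:

```typescript
// Before: the service imported a concrete db module, making it hard to test.
// import { db } from "./db";

// After: the dependency is injected through a narrow interface.
interface UserStore {
  findById(id: string): Promise<{ id: string; email: string } | null>;
}

class UserService {
  constructor(private readonly store: UserStore) {}

  async getUser(id: string) {
    const user = await this.store.findById(id);
    if (!user) {
      throw new Error(`User ${id} not found`);
    }
    return user;
  }
}

// Tests can now pass a stub instead of a real database connection.
const service = new UserService({
  findById: async (id) => ({ id, email: "test@example.com" }),
});
```

Because the public behavior is unchanged, the existing test suite should still pass, and new tests can substitute a simple stub for the real database.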
Building a review workflow
Pre-commit review
Before committing any AI-generated code, review it yourself. Even if you approved each change individually, look at the full diff to make sure the pieces fit together:
git diff --staged
Check that:
- The overall approach makes sense
- No unnecessary files were changed
- The code style matches your project
- Tests were added for new functionality
Branch-based review
Run AI code review on a feature branch before merging:
- Create a branch for your feature
- Let the AI implement the feature
- Ask the AI to review its own work
- Review the full diff yourself
- Run the test suite
- Merge only when you’re satisfied
Parallel review sessions
For large PRs or complex changes, you can run multiple review passes:
- One session focused on security
- One session focused on performance
- One session focused on test coverage
Each session examines the same code from a different angle. Running these in parallel saves time compared to doing them sequentially.
If you’re running multiple review sessions at once, Crystl helps by organizing each session in its own shard within the project’s gem. You can see all review sessions at a glance and switch between them from the Crystal Rail.
What AI review catches (and misses)
Good at catching
- Syntax and type errors — Especially across file boundaries
- Missing error handling — Unchecked returns, unhandled promises, missing try/catch
- Inconsistencies — Naming conventions, code style, patterns that differ from the rest of the codebase
- Common security issues — SQL injection, XSS, hardcoded secrets, insecure defaults
- Dead code — Unused variables, unreachable branches, orphaned functions
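For example, the missing-error-handling and dead-code cases above often look like this hypothetical snippet, which an automated pass flags reliably:

```typescript
// Flagged: the promise is never awaited, so failures become silent
// unhandled rejections, and `legacyFormat` is never referenced (dead code).
function saveSettings(api: { save(data: object): Promise<void> }, data: object) {
  api.save(data); // missing await
}

const legacyFormat = (data: object) => JSON.stringify(data); // unused

// Suggested fix: await the call and surface the failure to the caller.
async function saveSettingsSafe(api: { save(data: object): Promise<void> }, data: object) {
  try {
    await api.save(data);
  } catch (err) {
    throw new Error(`Failed to save settings: ${String(err)}`);
  }
}
```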
Less good at catching
- Business logic errors — The AI doesn’t know your business rules unless you tell it
- Architectural concerns — Whether a feature belongs in this service or another
- Performance at scale — The AI doesn’t know your traffic patterns
- UX implications — Code changes that affect user experience in subtle ways
Tips for effective AI review
- Be specific about what to look for. “Review for security issues” is better than “review this code.”
- Provide context. Tell the AI about your deployment environment, user base, or compliance requirements if relevant.
- Don’t skip human review. AI review is a supplement, not a replacement. Use it to catch the easy stuff so human reviewers can focus on the hard stuff.
- Review the AI’s review. Sometimes the AI flags things that aren’t actually problems. Apply judgment.
- Keep history. Being able to go back and see what the AI flagged (and what you decided) is valuable context. Tools like Crystl preserve full session history automatically.