code-review
Systematic pre-commit quality gate using checklist-based review methodology with announcement protocol. Use when reviewing code before commit, performing quality gates, conducting systematic code review, or when fresh-eyes, pre-commit-review, quality-gate, or --review are mentioned. Reviewer agent should load this for thorough reviews.
$ Install
git clone https://github.com/outfitter-dev/agents /tmp/agents && cp -r /tmp/agents/baselayer/skills/code-review ~/.claude/skills/
Tip: run this command in your terminal to install the skill.
name: code-review
version: 1.0.0
Fresh Eyes Review
Systematic pre-commit quality gate → checklist-based review → findings → summary.
<when_to_use>
- Pre-commit code review and quality gates
- Pre-merge pull request reviews
- Systematic code audits before deployment
- Quality verification for critical changes
- Second-opinion review requests
NOT for: quick sanity checks, trivial typo fixes, formatting-only changes
</when_to_use>
<announcement_protocol>
Starting Review
Review Scope: { files/areas under review }
Focus Areas: { specific concerns or general quality gate }
Checklist: { full or targeted categories }
During Review
Emit findings as discovered:
- {SEVERITY} {FILE_PATH}:{LINE} – { issue description }
- Impact: { consequences if shipped }
- Fix: { concrete remediation }
Completing Review
Review Complete
Findings Summary:
- Severe: {COUNT} – blocking issues
- Moderate: {COUNT} – should fix before merge
- Minor: {COUNT} – consider addressing
Recommendation: { ship / fix blockers / needs rework }
{ detailed findings below if any found }
</announcement_protocol>
Type Safety
- ✓ No `any` types without justification comment
- ✓ Null/undefined handled explicitly (optional chaining, nullish coalescing)
- ✓ Type guards used for union types
- ✓ Discriminated unions for state machines
- ✓ Generic constraints specified where needed
- ✓ Return types explicit on public functions
- ✓ No type assertions without safety comment
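The discriminated-union, type-guard, and explicit-return-type items above can be sketched in a few lines of TypeScript. This is a hypothetical request-state machine; the names are illustrative, not from any real codebase:

```typescript
// Discriminated union for a state machine: `kind` is the discriminant.
type RequestState =
  | { kind: "idle" }
  | { kind: "loading"; startedAt: number }
  | { kind: "success"; data: string }
  | { kind: "failure"; error: Error };

// Type guard narrows the union without a type assertion.
function isFailure(
  state: RequestState,
): state is Extract<RequestState, { kind: "failure" }> {
  return state.kind === "failure";
}

// Explicit return type; the switch is exhaustive over the union,
// so adding a new `kind` later becomes a compile error here.
function describe(state: RequestState): string {
  switch (state.kind) {
    case "idle":
      return "waiting";
    case "loading":
      return `loading since ${state.startedAt}`;
    case "success":
      return `got ${state.data}`;
    case "failure":
      return `failed: ${state.error.message}`;
  }
}
```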
Error Handling
- ✓ All error paths handled (no silent failures)
- ✓ Meaningful error messages with context
- ✓ Errors propagated or logged appropriately
- ✓ Result types used for expected failures
- ✓ Try/catch blocks have specific error handling
- ✓ Promise rejections handled
- ✓ Resource cleanup in finally blocks
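The Result-type item can be sketched like this. The `Result` shape below is an assumption for illustration; many codebases use a library such as neverthrow for the same idea:

```typescript
// Minimal Result type for expected failures.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

function parsePort(raw: string): Result<number, string> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    // Expected failure: no throw, no silent fallback, and the error
    // message carries context about what was rejected and why.
    return { ok: false, error: `invalid port "${raw}": expected integer 1-65535` };
  }
  return { ok: true, value: n };
}
```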
Security
- ✓ User input validated before use
- ✓ No hardcoded secrets or credentials
- ✓ Authentication/authorization checks present
- ✓ Parameterized queries (no SQL injection)
- ✓ XSS prevention (sanitized output)
- ✓ CSRF protection where applicable
- ✓ Sensitive data encrypted/hashed
- ✓ Rate limiting on public endpoints
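The input-validation and parameterized-query items, sketched in TypeScript. `findUserQuery` and its return shape are hypothetical, but real drivers (pg, mysql2) accept the same text-plus-parameters pair:

```typescript
// Parameterized-query pattern: SQL text with placeholders and a separate
// parameter array, never string concatenation.
function findUserQuery(email: string): { text: string; params: string[] } {
  // Validate user input before use; reject malformed values early.
  if (!/^[^\s@]+@[^\s@]+$/.test(email)) {
    throw new Error(`invalid email: ${JSON.stringify(email)}`);
  }
  // BAD:  `SELECT * FROM users WHERE email = '${email}'`  (injection risk)
  // GOOD: placeholder plus bound parameter.
  return { text: "SELECT id, email FROM users WHERE email = $1", params: [email] };
}
```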
Testing
- ✓ Tests exist for new functionality
- ✓ Edge cases covered
- ✓ Error scenarios tested
- ✓ Actual assertions (not just execution)
- ✓ No test pollution (proper setup/teardown)
- ✓ Mocks used appropriately (not overused)
- ✓ Test names describe behavior
- ✓ Integration tests for critical paths
Code Quality
- ✓ Names reveal intent (functions, variables, types)
- ✓ Functions <50 lines (single responsibility)
- ✓ Files <500 lines (consider splitting)
- ✓ No magic numbers (use named constants)
- ✓ DRY violations eliminated
- ✓ Nested conditionals <3 deep
- ✓ Cyclomatic complexity reasonable
- ✓ Dead code removed
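The magic-numbers item, as a before/after sketch; the constant names are hypothetical:

```typescript
// Before: magic numbers obscure intent.
//   if (age >= 18 && retries < 3) { ... }

// After: named constants reveal intent.
const ADULT_AGE = 18;
const MAX_RETRIES = 3;

function canRetry(age: number, retries: number): boolean {
  return age >= ADULT_AGE && retries < MAX_RETRIES;
}
```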
Documentation
- ✓ Public APIs have JSDoc/TSDoc
- ✓ Complex algorithms explained
- ✓ Non-obvious decisions documented
- ✓ Breaking changes noted
- ✓ TODOs have context and owner
- ✓ README updated if behavior changes
- ✓ Examples provided for complex usage
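A small TSDoc example for the public-API item; `parseTags` is a made-up function:

```typescript
/**
 * Splits a comma-separated tag string into trimmed, non-empty tags.
 *
 * @param raw - Raw user input, e.g. `"a, b, ,c"`.
 * @returns Tags in input order with surrounding whitespace stripped.
 * @example
 * parseTags("a, b, ,c"); // ["a", "b", "c"]
 */
function parseTags(raw: string): string[] {
  return raw
    .split(",")
    .map((tag) => tag.trim())
    .filter((tag) => tag.length > 0);
}
```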
Performance
- ✓ No obvious N+1 queries
- ✓ Appropriate data structures used
- ✓ Unnecessary allocations avoided
- ✓ Heavy operations async/batched
- ✓ Caching where beneficial
- ✓ Database indexes considered
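The N+1 item, sketched as one batched lookup plus an in-memory join. `fetchUsersByIds` is a hypothetical stand-in for a single database round trip:

```typescript
// Stand-in for one batched query (e.g. `WHERE id = ANY($1)`);
// real code would call the database driver here.
function fetchUsersByIds(ids: number[]): Map<number, string> {
  return new Map(ids.map((id): [number, string] => [id, `user-${id}`]));
}

function namesFor(ids: number[]): string[] {
  // BAD (N+1): one lookup per id inside a loop.
  // GOOD: a single batched lookup, then an in-memory join.
  const users = fetchUsersByIds(ids); // 1 query instead of ids.length
  return ids.map((id) => users.get(id) ?? "unknown");
}
```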
Rust-Specific (when applicable)
- ✓ `rustfmt` and `clippy` passing
- ✓ `Result` preferred over panic
- ✓ No `unwrap`/`expect` outside tests/startup
- ✓ Ownership/borrowing idiomatic
- ✓ `Send`/`Sync` bounds respected
- ✓ Unsafe code justified with comments
- ✓ Proper error types (`thiserror`/`anyhow`)
1. Announce (activeForm: Announcing review)
Emit starting protocol:
- Scope of review
- Focus areas
- Checklist approach (full or targeted)
2. Checklist (activeForm: Running checklist review)
Systematically verify each category:
- Type Safety → Error Handling → Security → Testing → Quality → Docs → Performance
- Flag violations immediately with severity
- Note clean areas briefly
3. Deep Dive (activeForm: Investigating findings)
For each finding:
- Verify it's actually a problem (not false positive)
- Assess severity and impact
- Determine concrete fix
- Check for pattern across codebase
4. Summarize (activeForm: Compiling review summary)
Emit completion protocol:
- Findings count by severity
- Recommendation (ship / fix blockers / rework)
- Detailed findings list
- Optional: patterns noticed, suggestions for future
Use TodoWrite with activeForm for tracking review phases.
<finding_format>
{SEVERITY} {FILE_PATH}:{LINE_RANGE}
Issue: { clear description of problem }
Impact: { consequences if shipped – security risk, runtime error, maintenance burden, etc. }
Fix: { concrete steps to remediate }
Pattern: { if issue appears multiple times, note scope }
Example:
Severe src/auth/login.ts:45-52
Issue: Password compared using == instead of constant-time comparison
Impact: Timing attack vulnerability – attacker can infer password length and content through response timing
Fix: Use crypto.timingSafeEqual() or bcrypt's built-in comparison
Pattern: Single occurrence
</finding_format>
<severity_guidance>
Severe (blocking):
- Security vulnerabilities
- Data loss risks
- Runtime crashes in common paths
- Breaking changes without migration
- Test failures or missing critical tests
Moderate (should fix):
- Type safety violations
- Unhandled error cases
- Poor error messages
- Missing tests for edge cases
- Significant code quality issues
- Missing documentation for public APIs
Minor (consider addressing):
- Code style inconsistencies
- Overly complex but functional code
- Minor performance optimizations
- Documentation improvements
- TODOs without context
- Naming improvements
</severity_guidance>
Loop: Scan → Verify → Document → Next category
- Announce review → scope, focus, approach
- Run checklist → systematically verify each category
- Document findings → severity, location, issue, impact, fix
- Investigate patterns → does the finding repeat? Broader issue?
- Deep dive blockers → verify severity assessment, ensure fix is clear
- Compile summary → counts by severity, recommendation
- Deliver findings → completion protocol with detailed list
At each finding:
- Verify it's actually a problem
- Assess impact if shipped
- Determine concrete fix
- Note if pattern across files
Before completing review:
Check coverage:
- ✓ All checklist categories verified?
- ✓ Both happy path and error paths reviewed?
- ✓ Tests examined for actual assertions?
- ✓ Security-sensitive areas given extra scrutiny?
Check findings quality:
- ✓ Severity accurately assessed?
- ✓ Impact clearly explained?
- ✓ Fix actionable and concrete?
- ✓ False positives eliminated?
Check recommendation:
- ✓ Aligned with findings severity?
- ✓ Blockers clearly marked?
- ✓ Path forward unambiguous?
ALWAYS:
- Announce review start with scope and focus
- Run systematic checklist, don't skip categories
- Emit findings as discovered, don't batch at end
- Assess severity honestly (err toward caution)
- Provide concrete fixes, not just complaints
- Complete with summary and recommendation
- Mark false positives if checklist item doesn't apply
- Consider patterns (single issue or systemic?)
NEVER:
- Skip checklist review for "quick check"
- Assume code is safe without verification
- Flag style preferences as blockers
- Provide vague findings without fix guidance
- Approve severe findings "for later fix"
- Complete review without announcement protocol
- Miss security checks on user input paths
- Ignore test quality (execution != validation)
Core methodology:
- checklist.md – extended checklist details, examples, severity guidance
- FORMATTING.md – formatting conventions
Related skills:
- codebase-analysis – evidence-based investigation (foundation for review)
- debugging-and-diagnosis – structured bug investigation
