code-review-assistant
Comprehensive PR review that synthesizes security, performance, correctness, style, and documentation findings with actionable fixes.
allowed_tools: Read, Write, Edit, Bash, Glob, Grep, Task, TodoWrite
model: sonnet
Installer
git clone https://github.com/DNYoussef/context-cascade /tmp/context-cascade && cp -r /tmp/context-cascade/skills/quality/code-review-assistant ~/.claude/skills/context-cascade/
Tip: Run this command in your terminal to install the skill.
SKILL.md
name: code-review-assistant
description: Comprehensive PR review that synthesizes security, performance, correctness, style, and documentation findings with actionable fixes.
allowed-tools:
- Read
- Write
- Edit
- Bash
- Glob
- Grep
- Task
- TodoWrite
model: sonnet
x-version: 3.2.0
x-category: quality
x-vcl-compliance: v3.1.1
x-cognitive-frames:
- HON
- MOR
- COM
- CLS
- EVD
- ASP
- SPC
STANDARD OPERATING PROCEDURE
Purpose
Perform an evidential, multi-perspective code review for pull requests or code drops, producing actionable findings with references, severity, and suggested remediations.
Trigger Conditions
- Positive: full PR reviews, pre-merge gates, production hardening, or targeted sweeps for security/performance/style/regression risk.
- Negative: trivial changes (typos, formatting-only), generated/vendor code, or runtime debugging tasks (route to functionality-audit).
Guardrails
- Confidence ceiling rule: Always append "Confidence: X.XX (ceiling: TYPE Y.YY)" with ceilings {inference/report 0.70, research 0.85, observation/definition 0.95}.
- Evidence discipline: Every finding must cite file:line, evidence type (DIRECT, STYLE_RULE, BEST_PRACTICE), and a reference or benchmark.
- Structure-first: Ensure examples and tests reflect review patterns; update references to match observed standards.
- Adversarial validation: Probe for false positives/negatives; explicitly mark uncertain findings as “needs manual review.”
- English-only output: Keep user-facing content free of VCL markers.
Execution Phases
- Intake & Scoping
  - Identify PR scope, change risk areas, and excluded paths (generated/vendor/testdata).
  - Map reviewers or lenses: security, performance, maintainability, style, documentation, test adequacy.
- Static Evidence Gathering
  - Scan diffs for architecture shifts, API changes, and data/schema impacts (a scoping sketch follows this list).
  - Note smells: complex functions, duplicated logic, risky patterns (e.g., unchecked inputs).
- Targeted Deep Dives
  - For each lens, capture findings with file:line references and rationale.
  - Validate claims against standards (OWASP, performance budgets, style guide, testing heuristics).
- Remediation Planning
  - Propose fixes with severity (critical/major/minor/nit) and expected impact.
  - Flag blockers that must be resolved pre-merge versus deferred follow-ups.
- Validation & Confidence
  - Cross-check findings for overlap and conflicts; remove duplicates.
  - State remaining risks and confidence with explicit ceiling.
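A minimal sketch of the static evidence gathering pass, assuming the PR branch is checked out locally with Git; the base branch name, the exclusion pathspecs, and the smell patterns below are illustrative placeholders, not part of the skill.
BASE="origin/main"                                    # assumed merge base; adjust per repository
EXCLUDES=(':(exclude)vendor/' ':(exclude)testdata/')  # illustrative excluded paths
# Shape and size of the change, minus excluded paths.
git diff --stat "$BASE"...HEAD -- . "${EXCLUDES[@]}"
# Changed files, used to route each file to the relevant review lenses.
git diff --name-only "$BASE"...HEAD -- . "${EXCLUDES[@]}"
# Surface a few illustrative smells on added lines only.
git diff "$BASE"...HEAD -- . "${EXCLUDES[@]}" | grep -E '^\+.*(eval\(|os\.system|TODO|FIXME)' || true
Anything surfaced this way is raw evidence only; the targeted deep dives still decide severity and remediation.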
Output Format
- Overview of scope, excluded areas, and applied review lenses.
- Finding list with severity, file:line, evidence type, and references.
- Remediation guidance grouped by priority (blocker/should/could).
- Residual risks, required follow-ups, and confidence statement with ceiling.
Validation Checklist
- Scope and exclusions documented.
- Findings include file:line, evidence type, and reference.
- Blockers vs. follow-ups separated with remediation steps.
- Adversarial checks performed; uncertain items flagged.
- Confidence ceiling provided; output is English-only.
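As a rough aid for the checklist above, a grep-based self-check of a drafted report; the review.md filename and the patterns are assumptions for illustration.
REPORT="review.md"   # assumed draft location
# Findings should carry file:line references.
grep -En '[A-Za-z0-9_./-]+:[0-9]+' "$REPORT"
# The report should close with an explicit confidence ceiling statement.
grep -E 'Confidence: [0-9]\.[0-9]{2} \(ceiling:' "$REPORT"
Empty output from either check means the corresponding checklist item is not yet satisfied.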
Confidence: 0.72 (ceiling: inference 0.70) - SOP refactored with Prompt Architect confidence discipline and Skill Forge structure-first review pattern.
Repository
DNYoussef/context-cascade/skills/quality/code-review-assistant
Author: DNYoussef
Stars: 8
Forks: 2
Updated: 5d ago
Added: 1w ago