quality-judge

Evaluate LLM benchmark outputs against quality rubrics for STPA-Sec analysis. Use when comparing model outputs, assessing component extraction quality, UCA analysis correctness, or scenario generation completeness.

$ Install

git clone https://github.com/AISecurityAssurance/ai-sec /tmp/ai-sec && cp -r /tmp/ai-sec/.claude/skills/quality-judge ~/.claude/skills/quality-judge

// tip: Run this command in your terminal to install the skill
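To confirm the install and remove the temporary clone, a minimal optional follow-up, assuming the default paths above:

ls ~/.claude/skills/quality-judge && rm -rf /tmp/ai-sec

// tip: The ls should list the skill's files (typically a SKILL.md); the rm just cleans up /tmp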