quality-judge
Evaluate LLM benchmark outputs against quality rubrics for STPA-Sec analysis. Use this skill when comparing model outputs, assessing component extraction quality, checking UCA (Unsafe Control Action) analysis correctness, or judging scenario generation completeness.
Install
git clone https://github.com/AISecurityAssurance/ai-sec /tmp/ai-sec && cp -r /tmp/ai-sec/.claude/skills/quality-judge ~/.claude/skills/ai-sec/
Tip: run this command in your terminal to install the skill.
Repository: AISecurityAssurance/ai-sec/.claude/skills/quality-judge
Author: AISecurityAssurance
Stars: 0 · Forks: 0
Updated: 1 week ago · Added: 1 week ago