hooks-eval
Detailed hook evaluation framework for Claude Code and Agent SDK hooks.
Installation
```bash
git clone https://github.com/athola/claude-night-market /tmp/claude-night-market && \
  cp -r /tmp/claude-night-market/plugins/abstract/skills/hooks-eval ~/.claude/skills/
```
Tip: Run this command in your terminal to install the skill.
```yaml
name: hooks-eval
description: |
  Detailed hook evaluation framework for Claude Code and Agent SDK hooks.
  Triggers: hook audit, hook security, hook performance, hook compliance, SDK hooks,
  hook evaluation, hook benchmarking, hook vulnerability
  Use when: auditing existing hooks for security vulnerabilities, benchmarking hook
  performance, implementing hooks using Python SDK, understanding hook callback
  signatures, validating hooks against compliance standards
  DO NOT use when: deciding hook placement - use hook-scope-guide instead.
  DO NOT use when: writing hook rules from scratch - use hookify instead.
  DO NOT use when: validating plugin structure - use validate-plugin instead.
  Use this skill BEFORE deploying hooks to production.
version: 1.0.0
category: hook-management
tags: [hooks, evaluation, security, performance, claude-sdk, agent-sdk]
dependencies: [hook-scope-guide]
provides:
  infrastructure: ["hook-evaluation", "security-scanning", "performance-analysis"]
  patterns: ["hook-auditing", "sdk-integration", "compliance-checking"]
sdk_features:
  - "python-sdk-hooks"
  - "hook-callbacks"
  - "hook-matchers"
estimated_tokens: 1200
```
Hooks Evaluation Framework
Overview
This skill provides a detailed framework for evaluating, auditing, and implementing Claude Code hooks across all scopes (plugin, project, global), covering both JSON-based and programmatic (Python SDK) hooks.
Key Capabilities
- Security Analysis: Vulnerability scanning, dangerous pattern detection, injection prevention (see the sketch after this list)
- Performance Analysis: Execution time benchmarking, resource usage, optimization
- Compliance Checking: Structure validation, documentation requirements, best practices
- SDK Integration: Python SDK hook types, callbacks, matchers, and patterns
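To make the security analysis concrete, here is a minimal, hypothetical dangerous-pattern scanner; the pattern list and the `scan_hook_script` helper are illustrative stand-ins, not the skill's actual rule set (that lives in `modules/security-patterns.md`):

```python
import re
from pathlib import Path

# Hypothetical examples of patterns a hook audit might flag.
DANGEROUS_PATTERNS = [
    (re.compile(r"\brm\s+-rf\s+/"), "recursive delete from root"),
    (re.compile(r"\beval\s*\("), "dynamic eval of untrusted input"),
    (re.compile(r"curl[^|\n]*\|\s*(ba)?sh"), "pipe-to-shell download"),
    (re.compile(r"os\.system\s*\("), "unsanitized shell execution"),
]

def scan_hook_script(path: Path) -> list[str]:
    """Return human-readable findings for one hook script."""
    findings = []
    text = path.read_text(encoding="utf-8", errors="replace")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, reason in DANGEROUS_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: {reason}")
    return findings
```

Real audits combine pattern checks like these with the injection and validation analysis covered in the scoring rubric below.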
Core Components
| Component | Purpose |
|---|---|
| Hook Types Reference | Complete SDK hook event types and signatures |
| Evaluation Criteria | Scoring system and quality gates |
| Security Patterns | Common vulnerabilities and mitigations |
| Performance Benchmarks | Thresholds and optimization guidance (timing sketch below) |
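Performance analysis boils down to timing hook callbacks against thresholds. Below is a minimal async timing harness sketch; `benchmark_hook`, the stubbed `None` arguments, and the 100-run default are illustrative, and the real thresholds live in `modules/performance-guide.md`:

```python
import asyncio
import time
from statistics import mean, quantiles
from typing import Any

async def benchmark_hook(hook, input_data: dict[str, Any], runs: int = 100) -> None:
    """Time an async hook callback over repeated calls (illustrative harness)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        # tool_use_id and context are stubbed with None for benchmarking only
        await hook(input_data, None, None)
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    p95 = quantiles(samples, n=20)[18]  # last cut point ≈ 95th percentile
    print(f"mean={mean(samples):.2f}ms  p95={p95:.2f}ms  over {runs} runs")

# Usage (hypothetical hook): asyncio.run(benchmark_hook(audit_bash, {"tool_name": "Bash"}))
```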
Quick Reference
Hook Event Types
```python
from typing import Literal

HookEvent = Literal[
    "PreToolUse",        # Before tool execution
    "PostToolUse",       # After tool execution
    "UserPromptSubmit",  # When user submits prompt
    "Stop",              # When stopping execution
    "SubagentStop",      # When a subagent stops
    "PreCompact",        # Before message compaction
]
```
Note: Python SDK does not support SessionStart, SessionEnd, or Notification hooks due to setup limitations.
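These event names double as the keys used when registering hooks programmatically. A minimal registration sketch, assuming the `claude_agent_sdk` package's `ClaudeAgentOptions` and `HookMatcher`:

```python
from claude_agent_sdk import ClaudeAgentOptions, HookMatcher

# `audit_bash` and `log_tool_result` are hypothetical callbacks with the
# signature shown in the next section.
options = ClaudeAgentOptions(
    hooks={
        "PreToolUse": [HookMatcher(matcher="Bash", hooks=[audit_bash])],
        "PostToolUse": [HookMatcher(hooks=[log_tool_result])],  # no matcher: fires for every tool
    }
)
```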
Hook Callback Signature
```python
from typing import Any

from claude_agent_sdk import HookContext

async def my_hook(
    input_data: dict[str, Any],   # Hook-specific input
    tool_use_id: str | None,      # Tool ID (for tool hooks)
    context: HookContext,         # Additional context
) -> dict[str, Any]:              # Return decision/messages
    ...
```
Return Values
```python
return {
    "decision": "block",          # Optional: block the action
    "systemMessage": "...",       # Optional: add to transcript
    "hookSpecificOutput": {...},  # Optional: hook-specific data
}
```
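Putting the signature and return shape together: a sketch of a `PreToolUse` hook that blocks one obviously dangerous Bash pattern. The `tool_input`/`command` field names and the empty-dict "allow" convention reflect common hook input shapes but should be verified against the SDK docs; the check itself is a placeholder, not a real security policy.

```python
from typing import Any

from claude_agent_sdk import HookContext  # assumed import path

async def audit_bash(
    input_data: dict[str, Any],
    tool_use_id: str | None,
    context: HookContext,
) -> dict[str, Any]:
    """Block Bash tool calls that contain an obviously dangerous pattern."""
    # PreToolUse input typically carries tool_name / tool_input (assumption).
    command = input_data.get("tool_input", {}).get("command", "")
    if "rm -rf /" in command:
        return {
            "decision": "block",
            "systemMessage": f"hooks-eval: blocked dangerous command {command!r}",
        }
    return {}  # no decision: allow the tool call to proceed
```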
Quality Scoring (100 points)
| Category | Points | Focus |
|---|---|---|
| Security | 30 | Vulnerabilities, injection, validation |
| Performance | 25 | Execution time, memory, I/O |
| Compliance | 20 | Structure, documentation, error handling |
| Reliability | 15 | Timeouts, idempotency, degradation |
| Maintainability | 10 | Code structure, modularity |
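For illustration, the weighted total can be computed from per-category ratios. The weights below come from the table above, while `GATE_THRESHOLD` and the helpers are hypothetical (the authoritative rubric is `modules/evaluation-criteria.md`):

```python
# Maximum points per category, from the table above.
WEIGHTS = {
    "security": 30,
    "performance": 25,
    "compliance": 20,
    "reliability": 15,
    "maintainability": 10,
}
GATE_THRESHOLD = 80  # hypothetical quality gate, not the skill's real cutoff

def total_score(ratios: dict[str, float]) -> float:
    """Combine per-category ratios (0.0-1.0) into a 100-point score."""
    return sum(points * ratios.get(category, 0.0) for category, points in WEIGHTS.items())

def passes_gate(ratios: dict[str, float]) -> bool:
    return total_score(ratios) >= GATE_THRESHOLD

# Example: total_score({"security": 0.9, "performance": 0.8, "compliance": 1.0,
#                       "reliability": 0.7, "maintainability": 1.0}) -> 87.5
```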
Detailed Resources
- SDK Hook Types: See `modules/sdk-hook-types.md` for complete Python SDK type definitions, patterns, and examples
- Evaluation Criteria: See `modules/evaluation-criteria.md` for detailed scoring rubric and quality gates
- Security Patterns: See `modules/security-patterns.md` for vulnerability detection and mitigation
- Performance Guide: See `modules/performance-guide.md` for benchmarking and optimization
Basic Evaluation Workflow
```
# 1. Run detailed evaluation
/hooks-eval --detailed

# 2. Focus on security issues
/hooks-eval --security-only --format sarif

# 3. Benchmark performance
/hooks-eval --performance-baseline

# 4. Check compliance
/hooks-eval --compliance-report
```
Integration with Other Tools
```
# Complete plugin evaluation pipeline
/hooks-eval --detailed           # Evaluate all hooks
/analyze-hook hooks/specific.py  # Deep-dive on one hook
/validate-plugin .               # Validate overall structure
```
Related Skills
- `hook-scope-guide` - Decide where to place hooks (plugin/project/global)
- `hookify:writing-rules` - Write hook rules and patterns
- `validate-plugin` - Validate complete plugin structure
Repository
https://github.com/athola/claude-night-market