# testing-skills-with-subagents

Use after writing a new skill and/or when testing existing skills, creating skill evaluations, or verifying skills work under pressure - applies TDD/RED-GREEN-REFACTOR to skill documentation by running baseline tests, measuring compliance, and closing rationalization loopholes

## Installer

```shell
git clone https://github.com/WesleyMFrederick/cc-workflows /tmp/cc-workflows && \
  cp -r /tmp/cc-workflows/.claude/skills/testing-skills-with-subagents ~/.claude/skills/testing-skills-with-subagents
```

Tip: Run this command in your terminal to install the skill.


```yaml
name: testing-skills-with-subagents
description: Use after writing a new skill and/or when testing existing skills, creating skill evaluations, or verifying skills work under pressure - applies TDD/RED-GREEN-REFACTOR to skill documentation by running baseline tests, measuring compliance, and closing rationalization loopholes
```

# Testing Skills With Subagents

## Overview

Testing skills is just TDD applied to process documentation.

Choose your testing approach based on development phase:

- Fast variant: Quick iteration during skill development (15-30 min)
- Slow variant: Rigorous validation before deployment (45-90 min)

Announce at start: "I'm using the testing-skills-with-subagents skill."
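Because the announcement is an observable behavior, it doubles as a simple compliance signal during testing. A minimal sketch, assuming test transcripts are saved as plain-text files; the `compliant` helper and the demo transcript are illustrative, not part of the skill:

```shell
# Hypothetical helper: check whether a saved transcript contains the
# required announcement. Plain-text transcript format is an assumption.
compliant() {
  grep -q "I'm using the testing-skills-with-subagents skill" "$1"
}

# Demo against a throwaway transcript file, so the sketch is self-contained.
tmp=$(mktemp)
echo "I'm using the testing-skills-with-subagents skill." > "$tmp"
if compliant "$tmp"; then echo "PASS"; else echo "FAIL"; fi
rm -f "$tmp"
```

Running the same check over several baseline transcripts gives a rough compliance rate for the RED phase.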

## Workflow

Follow this workflow exactly - do not skip steps:

```mermaid
graph TD
    a@{ shape: stadium, label: "Start: Announce Skill Usage" }
    b@{ shape: rect, label: "Ask Variant Question" }
    c@{ shape: diam, label: "Variant Choice?" }
    d@{ shape: rect, label: "Read variants/fast-conversational.md" }
    e@{ shape: rect, label: "Read variants/slow-isolated.md" }
    f@{ shape: stadium, label: "Execute Variant Workflow" }

    a --> b
    b --> c
    c -->|Fast| d
    c -->|Slow| e
    d --> f
    e --> f

    classDef start fill:#ccffcc
    classDef decision fill:#ffffcc
    classDef action fill:#ccccff

    a:::start
    c:::decision
    b:::action
    d:::action
    e:::action
    f:::start
```

Design Rationale: The Mermaid flowchart visually enforces the workflow sequence, preventing the LLM from skipping the announcement or the variant question.

## Step 1: Choose Testing Variant

Question: "Which testing variant do you want to use?"

Options:

  1. "Fast: Conversational testing with control scenarios" - 15-30 min iteration, lightweight logging, good for skill development
  2. "Slow: Worktree-based isolated testing" - 45-90 min validation, full infrastructure, deployment-ready confidence

Trade-offs:

| Aspect | Fast Variant | Slow Variant |
|---|---|---|
| Time | 15-30 min | 45-90 min |
| Infrastructure | Lightweight logs | Full worktree isolation |
| Confidence | Moderate | High (deployment-ready) |
| Best For | Iteration, hypothesis testing | Pre-deployment validation |

## Step 2: Execute Selected Variant

IF user selected "Fast: Conversational testing":

<critical-instruction>
Read variants/fast-conversational.md, then execute its workflow.
</critical-instruction>

IF user selected "Slow: Worktree-based testing":

<critical-instruction>
Read variants/slow-isolated.md, then execute its workflow.
</critical-instruction>

Do NOT proceed without reading the selected variant file.
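The choice-to-file mapping above can be sketched as a small helper. The function name is hypothetical and the install path assumes the default skill location:

```shell
# Hypothetical helper: map the user's variant choice ("fast" or "slow")
# to the variant file that must be read before proceeding.
variant_file() {
  skill_dir="$HOME/.claude/skills/testing-skills-with-subagents"
  case "$1" in
    fast) echo "$skill_dir/variants/fast-conversational.md" ;;
    slow) echo "$skill_dir/variants/slow-isolated.md" ;;
    *) echo "unknown variant: $1" >&2; return 1 ;;
  esac
}

variant_file fast   # prints .../variants/fast-conversational.md
```

The fallthrough case returns nonzero so an unrecognized answer stops the workflow instead of silently picking a default.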

## Error Handling

If variant file doesn't exist: Display this error message:

```
❌ Error: Variant file not found

Expected file: variants/{variant-name}.md

This indicates incomplete skill installation. Please check:
1. File structure is correct (.claude/skills/testing-skills-with-subagents/variants/)
2. Variant files exist (fast-conversational.md, slow-isolated.md)
3. Repository is up to date

Cannot proceed without variant file.
```
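This check can also be run as a pre-flight script before starting a test session. A minimal sketch: `check_variants` is a hypothetical helper, demonstrated against a throwaway directory so it runs anywhere:

```shell
# Hypothetical pre-flight check: confirm both variant files exist under a
# skill directory before starting a test run.
check_variants() {
  dir="$1"
  for f in fast-conversational.md slow-isolated.md; do
    if [ ! -f "$dir/variants/$f" ]; then
      echo "❌ Error: Variant file not found: variants/$f" >&2
      return 1
    fi
  done
  echo "ok"
}

# Demo against a temporary directory (self-contained).
tmp=$(mktemp -d)
mkdir -p "$tmp/variants"
touch "$tmp/variants/fast-conversational.md" "$tmp/variants/slow-isolated.md"
check_variants "$tmp"   # prints "ok"
rm -rf "$tmp"
```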

Design Rationale:

- `<critical-instruction>` tags ensure the LLM doesn't skip the file read
- Context is passed forward but does not override the variant's workflow
- A clear error message is shown for missing files