
dev-cycle

Main orchestrator for the 6-gate development cycle system. Loads tasks/subtasks from PM team output and executes through implementation, devops, SRE, testing, review, and validation gates with state persistence and metrics collection.

$ Install

git clone https://github.com/LerianStudio/ring /tmp/ring && cp -r /tmp/ring/dev-team/skills/dev-cycle ~/.claude/skills/ring

// tip: Run this command in your terminal to install the skill


name: dev-cycle
description: |
  Main orchestrator for the 6-gate development cycle system. Loads tasks/subtasks from PM team output and executes through implementation, devops, SRE, testing, review, and validation gates with state persistence and metrics collection.

trigger: |

  • Starting a new development cycle with a task file
  • Resuming an interrupted development cycle (--resume flag)
  • Need structured, gate-based task execution with quality checkpoints

prerequisite: |

  • Tasks file exists with structured subtasks
  • Not already in a specific gate skill execution
  • Human has not explicitly requested manual workflow

NOT_skip_when: |

  • "Task is simple" → Simple ≠ risk-free. Execute gates.
  • "Tests already pass" → Tests ≠ review. Different concerns.
  • "Time pressure" → Pressure ≠ permission. Document and proceed.
  • "Already did N gates" → Sunk cost is irrelevant. Complete all gates.

sequence:
  before: [dev-feedback-loop]

related:
  complementary: [dev-implementation, dev-devops, dev-sre, dev-testing, requesting-code-review, dev-validation, dev-feedback-loop]

verification:
  automated:
    - command: "test -f docs/dev-cycle/current-cycle.json || test -f docs/dev-refactor/current-cycle.json"
      description: "State file exists (dev-cycle or dev-refactor)"
      success_pattern: "exit 0"
    - command: "(cat docs/dev-cycle/current-cycle.json 2>/dev/null || cat docs/dev-refactor/current-cycle.json) | jq '.current_gate'"
      description: "Current gate is valid"
      success_pattern: "[0-5]"
  manual:
    - "All gates for current task show PASS in state file"
    - "No tasks have status 'blocked' for more than 3 iterations"

examples:

  • name: "New feature from PM workflow" invocation: "/dev-cycle docs/pre-dev/auth/tasks.md" expected_flow: |
    1. Load tasks with subtasks from tasks.md
    2. Ask user for checkpoint mode (per-task/per-gate/continuous)
    3. Execute Gate 0-5 for each task sequentially
    4. Generate feedback report after completion
  • name: "Resume interrupted cycle" invocation: "/dev-cycle --resume" expected_state: "Continues from last saved gate in current-cycle.json"
  • name: "Execute with per-gate checkpoints" invocation: "/dev-cycle tasks.md --checkpoint per-gate" expected_flow: |
    1. Execute Gate 0, pause for approval
    2. User approves, execute Gate 1, pause
    3. Continue until all gates complete

Development Cycle Orchestrator

Standards Loading (MANDATORY)

Before any gate execution, you MUST load Ring standards:

<fetch_required> https://raw.githubusercontent.com/LerianStudio/ring/main/CLAUDE.md </fetch_required>

Fetch URL above and extract: Agent Modification Verification requirements, Anti-Rationalization Tables requirements, and Critical Rules.

<block_condition>

  • WebFetch fails or returns empty
  • CLAUDE.md not accessible </block_condition>

If any condition is true, STOP and report blocker. Cannot proceed without Ring standards.

Overview

The development cycle orchestrator loads tasks/subtasks from PM team output (or manual task files) and executes through 6 quality gates. Tasks are loaded at initialization - no separate import gate.

Announce at start: "I'm using the dev-cycle skill to orchestrate task execution through 6 gates."

⛔ CRITICAL: Specialized Agents Perform All Tasks

See shared-patterns/shared-orchestrator-principle.md for full ORCHESTRATOR principle, role separation, forbidden/required actions, gate-to-agent mapping, and anti-rationalization table.

Summary: You orchestrate. Agents execute. If using Read/Write/Edit/Bash on source code → STOP. Dispatch agent.


⛔ ORCHESTRATOR BOUNDARIES (HARD GATE)

This section defines exactly what the orchestrator CAN and CANNOT do.

What Orchestrator CAN Do (PERMITTED)

| Action | Tool | Purpose |
|--------|------|---------|
| Read task files | Read | Load task definitions from docs/pre-dev/*/tasks.md or docs/refactor/*/tasks.md |
| Read state files | Read | Load/verify docs/dev-cycle/current-cycle.json or docs/dev-refactor/current-cycle.json |
| Read PROJECT_RULES.md | Read | Load project-specific rules |
| Write state files | Write | Persist cycle state to JSON |
| Track progress | TodoWrite | Maintain task list |
| Dispatch agents | Task | Send work to specialist agents |
| Ask user questions | AskUserQuestion | Get execution mode, approvals |
| WebFetch standards | WebFetch | Load Ring standards |

What Orchestrator CANNOT Do (FORBIDDEN)

Any of these actions by orchestrator = IMMEDIATE VIOLATION. Dispatch agent instead.


The 3-FILE RULE

If a task requires editing MORE than 3 files → MUST dispatch specialist agent.

This is not negotiable:

  • 1-3 files of non-source content (markdown, json, yaml) → Orchestrator MAY edit directly
  • 1+ source code files (*.go, *.ts, *.tsx) → MUST dispatch agent
  • 4+ files of any type → MUST dispatch agent
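
A minimal sketch of this dispatch rule, assuming a helper that classifies files by extension (names are illustrative, not part of the skill):

```typescript
// Illustrative sketch of the 3-FILE RULE; the function and constant are assumptions.
const SOURCE_EXTENSIONS = [".go", ".ts", ".tsx"];

function mustDispatchAgent(filesToEdit: string[]): boolean {
  const touchesSource = filesToEdit.some((f) =>
    SOURCE_EXTENSIONS.some((ext) => f.endsWith(ext))
  );
  // 1+ source code files OR 4+ files of any type → dispatch a specialist agent.
  return touchesSource || filesToEdit.length > 3;
}

// mustDispatchAgent(["README.md", "config.yaml"])     → false (orchestrator may edit)
// mustDispatchAgent(["internal/handler/user.go"])     → true  (source code)
// mustDispatchAgent(["a.md", "b.md", "c.md", "d.md"]) → true  (4+ files)
```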

Orchestrator Workflow Order (MANDATORY)

┌─────────────────────────────────────────────────────────────────┐
│  CORRECT WORKFLOW ORDER                                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  1. Load task file (Read docs/pre-dev/*/tasks.md or docs/refactor/*/tasks.md) │
│  2. Ask execution mode (AskUserQuestion)                        │
│  3. Determine state path + Check/Load state (see State Path Selection) │
│  4. WebFetch Ring Standards                                     │
│  5. ⛔ LOAD SUB-SKILL for current gate (Skill tool)            │
│  6. Execute sub-skill instructions (dispatch agent via Task)    │
│  7. Wait for agent completion                                   │
│  8. Verify agent output (Standards Coverage Table)              │
│  9. Update state (Write to JSON)                               │
│  10. Proceed to next gate                                       │
│                                                                 │
│  ════════════════════════════════════════════════════════════   │
│  ❌ WRONG: Load → Mode → Standards → Task(agent) directly       │
│  ✅ RIGHT: Load → Mode → Standards → Skill(sub) → Task(agent)   │
│  ════════════════════════════════════════════════════════════   │
└─────────────────────────────────────────────────────────────────┘

⛔ SUB-SKILL LOADING IS MANDATORY (HARD GATE)

Before dispatching any agent, you MUST load the corresponding sub-skill first.

<cannot_skip>

  • Gate 0: Skill("dev-implementation") → then Task(subagent_type="backend-engineer-*", ...)
  • Gate 1: Skill("dev-devops") → then Task(subagent_type="devops-engineer", ...)
  • Gate 2: Skill("dev-sre") → then Task(subagent_type="sre", ...)
  • Gate 3: Skill("dev-testing") → then Task(subagent_type="qa-analyst", ...)
  • Gate 4: Skill("requesting-code-review") → then 3x Task(...) in parallel
  • Gate 5: Skill("dev-validation") → N/A (verification only) </cannot_skip>

Between "WebFetch standards" and "Task(agent)" there MUST be "Skill(sub-skill)".

The workflow for each gate is:

1. Skill("[sub-skill-name]")     ← Load sub-skill instructions
2. Follow sub-skill instructions  ← Sub-skill tells you HOW to dispatch
3. Task(subagent_type=...)       ← Dispatch agent as sub-skill instructs
4. Validate agent output          ← Per sub-skill validation rules
5. Update state                   ← Record results

Anti-Rationalization for Skipping Sub-Skills

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "I know what the sub-skill does" | Knowledge ≠ execution. Sub-skill has iteration logic. | Load Skill() first |
| "Task() directly is faster" | Faster ≠ correct. Sub-skill has validation rules. | Load Skill() first |
| "Sub-skill just wraps Task()" | Sub-skills have retry logic, fix dispatch, validation. | Load Skill() first |
| "I'll follow the pattern manually" | Manual = error-prone. Sub-skill is the pattern. | Load Skill() first |

Between "WebFetch standards" and "Task(agent)" there MUST be "Skill(sub-skill)".


Anti-Rationalization for Direct Coding

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "It's just one small file" | File count doesn't determine agent need. Language does. | DISPATCH specialist agent |
| "I already loaded the standards" | Loading standards ≠ permission to implement. Standards are for AGENTS. | DISPATCH specialist agent |
| "Agent dispatch adds overhead" | Overhead ensures compliance. Skip = skip verification. | DISPATCH specialist agent |
| "I can write Go/TypeScript" | Knowing language ≠ having Ring standards loaded. Agent has them. | DISPATCH specialist agent |
| "Just a quick fix" | "Quick" is irrelevant. All source changes require a specialist. | DISPATCH specialist agent |
| "I'll read the file first to understand" | Reading source → temptation to edit. Agent reads for you. | DISPATCH specialist agent |
| "Let me check if tests pass first" | Agent runs tests in TDD cycle. You don't run tests. | DISPATCH specialist agent |

Red Flags - Orchestrator Violation in Progress

If you catch yourself doing any of these, STOP IMMEDIATELY:

🚨 RED FLAG: About to Read *.go or *.ts file
   → STOP. Dispatch agent instead.

🚨 RED FLAG: About to Write/Create source code
   → STOP. Dispatch agent instead.

🚨 RED FLAG: About to Edit source code
   → STOP. Dispatch agent instead.

🚨 RED FLAG: About to run "go test" or "npm test"
   → STOP. Agent runs tests, not you.

🚨 RED FLAG: Thinking "I'll just..."
   → STOP. "Just" is the warning word. Dispatch agent.

🚨 RED FLAG: Thinking "This is simple enough..."
   → STOP. Simplicity is irrelevant. Dispatch agent.

🚨 RED FLAG: Standards loaded, but next action is not Task tool
   → STOP. After standards, IMMEDIATELY dispatch agent.

Recovery from Orchestrator Violation

If you violated orchestrator boundaries:

  1. STOP current execution immediately
  2. DISCARD any direct changes (git checkout -- .)
  3. DISPATCH the correct specialist agent
  4. Agent implements from scratch following TDD
  5. Document the violation for feedback loop

Sunk cost of direct work is IRRELEVANT. Agent dispatch is MANDATORY.


Blocker Criteria - STOP and Report

<block_condition>

  • Gate Failure: Tests not passing, review failed → STOP, cannot proceed to next gate
  • Missing Standards: No PROJECT_RULES.md → STOP, report blocker and wait
  • Agent Failure: Specialist agent returned errors → STOP, diagnose and report
  • User Decision Required: Architecture choice, framework selection → STOP, present options </block_condition>

You CANNOT proceed when blocked. Report and wait for resolution.

Cannot Be Overridden

<cannot_skip>

  • All 6 gates must execute - Each gate catches different issues
  • Gates execute in order (0→5) - Dependencies exist between gates
  • Gate 4 requires all 3 reviewers - Different review perspectives are complementary
  • Coverage threshold ≥ 85% - Industry standard for quality code
  • PROJECT_RULES.md must exist - Cannot verify standards without target </cannot_skip>

No exceptions. User cannot override. Time pressure cannot override.


Severity Calibration

| Severity | Criteria | Examples |
|----------|----------|----------|
| CRITICAL | Blocks deployment, security risk, data loss | Gate violation, skipped mandatory step |
| HIGH | Major functionality broken, standards violation | Missing tests, wrong agent dispatched |
| MEDIUM | Code quality, maintainability issues | Incomplete documentation, minor gaps |
| LOW | Best practices, optimization | Style improvements, minor refactoring |

Report all severities. Let user prioritize.

Reviewer Verdicts Are Final

MEDIUM issues found in Gate 4 MUST be fixed. No exceptions.

| Request | Why It's WRONG | Required Action |
|---------|----------------|-----------------|
| "Can reviewer clarify if MEDIUM can defer?" | Reviewer already decided. MEDIUM means FIX. | Fix the issue, re-run reviewers |
| "Ask if this specific case is different" | Reviewer verdict accounts for context already. | Fix the issue, re-run reviewers |
| "Request exception for business reasons" | Reviewers know business context. Verdict is final. | Fix the issue, re-run reviewers |

Severity mapping is absolute:

  • CRITICAL/HIGH/MEDIUM → Fix NOW, re-run all 3 reviewers
  • LOW → Add TODO(review): comment
  • Cosmetic → Add FIXME(nitpick): comment

No negotiation. No exceptions. No "special cases".


Pressure Resistance

See shared-patterns/shared-pressure-resistance.md for universal pressure scenarios.

Gate-specific note: Execution mode selection affects CHECKPOINTS (user approval pauses), not GATES (quality checks). All gates execute regardless of mode.


Common Rationalizations - REJECTED

See shared-patterns/shared-anti-rationalization.md for universal anti-rationalizations.

Gate-specific rationalizations:

| Excuse | Reality |
|--------|---------|
| "Automatic mode means faster" | Automatic mode skips CHECKPOINTS, not GATES. Same quality, less interruption. |
| "Automatic mode will skip review" | Automatic mode affects user approval pauses, not quality gates. All gates execute regardless. |
| "Defense in depth exists (frontend validates)" | Frontend can be bypassed. Backend is the last line. Fix at source. |
| "Backlog the Medium issue, it's documented" | Documented risk ≠ mitigated risk. Medium in Gate 4 = fix NOW, not later. |
| "Risk-based prioritization allows deferral" | Gates ARE the risk-based system. Reviewers define severity, not you. |

Red Flags - STOP

See shared-patterns/shared-red-flags.md for universal red flags.

If you catch yourself thinking any of those patterns, STOP immediately and return to gate execution.


Incremental Compromise Prevention

The "just this once" pattern leads to complete gate erosion:

Day 1: "Skip review just this once" → Approved (precedent set)
Day 2: "Skip testing, we did it last time" → Approved (precedent extended)
Day 3: "Skip implementation checks, pattern established" → Approved (gates meaningless)
Day 4: Production incident from Day 1 code

Prevention rules:

  1. No incremental exceptions - Each exception becomes the new baseline
  2. Document every pressure - Log who requested, why, outcome
  3. Escalate patterns - If same pressure repeats, escalate to team lead
  4. Gates are binary - Complete or incomplete. No "mostly done".

Gate Completion Definition (HARD GATE)

A gate is COMPLETE only when all components finish successfully:

| Gate | Components Required | Partial = FAIL |
|------|---------------------|----------------|
| 0.1 | TDD-RED: failing test written + failure output captured | Test exists but no failure output = FAIL |
| 0.2 | TDD-GREEN: implementation passes test | Code exists but test fails = FAIL |
| 0 | Both 0.1 and 0.2 complete | 0.1 done without 0.2 = FAIL |
| 1 | Dockerfile + docker-compose + .env.example | Missing any = FAIL |
| 2 | Structured JSON logs with trace correlation | Partial structured logs = FAIL |
| 3 | Coverage ≥ 85% + all AC tested | 84% = FAIL |
| 4 | All 3 reviewers PASS | 2/3 reviewers = FAIL |
| 5 | Explicit "APPROVED" from user | "Looks good" = not approved |

CRITICAL for Gate 4: Running 2 of 3 reviewers is not a partial pass - it's a FAIL. Re-run all 3 reviewers.

Anti-Rationalization for Partial Gates:

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "2 of 3 reviewers passed" | Gate 4 requires all 3. 2/3 = 0/3. | Re-run all 3 reviewers |
| "Gate mostly complete" | Mostly ≠ complete. Binary: done or not done. | Complete all components |
| "Can finish remaining in next cycle" | Gates don't carry over. Complete NOW. | Finish current gate |
| "Core components done, optional can wait" | No component is optional within a gate. | Complete all components |

Gate Order Enforcement (HARD GATE)

Gates MUST execute in order: 0 → 1 → 2 → 3 → 4 → 5. No exceptions.

| Violation | Rationalization | Consequence |
|-----------|-----------------|-------------|
| Skip Gate 1 (DevOps) | "No infra changes" | Code without container = works on my machine only |
| Skip Gate 2 (SRE) | "Observability later" | Blind production = debugging nightmare |
| Reorder Gates | "Review before test" | Reviewing untested code wastes reviewer time |
| Parallel Gates | "Run 2 and 3 together" | Dependencies exist. Order is intentional. |

Gates are not parallelizable across different gates. Sequential execution is MANDATORY.

The 6 Gates

| Gate | Skill | Purpose | Agent |
|------|-------|---------|-------|
| 0 | dev-implementation | Write code following TDD | Based on task language/domain |
| 1 | dev-devops | Infrastructure and deployment | devops-engineer |
| 2 | dev-sre | Observability (health, logging, tracing) | sre |
| 3 | dev-testing | Unit tests for acceptance criteria | qa-analyst |
| 4 | requesting-code-review | Parallel code review | code-reviewer, business-logic-reviewer, security-reviewer (3x parallel) |
| 5 | dev-validation | Final acceptance validation | N/A (verification) |

Integrated PM → Dev Workflow

PM Team Output → Dev Team Execution (/dev-cycle)

| Input Type | Path | Structure |
|------------|------|-----------|
| Tasks only | docs/pre-dev/{feature}/tasks.md | T-001, T-002, T-003 with requirements + acceptance criteria |
| Tasks + Subtasks | docs/pre-dev/{feature}/tasks.md + subtasks/{task-id}/ | ST-XXX-01.md, ST-XXX-02.md, ... |

Execution Order

Core Principle: Each execution unit (task or subtask) passes through all 6 gates before the next unit.

Flow: Unit → Gate 0-5 → 🔒 Unit Checkpoint (Step 7.1) → 🔒 Task Checkpoint (Step 7.2) → Next Unit

| Scenario | Execution Unit | Gates Per Unit |
|----------|----------------|----------------|
| Task without subtasks | Task itself | 6 gates |
| Task with subtasks | Each subtask | 6 gates per subtask |

Commit Timing

User selects when commits happen (Step 7 of initialization).

| Option | When Commit Happens | Use Case |
|--------|---------------------|----------|
| (a) Per subtask | After each subtask passes Gate 5 | Fine-grained history, easy rollback per subtask |
| (b) Per task | After all subtasks of a task complete | Logical grouping, one commit per feature chunk |
| (c) At the end | After entire cycle completes | Single commit with all changes, clean history |

Commit Message Format

| Timing | Message Format | Example |
|--------|----------------|---------|
| Per subtask | feat({subtask_id}): {subtask_title} | feat(ST-001-02): implement user authentication handler |
| Per task | feat({task_id}): {task_title} | feat(T-001): implement user authentication |
| At the end | feat({cycle_id}): complete dev cycle for {feature} | feat(cycle-abc123): complete dev cycle for auth-system |
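
A small illustrative formatter for the messages in the table above (the function and parameter names are assumptions, not part of the skill):

```typescript
// Illustrative commit-message builder for the three commit timings.
type CommitTiming = "per_subtask" | "per_task" | "at_end";

function commitMessage(timing: CommitTiming, id: string, title: string): string {
  if (timing === "per_subtask" || timing === "per_task") {
    return `feat(${id}): ${title}`;                        // e.g. feat(ST-001-02): ... or feat(T-001): ...
  }
  return `feat(${id}): complete dev cycle for ${title}`;   // e.g. feat(cycle-abc123): ... for auth-system
}
```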

Commit Timing vs Execution Mode

| Execution Mode | Commit Timing | Behavior |
|----------------|---------------|----------|
| Manual per subtask | Per subtask | Commit + checkpoint after each subtask |
| Manual per subtask | Per task | Checkpoint after subtask, commit after task |
| Manual per subtask | At end | Checkpoint after subtask, commit at cycle end |
| Manual per task | Per subtask | Commit after subtask, checkpoint after task |
| Manual per task | Per task | Commit + checkpoint after task |
| Manual per task | At end | Checkpoint after task, commit at cycle end |
| Automatic | Per subtask | Commit after each subtask, no checkpoints |
| Automatic | Per task | Commit after task, no checkpoints |
| Automatic | At end | Single commit at cycle end, no checkpoints |

Note: Checkpoints (user approval pauses) are controlled by execution_mode. Commits are controlled by commit_timing. They are independent settings.

State Management

State Path Selection (MANDATORY)

The state file path depends on the source of tasks:

| Task Source | State Path | Use Case |
|-------------|------------|----------|
| docs/refactor/*/tasks.md | docs/dev-refactor/current-cycle.json | Refactoring existing code |
| docs/pre-dev/*/tasks.md | docs/dev-cycle/current-cycle.json | New feature development |
| Any other path | docs/dev-cycle/current-cycle.json | Default for manual tasks |

Detection Logic:

if source_file contains "docs/refactor/" THEN
  state_path = "docs/dev-refactor/current-cycle.json"
else
  state_path = "docs/dev-cycle/current-cycle.json"

Store state_path in the state object itself so resume knows where to look.
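
A minimal sketch of the detection rule above (the function name is illustrative):

```typescript
// Illustrative implementation of the state-path detection rule.
function resolveStatePath(sourceFile: string): {
  state_path: string;
  cycle_type: "feature" | "refactor";
} {
  if (sourceFile.includes("docs/refactor/")) {
    return { state_path: "docs/dev-refactor/current-cycle.json", cycle_type: "refactor" };
  }
  // docs/pre-dev/ and any other path default to the dev-cycle state file.
  return { state_path: "docs/dev-cycle/current-cycle.json", cycle_type: "feature" };
}
```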

State File Structure

State is persisted to {state_path} (either docs/dev-cycle/current-cycle.json or docs/dev-refactor/current-cycle.json):

{
  "version": "1.0.0",
  "cycle_id": "uuid",
  "started_at": "ISO timestamp",
  "updated_at": "ISO timestamp",
  "source_file": "path/to/tasks.md",
  "state_path": "docs/dev-cycle/current-cycle.json | docs/dev-refactor/current-cycle.json",
  "cycle_type": "feature | refactor",
  "execution_mode": "manual_per_subtask|manual_per_task|automatic",
  "commit_timing": "per_subtask|per_task|at_end",
  "status": "in_progress|completed|failed|paused|paused_for_approval|paused_for_testing|paused_for_task_approval|paused_for_integration_testing",
  "feedback_loop_completed": false,
  "current_task_index": 0,
  "current_gate": 0,
  "current_subtask_index": 0,
  "tasks": [
    {
      "id": "T-001",
      "title": "Task title",
      "status": "pending|in_progress|completed|failed|blocked",
      "feedback_loop_completed": false,
      "subtasks": [
        {
          "id": "ST-001-01",
          "file": "subtasks/T-001/ST-001-01.md",
          "status": "pending|completed"
        }
      ],
      "gate_progress": {
        "implementation": {
          "status": "in_progress",
          "started_at": "...",
          "tdd_red": {
            "status": "pending|in_progress|completed",
            "test_file": "path/to/test_file.go",
            "failure_output": "FAIL: TestFoo - expected X got nil",
            "completed_at": "ISO timestamp"
          },
          "tdd_green": {
            "status": "pending|in_progress|completed",
            "implementation_file": "path/to/impl.go",
            "test_pass_output": "PASS: TestFoo (0.003s)",
            "completed_at": "ISO timestamp"
          }
        },
        "devops": {"status": "pending"},
        "sre": {"status": "pending"},
        "testing": {"status": "pending"},
        "review": {"status": "pending"},
        "validation": {"status": "pending"}
      },
      "artifacts": {},
      "agent_outputs": {
        "implementation": {
          "agent": "backend-engineer-golang",
          "output": "## Summary\n...",
          "timestamp": "ISO timestamp",
          "duration_ms": 0,
          "iterations": 1,
          "standards_compliance": {
            "total_sections": 15,
            "compliant": 14,
            "not_applicable": 1,
            "non_compliant": 0,
            "gaps": []
          }
        },
        "devops": {
          "agent": "devops-engineer",
          "output": "## Summary\n...",
          "timestamp": "ISO timestamp",
          "duration_ms": 0,
          "iterations": 1,
          "artifacts_created": ["Dockerfile", "docker-compose.yml", ".env.example"],
          "verification_errors": [],
          "standards_compliance": {
            "total_sections": 8,
            "compliant": 8,
            "not_applicable": 0,
            "non_compliant": 0,
            "gaps": []
          }
        },
        "sre": {
          "agent": "sre",
          "output": "## Summary\n...",
          "timestamp": "ISO timestamp",
          "duration_ms": 0,
          "iterations": 1,
          "instrumentation_coverage": "92%",
          "validation_errors": [],
          "standards_compliance": {
            "total_sections": 10,
            "compliant": 10,
            "not_applicable": 0,
            "non_compliant": 0,
            "gaps": []
          }
        },
        "testing": {
          "agent": "qa-analyst",
          "output": "## Summary\n...",
          "verdict": "PASS",
          "coverage_actual": 87.5,
          "coverage_threshold": 85,
          "iterations": 1,
          "timestamp": "ISO timestamp",
          "duration_ms": 0,
          "failures": [],
          "uncovered_criteria": [],
          "standards_compliance": {
            "total_sections": 6,
            "compliant": 6,
            "not_applicable": 0,
            "non_compliant": 0,
            "gaps": []
          }
        },
        "review": {
          "iterations": 1,
          "timestamp": "ISO timestamp",
          "duration_ms": 0,
          "code_reviewer": {
            "agent": "code-reviewer",
            "output": "...",
            "verdict": "PASS",
            "timestamp": "...",
            "issues": [],
            "standards_compliance": {
              "total_sections": 12,
              "compliant": 12,
              "not_applicable": 0,
              "non_compliant": 0,
              "gaps": []
            }
          },
          "business_logic_reviewer": {
            "agent": "business-logic-reviewer",
            "output": "...",
            "verdict": "PASS",
            "timestamp": "...",
            "issues": [],
            "standards_compliance": {
              "total_sections": 8,
              "compliant": 8,
              "not_applicable": 0,
              "non_compliant": 0,
              "gaps": []
            }
          },
          "security_reviewer": {
            "agent": "security-reviewer",
            "output": "...",
            "verdict": "PASS",
            "timestamp": "...",
            "issues": [],
            "standards_compliance": {
              "total_sections": 10,
              "compliant": 10,
              "not_applicable": 0,
              "non_compliant": 0,
              "gaps": []
            }
          }
        },
        "validation": {
          "result": "approved|rejected",
          "timestamp": "ISO timestamp"
        }
      }
    }
  ],
  "metrics": {
    "total_duration_ms": 0,
    "gate_durations": {},
    "review_iterations": 0,
    "testing_iterations": 0
  }
}

Structured Error/Issue Schemas

These schemas enable dev-feedback-loop to analyze issues without parsing raw output.

Standards Compliance Gap Schema

{
  "section": "Error Handling (MANDATORY)",
  "status": "❌",
  "reason": "Missing error wrapping with context",
  "file": "internal/handler/user.go",
  "line": 45,
  "evidence": "return err // should wrap with additional context"
}

Test Failure Schema

{
  "test_name": "TestUserCreate_InvalidEmail",
  "test_file": "internal/handler/user_test.go",
  "error_type": "assertion",
  "expected": "ErrInvalidEmail",
  "actual": "nil",
  "message": "Expected validation error for invalid email format",
  "stack_trace": "user_test.go:42 → user.go:28"
}

Review Issue Schema

{
  "severity": "MEDIUM",
  "category": "error-handling",
  "description": "Error not wrapped with context before returning",
  "file": "internal/handler/user.go",
  "line": 45,
  "suggestion": "Use fmt.Errorf(\"failed to create user: %w\", err)",
  "fixed": false,
  "fixed_in_iteration": null
}

DevOps Verification Error Schema

{
  "check": "docker_build",
  "status": "FAIL",
  "error": "COPY failed: file not found in build context: go.sum",
  "suggestion": "Ensure go.sum exists and is not in .dockerignore"
}

SRE Validation Error Schema

{
  "check": "structured_logging",
  "status": "FAIL",
  "file": "internal/handler/user.go",
  "line": 32,
  "error": "Using fmt.Printf instead of structured logger",
  "suggestion": "Use logger.Info().Str(\"user_id\", id).Msg(\"user created\")"
}

Populating Structured Data

Each gate MUST populate its structured fields when saving to state:

| Gate | Fields to Populate |
|------|--------------------|
| Gate 0 (Implementation) | standards_compliance (total, compliant, gaps[]) |
| Gate 1 (DevOps) | standards_compliance + verification_errors[] |
| Gate 2 (SRE) | standards_compliance + validation_errors[] |
| Gate 3 (Testing) | standards_compliance + failures[] + uncovered_criteria[] |
| Gate 4 (Review) | standards_compliance per reviewer + issues[] per reviewer |

All gates track standards_compliance:

  • total_sections: Count from agent's standards file (via standards-coverage-table.md)
  • compliant: Sections marked ✅ in Standards Coverage Table
  • not_applicable: Sections marked N/A
  • non_compliant: Sections marked ❌ (MUST be 0 to pass gate)
  • gaps[]: Detailed info for each ❌ section (even if later fixed)

Empty arrays [] indicate no issues found - this is valid data for feedback-loop.
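
A sketch of how a gate might aggregate a parsed Standards Coverage Table into the standards_compliance object (the row type and the parsing step are assumptions; field names follow the state schema above):

```typescript
// Illustrative aggregation of a Standards Coverage Table into standards_compliance.
type CoverageRow = { section: string; status: "✅" | "N/A" | "❌"; reason?: string };

function buildStandardsCompliance(rows: CoverageRow[]) {
  const gaps = rows
    .filter((r) => r.status === "❌")
    .map((r) => ({ section: r.section, status: r.status, reason: r.reason ?? "" }));

  return {
    total_sections: rows.length,
    compliant: rows.filter((r) => r.status === "✅").length,
    not_applicable: rows.filter((r) => r.status === "N/A").length,
    non_compliant: gaps.length, // MUST be 0 for the gate to pass
    gaps,                       // [] is valid data: no issues found
  };
}
```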

⛔ State Persistence Rule (MANDATORY)

"Update state" means BOTH update the object and write to file. Not just in-memory.

After every Gate Transition

You MUST execute these steps after completing any gate (0, 1, 2, 3, 4, or 5):

# Step 1: Update state object with gate results
state.tasks[current_task_index].gate_progress.[gate_name].status = "completed"
state.tasks[current_task_index].gate_progress.[gate_name].completed_at = "[ISO timestamp]"
state.current_gate = [next_gate_number]
state.updated_at = "[ISO timestamp]"

# Step 2: Write to file (MANDATORY - use Write tool)
Write tool:
  file_path: [state.state_path]  # Use state_path from state object
  content: [full JSON state]

# Step 3: Verify persistence (MANDATORY - use Read tool)
Read tool:
  file_path: [state.state_path]  # Use state_path from state object
# Confirm current_gate and gate_progress match expected values

State Persistence Checkpoints

| After | MUST Update | MUST Write File |
|-------|-------------|-----------------|
| Gate 0.1 (TDD-RED) | tdd_red.status, tdd_red.failure_output | ✅ YES |
| Gate 0.2 (TDD-GREEN) | tdd_green.status, implementation.status | ✅ YES |
| Gate 1 (DevOps) | devops.status, agent_outputs.devops | ✅ YES |
| Gate 2 (SRE) | sre.status, agent_outputs.sre | ✅ YES |
| Gate 3 (Testing) | testing.status, agent_outputs.testing | ✅ YES |
| Gate 4 (Review) | review.status, agent_outputs.review | ✅ YES |
| Gate 5 (Validation) | validation.status, task status | ✅ YES |
| Step 7.1 (Unit Approval) | status = "paused_for_approval" | ✅ YES |
| Step 7.2 (Task Approval) | status = "paused_for_task_approval" | ✅ YES |

Anti-Rationalization for State Persistence

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "I'll save state at the end" | Crash/timeout loses all progress | Save after each gate |
| "State is in memory, that's updated" | Memory is volatile. File is persistent. | Write to JSON file |
| "Only save on checkpoints" | Gates without saves = unrecoverable on resume | Save after every gate |
| "Write tool is slow" | Write takes <100ms. Lost progress takes hours. | Write after every gate |
| "I updated the state variable" | Variable ≠ file. Without Write tool, nothing persists. | Use Write tool explicitly |

Verification Command

After each gate, the state file MUST reflect:

  • current_gate = next gate number
  • updated_at = recent timestamp
  • Previous gate status = "completed"

If verification fails → State was not persisted. Re-execute Write tool.
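
A minimal sketch of the full update → write → read-back sequence, using Node's fs with a simplified state shape (all names below are illustrative, not part of the skill):

```typescript
// Illustrative persist-and-verify step after a gate completes.
import { readFileSync, writeFileSync } from "node:fs";

type GateProgress = { status: string; completed_at?: string };
type CycleState = {
  state_path: string;
  current_gate: number;
  updated_at: string;
  tasks: { gate_progress: Record<string, GateProgress> }[];
};

function completeGate(state: CycleState, taskIndex: number, gateName: string, nextGate: number): void {
  // Step 1: update the state object
  state.tasks[taskIndex].gate_progress[gateName] = {
    status: "completed",
    completed_at: new Date().toISOString(),
  };
  state.current_gate = nextGate;
  state.updated_at = new Date().toISOString();

  // Step 2: write to file (in-memory updates alone do not persist)
  writeFileSync(state.state_path, JSON.stringify(state, null, 2));

  // Step 3: read back and verify persistence
  const persisted = JSON.parse(readFileSync(state.state_path, "utf8")) as CycleState;
  if (persisted.current_gate !== nextGate) {
    throw new Error("State was not persisted; re-execute the write");
  }
}
```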


Step 0: Verify PROJECT_RULES.md Exists (HARD GATE)

NON-NEGOTIABLE. Cycle CANNOT proceed without project standards.

Step 0 Flow

┌─────────────────────────────────────────────────────────────────────────────┐
│  Check: Does docs/PROJECT_RULES.md exist?                                   │
│                                                                             │
│  ├── YES → Proceed to Step 1 (Initialize or Resume)                        │
│  │                                                                          │
│  └── NO → ASK: "Is this a LEGACY project (created without PM workflow)?"   │
│       │                                                                     │
│       ├── YES (legacy project) → LEGACY PROJECT ANALYSIS:                   │
│       │   Step 1: Dispatch codebase-explorer (technical info only)          │
│       │   Step 2: Ask 3 questions (what agent can't determine):             │
│       │     1. What do you need help with?                                  │
│       │     2. Any external APIs not visible in code?                       │
│       │     3. Any specific technology not in Ring Standards?               │
│       │   Step 3: Generate PROJECT_RULES.md (deduplicated from Ring)        │
│       │   Note: Business rules belong in PRD, not in PROJECT_RULES          │
│       │   → Proceed to Step 1                                               │
│       │                                                                     │
│       └── NO (new project) → ASK: "Do you have PRD, TRD, or Feature Map?"  │
│           │                                                                 │
│           ├── YES (has PM docs) → "Please provide the file path(s)"        │
│           │   → Read PRD/TRD/Feature Map → Extract info                    │
│           │   → Generate PROJECT_RULES.md                                  │
│           │   → Ask supplementary questions if info is incomplete          │
│           │   → Save and proceed to Step 1                                 │
│           │                                                                 │
│           └── NO (no PM docs) → ⛔ HARD BLOCK:                              │
│               "PM documents are REQUIRED for new projects.                  │
│                Run /pre-dev-full or /pre-dev-feature first."               │
│               → STOP (cycle cannot proceed)                                 │
└─────────────────────────────────────────────────────────────────────────────┘

Step 0.1: Check for PROJECT_RULES.md

# Check if file exists
Read tool:
  file_path: "docs/PROJECT_RULES.md"

# If file exists and has content → Proceed to Step 1
# If file does not exist or is empty → Continue to Step 0.2

Step 0.2: Check if Legacy Project

Ask the User

Use AskUserQuestion:

┌─────────────────────────────────────────────────────────────────┐
│ 📋 PROJECT_RULES.md NOT FOUND                                   │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ I need to create docs/PROJECT_RULES.md to understand your       │
│ project's specific conventions and domain.                      │
│                                                                 │
│ First, I need to know: Is this a LEGACY project?                │
│                                                                 │
│ A legacy project is one that was created WITHOUT using the      │
│ PM team workflow (no PRD, TRD, or Feature Map documents).       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Question

"Is this a legacy project (created without PM team workflow)?"

Options

(a) Yes, this is a legacy project (b) No, this is a new project following Ring workflow

If YES (legacy)

Go to Step 0.2.1 (Legacy Project Analysis)

If NO (new project)

Go to Step 0.3 (Check for PM Documents)

Step 0.2.1: Legacy Project Analysis (Technical Only)

Overview

For legacy projects, analyze codebase for TECHNICAL information only:

┌─────────────────────────────────────────────────────────────────┐
│ 📋 LEGACY PROJECT ANALYSIS                                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ Since this is a legacy project, I'll analyze the codebase       │
│ for TECHNICAL information (not business rules).                 │
│                                                                 │
│ Step 1: Automated analysis (codebase-explorer)                  │
│ Step 2: Ask for project-specific tech not in Ring Standards     │
│ Step 3: Generate PROJECT_RULES.md (deduplicated)                │
│                                                                 │
│ Note: Business rules belong in PRD/product docs, not here.      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Step 0.2.1a: Automated Codebase Analysis (MANDATORY)

⛔ You MUST use the Task tool to dispatch codebase-explorer. This is not implicit.

Dispatch Agent

Dispatch codebase-explorer to analyze the legacy project for TECHNICAL information:

Action: Use Task tool with EXACTLY these parameters:

┌─────────────────────────────────────────────────────────────────────────────────┐
│  ⛔ If Task tool not used → Analysis does not happen → PROJECT_RULES.md INVALID │
└─────────────────────────────────────────────────────────────────────────────────┘
# Agent 1: Codebase Explorer - Technical Analysis
Task tool:
  subagent_type: "ring-default:codebase-explorer"
  model: "opus"
  description: "Analyze legacy project for PROJECT_RULES.md"
  prompt: |
    Analyze this LEGACY codebase to extract technical information for PROJECT_RULES.md.
    
    This is an existing project created without PM documentation.
    Your job is to understand what exists in the code.
    
    **Extract:**
    1. **Project Structure:** Directory layout, module organization
    2. **Technical Stack:** Languages, frameworks, databases, external services
    3. **Architecture Patterns:** Clean Architecture, MVC, microservices, etc.
    4. **Existing Features:** Main modules, endpoints, capabilities
    5. **Internal Libraries:** Shared packages, utilities
    6. **Configuration:** Environment variables, config patterns
    7. **Database:** Schema patterns, migrations, ORM usage
    8. **External Integrations:** APIs consumed, message queues
    
    **Output format:**
    ## Technical Analysis (Legacy Project)
    
    ### Project Overview
    [What this project appears to do based on code analysis]
    
    ### Technical Stack
    - Language: [detected]
    - Framework: [detected]
    - Database: [detected]
    - External Services: [detected]
    
    ### Architecture Patterns
    [Detected patterns]
    
    ### Existing Features
    [List of features/modules found]
    
    ### Project Structure
    [Directory layout explanation]
    
    ### Configuration
    [Env vars, config files found]
    
    ### External Integrations
    [APIs, services detected]

Note: Business logic analysis is not needed for PROJECT_RULES.md. Business rules belong in PRD/product docs, not technical project rules.

Verification (MANDATORY)

After agent completes, confirm:

  • codebase-explorer returned "## Technical Analysis (Legacy Project)" section
  • Output contains non-empty content for: Tech Stack, External Integrations, Configuration

If agent failed or returned empty output → Re-dispatch. Cannot proceed without technical analysis.

Step 0.2.1b: Supplementary Questions (Only What Agents Can't Determine)

Post-Analysis Questions

After agents complete, ask only what they couldn't determine from code:

┌─────────────────────────────────────────────────────────────────┐
│ ✓ Codebase Analysis Complete                                    │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ I've analyzed your codebase. Now I need a few details that      │
│ only you can provide (not visible in the code).                 │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Questions to Ask

Use AskUserQuestion for each:

| # | Question | Why Agents Can't Determine This |
|---|----------|---------------------------------|
| 1 | What do you need help with? (Current task/feature/fix) | Future intent, not in code |
| 2 | Any external APIs or services not visible in code? (Third-party integrations planned) | Planned integrations, not yet in code |
| 3 | Any specific technology not in Ring Standards? (Message broker, cache, etc.) | Project-specific tech not in Ring |

Note: Business rules belong in PRD/product docs, not in PROJECT_RULES.md.

Step 0.2.1c: Generate PROJECT_RULES.md

Combine Agent Outputs and User Answers

Create tool:
  file_path: "docs/PROJECT_RULES.md"
  content: |
    # Project Rules
    
    > Ring Standards apply automatically. This file documents only what Ring does not cover.
    > For error handling, logging, testing, architecture, lib-commons → See Ring Standards (auto-loaded by agents)
    > Generated from legacy project analysis.
    
    ## What Ring Standards Already Cover (DO NOT ADD HERE)
    
    The following are defined in Ring Standards and MUST NOT be duplicated:
    - Error handling patterns (no panic, wrap errors)
    - Logging standards (structured JSON, zerolog/zap)
    - Testing patterns (table-driven tests, mocks)
    - Architecture patterns (Hexagonal, Clean Architecture)
    - Observability (OpenTelemetry, trace correlation)
    - lib-commons usage and patterns
    - API directory structure
    
    ---
    
    ## Tech Stack (Not in Ring Standards)
    
    [From codebase-explorer: Technologies not covered by Ring Standards]
    [e.g., specific message broker, specific cache, DB if not PostgreSQL]
    
    | Technology | Purpose | Notes |
    |------------|---------|-------|
    | [detected] | [purpose] | [notes] |
    
    ## Non-Standard Directory Structure
    
    [From codebase-explorer: Directories that deviate from Ring's standard API structure]
    [e.g., workers/, consumers/, polling/]
    
    | Directory | Purpose | Pattern |
    |-----------|---------|---------|
    | [detected] | [purpose] | [pattern] |
    
    ## External Integrations
    
    [From codebase-explorer: Third-party services specific to this project]
    
    | Service | Purpose | Docs |
    |---------|---------|------|
    | [detected] | [purpose] | [link] |
    
    ## Environment Configuration
    
    [From codebase-explorer: Project-specific env vars not covered by Ring]
    
    | Variable | Purpose | Example |
    |----------|---------|---------|
    | [detected] | [purpose] | [example] |
    
    ## Domain Terminology
    
    [From codebase analysis: Technical names used in this codebase]
    
    | Term | Definition | Used In |
    |------|------------|---------|
    | [detected] | [definition] | [location] |
    
    ---
    
    *Generated: [ISO timestamp]*
    *Source: Legacy project analysis (codebase-explorer)*
    *Ring Standards Version: [version from WebFetch]*

Present to User

┌─────────────────────────────────────────────────────────────────┐
│ ✓ PROJECT_RULES.md Generated for Legacy Project                 │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ I analyzed your codebase using:                                 │
│   • codebase-explorer (technical patterns, stack, structure)    │
│                                                                 │
│ Combined with your input on:                                    │
│   • Current development goal                                    │
│   • External integrations                                       │
│   • Project-specific technology                                 │
│                                                                 │
│ Generated: docs/PROJECT_RULES.md                                │
│                                                                 │
│ Note: Ring Standards (error handling, logging, testing, etc.)   │
│ are not duplicated - agents load them automatically via WebFetch│
│                                                                 │
│ Please review the file and make any corrections needed.         │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Ask for Approval

Use AskUserQuestion:

  • Question: "PROJECT_RULES.md has been generated. Would you like to review it before proceeding?"
  • Options: (a) Proceed (b) Open for editing first

After Approval

Proceed to Step 1

Step 0.3: Check for PM Documents (PRD/TRD/Feature Map)

Check for PM Documents

For NEW projects (not legacy), ask about PM documents:

┌─────────────────────────────────────────────────────────────────┐
│ 📋 NEW PROJECT - PM DOCUMENTS CHECK                             │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ Since this is a new project following Ring workflow, you        │
│ should have PM documents from the pre-dev workflow.             │
│                                                                 │
│ Do you have any of these PM documents?                          │
│   • PRD (Product Requirements Document)                         │
│   • TRD (Technical Requirements Document)                       │
│   • Feature Map (from pre-dev-feature-map skill)                │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Question

"Do you have PRD, TRD, or Feature Map documents for this project?"

Options

(a) Yes, I have PM documents (b) No, I don't have these documents

If YES - Ask for File Paths

"Please provide the file path(s) to your PM documents:
 - PRD path (or 'skip' if none): 
 - TRD path (or 'skip' if none): 
 - Feature Map path (or 'skip' if none): "

Example Paths

Typical PM team output structure:

docs/pre-dev/{feature-name}/
├── prd.md              → PRD path: docs/pre-dev/auth-system/prd.md
├── trd.md              → TRD path: docs/pre-dev/auth-system/trd.md
├── feature-map.md      → Feature Map path: docs/pre-dev/auth-system/feature-map.md
├── api-design.md
├── data-model.md
└── tasks.md

Common Patterns

  • /pre-dev-full output: docs/pre-dev/{feature}/prd.md, trd.md, feature-map.md
  • /pre-dev-feature output: docs/pre-dev/{feature}/prd.md, feature-map.md
  • Custom locations: User may have docs in different paths (e.g., requirements/, specs/)

Then

Go to Step 0.3.1 (Generate from PM Documents)

If NO

HARD BLOCK (Step 0.3.2)

Step 0.3.1: Generate from PM Documents (PRD/TRD/Feature Map)

Read the Provided Documents

# Read PRD if provided
Read tool:
  file_path: "[user-provided PRD path]"

# Read TRD if provided  
Read tool:
  file_path: "[user-provided TRD path]"

# Read Feature Map if provided
Read tool:
  file_path: "[user-provided Feature Map path]"

Extract PROJECT_RULES.md Content from PM Documents

⛔ DEDUPLICATION RULE: Extract only what Ring Standards DO NOT cover.

| From PRD | Extract For PROJECT_RULES.md | Note |
|----------|------------------------------|------|
| Domain terms, entities | Domain Terminology | Technical names only |
| External service mentions | External Integrations | Third-party APIs |
| Business rules | N/A | ❌ Stays in PRD, not PROJECT_RULES |
| Architecture | N/A | ❌ Ring Standards covers this |

| From TRD | Extract For PROJECT_RULES.md | Note |
|----------|------------------------------|------|
| Tech stack not in Ring | Tech Stack (Not in Ring) | Only non-standard tech |
| External APIs | External Integrations | Third-party services |
| Non-standard directories | Non-Standard Directory Structure | Workers, consumers, etc. |
| Architecture decisions | N/A | ❌ Ring Standards covers this |
| Database patterns | N/A | ❌ Ring Standards covers this |

| From Feature Map | Extract For PROJECT_RULES.md | Note |
|------------------|------------------------------|------|
| Technology choices not in Ring | Tech Stack (Not in Ring) | Only if not in Ring |
| External dependencies | External Integrations | Third-party services |
| Architecture | N/A | ❌ Ring Standards covers this |

Generate PROJECT_RULES.md

Create tool:
  file_path: "docs/PROJECT_RULES.md"
  content: |
    # Project Rules
    
    > IMPORTANT: Ring Standards are NOT automatic. Agents MUST WebFetch them before implementation.
    > This file documents only project-specific information not covered by Ring Standards.
    > Generated from PM documents (PRD/TRD/Feature Map).
    >
    > Ring Standards URLs:
    > - Go: https://raw.githubusercontent.com/LerianStudio/ring/main/dev-team/docs/standards/golang.md
    > - TypeScript: https://raw.githubusercontent.com/LerianStudio/ring/main/dev-team/docs/standards/typescript.md
    
    ## What Ring Standards Cover (DO NOT DUPLICATE HERE)
    
    The following are defined in Ring Standards and MUST NOT be duplicated in this file:
    - Error handling patterns (no panic, wrap errors)
    - Logging standards (structured JSON via lib-commons)
    - Testing patterns (table-driven tests, mocks)
    - Architecture patterns (Hexagonal, Clean Architecture)
    - Observability (OpenTelemetry via lib-commons)
    - lib-commons / lib-common-js usage and patterns
    - API directory structure (Lerian pattern)
    - Database connections (PostgreSQL, MongoDB, Redis via lib-commons)
    - Bootstrap pattern (config.go, service.go, server.go)
    
    **Agents MUST WebFetch Ring Standards and output Standards Coverage Table.**
    
    ---
    
    ## Tech Stack (Not in Ring Standards)
    
    [From TRD/Feature Map: only technologies not covered by Ring Standards]
    
    | Technology | Purpose | Notes |
    |------------|---------|-------|
    | [detected] | [purpose] | [notes] |
    
    ## Non-Standard Directory Structure
    
    [From TRD: Directories that deviate from Ring's standard API structure]
    
    | Directory | Purpose | Pattern |
    |-----------|---------|---------|
    | [detected] | [purpose] | [pattern] |
    
    ## External Integrations
    
    [From TRD/PRD: Third-party services specific to this project]
    
    | Service | Purpose | Docs |
    |---------|---------|------|
    | [detected] | [purpose] | [link] |
    
    ## Environment Configuration
    
    [From TRD: Project-specific env vars not covered by Ring]
    
    | Variable | Purpose | Example |
    |----------|---------|---------|
    | [detected] | [purpose] | [example] |
    
    ## Domain Terminology
    
    [From PRD: Technical names used in this codebase]
    
    | Term | Definition | Used In |
    |------|------------|---------|
    | [detected] | [definition] | [location] |
    
    ---
    
    *Generated from: [PRD path], [TRD path], [Feature Map path]*
    *Ring Standards Version: [version from WebFetch]*
    *Generated: [ISO timestamp]*

Check for Missing Information

If any section is empty or incomplete, ask supplementary questions:

| Missing Section | Supplementary Question |
|-----------------|------------------------|
| Tech Stack (Not in Ring) | "Any technology not covered by Ring Standards (message broker, cache, etc.)?" |
| External Integrations | "Any third-party APIs or external services?" |
| Domain Terminology | "What are the main entities/classes in this codebase?" |
| Non-Standard Directories | "Any directories that don't follow standard API structure (workers, consumers)?" |

Note: Do not ask about architecture, error handling, logging, testing - Ring Standards covers these.

After Generation

Present to user for review, then proceed to Step 1.

Step 0.3.2: HARD BLOCK - No PM Documents (New Projects Only)

When User Has No PM Documents

┌─────────────────────────────────────────────────────────────────┐
│ ⛔ CANNOT PROCEED - PM DOCUMENTS REQUIRED                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ Development cannot start without PM documents.                  │
│                                                                 │
│ You MUST create PRD, TRD, and/or Feature Map documents first    │
│ using PM team skills:                                           │
│                                                                 │
│   /pre-dev-full     → For features ≥2 days (9 gates)           │
│   /pre-dev-feature  → For features <2 days (4 gates)           │
│                                                                 │
│ These commands will guide you through creating:                 │
│   • PRD (Product Requirements Document)                         │
│   • TRD (Technical Requirements Document)                       │
│   • Feature Map (technology choices, feature relationships)     │
│                                                                 │
│ After completing pre-dev workflow, run /dev-cycle again.        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Action

STOP EXECUTION. Do not proceed to Step 1.

Step 0 Anti-Rationalization

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Skip PM docs, I'll add them later" | Later = never. No PM docs = no project context = agents guessing. | Run /pre-dev-full or /pre-dev-feature NOW |
| "Project is simple, doesn't need PM docs" | Simple projects still need domain context defined upfront. | Create PM documents first |
| "I know what I want to build" | Your knowledge ≠ documented knowledge agents can use. | Document in PRD/TRD/Feature Map |
| "PM workflow takes too long" | PM workflow takes 30-60 min. Rework from unclear requirements takes days. | Invest time upfront |
| "Just let me start coding" | Coding without requirements = building the wrong thing. | Requirements first, code second |
| "It's legacy but I don't want to answer questions" | Legacy analysis takes ~5 min. Without it, agents have zero context. | Answer the 3 questions |
| "Legacy project is too complex to explain" | Start with high-level answers. PROJECT_RULES.md can be refined later. | Provide what you know NOW |

Pressure Resistance

| User Says | Your Response |
|-----------|---------------|
| "Just skip this, I'll create PM docs later" | "PM documents are REQUIRED for new projects. Without them, agents cannot understand your project's domain context or technical requirements. Run /pre-dev-full or /pre-dev-feature first." |
| "I don't need formal documents" | "PM documents are the source of truth for PROJECT_RULES.md. Development cannot start without documented requirements." |
| "This is just a quick prototype" | "Even prototypes need clear requirements. /pre-dev-feature takes ~30 minutes and prevents hours of rework." |
| "I already explained what I want verbally" | "Verbal explanations cannot be used by agents. Requirements MUST be documented in PRD/TRD/Feature Map files." |
| "It's a legacy project but skip the questions" | "The legacy analysis (codebase-explorer + 3 questions) is the only way I can understand your project. It takes ~5 minutes and enables me to help you effectively." |
| "I'll fill in PROJECT_RULES.md myself" | "That works! Create docs/PROJECT_RULES.md with: Tech Stack (not in Ring), External Integrations, Domain Terminology. Do not duplicate Ring Standards content. Then run /dev-cycle again." |

Step 1: Initialize or Resume

New Cycle (with task file path)

Input: path/to/tasks.md or path/to/pre-dev/{feature}/

  1. Detect input: File → Load directly | Directory → Load tasks.md + discover subtasks/
  2. Build order: Read tasks, check for subtasks (ST-XXX-01, 02...) or TDD autonomous mode
  3. Determine state path:
    • if source_file contains docs/refactor/ → state_path = "docs/dev-refactor/current-cycle.json", cycle_type = "refactor"
    • else → state_path = "docs/dev-cycle/current-cycle.json", cycle_type = "feature"
  4. Initialize state: Generate cycle_id, create state file at {state_path}, set indices to 0
  5. Display plan: "Loaded X tasks with Y subtasks"
  6. ASK EXECUTION MODE (MANDATORY - AskUserQuestion):
    • Options: (a) Manual per subtask (b) Manual per task (c) Automatic
    • Do not skip: User hints ≠ mode selection. Only explicit a/b/c is valid.
  7. ASK COMMIT TIMING (MANDATORY - AskUserQuestion):
    • Options: (a) Per subtask (b) Per task (c) At the end
    • Store in commit_timing field in state
  8. Start: Display mode + commit timing, proceed to Gate 0
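
As a sketch, initialization might assemble a state object like this before the first gate runs (fields follow the State File Structure section above; the helper itself is illustrative):

```typescript
// Illustrative initial state for a new cycle; fields mirror the State File Structure section.
import { randomUUID } from "node:crypto";

function initialState(sourceFile: string) {
  const isRefactor = sourceFile.includes("docs/refactor/");
  const now = new Date().toISOString();
  return {
    version: "1.0.0",
    cycle_id: randomUUID(),
    started_at: now,
    updated_at: now,
    source_file: sourceFile,
    state_path: isRefactor
      ? "docs/dev-refactor/current-cycle.json"
      : "docs/dev-cycle/current-cycle.json",
    cycle_type: isRefactor ? "refactor" : "feature",
    execution_mode: null,   // set from the AskUserQuestion answer (a/b/c)
    commit_timing: null,    // set from the AskUserQuestion answer (a/b/c)
    status: "in_progress",
    feedback_loop_completed: false,
    current_task_index: 0,
    current_gate: 0,
    current_subtask_index: 0,
    tasks: [],              // populated from the parsed tasks.md
    metrics: { total_duration_ms: 0, gate_durations: {}, review_iterations: 0, testing_iterations: 0 },
  };
}
```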

Resume Cycle (--resume flag)

  1. Find existing state file:
    • Check docs/dev-cycle/current-cycle.json first
    • If not found, check docs/dev-refactor/current-cycle.json
    • If neither exists → Error: "No cycle to resume"
  2. Load found state file, validate (state_path is stored in the state object)
  3. Display: cycle started, tasks completed/total, current task/subtask/gate, paused reason
  4. Handle paused states:
| Status | Action |
|--------|--------|
| paused_for_approval | Re-present Step 7.1 checkpoint |
| paused_for_testing | Ask if testing complete → continue or keep paused |
| paused_for_task_approval | Re-present Step 7.2 checkpoint |
| paused_for_integration_testing | Ask if integration testing complete |
| paused (generic) | Ask user to confirm resume |
| in_progress | Resume from current gate |
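
A minimal sketch of the resume lookup, checking the dev-cycle path before the dev-refactor path (the helper name is an assumption):

```typescript
// Illustrative resume lookup: dev-cycle state is checked first, then dev-refactor.
import { existsSync, readFileSync } from "node:fs";

const CANDIDATE_STATE_PATHS = [
  "docs/dev-cycle/current-cycle.json",
  "docs/dev-refactor/current-cycle.json",
];

function loadStateForResume(): { state_path: string; [key: string]: unknown } {
  for (const path of CANDIDATE_STATE_PATHS) {
    if (existsSync(path)) {
      return JSON.parse(readFileSync(path, "utf8"));
    }
  }
  throw new Error("No cycle to resume");
}
```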

Input Validation

Task files are generated by /pre-dev-* or /dev-refactor, which handle content validation. The dev-cycle performs basic format checks:

Format Checks

| Check | Validation | Action |
|-------|------------|--------|
| File exists | Task file path is readable | Error: abort |
| Task headers | At least one "## Task:" found | Error: abort |
| Task ID format | "## Task: {ID} - {Title}" | Warning: use line number as ID |
| Acceptance criteria | At least one "- [ ]" per task | Warning: task may fail validation gate |
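
A sketch of these format checks over a loaded tasks.md (the regexes are assumptions based on the formats in the table above):

```typescript
// Illustrative format checks for a tasks.md file; patterns mirror the table above.
function checkTaskFileFormat(content: string): { errors: string[]; warnings: string[] } {
  const errors: string[] = [];
  const warnings: string[] = [];

  const taskHeaders = content.match(/^## Task:.*$/gm) ?? [];
  if (taskHeaders.length === 0) {
    errors.push("No '## Task:' headers found"); // Error: abort
  }
  if (taskHeaders.some((h) => !/^## Task: \S+ - .+$/.test(h))) {
    warnings.push("Task header not in '## Task: {ID} - {Title}' format"); // fall back to line number as ID
  }
  if (!/- \[ \]/.test(content)) {
    warnings.push("No acceptance criteria ('- [ ]') found; task may fail validation gate");
  }
  return { errors, warnings };
}
```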

Step 2: Gate 0 - Implementation (Per Execution Unit)

REQUIRED SUB-SKILL: Use dev-implementation

Execution Unit: Task (if no subtasks) or Subtask (if task has subtasks)

⛔ MANDATORY: Agent Dispatch Required

See shared-patterns/shared-orchestrator-principle.md for full details.

Gate 0 has TWO explicit sub-phases with a HARD GATE between them:

┌─────────────────────────────────────────────────────────────────┐
│  GATE 0.1: TDD-RED                                              │
│  ─────────────────                                              │
│  Write failing test → Run test → Capture FAILURE output         │
│                                                                 │
│  ═══════════════════ HARD GATE ═══════════════════════════════  │
│  CANNOT proceed to 0.2 until failure output is captured         │
│  ════════════════════════════════════════════════════════════   │
│                                                                 │
│  GATE 0.2: TDD-GREEN                                            │
│  ──────────────────                                             │
│  Implement minimal code → Run test → Verify PASS                │
└─────────────────────────────────────────────────────────────────┘

Step 2.1: Gate 0.1 - TDD-RED (Write Failing Test)

  1. Record gate start timestamp

  2. Set gate_progress.implementation.tdd_red.status = "in_progress"

  3. Determine appropriate agent based on content:

    • Go files/go.mod → backend-engineer-golang
    • TypeScript backend → backend-engineer-typescript
    • React/Frontend → frontend-engineer-typescript
    • Infrastructure → devops-engineer
  4. Dispatch to selected agent for TDD-RED only:

    See shared-patterns/template-tdd-prompts.md for the TDD-RED prompt template.

    Include: unit_id, title, requirements, acceptance_criteria in the prompt.

  5. Receive TDD-RED report from agent

  6. VERIFY FAILURE OUTPUT EXISTS (HARD GATE): See shared-patterns/template-tdd-prompts.md for verification rules.

  7. Update state:

    gate_progress.implementation.tdd_red = {
      "status": "completed",
      "test_file": "[path from agent]",
      "failure_output": "[actual failure output from agent]",
      "completed_at": "[ISO timestamp]"
    }
    
  8. Display to user:

    ┌─────────────────────────────────────────────────┐
    │ ✓ TDD-RED COMPLETE                              │
    ├─────────────────────────────────────────────────┤
    │ Test: [test_file]:[test_function]               │
    │ Failure: [first line of failure output]         │
    │                                                 │
    │ Proceeding to TDD-GREEN...                      │
    └─────────────────────────────────────────────────┘
    
  9. Proceed to Gate 0.2
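
The agent selection from item 3 above, sketched as a simple mapping. The file-pattern heuristics and the fallback default are illustrative assumptions, not the authoritative routing rules.

```python
from pathlib import Path

def select_agent(files: list[str]) -> str:
    """Pick an implementation agent from the unit's files (rough heuristic)."""
    names = [Path(f).name for f in files]
    suffixes = {Path(f).suffix for f in files}

    if "go.mod" in names or ".go" in suffixes:
        return "backend-engineer-golang"
    if "Dockerfile" in names or suffixes & {".tf"}:
        return "devops-engineer"
    if suffixes & {".tsx", ".jsx", ".css"}:
        return "frontend-engineer-typescript"
    if ".ts" in suffixes:
        return "backend-engineer-typescript"
    return "backend-engineer-golang"  # assumed default; confirm per project
```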

Step 2.2: Gate 0.2 - TDD-GREEN (Implementation)

PREREQUISITE: gate_progress.implementation.tdd_red.status == "completed"

  1. Set gate_progress.implementation.tdd_green.status = "in_progress"

  2. Dispatch to same agent for TDD-GREEN:

    See shared-patterns/template-tdd-prompts.md for the TDD-GREEN prompt template (includes observability requirements).

    Include: unit_id, title, tdd_red.test_file, tdd_red.failure_output in the prompt.

  3. Receive TDD-GREEN report from agent

  4. VERIFY PASS OUTPUT EXISTS (HARD GATE): See shared-patterns/template-tdd-prompts.md for verification rules.

  5. Update state:

    gate_progress.implementation.tdd_green = {
      "status": "completed",
      "implementation_file": "[path from agent]",
      "test_pass_output": "[actual pass output from agent]",
      "completed_at": "[ISO timestamp]"
    }
    gate_progress.implementation.status = "completed"
    artifacts.implementation = {files_changed, commit_sha}
    agent_outputs.implementation = {
      agent: "[selected_agent]",
      output: "[full agent output for feedback analysis]",
      timestamp: "[ISO timestamp]",
      duration_ms: [execution time],
      iterations: [standards_compliance_iterations from Step 2.3],
      standards_compliance: {
        total_sections: [N from final verification],
        compliant: [N sections with ✅],
        not_applicable: [N sections with N/A],
        non_compliant: 0,
        gaps: []  // Empty when gate passes; populated during iterations
      }
    }
    

    If iterations > 1, populate gaps[] with issues found in previous iterations:

    gaps: [
      {
        "section": "[section name from ❌ row]",
        "status": "❌",
        "reason": "[why it failed]",
        "file": "[file path if available]",
        "line": [line number if available],
        "evidence": "[code snippet or description]",
        "fixed_in_iteration": [iteration number when fixed]
      }
    ]
    
  6. Display to user:

    ┌─────────────────────────────────────────────────┐
    │ ✓ GATE 0 COMPLETE (TDD-RED → TDD-GREEN)        │
    ├─────────────────────────────────────────────────┤
    │ RED:   [test_file] - FAIL captured ✓           │
    │ GREEN: [impl_file] - PASS verified ✓           │
    │                                                 │
    │ Proceeding to Standards Compliance (2.3)...    │
    └─────────────────────────────────────────────────┘
    
  7. Proceed to Step 2.3 (Standards Compliance Verification)

Step 2.3: Standards Compliance Verification (HARD GATE)

PREREQUISITE: gate_progress.implementation.tdd_green.status == "completed"

Purpose: Verify implementation follows all standards defined in agent's standards file.

See shared-patterns/template-tdd-prompts.md → "Orchestrator Enforcement" section for full verification process.

Process:

Step 2.3.1: Initialize Iteration Counter

Set standards_compliance_iterations = 0
Set MAX_ITERATIONS = 3

Step 2.3.2: Verification Loop

LOOP while standards_compliance_iterations < MAX_ITERATIONS:
  
  1. INCREMENT counter FIRST (before any verification):
     standards_compliance_iterations += 1
  
  2. PARSE agent output for "## Standards Coverage Table":
     if NOT FOUND:
       → Output INCOMPLETE
       → Log: "Iteration [N]: Standards Coverage Table missing"
       → Re-dispatch agent (see Step 2.3.3)
       → CONTINUE loop
  
  3. PARSE "ALL STANDARDS MET:" value:
     if NOT FOUND:
       → Output INCOMPLETE
       → Log: "Iteration [N]: Compliance Summary missing"
       → Re-dispatch agent (see Step 2.3.3)
       → CONTINUE loop
  
  4. COUNT sections from Standards Coverage Table:
     total_sections = count all rows
     compliant = count rows with ✅
     not_applicable = count rows with N/A
     non_compliant = count rows with ❌
  
  5. VERIFY compliance:
     if "all STANDARDS MET: ✅ YES" and non_compliant == 0:
       → Gate 0 PASSED
       → BREAK loop (proceed to Step 2.3.4)
     
     if "all STANDARDS MET: ❌ no" or non_compliant > 0:
       → Gate 0 BLOCKED
       → Extract ❌ sections
       → Log: "Iteration [N]: [non_compliant] sections non-compliant"
       → Re-dispatch agent (see Step 2.3.3)
       → CONTINUE loop

END LOOP

if standards_compliance_iterations >= MAX_ITERATIONS and non_compliant > 0:
  → HARD BLOCK
  → Update state with failure info
  → Report to user: "Standards compliance failed after 3 attempts"
  → STOP execution
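
A sketch of the parse-and-count portion of this loop, assuming the agent emits a markdown Standards Coverage Table whose rows contain ✅, ❌, or N/A plus an "ALL STANDARDS MET:" summary line (the exact format is defined in template-tdd-prompts.md).

```python
import re

def verify_standards_coverage(agent_output: str) -> dict:
    """Parse the Standards Coverage Table and decide PASS / BLOCKED / INCOMPLETE."""
    if "## Standards Coverage Table" not in agent_output:
        return {"result": "INCOMPLETE", "reason": "Standards Coverage Table missing"}

    summary = re.search(r"ALL STANDARDS MET:\s*(.+)", agent_output)
    if summary is None:
        return {"result": "INCOMPLETE", "reason": "Compliance Summary missing"}

    # Naive: assumes the table is the only pipe-delimited block after the heading.
    table = agent_output.split("## Standards Coverage Table", 1)[1]
    rows = [line for line in table.splitlines()
            if line.strip().startswith("|") and "---" not in line][1:]  # skip header row

    counts = {
        "total_sections": len(rows),
        "compliant": sum("✅" in r for r in rows),
        "not_applicable": sum("N/A" in r for r in rows),
        "non_compliant": sum("❌" in r for r in rows),
    }
    passed = "YES" in summary.group(1).upper() and counts["non_compliant"] == 0
    return {"result": "PASS" if passed else "BLOCKED", **counts}
```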

Step 2.3.3: Re-dispatch Agent for Compliance Fix

Task tool:
  subagent_type: "[same agent from TDD-GREEN]"
  model: "opus"
  description: "Fix missing Ring Standards for [unit_id] (attempt [N]/3)"
  prompt: |
    ⛔ STANDARDS NOT MET - Fix Required (Attempt [standards_compliance_iterations] of 3)
    
    Your Standards Coverage Table shows these sections as ❌:
    [list ❌ sections extracted from table]
    
    WebFetch your standards file:
    [URL for agent's standards - golang.md, typescript.md, etc.]
    
    Implement all missing sections.
    Return updated Standards Coverage Table with all sections ✅ or N/A.
    
    Previous attempt summary:
    - Total sections: [total_sections]
    - Compliant: [compliant]
    - Not applicable: [not_applicable]
    - Non-compliant: [non_compliant]

Step 2.3.4: Update State with Compliance Metrics

gate_progress.implementation = {
  "status": "completed",
  "tdd_red": {...},
  "tdd_green": {...},
  "standards_verified": true,
  "standards_compliance_iterations": [final count - e.g., 1, 2, or 3],
  "standards_coverage": {
    "total_sections": [N - from final successful verification],
    "compliant": [N - sections with ✅],
    "not_applicable": [N - sections with N/A],
    "non_compliant": 0
  }
}

Note: non_compliant MUST be 0 when gate passes. If non-zero after 3 iterations, gate is BLOCKED.

  1. Display to user:

    ┌─────────────────────────────────────────────────┐
    │ ✓ GATE 0 COMPLETE                              │
    ├─────────────────────────────────────────────────┤
    │ TDD-RED:   [test_file] - FAIL captured ✓       │
    │ TDD-GREEN: [impl_file] - PASS verified ✓       │
    │ STANDARDS: [N]/[N] sections compliant ✓        │
    │ ITERATIONS: [standards_compliance_iterations]   │
    │                                                 │
    │ Proceeding to Gate 1 (DevOps)...               │
    └─────────────────────────────────────────────────┘
    
  2. ⛔ SAVE STATE TO FILE (MANDATORY):

    Write tool:
      file_path: [state.state_path]  # "docs/dev-cycle/current-cycle.json" or "docs/dev-refactor/current-cycle.json"
      content: [full updated state JSON]
    

    See "State Persistence Rule" section. State MUST be written to file after Gate 0.

  3. Proceed to Gate 1
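
A sketch of the mandatory state write. It assumes `state["state_path"]` holds the target path; the temp-file-plus-rename pattern is an implementation suggestion to avoid a half-written state file, not a requirement of this skill.

```python
import json
import os
from pathlib import Path

def save_state(state: dict) -> None:
    """Persist the full state JSON to state_path (write temp file, then rename)."""
    path = Path(state["state_path"])
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_name(path.name + ".tmp")
    tmp.write_text(json.dumps(state, indent=2))
    os.replace(tmp, path)  # atomic on POSIX; readers never see a partial file
```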

Standards Compliance Anti-Rationalization

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Agent said implementation is complete" | Agent completion ≠ Standards compliance. Verify table. | Parse and verify Standards Coverage Table |
| "Table wasn't in agent output" | Missing table = Incomplete output = Re-dispatch | Re-dispatch agent to output table |
| "Only 1-2 sections are ❌" | Any ❌ = BLOCKED. Count is irrelevant. | Re-dispatch to fix all ❌ sections |
| "lib-commons is the main thing" | All sections are equally required. No prioritization. | Verify all sections from standards-coverage-table.md |
| "Agent knows the standards" | Knowledge ≠ implementation. Evidence required. | Check file:line evidence in table |
| "Standards verification is slow" | Verification prevents rework. 30 seconds vs hours. | Always verify before proceeding |

TDD Sub-Phase Anti-Rationalization

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Test passes on first run, skip RED" | Passing test ≠ TDD. Test MUST fail first. | Delete test, rewrite to fail first |
| "I'll capture failure output later" | Later = never. Failure output is the gate. | Capture NOW or cannot proceed |
| "Failure output is in the logs somewhere" | "Somewhere" ≠ captured. Must be in state. | Extract and store in tdd_red.failure_output |
| "GREEN passed, RED doesn't matter now" | RED proves test validity. Skip = invalid test. | Re-run RED phase, capture failure |
| "Agent already did TDD internally" | Internal ≠ verified. State must show evidence. | Agent must output failure explicitly |

Step 3: Gate 1 - DevOps (Per Execution Unit)

REQUIRED SUB-SKILL: Use dev-devops

⛔ HARD GATE: Required Artifacts MUST Be Created

Gate 1 is a BLOCKING gate. DevOps agent MUST create all required artifacts. If any artifact is missing:

  • You CANNOT proceed to Gate 2
  • You MUST re-dispatch to devops-engineer to create missing artifacts
  • You MUST verify all artifacts exist before proceeding

Required Artifacts

See shared-patterns/standards-coverage-table.md → "devops-engineer → devops.md" for all required sections.

Key artifacts from devops.md:

  • Containers (Dockerfile + Docker Compose)
  • Makefile Standards (all required commands)
  • Infrastructure as Code (if applicable)
  • Helm charts (if K8s deployment)

Step 3.1: Prepare Input for dev-devops Skill

Gather from previous gates:

devops_input = {
  // REQUIRED - from current execution unit
  unit_id: state.current_unit.id,
  
  // REQUIRED - from Gate 0 context
  language: state.current_unit.language,  // "go" | "typescript" | "python"
  service_type: state.current_unit.service_type,  // "api" | "worker" | "batch" | "cli"
  implementation_files: agent_outputs.implementation.files_changed,  // list of files from Gate 0
  
  // OPTIONAL - additional context
  gate0_handoff: agent_outputs.implementation,  // full Gate 0 output
  new_dependencies: state.current_unit.new_deps || [],  // new deps added in Gate 0
  new_env_vars: state.current_unit.env_vars || [],  // env vars needed
  new_services: state.current_unit.services || [],  // postgres, redis, etc.
  existing_dockerfile: [check if Dockerfile exists],
  existing_compose: [check if docker-compose.yml exists]
}

Step 3.2: Invoke dev-devops Skill

1. Record gate start timestamp

2. Invoke dev-devops skill with structured input:

   Skill("dev-devops") with input:
     unit_id: devops_input.unit_id
     language: devops_input.language
     service_type: devops_input.service_type
     implementation_files: devops_input.implementation_files
     gate0_handoff: devops_input.gate0_handoff
     new_dependencies: devops_input.new_dependencies
     new_env_vars: devops_input.new_env_vars
     new_services: devops_input.new_services
     existing_dockerfile: devops_input.existing_dockerfile
     existing_compose: devops_input.existing_compose

   The skill handles:
   - Dispatching devops-engineer agent
   - Dockerfile creation/update
   - docker-compose.yml configuration
   - .env.example documentation
   - Verification commands execution
   - Fix iteration loop (max 3 attempts)

3. Parse skill output for results:
   
   Expected output sections:
   - "## DevOps Summary" → status, iterations
   - "## Files Changed" → Dockerfile, docker-compose, .env.example actions
   - "## Verification Results" → build, startup, health checks
   - "## Handoff to Next Gate" → ready_for_sre: YES/no
   
   if skill output contains "Status: PASS" and "Ready for Gate 2: YES":
     → Gate 1 PASSED. Proceed to Step 3.3.
   
   if skill output contains "Status: FAIL" or "Ready for Gate 2: no":
     → Gate 1 BLOCKED.
     → Skill already dispatched fixes to devops-engineer
     → Skill already re-ran verification
     → If "ESCALATION" in output: STOP and report to user

4. **⛔ SAVE STATE TO FILE (MANDATORY):**
   Write tool → [state.state_path]
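
Gates 1 through 4 all use the same PASS/FAIL parsing described in Step 3.2. A sketch follows, assuming the literal "Status:", "Ready for Gate N:", and "ESCALATION" markers appear in the skill output as shown above.

```python
def parse_gate_result(skill_output: str, next_gate: int) -> str:
    """Return 'PASS', 'BLOCKED', or 'ESCALATE' from a gate skill's output."""
    if "ESCALATION" in skill_output:
        return "ESCALATE"  # stop and report to the user

    passed = ("Status: PASS" in skill_output
              and f"Ready for Gate {next_gate}: YES" in skill_output)
    return "PASS" if passed else "BLOCKED"

# Example for Gate 1 (DevOps) handing off to Gate 2 (SRE):
# result = parse_gate_result(devops_skill_output, next_gate=2)
```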

Step 3.3: Gate 1 Complete

5. When dev-devops skill returns PASS:
   
   Parse from skill output:
   - status: extract from "## DevOps Summary"
   - dockerfile_action: extract from "## Files Changed" table
   - compose_action: extract from "## Files Changed" table
   - verification_passed: extract from "## Verification Results"
   
   - agent_outputs.devops = {
       skill: "dev-devops",
       output: "[full skill output]",
       artifacts_created: ["Dockerfile", "docker-compose.yml", ".env.example"],
       verification_passed: true,
       timestamp: "[ISO timestamp]",
       duration_ms: [execution time]
     }

6. Update state:
   - gate_progress.devops.status = "completed"
   - gate_progress.devops.artifacts = [list from skill output]

7. Proceed to Gate 2

Gate 1 Anti-Rationalization Table

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Dockerfile exists, skip other artifacts" | All artifacts required. 1/4 ≠ complete. | Create all artifacts |
| "docker-compose not needed locally" | docker-compose is MANDATORY for local dev. | Create docker-compose.yml |
| "Makefile is optional" | Makefile is MANDATORY for standardized commands. | Create Makefile |
| ".env.example can be added later" | .env.example documents required config NOW. | Create .env.example |
| "Small service doesn't need all this" | Size is irrelevant. Standards apply uniformly. | Create all artifacts |

Step 4: Gate 2 - SRE (Per Execution Unit)

REQUIRED SUB-SKILL: Use dev-sre

Step 4.1: Prepare Input for dev-sre Skill

Gather from previous gates:

sre_input = {
  // REQUIRED - from current execution unit
  unit_id: state.current_unit.id,
  
  // REQUIRED - from Gate 0 context
  language: state.current_unit.language,  // "go" | "typescript" | "python"
  service_type: state.current_unit.service_type,  // "api" | "worker" | "batch" | "cli"
  implementation_agent: agent_outputs.implementation.agent,  // e.g., "backend-engineer-golang"
  implementation_files: agent_outputs.implementation.files_changed,  // list of files from Gate 0
  
  // OPTIONAL - additional context
  external_dependencies: state.current_unit.external_deps || [],  // HTTP clients, gRPC, queues
  gate0_handoff: agent_outputs.implementation,  // full Gate 0 output
  gate1_handoff: agent_outputs.devops  // full Gate 1 output
}

Step 4.2: Invoke dev-sre Skill

1. Record gate start timestamp

2. Invoke dev-sre skill with structured input:

   Skill("dev-sre") with input:
     unit_id: sre_input.unit_id
     language: sre_input.language
     service_type: sre_input.service_type
     implementation_agent: sre_input.implementation_agent
     implementation_files: sre_input.implementation_files
     external_dependencies: sre_input.external_dependencies
     gate0_handoff: sre_input.gate0_handoff
     gate1_handoff: sre_input.gate1_handoff

   The skill handles:
   - Dispatching SRE agent for validation
   - Structured logging validation
   - Distributed tracing validation
   - Code instrumentation coverage (90%+ required)
   - Context propagation validation (InjectHTTPContext/InjectGRPCContext)
   - Dispatching fixes to implementation agent if needed
   - Re-validation loop (max 3 iterations)

3. Parse skill output for validation results:
   
   Expected output sections:
   - "## Validation Result" → status, iterations, coverage
   - "## Instrumentation Coverage" → table with per-layer coverage
   - "## Issues Found" → list or "None"
   - "## Handoff to Next Gate" → ready_for_testing: YES/no
   
   if skill output contains "Status: PASS" and "Ready for Gate 3: YES":
     → Gate 2 PASSED. Proceed to Step 4.3.
   
   if skill output contains "Status: FAIL" or "Ready for Gate 3: no":
     → Gate 2 BLOCKED. 
     → Skill already dispatched fixes to implementation agent
     → Skill already re-ran validation
     → If "ESCALATION" in output: STOP and report to user

4. **⛔ SAVE STATE TO FILE (MANDATORY):**
   Write tool → [state.state_path]

Step 4.3: Gate 2 Complete

5. When dev-sre skill returns PASS:
   
   Parse from skill output:
   - status: extract from "## Validation Result"
   - instrumentation_coverage: extract percentage from coverage table
   - iterations: extract from "Iterations:" line
   
   - agent_outputs.sre = {
       skill: "dev-sre",
       output: "[full skill output]",
       validation_result: "PASS",
       instrumentation_coverage: "[X%]",
       iterations: [count],
       timestamp: "[ISO timestamp]",
       duration_ms: [execution time]
     }

6. Update state:
   - gate_progress.sre.status = "completed"
   - gate_progress.sre.observability_validated = true
   - gate_progress.sre.instrumentation_coverage = "[X%]"

7. Proceed to Gate 3

Gate 2 Anti-Rationalization Table

See dev-sre/SKILL.md for complete anti-rationalization tables covering:

  • Observability deferral rationalizations
  • Instrumentation coverage rationalizations
  • Context propagation rationalizations

Gate 2 Pressure Resistance

| User Says | Your Response |
|-----------|---------------|
| "Skip SRE validation, we'll add observability later" | "Observability is MANDATORY for Gate 2. Invoking dev-sre skill now." |
| "SRE found issues but let's continue" | "Gate 2 is a HARD GATE. dev-sre skill handles fix dispatch and re-validation." |
| "Instrumentation coverage is low but code works" | "90%+ instrumentation coverage is REQUIRED. dev-sre skill will not pass until met." |

Step 5: Gate 3 - Testing (Per Execution Unit)

REQUIRED SUB-SKILL: Use dev-testing

Step 5.1: Prepare Input for dev-testing Skill

Gather from previous gates:

testing_input = {
  // REQUIRED - from current execution unit
  unit_id: state.current_unit.id,
  acceptance_criteria: state.current_unit.acceptance_criteria,  // list of ACs to test
  implementation_files: agent_outputs.implementation.files_changed,
  language: state.current_unit.language,  // "go" | "typescript" | "python"
  
  // OPTIONAL - additional context
  coverage_threshold: 85,  // Ring minimum, PROJECT_RULES.md can raise
  gate0_handoff: agent_outputs.implementation,  // full Gate 0 output
  existing_tests: [check for existing test files]
}

Step 5.2: Invoke dev-testing Skill

1. Record gate start timestamp

2. Invoke dev-testing skill with structured input:

   Skill("dev-testing") with input:
     unit_id: testing_input.unit_id
     acceptance_criteria: testing_input.acceptance_criteria
     implementation_files: testing_input.implementation_files
     language: testing_input.language
     coverage_threshold: testing_input.coverage_threshold
     gate0_handoff: testing_input.gate0_handoff
     existing_tests: testing_input.existing_tests

   The skill handles:
   - Dispatching qa-analyst agent
   - Test creation following TDD methodology
   - Coverage measurement and validation (85%+ required)
   - Traceability matrix (AC → Test mapping)
   - Dispatching fixes to implementation agent if coverage < threshold
   - Re-validation loop (max 3 iterations)

3. Parse skill output for results:
   
   Expected output sections:
   - "## Testing Summary" → status, iterations
   - "## Coverage Report" → threshold vs actual
   - "## Traceability Matrix" → AC-to-test mapping
   - "## Handoff to Next Gate" → ready_for_review: YES/no
   
   if skill output contains "Status: PASS" and "Ready for Gate 4: YES":
     → Gate 3 PASSED. Proceed to Step 5.3.
   
   if skill output contains "Status: FAIL" or "Ready for Gate 4: no":
     → Gate 3 BLOCKED.
     → Skill already dispatched fixes to implementation agent
     → Skill already re-ran coverage check
     → If "ESCALATION" in output: STOP and report to user

4. **⛔ SAVE STATE TO FILE (MANDATORY):**
   Write tool → [state.state_path]

Step 5.3: Gate 3 Complete

5. When dev-testing skill returns PASS:
   
   Parse from skill output:
   - coverage_actual: extract percentage from "## Coverage Report"
   - coverage_threshold: extract from "## Coverage Report"
   - criteria_covered: extract from "## Traceability Matrix"
   - iterations: extract from "Iterations:" line
   
   - agent_outputs.testing = {
       skill: "dev-testing",
       output: "[full skill output]",
       verdict: "PASS",
       coverage_actual: [X%],
       coverage_threshold: [85%],
       criteria_covered: "[X/Y]",
       iterations: [count],
       timestamp: "[ISO timestamp]",
       duration_ms: [execution time],
       failures: [],  // Empty when PASS; see schema below for FAIL
       uncovered_criteria: []  // Empty when all ACs covered
     }
   
   **If iterations > 1 (tests failed before passing), populate `failures[]`:**
   ```json
   failures: [
     {
       "test_name": "TestUserCreate_InvalidEmail",
       "test_file": "internal/handler/user_test.go",
       "error_type": "assertion|panic|timeout|compilation",
       "expected": "[expected value]",
       "actual": "[actual value]",
       "message": "[error message from test output]",
       "stack_trace": "[relevant stack trace]",
       "fixed_in_iteration": [iteration number when fixed]
     }
   ]
   ```

If coverage < 100% of acceptance criteria, populate `uncovered_criteria[]`:

```json
uncovered_criteria: [
  {
    "criterion_id": "AC-001",
    "description": "User should receive email confirmation",
    "reason": "No test found for email sending functionality"
  }
]
```
  6. Update state:

    • gate_progress.testing.status = "completed"
    • gate_progress.testing.coverage = [coverage_actual]
  7. Proceed to Gate 4


### Gate 3 Thresholds

- **Minimum:** 85% (Ring standard - CANNOT be lowered)
- **Project-specific:** Can be higher if defined in `docs/PROJECT_RULES.md`
- **Validation:** Threshold < 85% → Use 85%

### Gate 3 Pressure Resistance

| User Says | Your Response |
|-----------|---------------|
| "84% is close enough" | "85% is Ring minimum. dev-testing skill enforces this." |
| "Skip testing, deadline" | "Testing is MANDATORY. dev-testing skill handles iterations." |
| "Manual testing covers it" | "Gate 3 requires executable unit tests. Invoking dev-testing now." |

## Step 6: Gate 4 - Review (Per Execution Unit)

**REQUIRED SUB-SKILL:** Use `requesting-code-review`

### Step 6.1: Prepare Input for requesting-code-review Skill

```text
Gather from previous gates:

review_input = {
  // REQUIRED - from current execution unit
  unit_id: state.current_unit.id,
  base_sha: state.current_unit.base_sha,  // SHA before implementation
  head_sha: [current HEAD],  // SHA after all gates
  implementation_summary: state.current_unit.title + requirements,
  requirements: state.current_unit.acceptance_criteria,
  
  // OPTIONAL - additional context
  implementation_files: agent_outputs.implementation.files_changed,
  gate0_handoff: agent_outputs.implementation  // full Gate 0 output
}
```
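
One way to capture `head_sha` for this input (and `base_sha` the same way, before Gate 0 runs). This assumes plain git is available in the working directory; it is a sketch, not part of the skill itself.

```python
import subprocess

def current_head_sha() -> str:
    """Read the current HEAD commit (used as head_sha after all gates have run)."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

# base_sha should be captured with the same call *before* Gate 0 dispatches the
# implementation agent, then stored on the execution unit in the state file.
```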

Step 6.2: Invoke requesting-code-review Skill

1. Record gate start timestamp

2. Invoke requesting-code-review skill with structured input:

   Skill("requesting-code-review") with input:
     unit_id: review_input.unit_id
     base_sha: review_input.base_sha
     head_sha: review_input.head_sha
     implementation_summary: review_input.implementation_summary
     requirements: review_input.requirements
     implementation_files: review_input.implementation_files
     gate0_handoff: review_input.gate0_handoff

   The skill handles:
   - Dispatching all 3 reviewers in PARALLEL (single message with 3 Task calls)
   - Aggregating issues by severity (CRITICAL/HIGH/MEDIUM/LOW/COSMETIC)
   - Dispatching fixes to implementation agent for blocking issues
   - Re-running all 3 reviewers after fixes
   - Iteration tracking (max 3 attempts)
   - Adding TODO/FIXME comments for non-blocking issues

3. Parse skill output for results:
   
   Expected output sections:
   - "## Review Summary" → status, iterations
   - "## Issues by Severity" → counts per severity level
   - "## Reviewer Verdicts" → code-reviewer, business-logic-reviewer, security-reviewer
   - "## Handoff to Next Gate" → ready_for_validation: YES/no
   
   if skill output contains "Status: PASS" and "Ready for Gate 5: YES":
     → Gate 4 PASSED. Proceed to Step 6.3.
   
   if skill output contains "Status: FAIL" or "Ready for Gate 5: no":
     → Gate 4 BLOCKED.
     → Skill already dispatched fixes to implementation agent
     → Skill already re-ran all 3 reviewers
     → If "ESCALATION" in output: STOP and report to user

4. **⛔ SAVE STATE TO FILE (MANDATORY):**
   Write tool → [state.state_path]

Step 6.3: Gate 4 Complete

5. When requesting-code-review skill returns PASS:
   
   Parse from skill output:
   - reviewers_passed: extract from "## Reviewer Verdicts" (should be "3/3")
   - issues_critical: extract count from "## Issues by Severity"
   - issues_high: extract count from "## Issues by Severity"
   - issues_medium: extract count from "## Issues by Severity"
   - iterations: extract from "Iterations:" line
   
   - agent_outputs.review = {
       skill: "requesting-code-review",
       output: "[full skill output]",
       iterations: [count],
       timestamp: "[ISO timestamp]",
       duration_ms: [execution time],
       reviewers_passed: "3/3",
       code_reviewer: {
         verdict: "PASS",
         issues_count: N,
         issues: []  // Structured issues - see schema below
       },
       business_logic_reviewer: {
         verdict: "PASS",
         issues_count: N,
         issues: []
       },
       security_reviewer: {
         verdict: "PASS",
         issues_count: N,
         issues: []
       }
     }
   
   **Populate `issues[]` for each reviewer with all issues found (even if fixed):**
   ```json
   issues: [
     {
       "severity": "CRITICAL|HIGH|MEDIUM|LOW|COSMETIC",
       "category": "error-handling|security|performance|maintainability|business-logic|...",
       "description": "[detailed description of the issue]",
       "file": "internal/handler/user.go",
       "line": 45,
       "code_snippet": "return err",
       "suggestion": "Use fmt.Errorf(\"failed to create user: %w\", err)",
       "fixed": true|false,
       "fixed_in_iteration": [iteration number when fixed, null if not fixed]
     }
    ]
    ```

Issue tracking rules:

  • All issues found across all iterations MUST be recorded
  • fixed: true + fixed_in_iteration: N for issues resolved during review
  • fixed: false + fixed_in_iteration: null for LOW/COSMETIC (TODO/FIXME added)
  • This enables feedback-loop to analyze recurring issue patterns
  6. Update state:

    • gate_progress.review.status = "completed"
    • gate_progress.review.reviewers_passed = "3/3"
  7. Proceed to Gate 5


### Gate 4 Anti-Rationalization Table

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Only 1 MEDIUM issue, can proceed" | MEDIUM = MUST FIX. Quantity is irrelevant. | **Fix the issue, re-run all reviewers** |
| "Issue is cosmetic, not really MEDIUM" | Reviewer decided severity. Accept their judgment. | **Fix the issue, re-run all reviewers** |
| "Will fix in next sprint" | Deferred fixes = technical debt = production bugs. | **Fix NOW before Gate 5** |
| "User approved, can skip fix" | User approval ≠ reviewer override. Fixes are mandatory. | **Fix the issue, re-run all reviewers** |
| "Same issue keeps appearing, skip it" | Recurring issue = fix is wrong. Debug properly. | **Root cause analysis, then fix** |
| "Only security reviewer found it" | One reviewer = valid finding. All findings matter. | **Fix the issue, re-run all reviewers** |
| "Iteration limit reached, just proceed" | Limit = escalate, not bypass. Quality is non-negotiable. | **Escalate to user, DO NOT proceed** |
| "Tests pass, review issues don't matter" | Tests ≠ review. Different quality dimensions. | **Fix the issue, re-run all reviewers** |

### Gate 4 Pressure Resistance

| User Says | Your Response |
|-----------|---------------|
| "Just skip this MEDIUM issue" | "MEDIUM severity issues are blocking by definition. I MUST dispatch a fix to the appropriate agent before proceeding. This protects code quality." |
| "I'll fix it later, let's continue" | "Gate 4 is a HARD GATE. All CRITICAL/HIGH/MEDIUM issues must be resolved NOW. I'm dispatching the fix to [agent] and will re-run reviewers after." |
| "We're running out of time" | "Proceeding with known issues creates larger problems later. The fix dispatch is automated and typically takes 2-5 minutes. Quality gates exist to save time overall." |
| "Override the gate, I approve" | "User approval cannot override reviewer findings. The gate ensures code quality. I'll dispatch the fix now." |
| "It's just a style issue" | "If it's truly cosmetic, reviewers would mark it COSMETIC (non-blocking). MEDIUM means it affects maintainability or correctness. Fixing now." |

## Step 7: Gate 5 - Validation (Per Execution Unit)

```text
For current execution unit:

1. Record gate start timestamp
2. Verify acceptance criteria:
   For each criterion in acceptance_criteria:
     - Check if implemented
     - Check if tested
     - Mark as PASS/FAIL

3. Run final verification:
   - All tests pass?
   - No Critical/High/Medium review issues?
   - All acceptance criteria met?

4. If validation fails:
   - Log failure reasons
   - Determine which gate to revisit
   - Loop back to appropriate gate

5. If validation passes:
   - Set unit status = "completed"
   - Record gate end timestamp
   - agent_outputs.validation = {
       result: "approved",
       timestamp: "[ISO timestamp]",
       criteria_results: [{criterion, status}]
     }
   - Proceed to Step 7.1 (Execution Unit Approval)
```
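
A sketch of the per-criterion check above, assuming the Gate 3 and Gate 4 results were stored in `agent_outputs` as shown earlier; the AC-numbering convention used to match uncovered criteria is an assumption.

```python
def validate_unit(state: dict) -> dict:
    """Mark each acceptance criterion PASS/FAIL and summarize Gate 5."""
    unit = state["current_unit"]
    testing = state["agent_outputs"].get("testing", {})
    uncovered = {c.get("criterion_id") for c in testing.get("uncovered_criteria", [])}

    results = []
    for idx, criterion in enumerate(unit.get("acceptance_criteria", []), start=1):
        # Assumption: criteria are numbered AC-001, AC-002, ... in document order.
        covered = f"AC-{idx:03d}" not in uncovered
        results.append({"criterion": criterion, "status": "PASS" if covered else "FAIL"})

    review = state["agent_outputs"].get("review", {})
    passed = (all(r["status"] == "PASS" for r in results)
              and review.get("reviewers_passed") == "3/3"
              and testing.get("verdict") == "PASS")
    return {"result": "approved" if passed else "rejected", "criteria_results": results}
```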

Step 7.1: Execution Unit Approval (Conditional)

Checkpoint depends on execution_mode: manual_per_subtask → Execute | manual_per_task / automatic → Skip

  1. COMMIT CHECK (before checkpoint):

    • if commit_timing == "per_subtask":
      • Execute /commit command with message: feat({unit_id}): {unit_title}
      • Include all changed files from this subtask
    • else: Skip commit (will happen at task or cycle end)
  2. Set status = "paused_for_approval", save state

  3. Present summary: Unit ID, Parent Task, Gates 0-5 status, Criteria X/X, Duration, Files Changed, Commit Status

  4. AskUserQuestion: "Ready to proceed?" Options: (a) Continue (b) Test First (c) Stop Here

  5. Handle response:

| Response | Action |
|----------|--------|
| Continue | Set in_progress, move to next unit (or Step 7.2 if last) |
| Test First | Set paused_for_testing, STOP, output resume command |
| Stop Here | Set paused, STOP, output resume command |
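
A sketch of the commit-timing decision shared by the Step 7.1, 7.2, and Step 8 checkpoints. The `/commit` command is the project's own tooling, so this only shows how the message might be assembled; the exact state field locations are assumptions.

```python
def commit_message(commit_timing: str, checkpoint: str, state: dict) -> str | None:
    """Return the commit message for this checkpoint, or None to skip committing."""
    unit = state["current_unit"]
    task = state["tasks"][state["current_task_index"]]

    if commit_timing == "per_subtask" and checkpoint == "subtask":
        return f"feat({unit['id']}): {unit['title']}"
    if commit_timing == "per_task" and checkpoint == "task":
        return f"feat({task['id']}): {task['title']}"
    if commit_timing == "at_end" and checkpoint == "cycle":
        return f"feat({state['cycle_id']}): complete dev cycle for {state['feature_name']}"
    return None  # commit happens at a later checkpoint
```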

Step 7.2: Task Approval Checkpoint (Conditional)

Checkpoint depends on execution_mode: manual_per_subtask / manual_per_task → Execute | automatic → Skip

  1. COMMIT CHECK (before task checkpoint):

    • if commit_timing == "per_task":
      • Execute /commit command with message: feat({task_id}): {task_title}
      • Include all changed files from this task (all subtasks combined)
    • else if commit_timing == "per_subtask": Already committed per subtask
    • else: Skip commit (will happen at cycle end)
  2. Set task status = "completed", cycle status = "paused_for_task_approval", save state

  3. Present summary: Task ID, Subtasks X/X, Total Duration, Review Iterations, Files Changed, Commit Status

  4. AskUserQuestion: "Task complete. Ready for next?" Options: (a) Continue (b) Integration Test (c) Stop Here

  5. Handle response (see the detailed task-completion flow below):

After completing all subtasks of a task:

0. Check execution_mode from state:
   - If "automatic": Still run feedback, then skip to next task
   - If "manual_per_subtask" or "manual_per_task": Continue with checkpoint below

1. Set task status = "completed"

2. **⛔ MANDATORY: Run dev-feedback-loop skill**

   ```yaml
   Skill tool:
     skill: "dev-feedback-loop"

Note: dev-feedback-loop manages its own TodoWrite tracking internally.

The skill will:

  • Add its own todo item for tracking
  • Calculate assertiveness score for the task
  • Dispatch prompt-quality-reviewer agent with agent_outputs from state
  • Generate improvement suggestions
  • Write feedback to docs/feedbacks/cycle-{date}/{agent}.md
  • Mark its todo as completed

After feedback-loop completes, update state:

  • Set tasks[current].feedback_loop_completed = true in state file

Anti-Rationalization for Feedback Loop:

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Task was simple, skip feedback" | Simple tasks still contribute to patterns | Execute Skill tool |
| "Already at 100% score" | High scores need tracking for replication | Execute Skill tool |
| "User approved, feedback unnecessary" | Approval ≠ process quality metrics | Execute Skill tool |
| "No issues found, nothing to report" | Absence of issues IS data | Execute Skill tool |
| "Time pressure, skip metrics" | Metrics take <2 min, prevent future issues | Execute Skill tool |

⛔ HARD GATE: You CANNOT proceed to step 3 without executing the Skill tool above.

  3. Set cycle status = "paused_for_task_approval"

  4. Save state

  5. Present task completion summary (with feedback metrics):

    ┌─────────────────────────────────────────────────┐
    │ ✓ TASK COMPLETED                                │
    ├─────────────────────────────────────────────────┤
    │ Task: [task_id] - [task_title]                  │
    │                                                 │
    │ Subtasks Completed: X/X                         │
    │   ✓ ST-001-01: [title]                          │
    │   ✓ ST-001-02: [title]                          │
    │   ✓ ST-001-03: [title]                          │
    │                                                 │
    │ Total Duration: Xh Xm                           │
    │ Total Review Iterations: N                      │
    │                                                 │
    │ ═══════════════════════════════════════════════ │
    │ FEEDBACK METRICS                                │
    │ ═══════════════════════════════════════════════ │
    │                                                 │
    │ Assertiveness Score: XX% (Rating)               │
    │                                                 │
    │ Prompt Quality by Agent:                        │
    │   backend-engineer-golang: 90% (Excellent)      │
    │   qa-analyst: 75% (Acceptable)                  │
    │   code-reviewer: 88% (Good)                     │
    │                                                 │
    │ Improvements Suggested: N                       │
    │ Feedback Location:                              │
    │   docs/feedbacks/cycle-YYYY-MM-DD/              │
    │                                                 │
    │ ═══════════════════════════════════════════════ │
    │                                                 │
    │ All Files Changed This Task:                    │
    │   - file1.go                                    │
    │   - file2.go                                    │
    │   - ...                                         │
    │                                                 │
    │ Next Task: [next_task_id] - [next_task_title]   │
    │   Subtasks: N (or "TDD autonomous")             │
    │   or "No more tasks - cycle complete"           │
    └─────────────────────────────────────────────────┘

  6. ASK FOR EXPLICIT APPROVAL using AskUserQuestion tool:

    Question: "Task [task_id] complete. Ready to start the next task?" Options: a) "Continue" - Proceed to next task b) "Integration Test" - User wants to test the full task integration c) "Stop Here" - Pause cycle

  7. Handle user response:

    If "Continue":

    • Set status = "in_progress"
    • Move to next task
    • Set current_task_index += 1
    • Set current_subtask_index = 0
    • Reset to Gate 0
    • Continue execution

    If "Integration Test":

    • Set status = "paused_for_integration_testing"
    • Save state
    • Output: "Cycle paused for integration testing. Test task [task_id] integration and run: /dev-cycle --resume when ready to continue."
    • STOP execution

    If "Stop Here":

    • Set status = "paused"
    • Save state
    • Output: "Cycle paused after task [task_id]. Resume with: /dev-cycle --resume"
    • STOP execution

**Note:** Tasks without subtasks execute both 7.1 and 7.2 in sequence.

## Step 8: Cycle Completion

0. **FINAL COMMIT CHECK (before completion):**
   - if `commit_timing == "at_end"`:
     - Execute `/commit` command with message: `feat({cycle_id}): complete dev cycle for {feature_name}`
     - Include all changed files from the entire cycle
   - else: Already committed per subtask or per task

1. **Calculate metrics:** total_duration_ms, average gate durations, review iterations, pass/fail ratio
2. **Update state:** `status = "completed"`, `completed_at = timestamp`
3. **Generate report:** Task | Subtasks | Duration | Review Iterations | Status | Commit Status

4. **⛔ MANDATORY: Run dev-feedback-loop skill for cycle metrics**

   ```yaml
   Skill tool:
     skill: "dev-feedback-loop"

Note: dev-feedback-loop manages its own TodoWrite tracking internally.

After feedback-loop completes, update state:

  • Set feedback_loop_completed = true at cycle level in state file

⛔ HARD GATE: Cycle incomplete until feedback-loop executes.

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Cycle done, feedback is extra" | Feedback IS part of cycle completion | Execute Skill tool |
| "Will run feedback next session" | Next session = never. Run NOW. | Execute Skill tool |
| "All tasks passed, no insights" | Pass patterns need documentation too | Execute Skill tool |
5. **Report:** "Cycle completed. Tasks X/X, Subtasks Y, Time Xh Xm, Review iterations X"
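
A sketch of the metric calculation from step 1, under the simplifying assumption that each gate entry in `agent_outputs` carries the `duration_ms` and `iterations` fields shown earlier in this document (a real cycle would aggregate these per task).

```python
from statistics import mean

def cycle_metrics(state: dict) -> dict:
    """Aggregate duration, review iterations, and pass ratio for the final report."""
    outputs = state.get("agent_outputs", {})
    durations = [o["duration_ms"] for o in outputs.values() if "duration_ms" in o]

    tasks = state.get("tasks", [])
    completed = sum(t.get("status") == "completed" for t in tasks)

    return {
        "total_duration_ms": sum(durations),
        "average_gate_duration_ms": mean(durations) if durations else 0,
        "review_iterations": outputs.get("review", {}).get("iterations", 0),
        "pass_fail_ratio": f"{completed}/{len(tasks)}" if tasks else "0/0",
    }
```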

Quick Commands

# Full PM workflow then dev execution
/pre-dev-full my-feature
/dev-cycle docs/pre-dev/my-feature/

# Simple PM workflow then dev execution
/pre-dev-feature my-feature
/dev-cycle docs/pre-dev/my-feature/tasks.md

# Manual task file
/dev-cycle docs/tasks/sprint-001.md

# Resume interrupted cycle
/dev-cycle --resume

Error Recovery

| Type | Condition | Action |
|------|-----------|--------|
| Recoverable | Network timeout | Retry with exponential backoff |
| Recoverable | Agent failure | Retry once, then pause for user |
| Recoverable | Test flakiness | Re-run tests up to 2 times |
| Non-Recoverable | Missing required files | Stop and report |
| Non-Recoverable | Invalid state file | Must restart (cannot resume) |
| Non-Recoverable | Max review iterations | Pause for user |

On any error: Update state → Set status (failed/paused) → Save immediately → Report (what failed, why, how to recover, resume command)
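
A sketch of the recoverable cases from the table: exponential backoff for network timeouts and a bounded re-run for flaky tests. The retry counts mirror the table; everything else is illustrative.

```python
import time

def retry_with_backoff(operation, max_attempts: int = 3, base_delay: float = 1.0):
    """Retry a recoverable operation, doubling the delay after each failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TimeoutError:
            if attempt == max_attempts:
                raise  # escalate: update state, set status, report to user
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...

def rerun_flaky_tests(run_tests, max_reruns: int = 2) -> bool:
    """Re-run tests up to 2 times before treating the failure as real."""
    for _ in range(1 + max_reruns):
        if run_tests():
            return True
    return False
```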

Execution Report

Base metrics per shared-patterns/output-execution-report.md.

| Metric | Value |
|--------|-------|
| Duration | Xh Xm Ys |
| Tasks Processed | N/M |
| Current Gate | Gate X - [name] |
| Review Iterations | N |
| Result | PASS/FAIL/IN_PROGRESS |

Gate Timings

| Gate | Duration | Status |
|------|----------|--------|
| Implementation | Xm Ys | in_progress |
| DevOps | - | pending |
| SRE | - | pending |
| Testing | - | pending |
| Review | - | pending |
| Validation | - | pending |

State File Location

docs/dev-cycle/current-cycle.json (feature) or docs/dev-refactor/current-cycle.json (refactor)