
# ops-incident-response

Structured workflow for production incident management following SRE best practices. Covers incident declaration, triage, coordination, resolution, and post-mortem.

## Installation

Run this command in your terminal to install the skill:

```bash
git clone https://github.com/LerianStudio/ring /tmp/ring && \
  cp -r /tmp/ring/ops-team/skills/ops-incident-response ~/.claude/skills/ring
```


```yaml
---
name: ops-incident-response
description: |
  Structured workflow for production incident management following SRE best
  practices. Covers incident declaration, triage, coordination, resolution,
  and post-mortem.

trigger: |
  - Production outage or degradation
  - Customer-impacting issues
  - Security incidents
  - SLA breach risk

skip_when: |
  - Development environment issues -> standard debugging
  - Non-production alerts -> normal ticket workflow
  - Planned maintenance -> change management

related:
  similar: [systematic-debugging]
  uses: [incident-responder]
---
```

## Incident Response Workflow

This skill defines the structured process for handling production incidents. It MUST be followed for all SEV1, SEV2, and SEV3 incidents.

See shared-patterns/incident-severity.md for severity definitions.


## Incident Response Phases

| Phase | Focus | Owner |
|-------|-------|-------|
| 1. Detection | Identify and confirm incident | Monitoring/On-call |
| 2. Declaration | Assess severity, declare incident | Incident Commander |
| 3. Triage | Identify impact and initial hypothesis | Response Team |
| 4. Mitigation | Restore service, implement workaround | Engineering Team |
| 5. Resolution | Permanent fix, verification | Engineering Team |
| 6. Post-Incident | RCA, action items, documentation | Incident Commander |

## Phase 1: Detection

**Trigger:** Alert fires or a user report is received.

### Required Actions

1. **Acknowledge the alert** within SLA (see severity matrix)
2. **Initial assessment** (see the sketch below):
   - What is the symptom?
   - What is affected?
   - When did it start?
3. **Check for related alerts** - Is this isolated or part of a larger issue?
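
A minimal sketch of capturing that assessment as structured data rather than free-form notes; the `InitialAssessment` record, its field names, and the sample values are invented for this example:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InitialAssessment:
    """Answers to the three Phase 1 assessment questions (illustrative)."""
    symptom: str                  # What is the symptom?
    affected: str                 # What is affected?
    started_at: datetime          # When did it start?
    related_alerts: list[str] = field(default_factory=list)

assessment = InitialAssessment(
    symptom="Checkout API p99 latency above 5s",
    affected="checkout service, EU region",
    started_at=datetime(2024, 5, 1, 14, 32, tzinfo=timezone.utc),
)
```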

### Detection Checklist

- [ ] Alert acknowledged in monitoring system
- [ ] Initial symptom documented
- [ ] Related alerts checked
- [ ] Recent deployments checked
- [ ] Known issue list checked

## Phase 2: Declaration

**Owner:** The first responder declares the incident and assigns severity.

### Severity Assignment

| Criteria | SEV1 | SEV2 | SEV3 |
|----------|:----:|:----:|:----:|
| Complete outage | X | | |
| Data loss risk | X | | |
| >50% users affected | | X | |
| <50% users affected | | | X |
| Workaround available | | | X |

See shared-patterns/incident-severity.md for complete definitions.
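
Assuming the severity matrix reads as reconstructed above, the assignment can be expressed as a short decision function. This is a hedged sketch, not the skill's canonical logic; the function and parameter names are invented:

```python
def assign_severity(
    complete_outage: bool,
    data_loss_risk: bool,
    pct_users_affected: float,
    workaround_available: bool,
) -> str:
    """Map the severity matrix to a SEV level (illustrative only)."""
    if complete_outage or data_loss_risk:
        return "SEV1"
    if pct_users_affected > 50:
        return "SEV2"
    # The matrix lists "workaround available" and "<50% users affected"
    # as SEV3 indicators; both fall through to the default here.
    return "SEV3"

assert assign_severity(False, False, 70.0, workaround_available=False) == "SEV2"
```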

### Declaration Actions

1. **Create incident channel** (if SEV1/SEV2):
   - Format: `#incident-YYYY-MM-DD-brief-description` (see the naming sketch below)
   - Post an initial summary
2. **Assign Incident Commander (IC):**
   - SEV1: Senior on-call, or escalate to manager
   - SEV2/SEV3: Primary on-call
3. **Update status page** (if customer-facing):
   - Acknowledge the incident
   - Set the appropriate severity
   - Give an estimated time for the next update
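
A minimal sketch of generating a channel name in that format; the slug rule (lowercase, alphanumerics, dash-separated) is an assumption for this example:

```python
import re
from datetime import date

def incident_channel_name(title: str, on: date) -> str:
    """Build a channel name like #incident-YYYY-MM-DD-brief-description."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"#incident-{on.isoformat()}-{slug}"

print(incident_channel_name("Checkout API latency", date(2024, 5, 1)))
# -> #incident-2024-05-01-checkout-api-latency
```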

### Declaration Template

```markdown
**INCIDENT DECLARED**

**Severity:** SEV[1/2/3]
**Title:** [Brief description]
**Incident Commander:** @[name]
**Channel:** #incident-[date]-[slug]

**Impact:**
- Services affected: [list]
- Users affected: [count/percentage]
- Started: [timestamp UTC]

**Current Status:**
[Brief description of current state]

**Next Update:** [timestamp]
```
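
If declarations are posted by tooling rather than typed by hand, the template can be filled programmatically. A minimal sketch using Python's standard `string.Template`; the placeholder names are invented for this example:

```python
from string import Template

DECLARATION = Template(
    "**INCIDENT DECLARED**\n\n"
    "**Severity:** $severity\n"
    "**Title:** $title\n"
    "**Incident Commander:** @$ic\n"
    "**Channel:** $channel\n"
)

print(DECLARATION.substitute(
    severity="SEV2",
    title="Checkout API latency",
    ic="alice",
    channel="#incident-2024-05-01-checkout-api-latency",
))
```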

## Phase 3: Triage

**Owner:** The Incident Commander coordinates; engineering investigates.

### Triage Questions (5 Whys Approach)

  1. What is the exact symptom?
  2. What changed recently? (deployments, config, traffic)
  3. What is the blast radius?
  4. What is the root cause hypothesis?
  5. What is the quickest path to mitigation?

### Triage Checklist

- [ ] Service dependencies mapped
- [ ] Recent changes identified
- [ ] Error patterns analyzed
- [ ] Resource utilization checked
- [ ] Initial hypothesis formed

### Communication During Triage

Update frequency by severity:

| Severity | Internal Update | External Update |
|----------|-----------------|-----------------|
| SEV1 | Every 10 min | Every 15 min |
| SEV2 | Every 15 min | Every 30 min |
| SEV3 | Every 30 min | As needed |
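
A small sketch turning that cadence into concrete reminder times; the interval values mirror the table, while the dictionary and function names are invented:

```python
from datetime import datetime, timedelta, timezone

# Update intervals in minutes, taken from the cadence table above.
UPDATE_CADENCE = {
    "SEV1": {"internal": 10, "external": 15},
    "SEV2": {"internal": 15, "external": 30},
    "SEV3": {"internal": 30, "external": None},  # external: as needed
}

def next_update(severity: str, audience: str, last_update: datetime):
    """Return when the next update is due, or None for 'as needed'."""
    minutes = UPDATE_CADENCE[severity][audience]
    return last_update + timedelta(minutes=minutes) if minutes else None

now = datetime.now(timezone.utc)
print(next_update("SEV1", "internal", now))  # due 10 minutes from now
```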

## Phase 4: Mitigation

**Owner:** Engineering implements the fix; the IC coordinates.

### Mitigation Options (in order of preference)

1. **Rollback** - If a recent deployment caused the issue
2. **Scale** - If capacity related
3. **Restart** - If state corruption
4. **Failover** - If a regional/AZ issue
5. **Feature disable** - If a specific feature causes the issue
6. **Hotfix** - If rollback is not possible
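
The preference order can be made executable as a first-match lookup; the cause labels below are an assumed taxonomy for illustration only:

```python
# Preference-ordered mitigation options, keyed by suspected cause.
MITIGATION_PREFERENCE = [
    ("recent_deployment", "rollback"),
    ("capacity", "scale"),
    ("state_corruption", "restart"),
    ("regional_failure", "failover"),
    ("feature_specific", "feature disable"),
]

def select_mitigation(suspected_causes: set) -> str:
    """Pick the most preferred option matching a suspected cause."""
    for cause, action in MITIGATION_PREFERENCE:
        if cause in suspected_causes:
            return action
    return "hotfix"  # last resort when rollback is not possible

print(select_mitigation({"capacity"}))  # -> scale
```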

### Mitigation Checklist

- [ ] Mitigation option selected, with rationale
- [ ] Change approved (SEV1: skip formal approval, document later)
- [ ] Implementation tracked in the incident channel
- [ ] Verification criteria defined
- [ ] Rollback plan ready

### Mitigation Template

```markdown
**MITIGATION IN PROGRESS**

**Action:** [description]
**Owner:** @[name]
**Started:** [timestamp]

**Verification:**
- [ ] [criterion 1]
- [ ] [criterion 2]

**Rollback Plan:**
[If mitigation fails, do X]
```

## Phase 5: Resolution

**Owner:** Engineering confirms the fix; the IC verifies resolution.

### Resolution Criteria

ALL of the following must be true before marking the incident resolved:

1. **Primary symptom resolved** - Users are no longer affected
2. **Monitoring confirms** - Metrics have returned to baseline
3. **No related alerts** - All triggered alerts have cleared
4. **Verification period passed** - 15 minutes of stability for SEV1/SEV2
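
A hedged sketch of that gate; the boolean inputs and the 15-minute window mirror the criteria above, and the function name is invented:

```python
from datetime import datetime, timedelta, timezone

VERIFICATION_WINDOW = timedelta(minutes=15)  # SEV1/SEV2 stability period

def may_mark_resolved(
    symptom_resolved: bool,
    metrics_at_baseline: bool,
    alerts_cleared: bool,
    stable_since: datetime,
) -> bool:
    """ALL four resolution criteria must hold before marking resolved."""
    verification_passed = (
        datetime.now(timezone.utc) - stable_since >= VERIFICATION_WINDOW
    )
    return all([symptom_resolved, metrics_at_baseline,
                alerts_cleared, verification_passed])
```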

### Resolution Checklist

- [ ] Primary symptom verified resolved
- [ ] Metrics returned to normal
- [ ] All related alerts resolved
- [ ] Verification period completed
- [ ] Customer communication sent (if applicable)
- [ ] Status page updated to resolved

### Resolution Template

```markdown
**INCIDENT RESOLVED**

**Duration:** [X hours Y minutes]
**Resolution Time:** [timestamp UTC]

**Root Cause:**
[Brief description of what caused the incident]

**Fix Applied:**
[What was done to resolve]

**Next Steps:**
- [ ] RCA scheduled for [date]
- [ ] Action items tracked in [location]

**Retrospective:** [date/time]
```

## Phase 6: Post-Incident

**Owner:** The Incident Commander schedules the RCA and tracks action items.

### RCA Requirements

| Severity | RCA Required | Timeline |
|----------|--------------|----------|
| SEV1 | MANDATORY | 48 hours |
| SEV2 | MANDATORY | 1 week |
| SEV3 | Optional | 2 weeks |
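
A small sketch computing the RCA deadline from that table; the timedelta values come from the table, everything else is illustrative:

```python
from datetime import datetime, timedelta

# RCA timelines from the requirements table above.
RCA_DEADLINES = {
    "SEV1": timedelta(hours=48),
    "SEV2": timedelta(weeks=1),
    "SEV3": timedelta(weeks=2),  # the RCA itself is optional for SEV3
}

def rca_due(severity: str, resolved_at: datetime) -> datetime:
    """Deadline for the RCA, measured from incident resolution."""
    return resolved_at + RCA_DEADLINES[severity]

print(rca_due("SEV1", datetime(2024, 5, 1, 16, 0)))  # 2024-05-03 16:00:00
```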

### RCA Template

```markdown
# Incident Post-Mortem: [Title]

**Incident ID:** INC-YYYY-NNNN
**Date:** YYYY-MM-DD
**Duration:** X hours Y minutes
**Severity:** SEV[1/2/3]
**Author:** @[incident commander]

## Summary
[2-3 sentence summary of what happened]

## Impact
- **Users Affected:** [count/percentage]
- **Revenue Impact:** [if applicable]
- **SLA Impact:** [if applicable]

## Timeline
| Time (UTC) | Event |
|------------|-------|
| HH:MM | [event] |

## Root Cause
[Technical description of the root cause]

## Contributing Factors
1. [Factor 1]
2. [Factor 2]

## What Went Well
1. [Item 1]
2. [Item 2]

## What Could Be Improved
1. [Item 1]
2. [Item 2]

## Action Items
| Item | Owner | Due Date | Status |
|------|-------|----------|--------|
| [action] | @[name] | YYYY-MM-DD | Open |

## Lessons Learned
[Key takeaways for the team]

```

### Post-Incident Checklist

- [ ] RCA document created
- [ ] Blameless retrospective held
- [ ] Action items assigned and tracked
- [ ] Runbook updated (if applicable)
- [ ] Monitoring improved (if gaps found)
- [ ] Incident documented in knowledge base

## Anti-Rationalization Table

| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Document later, fix first" | Memory fades in hours | Document AS you fix |
| "Small incident, skip RCA" | Small incidents reveal systemic issues | RCA for SEV1/SEV2 minimum |
| "Root cause is obvious" | Obvious != correct | Investigate with data |
| "Skip verification period" | Premature resolution = reopen | Wait the full verification period |

## Pressure Resistance

| User Says | Your Response |
|-----------|---------------|
| "Mark resolved now, verify later" | "Cannot mark resolved until verification is complete. This prevents reopened incidents." |
| "Skip the RCA, we know what happened" | "RCA is mandatory for this severity. Schedule within the required timeline." |
| "No time for documentation" | "Real-time documentation takes 30 seconds per event. Memory loss causes worse rework." |

## Dispatch Specialist

For complex incidents, dispatch the `incident-responder` agent:

```yaml
Task tool:
  subagent_type: "incident-responder"
  model: "opus"
  prompt: |
    INCIDENT: [description]
    SEVERITY: SEV[X]
    CURRENT STATUS: [state]
    REQUEST: [specific assistance needed]
```