infra-tester
Test infrastructure configurations and deployments: security scanning with Checkov/tfsec, cost estimation analysis, pre-deployment validation, post-deployment verification, and integration testing. Generates comprehensive test reports with pass/fail status, identifies vulnerabilities and compliance issues, and tracks test history for trend analysis.
Install

$ git clone https://github.com/fractary/claude-plugins /tmp/claude-plugins && cp -r /tmp/claude-plugins/plugins/faber-cloud/skills/infra-tester ~/.claude/skills/

Tip: run this command in your terminal to install the skill.
---
name: infra-tester
model: claude-haiku-4-5
description: |
  Test infrastructure configurations and deployments: security scanning with
  Checkov/tfsec, cost estimation analysis, pre-deployment validation,
  post-deployment verification, and integration testing. Generates
  comprehensive test reports with pass/fail status, identifies vulnerabilities
  and compliance issues, and tracks test history for trend analysis.
tools: Bash, Read, Write, Edit
---
Infrastructure Testing Skill
<CRITICAL_RULES>
IMPORTANT: Testing and validation rules
- Always run security scans before allowing deployment
- Generate cost estimates to prevent budget surprises
- Perform post-deployment verification to ensure resources are healthy
- Document all test results with timestamps
- Fail fast on critical security issues
- Never skip tests for the production environment
</CRITICAL_RULES>
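The last two rules are hard gates rather than advice. A minimal sketch of how the production rule could be enforced in a wrapper script follows; SKIP_TESTS and ENVIRONMENT are illustrative names, not part of this skill's spec:

```bash
#!/usr/bin/env bash
# Hypothetical guard for the "never skip tests for production" rule.
# SKIP_TESTS and ENVIRONMENT are illustrative names, not defined by this skill.
if [ "${SKIP_TESTS:-false}" = "true" ] && [ "${ENVIRONMENT:-}" = "production" ]; then
  echo "Refusing to skip tests for the production environment" >&2
  exit 1
fi
```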
EXECUTE STEPS:
Step 1: Load Configuration
- Read: .fractary/plugins/faber-cloud/devops.json
- Extract: environment settings, resource patterns, cost thresholds
- Output: "✓ Configuration loaded"
Step 2: Determine Test Phase
- If phase == "pre-deployment":
- Read: workflow/pre-deployment-tests.md
- Execute: Security scanning, cost estimation
- If phase == "post-deployment":
- Read: workflow/post-deployment-tests.md
- Execute: Resource verification, integration tests
- Output: "✓ Test phase determined: ${phase}"
Step 3: Execute Tests
- Run tests based on phase
- Collect results for each test
- Track pass/fail status
- Output: "✓ Tests executed: ${test_count} tests"
Step 4: Analyze Results
- Read: workflow/analyze-results.md
- Categorize findings: critical/high/medium/low
- Check against thresholds
- Determine overall pass/fail
- Output: "✓ Results analyzed: ${status}"
Step 5: Generate Report
- Create test report with findings
- Include recommendations
- Save to: .fractary/plugins/faber-cloud/test-reports/${environment}/${timestamp}-${phase}.json
- Generate human-readable summary
- Output: "✓ Report generated: ${report_path}"
Step 6: Document Results
- Update test history log
- Execute: ../devops-common/scripts/update-test-history.sh
- Output: "✓ Test history updated"
OUTPUT COMPLETION MESSAGE:
✅ COMPLETED: Infrastructure Testing
Status: ${overall_status}
Tests Run: ${test_count}
Passed: ${passed_count}
Failed: ${failed_count}
Critical Issues: ${critical_count}
Report: ${report_path}
───────────────────────────────────────
Next: Review report before proceeding with ${next_action}
IF FAILURE:
❌ FAILED: Infrastructure Testing
Phase: ${phase}
Tests Failed: ${failed_tests}
Critical Issues: ${critical_issues}
───────────────────────────────────────
Resolution: Address issues before proceeding to deployment
<COMPLETION_CRITERIA>
This skill is complete and successful when ALL of the following are verified:
✅ 1. Tests Executed
- All required tests run successfully
- No test execution errors
- Results collected for all tests
✅ 2. Results Analyzed
- Findings categorized by severity
- Overall status determined (pass/fail)
- Threshold checks completed
✅ 3. Report Generated
- Test report created in JSON format
- Human-readable summary generated
- All findings documented
✅ 4. History Updated
- Test results logged with timestamp
- Test history file updated
- Trend data available
FAILURE CONDITIONS - Stop and report if:
❌ Critical security vulnerabilities found (return findings to manager)
❌ Test execution errors (return error details)
❌ Cost exceeds configured threshold (return cost analysis)
PARTIAL COMPLETION - Not acceptable:
⚠️ Some tests skipped → Return to Step 3
⚠️ Report not generated → Return to Step 5
</COMPLETION_CRITERIA>
Test Report
- Location: .fractary/plugins/faber-cloud/test-reports/${environment}/${timestamp}-${phase}.json
- Format: JSON with findings array
- Contains: Test results, findings, recommendations, overall status
Test Summary
- Overall status: PASS/FAIL
- Test counts: total, passed, failed
- Critical issues: count and descriptions
- Cost estimate (if pre-deployment)
Return to agent:
{
"status": "PASS|FAIL",
"phase": "${phase}",
"environment": "${environment}",
"tests_run": ${test_count},
"tests_passed": ${passed_count},
"tests_failed": ${failed_count},
"critical_issues": ${critical_count},
"cost_estimate": "${cost}" (pre-deployment only),
"report_path": "${report_path}",
"recommendations": ["..."]
}
Update:
- Test history log with results
- Test report registry
- Trend analysis data
<ERROR_HANDLING>
<TEST_EXECUTION_ERROR>
Pattern: Test tool fails to execute
Action:
1. Log error details
2. Check tool installation
3. Return error to manager
Delegate: None (inform manager)
</TEST_EXECUTION_ERROR>

<CRITICAL_SECURITY_ISSUE>
Pattern: Critical or high severity security finding
Action:
1. Mark test as FAILED
2. Document findings
3. Return to manager with findings
Delegate: None (block deployment)
</CRITICAL_SECURITY_ISSUE>

<COST_THRESHOLD_EXCEEDED>
Pattern: Estimated cost exceeds configured threshold
Action:
1. Mark test as FAILED
2. Document cost analysis
3. Return to manager with cost breakdown
Delegate: None (request user approval)
</COST_THRESHOLD_EXCEEDED>
</ERROR_HANDLING>
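The cost gate deserves a concrete sketch. This document names no cost-estimation tool (Checkov and tfsec do not estimate spend), so the use of Infracost below is an assumption; its JSON output exposes totalMonthlyCost as a string, hence the numeric coercion in awk:

```bash
# Cost-threshold sketch: Infracost is an assumed tool choice, not named
# in this skill. COST_THRESHOLD would come from devops.json (Step 1).
infracost breakdown --path ./terraform --format json > /tmp/cost.json

MONTHLY=$(jq -r '.totalMonthlyCost' /tmp/cost.json)
THRESHOLD="${COST_THRESHOLD:?monthly USD threshold from devops.json}"
if awk -v c="$MONTHLY" -v t="$THRESHOLD" 'BEGIN { exit !(c+0 > t+0) }'; then
  echo "❌ Estimated \$${MONTHLY}/mo exceeds threshold \$${THRESHOLD}" >&2
  exit 1
fi
echo "✓ Estimated cost: \$${MONTHLY}/mo (threshold: \$${THRESHOLD})"
```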