
agentic-layer-assessment

Assess agentic layer maturity using the 12-grade classification system (Classes 1-3). Use when evaluating codebase readiness, identifying next upgrade steps, or tracking progress toward the Codebase Singularity.

allowed-tools: Read, Grep, Glob

$ Install

git clone https://github.com/melodic-software/claude-code-plugins /tmp/claude-code-plugins && cp -r /tmp/claude-code-plugins/plugins/tac/skills/agentic-layer-assessment ~/.claude/skills/claude-code-plugins

// tip: Run this command in your terminal to install the skill
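
// tip: To confirm the copy landed, list the skills directory (the destination is the one used by the install command above; adjust the path if you installed the skill elsewhere)

$ ls ~/.claude/skills/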


name: agentic-layer-assessment
description: Assess agentic layer maturity using the 12-grade classification system (Classes 1-3). Use when evaluating codebase readiness, identifying next upgrade steps, or tracking progress toward the Codebase Singularity.
allowed-tools: Read, Grep, Glob

Agentic Layer Assessment

Assess agentic layer maturity using the complete 12-grade classification system from TAC Lesson 14.

When to Use

  • Evaluating current agentic layer maturity
  • Identifying the next grade to achieve
  • Tracking progress toward Codebase Singularity
  • Onboarding new team members to agentic patterns
  • Planning agentic infrastructure investments

Prerequisites

  • Access to the codebase's .claude/ directory
  • Understanding of @adw-framework.md classification system

The Classification System

Three classes with 12 total grades:

Class 1: Foundation (In-Loop Agentic Coding)

| Grade | Component | Indicator |
| --- | --- | --- |
| 1 | Memory Files | CLAUDE.md exists with guidance |
| 2 | Sub-Agents | Task agents used for parallelization |
| 3 | Skills/MCPs | Custom skills or MCP integrations |
| 4 | Closed-Loops | Self-validating prompts |
| 5 | Templates | Bug/feature/chore classification |
| 6 | Prompt Chains | Multi-step composite workflows |
| 7 | Agent Experts | Expertise files with self-improve |

Class 2: External Integration (Out-Loop Agentic Coding)

| Grade | Component | Indicator |
| --- | --- | --- |
| 1 | Webhooks | External triggers (PITER framework) |
| 2 | ADWs | AI Developer Workflows running |

Class 3: Production Orchestration (Orchestrated Agentic Coding)

| Grade | Component | Indicator |
| --- | --- | --- |
| 1 | Orchestrator | Meta-agent managing fleet |
| 2 | Orchestrator Workflows | Human-orchestrator interaction |
| 3 | ADWs + Orchestrator | Full autonomous execution |

Assessment Process

Step 1: Scan Codebase

Check for indicators of each grade:

# Grade 1: Memory files
ls .claude/ CLAUDE.md

# Grade 2: Sub-agents
ls .claude/agents/

# Grade 3: Skills
ls .claude/skills/ || ls -d */skills/ 2>/dev/null

# Grade 4: Closed-loop patterns
grep -r "validation" .claude/commands/
grep -r "retry" .claude/commands/

# Grade 5: Templates
ls .claude/commands/ | grep -E "(chore|bug|feature)"

# Grade 6: Prompt chains
grep -r "Step 1" .claude/commands/
grep -r "Then execute" .claude/commands/

# Grade 7: Agent experts
ls .claude/commands/experts/ 2>/dev/null
find . -name "expertise.yaml"

# Grade 8 (Class 2 G1): Webhooks
find . -name "*webhook*" -o -name "*trigger*"

# Grade 9 (Class 2 G2): ADWs
ls adws/ 2>/dev/null

# Grade 10-12 (Class 3): Orchestrator
find . -name "*orchestrator*"

Step 2: Score Each Grade

For each grade, determine status:

| Status | Meaning |
| --- | --- |
| ✅ Complete | Fully implemented and used |
| 🔶 Partial | Some elements present |
| ❌ Missing | Not implemented |

Step 3: Calculate Current Level

Your level = highest consecutive completed grade

Example:

  • Grades 1-4: ✅
  • Grade 5: 🔶
  • Grades 6-7: ❌

Result: Class 1 Grade 4 (solid), targeting Grade 5
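
Because the rule is mechanical, it can be checked with a short loop. A minimal sketch, assuming the twelve statuses are listed in grade order (the statuses below reproduce the example above, not real results):

# Statuses for grades 1-12 in order: ✅ complete, 🔶 partial, ❌ missing
statuses=(✅ ✅ ✅ ✅ 🔶 ❌ ❌ ❌ ❌ ❌ ❌ ❌)

level=0
for s in "${statuses[@]}"; do
  [ "$s" = "✅" ] || break       # stop at the first grade that is not fully complete
  level=$((level + 1))
done
echo "Highest consecutive completed grade: $level"   # prints 4 for this example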

Step 4: Identify Next Step

Recommend specific actions for the next grade:

| Current | Next Step |
| --- | --- |
| Grade 1 | Add Task agents for parallelization |
| Grade 2 | Create custom skills or MCP |
| Grade 3 | Add validation loops to prompts |
| Grade 4 | Implement issue classification templates |
| Grade 5 | Chain prompts into workflows |
| Grade 6 | Build first agent expert |
| Grade 7 | Set up external triggers |
| C2G1 | Implement AI Developer Workflows |
| C2G2 | Build orchestrator agent |
| C3G1 | Add human-orchestrator workflows |
| C3G2 | Connect orchestrator to ADWs |

Output Format

## Agentic Layer Assessment Report

**Codebase:** [project name]
**Date:** [assessment date]
**Assessed by:** [model]

### Classification Summary

**Current Level:** Class [1/2/3] Grade [1-7/1-2/1-3]
**Maturity Score:** [X]/12 grades achieved

### Grade-by-Grade Assessment

| Grade | Component | Status | Evidence |
| --- | --- | --- | --- |
| C1G1 | Memory Files | ✅/🔶/❌ | [what was found] |
| C1G2 | Sub-Agents | ✅/🔶/❌ | [what was found] |
...

### Strengths

- [What's working well]

### Gaps

- [What's missing or weak]

### Recommended Next Steps

1. **Priority 1:** [Most impactful improvement]
2. **Priority 2:** [Second priority]
3. **Priority 3:** [Third priority]

### Path to Class 3

[Roadmap of remaining grades to achieve]

Assessment Checklist

  • Scanned .claude/ directory structure
  • Checked for memory files (CLAUDE.md)
  • Searched for agent/skill definitions
  • Analyzed prompt patterns (loops, chains)
  • Looked for templates and classification
  • Checked for expertise files
  • Searched for external triggers
  • Identified ADW presence
  • Assessed orchestrator implementation
  • Calculated maturity score
  • Identified highest consecutive grade
  • Recommended next steps

Key Insight

"Your agentic layer should be specialized to fit and wrap your codebase. Don't focus on reuse, focus on making these prompts great for that one codebase."

Each grade builds on the previous. Skip a grade and the foundation becomes unstable.

Anti-Patterns

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| Skipping grades | Missing foundation | Build progressively |
| Over-engineering early | Complexity before value | Start with Grades 1-2 |
| Generic layers | Don't fit the codebase | Specialize for your project |
| Assessment without action | No improvement | Prioritize the next step |

Cross-References

  • @adw-framework.md - Classification system details
  • @agentic-layer-structure.md - Directory structure
  • @zte-progression.md - Zero-touch engineering path
  • @minimum-viable-agentic skill - Starting point

Version History

  • v1.0.0 (2026-01-01): Initial release (Lesson 14)

Last Updated

Date: 2026-01-01 Model: claude-opus-4-5-20251101