Technical Writing
5624 skills in Documentation > Technical Writing
bloat-detector
Detect codebase bloat through progressive analysis: dead code, duplication, complexity, and documentation bloat. Triggers: bloat detection, dead code, code cleanup, duplication, redundancy, codebase health, technical debt, unused code. Use when: preparing for refactoring, context usage is high, quarterly maintenance, pre-release cleanup. DO NOT use when: actively developing new features, time-sensitive bug fixes. DO NOT use when: codebase is < 1000 lines (insufficient scale for bloat). Progressive 3-tier detection: quick scan → targeted analysis → deep audit.
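The quick-scan tier can be as simple as a single-file pass over the AST. A minimal sketch of that idea, not the skill's actual implementation: it only catches top-level functions that are never referenced inside the same module, so cross-module usage, methods, and dynamic dispatch are out of scope.

```python
import ast
import sys


def unused_functions(path: str) -> list[str]:
    """Return top-level function names never referenced elsewhere in the same file."""
    tree = ast.parse(open(path, encoding="utf-8").read())
    defined = {node.name for node in tree.body if isinstance(node, ast.FunctionDef)}
    referenced = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return sorted(defined - referenced)


if __name__ == "__main__":
    for file in sys.argv[1:]:
        for name in unused_functions(file):
            print(f"{file}: '{name}' is defined but never referenced here")
```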
test-review
Evaluate and upgrade test suites with TDD/BDD rigor, coverage tracking, and quality assessment. Triggers: test audit, test coverage, test quality, TDD, BDD, test gaps, test improvement, coverage analysis, test remediation. Use when: auditing test suites, analyzing coverage gaps, improving test quality, evaluating TDD/BDD compliance. DO NOT use when: writing new tests - use parseltongue:python-testing. DO NOT use when: updating existing tests - use sanctum:test-updates. Use this skill for test suite evaluation and quality assessment.
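Coverage gap analysis usually starts from a machine-readable report. A minimal sketch, assuming coverage.py's JSON report (produced by `coverage json`) and an arbitrary 80% cutoff; the threshold and report path are illustrative, not part of the skill.

```python
import json

THRESHOLD = 80.0  # illustrative cutoff, not a value mandated by the skill

# coverage.py writes coverage.json when you run `coverage json`
with open("coverage.json", encoding="utf-8") as fh:
    report = json.load(fh)

gaps = [
    (path, data["summary"]["percent_covered"])
    for path, data in report["files"].items()
    if data["summary"]["percent_covered"] < THRESHOLD
]

for path, pct in sorted(gaps, key=lambda item: item[1]):
    print(f"{pct:5.1f}%  {path}")
```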
update-readme
Consolidate README content using language-aware exemplars, internal doc linkage, and reproducible evidence. Triggers: README update, documentation refresh, readme structure, exemplar research, language-aware docs, readme modernization, project documentation. Use when: README requires structural refresh, adding features to documentation, aligning readme with exemplar standards, improving project presentation. DO NOT use when: updating inline docs - use doc-updates. DO NOT use when: consolidating ephemeral reports - use doc-consolidation. Run git-workspace-review first to capture repo context.
go-practices
Go conventions for hexagonal architecture, project structure, error handling, testing, and observability. Use when writing Go services.
hooks-eval
Detailed hook evaluation framework for Claude Code and Agent SDK hooks. Triggers: hook audit, hook security, hook performance, hook compliance, SDK hooks, hook evaluation, hook benchmarking, hook vulnerability. Use when: auditing existing hooks for security vulnerabilities, benchmarking hook performance, implementing hooks using Python SDK, understanding hook callback signatures, validating hooks against compliance standards. DO NOT use when: deciding hook placement - use hook-scope-guide instead. DO NOT use when: writing hook rules from scratch - use hookify instead. DO NOT use when: validating plugin structure - use validate-plugin instead. Use this skill BEFORE deploying hooks to production.
doc-updates
Update documentation with writing guideline enforcement, consolidation detection, and accuracy verification. Triggers: documentation update, docs update, ADR, docstrings, writing guidelines, readme update, debloat docs. Use when: updating documentation after code changes, enforcing writing guidelines, maintaining ADRs. DO NOT use when: README-specific updates - use update-readme instead. DO NOT use when: complex multi-file consolidation - use doc-consolidation. Use this skill for general documentation updates with built-in quality gates.
skills-eval
Evaluate and improve Claude skill quality through auditing. Triggers: skill audit, quality review, compliance check, improvement suggestions, token usage analysis, skill evaluation, skill assessment, skill optimization, skill standards, skill metrics, skill performance. Use when: reviewing skill quality, preparing skills for production, auditing existing skills, generating improvement recommendations, checking compliance with standards, analyzing token efficiency, benchmarking skill performance. DO NOT use when: creating new skills from scratch - use modular-skills instead. DO NOT use when: writing prose for humans - use writing-clearly-and-concisely. DO NOT use when: need architectural design patterns - use modular-skills. Use this skill BEFORE shipping any skill to production. Check even if unsure.
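Token-efficiency checks do not need the real tokenizer; a rough characters-per-token heuristic is enough to rank skills for review. A sketch under that assumption: the ~4 characters-per-token ratio, the `skills/*/SKILL.md` layout, and the 2,000-token flag are all approximations, not the skill's defined metric.

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic, not an exact tokenizer

def estimated_tokens(skill_file: Path) -> int:
    return len(skill_file.read_text(encoding="utf-8")) // CHARS_PER_TOKEN

# Assumes skills live under ./skills/<name>/SKILL.md; adjust for your layout.
for skill in sorted(Path("skills").glob("*/SKILL.md")):
    tokens = estimated_tokens(skill)
    flag = "  <-- review" if tokens > 2_000 else ""
    print(f"{tokens:6d} tokens  {skill.parent.name}{flag}")
```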
vhs-recording
Generate terminal recordings using VHS (Charmbracelet) tape files. Executes tape files to produce GIF outputs of terminal sessions. Triggers: terminal recording, vhs tape, terminal demo, cli demo. Use when: creating terminal recordings for tutorials and documentation.
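A tape file is a plain-text script of VHS directives. The sketch below writes a minimal tape and renders it with the `vhs` CLI, which is assumed to be installed; the output name, terminal settings, and typed command are arbitrary examples.

```python
import subprocess
from pathlib import Path

# Minimal VHS tape: output target, terminal settings, then keystrokes.
tape = """\
Output demo.gif
Set FontSize 18
Set Width 1200
Set Height 600
Type "echo 'hello from VHS'"
Enter
Sleep 2s
"""

Path("demo.tape").write_text(tape, encoding="utf-8")
subprocess.run(["vhs", "demo.tape"], check=True)  # renders demo.gif
```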
skill-authoring
Guide to effective Claude Code skill authoring using TDD methodology and persuasion principles. Triggers: skill authoring, skill writing, new skill, TDD skills, skill creation, skill best practices, skill validation, skill deployment, skill compliance. Use when: creating new skills from scratch, improving existing skills with low compliance rates, learning skill authoring best practices, validating skill quality before deployment, understanding what makes skills effective. DO NOT use when: evaluating existing skills - use skills-eval instead. DO NOT use when: analyzing skill architecture - use modular-skills instead. DO NOT use when: writing general documentation for humans. YOU MUST write a failing test before writing any skill. This is the Iron Law.
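Under the failing-test-first rule, the test exists before the skill does. A minimal sketch, assuming the new skill will live at `skills/my-skill/SKILL.md` with YAML frontmatter containing `name` and `description`; the path and required fields are illustrative, not a standard imposed by the skill.

```python
from pathlib import Path

SKILL = Path("skills/my-skill/SKILL.md")  # does not exist yet, so this fails first


def test_skill_file_exists():
    assert SKILL.exists(), "write the skill only after seeing this test fail"


def test_skill_has_frontmatter_fields():
    text = SKILL.read_text(encoding="utf-8")
    header = text.split("---")[1]  # naive frontmatter slice, for illustration only
    assert "name:" in header
    assert "description:" in header
```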
spec-writing
Create clear, testable specifications with user stories and acceptance criteria. Triggers: spec writing, feature specification, requirements, user stories. Use when: creating new specifications or writing acceptance criteria. DO NOT use when: generating implementation tasks - use task-planning.
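Acceptance criteria read best when each one maps onto a single check. A sketch of the Given/When/Then shape expressed as a test; the `Cart` class, its `apply_discount` method, and the discount scenario are invented purely for illustration.

```python
# Hypothetical domain object used only to show the Given/When/Then shape.
class Cart:
    def __init__(self, total: float):
        self.total = total

    def apply_discount(self, percent: float) -> None:
        self.total -= self.total * percent / 100


def test_discount_is_applied_at_checkout():
    # Given a cart totalling 100.00
    cart = Cart(total=100.00)
    # When a 10% discount code is applied
    cart.apply_discount(10)
    # Then the total is reduced to 90.00
    assert cart.total == 90.00
```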
tutorial-updates
Orchestrate tutorial generation from VHS tapes and Playwright specs to dual-tone markdown with GIF recording. Triggers: tutorial update, gif generation, tape recording, update tutorial, regenerate gifs, tutorial manifest. Use when: regenerating tutorial GIFs, updating documentation demos, creating tutorials from tape files. DO NOT use when: only updating text - use doc-updates. DO NOT use when: only capturing browser - use scry:browser-recording directly.
api-review
Evaluate public API surfaces against internal guidelines and external exemplars. Triggers: API review, API design, consistency audit, API documentation, versioning, surface inventory, exemplar research. Use when: reviewing API design, auditing consistency, governing documentation, researching API exemplars. DO NOT use when: architecture review - use architecture-review. DO NOT use when: implementation bugs - use bug-review. Use this skill for API surface evaluation and design review.
python-testing
Python testing with pytest, fixtures, mocking, and TDD workflows. Triggers: pytest, unit tests, test fixtures, mocking, TDD, test suite, coverage, test-driven development, testing patterns, parameterized tests. Use when: writing unit tests, setting up test suites, implementing TDD, configuring pytest, creating fixtures, async testing. DO NOT use when: evaluating test quality - use pensive:test-review instead. DO NOT use when: infrastructure test config - use leyline:pytest-config. Consult this skill for Python testing implementation and patterns.
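Fixtures and parametrization cover most day-to-day pytest usage. A small self-contained sketch; the `parse_port` helper is invented for the example and is not part of any library.

```python
import pytest


def parse_port(value: str) -> int:
    """Tiny example function under test: parse a TCP port from a string."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port


@pytest.fixture
def default_port() -> int:
    return 8080


def test_uses_fixture(default_port):
    assert parse_port(str(default_port)) == 8080


@pytest.mark.parametrize("raw, expected", [("1", 1), ("443", 443), ("65535", 65535)])
def test_valid_ports(raw, expected):
    assert parse_port(raw) == expected


@pytest.mark.parametrize("raw", ["0", "70000", "-1"])
def test_invalid_ports(raw):
    with pytest.raises(ValueError):
        parse_port(raw)
```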
Unnamed Skill
Code quality practices: error handling, validation, logging, and DRY. Use when writing or reviewing code.
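A compact illustration of the practices named above (input validation, explicit error handling, and logging); the `load_config` function and its JSON file format are hypothetical examples, not part of the skill itself.

```python
import json
import logging
from pathlib import Path

logger = logging.getLogger(__name__)


def load_config(path: str) -> dict:
    """Validate input, surface precise errors, and log what actually happened."""
    config_path = Path(path)
    if config_path.suffix != ".json":
        raise ValueError(f"expected a .json file, got: {path}")
    try:
        config = json.loads(config_path.read_text(encoding="utf-8"))
    except FileNotFoundError:
        logger.error("config file missing: %s", path)
        raise
    except json.JSONDecodeError as exc:
        logger.error("config file is not valid JSON: %s (%s)", path, exc)
        raise
    logger.info("loaded %d config keys from %s", len(config), path)
    return config
```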
optimizing-large-skills
Systematic methodology to reduce skill file size through externalization, consolidation, and progressive loading patterns. Triggers: large skill, skill optimization, skill size, 300 lines, inline code, skill refactoring, skill context reduction, skill modularization. Use when: skills exceed 300 lines, multiple code blocks (10+) with similar functionality, heavy inline Python mixed with markdown, functions >20 lines embedded in the skill. DO NOT use when: skill is under 300 lines and well-organized. DO NOT use when: creating new skills - use modular-skills instead. Consult this skill when skills-eval shows "Large skill file" warnings.
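The thresholds above are easy to check mechanically before any refactoring. A sketch that flags oversized skills and heavy inline code, assuming the same `skills/*/SKILL.md` layout as earlier; the limits mirror the triggers listed here rather than a fixed standard.

```python
from pathlib import Path

MAX_LINES = 300        # trigger from this skill's description
MAX_CODE_BLOCKS = 10   # likewise illustrative, not a fixed standard

FENCE = "`" * 3  # markdown code-fence marker

for skill in sorted(Path("skills").glob("*/SKILL.md")):
    text = skill.read_text(encoding="utf-8")
    lines = text.count("\n") + 1
    code_blocks = sum(line.startswith(FENCE) for line in text.splitlines()) // 2
    if lines > MAX_LINES or code_blocks >= MAX_CODE_BLOCKS:
        print(f"{skill.parent.name}: {lines} lines, {code_blocks} fenced code blocks")
```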
modular-skills
Design skills as modular building blocks for predictable token usage. Triggers: skill design, skill architecture, modularization, token optimization, skill structure, refactoring skills, new skill creation, skill complexity. Use when: creating new skills that will be >150 lines, breaking down complex monolithic skills, planning skill architecture, refactoring overlapping skills, reviewing skill maintainability, designing skill module structure. DO NOT use when: evaluating existing skill quality - use skills-eval instead. DO NOT use when: writing prose for humans - use writing-clearly-and-concisely. DO NOT use when: need improvement recommendations - use skills-eval. Use this skill BEFORE creating any new skill. Check even if unsure.
testing-debugging
Ensuring software correctness and reliability by writing automated tests, using quality assurance tools, and systematically debugging issues.
secure-coding
Incorporating security at every step of software development – writing code that defends against vulnerabilities and protects user data.
code-readability
Writing clean, understandable, and self-documenting code that is easy to review and maintain over time.
documentation
Communicating the intended behavior and context of code through clear documentation and comments, and sharing knowledge with the team.