Content & Media
Content creation, media processing, and design skills
18,175 skills in this category
Planning Implementation
Create structured implementation plans before coding. Use when breaking down complex features, refactors, or system changes. Validates requirements, analyzes codebase impact, and produces actionable task breakdowns with identified dependencies and risks.
Vercel CLI Management
Deploy, manage environment variables, view logs, and configure cron jobs with Vercel CLI. Use when deploying to Vercel, managing env vars (add/update/remove), viewing runtime/build logs, or configuring scheduled tasks in vercel.json.
Writing Like User
Emulate the user's personal writing voice and style patterns. Use when the user asks to write content in their voice, draft documents, compose messages, or requests "write this like me" or "in my style."
Writing Slash Commands
Create and use custom slash commands to automate Claude Code workflows. Learn argument passing, frontmatter configuration, bash integration, and file references for building reusable prompts.
Skills Guide
Package expertise into discoverable, reusable capabilities that extend Claude's functionality. Use when creating new Skills, understanding how Skills work, or organizing existing capabilities.
Writing Documentation for LLMs
Teaches how to write effective documentation and instructions for LLMs by assuming competence, using progressive disclosure, prioritizing examples over explanations, and building feedback loops into workflows.
Auditing Security
Identify and remediate vulnerabilities through systematic code analysis. Use when performing security assessments, pre-deployment reviews, compliance validation (OWASP, PCI-DSS, GDPR), investigating known vulnerabilities, or post-incident analysis.
Reviewing Code
Systematically evaluate code changes for security, correctness, performance, and spec alignment. Use when reviewing PRs, assessing code quality, or verifying implementation against requirements.
Skills Guide
Defines the required structure, frontmatter format, and best practices for SKILL.md files. Use BEFORE creating or editing any skill - this is the spec to follow, not an optional reference.
Output Styles Guide
Adapt Claude Code for different use cases beyond software engineering by customizing system prompts for teaching, learning, analysis, or domain-specific workflows.
ctf-rev
Solve CTF reverse engineering challenges using systematic analysis to find flags, keys, or passwords. Use for crackmes, binary bombs, key validators, obfuscated code, algorithm recovery, or any challenge requiring program comprehension to extract hidden information.
binary-triage
Performs initial binary triage by surveying memory layout, strings, imports/exports, and functions to quickly understand what a binary does and identify suspicious behavior. Use when first examining a binary, when user asks to triage/survey/analyze a program, or wants an overview before deeper reverse engineering.
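Below is a minimal triage sketch in Python that shells out to standard binutils for the kind of survey described above; the tools invoked (file, objdump, nm, strings) and the target path are assumptions, and real triage would go much further.

```python
import subprocess
import sys

def run(cmd):
    """Run a command and return its stdout, tolerating non-zero exit codes."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def triage(path):
    print("== file type ==\n", run(["file", path]))
    print("== section layout ==\n", run(["objdump", "-h", path]))
    print("== dynamic symbols (imports/exports) ==\n", run(["nm", "-D", path]))
    # Long printable strings often reveal URLs, file paths, keys, or error messages.
    print("== strings (>= 8 chars) ==\n", run(["strings", "-n", "8", path])[:2000])

if __name__ == "__main__":
    triage(sys.argv[1])
```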
Code Formatting
MANDATORY: When writing Go tests, you MUST use the 'When...it should...' format for ALL test names. When writing any Go code, you MUST remind the user to run 'make lint-fix' and 'make verify'. These are non-negotiable HyperShift requirements.
qdrant-vector-search
High-performance vector similarity search engine for RAG and semantic search. Use when building production RAG systems requiring fast nearest neighbor search, hybrid search with filtering, or scalable vector storage with Rust-powered performance.
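A minimal sketch of the qdrant-client Python API, assuming an in-memory instance for experimentation; the collection name, vector size, and payload fields are illustrative.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

# In-memory instance for experimentation; use url="http://localhost:6333" for a real server.
client = QdrantClient(":memory:")

client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={"source": "faq"})],
)

# Nearest-neighbor search; payload filters can be added via query_filter.
hits = client.search(collection_name="docs", query_vector=[0.05, 0.61, 0.76, 0.74], limit=3)
for hit in hits:
    print(hit.id, hit.score, hit.payload)
```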
sparse-autoencoder-training
Provides guidance for training and analyzing Sparse Autoencoders (SAEs) using SAELens to decompose neural network activations into interpretable features. Use when discovering interpretable features, analyzing superposition, or studying monosemantic representations in language models.
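A minimal sketch of loading a pretrained SAE with SAELens and encoding residual-stream activations from TransformerLens; the release and sae_id names are examples from SAELens's pretrained directory, and the three-value return of from_pretrained assumes a recent SAELens version.

```python
from transformer_lens import HookedTransformer
from sae_lens import SAE

model = HookedTransformer.from_pretrained("gpt2")

# Pretrained residual-stream SAE for GPT-2 small, layer 8 (example release/id).
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",
    sae_id="blocks.8.hook_resid_pre",
)

_, cache = model.run_with_cache("The quick brown fox jumps over the lazy dog")
resid = cache["blocks.8.hook_resid_pre"]   # activations the SAE was trained to decompose

feature_acts = sae.encode(resid)           # sparse feature activations (mostly zeros)
recon = sae.decode(feature_acts)           # reconstruction of the residual stream
print(feature_acts.shape, (feature_acts > 0).float().mean().item())
```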
blip-2-vision-language
Vision-language pre-training framework bridging frozen image encoders and LLMs. Use when you need image captioning, visual question answering, image-text retrieval, or multimodal chat with state-of-the-art zero-shot performance.
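A minimal captioning/VQA sketch using the Hugging Face transformers port of BLIP-2; the checkpoint name and local image path are illustrative.

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

image = Image.open("photo.jpg")  # hypothetical local image path

# Visual question answering; omit the text prompt for plain captioning.
inputs = processor(
    images=image,
    text="Question: what is in this image? Answer:",
    return_tensors="pt",
).to(device, dtype)

out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```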
crewai-multi-agent
Multi-agent orchestration framework for autonomous AI collaboration. Use when building teams of specialized agents working together on complex tasks, when you need role-based agent collaboration with memory, or for production workflows requiring sequential/hierarchical execution. Built without LangChain dependencies for lean, fast execution.
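A minimal two-agent sketch of the CrewAI API, assuming an LLM provider is configured via environment variables (for example OPENAI_API_KEY); the roles, goals, and task text are illustrative.

```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Research Analyst",
    goal="Collect key facts on a topic",
    backstory="An analyst who digs up accurate, sourced information.",
)
writer = Agent(
    role="Technical Writer",
    goal="Turn research notes into a short summary",
    backstory="A writer who produces clear, concise prose.",
)

research = Task(
    description="Research the main use cases of vector databases.",
    expected_output="A bullet list of five use cases.",
    agent=researcher,
)
summarize = Task(
    description="Write a three-sentence summary from the research notes.",
    expected_output="A short paragraph.",
    agent=writer,
)

# Sequential process: tasks run in order, and later tasks see earlier outputs.
crew = Crew(agents=[researcher, writer], tasks=[research, summarize], process=Process.sequential)
result = crew.kickoff()
print(result)
```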
pyvene-interventions
Provides guidance for performing causal interventions on PyTorch models using pyvene's declarative intervention framework. Use when conducting causal tracing, activation patching, interchange intervention training, or testing causal hypotheses about model behavior.
langsmith-observability
LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets, monitoring production systems, or building systematic testing pipelines for AI applications.
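A minimal tracing sketch using the langsmith SDK's traceable decorator, assuming LANGSMITH_API_KEY (and optionally LANGSMITH_PROJECT) are set in the environment; the function body is a placeholder for a real LLM call.

```python
from langsmith import traceable

@traceable(name="summarize")  # each call becomes a run (trace) in LangSmith
def summarize(text: str) -> str:
    # Placeholder for a real LLM call; the decorator records inputs,
    # outputs, latency, and errors automatically.
    return text.split(".")[0] + "."

if __name__ == "__main__":
    print(summarize("LangSmith records this call as a run. You can inspect it in the UI."))
```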
gguf-quantization
GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware or Apple Silicon, or when you need flexible 2-8 bit quantization without requiring a GPU.
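A minimal inference sketch with llama-cpp-python loading a quantized GGUF file; the model path and quantization level (Q4_K_M) are illustrative.

```python
from llama_cpp import Llama

# Load a 4-bit K-quant GGUF model; n_gpu_layers=-1 offloads all layers to GPU/Metal if available.
llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm("Q: Why quantize a model to 4 bits?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"].strip())
```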