LLM & Agents
6763 skills in Data & AI > LLM & Agents
atlas-agent-product-manager
Product management expertise for story creation, backlog management, validation, and release coordination
otto-orchestrate
Use when spawning Codex agents via Otto to delegate implementation work.
ai-check
Detect AI/LLM-generated text patterns in research writing. Use when: (1) Reviewing manuscript drafts before submission, (2) Pre-commit validation of documentation, (3) Quality assurance checks on research artifacts, (4) Ensuring natural academic writing style, (5) Tracking writing authenticity over time. Analyzes grammar perfection, sentence uniformity, paragraph structure, word frequency (AI-typical words like 'delve', 'leverage', 'robust'), punctuation patterns, and transition word overuse.
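The word-frequency heuristic is the easiest of these to picture. A minimal sketch of that one check, in TypeScript; the word list and threshold are illustrative assumptions, not the skill's actual configuration:

```ts
// Minimal sketch of an AI-typical word-density check.
// Word list and threshold are illustrative, not the skill's real config.
const AI_TYPICAL_WORDS = new Set(["delve", "leverage", "robust", "seamless", "furthermore"]);

function aiWordDensity(text: string): number {
  const tokens = text.toLowerCase().match(/[a-z']+/g) ?? [];
  if (tokens.length === 0) return 0;
  const hits = tokens.filter((t) => AI_TYPICAL_WORDS.has(t)).length;
  return hits / tokens.length; // fraction of tokens that are AI-typical
}

const draft = "We delve into a robust framework and leverage prior results.";
console.log(aiWordDensity(draft) > 0.05 ? "flag for review" : "looks natural");
```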
skill-builder
Use this skill when creating new Claude Code skills from scratch, editing existing skills to improve their descriptions or structure, or converting Claude Code sub-agents to skills. This includes designing skill workflows, writing SKILL.md files, organizing supporting files with intention-revealing names, and leveraging CLI tools and Node.js scripting.
writing-skills
Use when creating new skills, editing existing skills, or verifying skills work before deployment - applies TDD to process documentation by testing with subagents before writing, iterating until bulletproof against rationalization
refactoring-assistant
Assists with code refactoring by detecting code smells, suggesting improvements, and providing refactoring patterns. Activates when writing/editing code, explicitly requested refactoring, or when code quality issues are detected. Maintains awareness of core principles while providing detailed patterns and examples.
r-development
Modern R development practices emphasizing tidyverse patterns (dplyr 1.1 and later, native pipe, join_by, .by grouping), rlang metaprogramming, performance optimization, and package development. Use when Claude needs to write R code, create R packages, optimize R performance, or provide R programming guidance.
finetune-design
Use when preparing to fine-tune an LLM for multi-turn conversations, before generating any training data. Triggers: starting a fine-tuning project, needing to define evaluation criteria, or designing conversation data generation.
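For context, multi-turn training examples are usually stored as role-tagged message lists; the shape below follows the common chat-format convention and is not a schema this skill prescribes:

```ts
// Conventional shape for one multi-turn training example (assumption, not the skill's schema).
type Turn = { role: "system" | "user" | "assistant"; content: string };

const example: { messages: Turn[] } = {
  messages: [
    { role: "system", content: "You are a concise support agent." },
    { role: "user", content: "My order hasn't arrived." },
    { role: "assistant", content: "I can help. Could you share the order number?" },
  ],
};

// Training files typically hold one such example per line (JSONL).
console.log(JSON.stringify(example));
```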
verdict-flags
A skill that contains best practices for using Verdict Flags in Shopify codebases. It should be used whenever the agent is creating, updating, or removing flags.
Version Sync Skill
Detect and synchronize Node.js version specifications across project files.
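A rough sketch of what the detection step could look like; which files a project pins its Node.js version in is an assumption, with .nvmrc and the package.json engines field being the common spots:

```ts
// Sketch: compare Node.js version pins across common project files.
// Which files a given project uses is an assumption; extend as needed.
import { existsSync, readFileSync } from "node:fs";

function readVersions(): Record<string, string> {
  const found: Record<string, string> = {};
  if (existsSync(".nvmrc")) found[".nvmrc"] = readFileSync(".nvmrc", "utf8").trim();
  if (existsSync("package.json")) {
    const pkg = JSON.parse(readFileSync("package.json", "utf8"));
    if (pkg.engines?.node) found["package.json engines.node"] = pkg.engines.node;
  }
  return found;
}

const versions = readVersions();
if (new Set(Object.values(versions)).size > 1) {
  console.warn("Node version specs disagree:", versions);
}
```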
control-office-lamp
Control office lamp via Home Assistant. Use when the user asks to turn on, turn off, or toggle their office lamp, or check its status. Triggers on phrases like "turn on my lamp", "office light on/off", "toggle the lamp", or "is my lamp on".
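Under the hood this kind of skill typically calls Home Assistant's REST service API. A minimal sketch, assuming a long-lived access token in HA_TOKEN and an entity named light.office_lamp (both placeholders):

```ts
// Sketch: toggle a lamp through Home Assistant's REST API.
// Host, token, and entity_id are placeholders for this illustration.
const HA_URL = "http://homeassistant.local:8123";
const TOKEN = process.env.HA_TOKEN ?? "";

async function toggleOfficeLamp(): Promise<void> {
  await fetch(`${HA_URL}/api/services/light/toggle`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ entity_id: "light.office_lamp" }),
  });
}
```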
nextjs-server-client-components
Guide for choosing between Server Components and Client Components in Next.js App Router. CRITICAL for useSearchParams (requires Suspense + 'use client'), navigation (Link, redirect, useRouter), cookies/headers access, and 'use client' directive. Activates when prompt mentions useSearchParams, Suspense, navigation, routing, Link component, redirect, pathname, searchParams, cookies, headers, async components, or 'use client'. Essential for avoiding mixing server/client APIs.
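The useSearchParams rule is the one that trips people up most: the hook only works in a Client Component, and that component should sit inside a Suspense boundary. A minimal sketch, with illustrative file paths and component names:

```tsx
// app/search/search-box.tsx -- Client Component (illustrative path)
"use client";
import { useSearchParams } from "next/navigation";

export function SearchBox() {
  const params = useSearchParams(); // client-only hook
  return <p>Query: {params.get("q") ?? ""}</p>;
}

// app/search/page.tsx -- Server Component wrapping the client child in Suspense
import { Suspense } from "react";
import { SearchBox } from "./search-box";

export default function SearchPage() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <SearchBox />
    </Suspense>
  );
}
```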
mcp2cli
Convert MCP servers into standalone Bash-invokable scripts. Use when user wants to make an MCP server usable as bash commands, convert MCP to CLI, or wrap MCP tools for agent use.
launch-planner
Helps turn app ideas into shippable MVPs with opinionated guidance on scoping, tech stack (Next.js, Supabase, Vercel), and avoiding feature creep. Use when the user wants to plan an MVP, create a PRD, generate Claude Code prompts, scope features, make product decisions, or needs help staying focused on shipping. Triggers include "plan this app", "help me scope this", "create a PRD", "what should I build first", "generate a Claude Code prompt", or any request to turn an idea into something shippable.
kpi-code-quality
KPI for measuring and improving code quality. Covers lint errors, type safety, test coverage, and verification pass rates. Use to ensure code meets quality standards.
python-performance
Use when Python code runs slowly, needs profiling, requires async/await patterns, or needs concurrent execution - covers profiling tools, optimization patterns, and asyncio; measure before optimizing (plugin:python@dot-claude)
opencode-cli
This skill should be used when configuring or using the OpenCode CLI for headless LLM automation. Use when the user asks to "configure opencode", "use opencode cli", "set up opencode", "opencode run command", "opencode model selection", "opencode providers", "opencode vertex ai", "opencode mcp servers", "opencode ollama", "opencode local models", "opencode deepseek", "opencode kimi", "opencode mistral", "fallback cli tool", or "headless llm cli". Covers command syntax, provider configuration, Vertex AI setup, MCP servers, local models, cloud providers, and subprocess integration patterns.
ai-model-cascade
A production-ready pattern for integrating AI models (specifically Google Gemini) with automatic fallback, retry logic, structured output via Zod schemas, and comprehensive error handling. Use when integrating AI/LLM APIs, need automatic fallback when models are overloaded, want type-safe structured responses, or building features requiring reliable AI generation.
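The core of the pattern is easy to sketch: try models in cascade order, retry transient failures, and validate every response against a Zod schema. Here callModel stands in for whichever SDK wrapper you use (e.g. around Gemini), and the model names and retry counts are illustrative assumptions:

```ts
// Sketch of a model-cascade: ordered fallback, retries, Zod-validated output.
// callModel is a hypothetical SDK wrapper; model names and retry policy are assumptions.
import { z } from "zod";

const Summary = z.object({ title: z.string(), bullets: z.array(z.string()) });
type Summary = z.infer<typeof Summary>;

declare function callModel(model: string, prompt: string): Promise<string>;

async function generateSummary(prompt: string): Promise<Summary> {
  const models = ["primary-model", "fallback-model"]; // cascade order (illustrative)
  let lastError: unknown;
  for (const model of models) {
    for (let attempt = 0; attempt < 2; attempt++) {
      try {
        const raw = await callModel(model, prompt);
        return Summary.parse(JSON.parse(raw)); // type-safe structured output
      } catch (err) {
        lastError = err; // overloaded model or invalid output: retry, then fall back
      }
    }
  }
  throw lastError;
}
```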
review-agent
Reviews code for quality, security, and best practices
domain-profiles
Domain-specific configuration profiles for learning resource creation. Defines search strategies, special fields, terminology policies, and content structures for different academic domains: technology, history, science, arts, and general. Use when researcher or writer agents need domain-adapted behavior.
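A sketch of what one such profile might look like as data; the field names mirror the description above, while the concrete values are invented for illustration:

```ts
// Hypothetical shape for a domain profile; fields mirror the description, values are illustrative.
interface DomainProfile {
  domain: "technology" | "history" | "science" | "arts" | "general";
  searchStrategies: string[];   // how researcher agents should query sources
  specialFields: string[];      // extra metadata to capture for this domain
  terminologyPolicy: string;    // e.g. keep original terms vs. plain-language glosses
  contentStructure: string[];   // ordered sections writer agents should produce
}

const historyProfile: DomainProfile = {
  domain: "history",
  searchStrategies: ["primary sources first", "cross-check dates across sources"],
  specialFields: ["era", "region", "key figures"],
  terminologyPolicy: "keep period terminology, gloss on first use",
  contentStructure: ["context", "timeline", "key events", "legacy"],
};
```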