LLM & Agents
provider-integration-templates
OpenRouter framework integration templates for Vercel AI SDK, LangChain, and OpenAI SDK. Use when integrating OpenRouter with frameworks, setting up AI providers, building chat applications, implementing streaming responses, or when user mentions Vercel AI SDK, LangChain, OpenAI SDK, framework integration, or provider setup.
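As a quick orientation for the OpenAI SDK route, here is a minimal sketch that points the official Python client at OpenRouter's OpenAI-compatible endpoint. It assumes an OPENROUTER_API_KEY environment variable, and the model slug is purely illustrative:

```python
# Minimal sketch: using the OpenAI SDK against OpenRouter.
# Assumes OPENROUTER_API_KEY is set; the model slug is illustrative.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # illustrative model slug
    messages=[{"role": "user", "content": "Hello from OpenRouter"}],
)
print(response.choices[0].message.content)
```

The same client swap also covers streaming responses by passing stream=True to the completion call.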
sequential-task-processor
Implements Anthropic's Prompt Chaining pattern for complex multi-step tasks. Decomposes requests into sequential steps where each LLM call processes the previous output, with validation gates between stages. Use for tasks requiring systematic breakdown like "Build a React app" or "Create a REST API".
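For intuition, a minimal sketch of the chaining loop with a validation gate between stages; call_llm and the gate predicate are hypothetical stand-ins, not this skill's actual interface:

```python
# Prompt-chaining sketch: each stage consumes the previous stage's output,
# with a simple validation gate in between. call_llm is a hypothetical helper.
from typing import Callable

def run_chain(task: str, steps: list[str], call_llm: Callable[[str], str]) -> str:
    output = task
    for step in steps:
        output = call_llm(f"{step}\n\nInput:\n{output}")
        if not output.strip():  # gate: reject empty stage output before continuing
            raise ValueError(f"Stage failed validation: {step!r}")
    return output
```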
chatbot-component
Skill for adding new UI components to the ChatBot project. Provides component creation procedures, CSS organization rules, and event-handling patterns. Use when creating a new UI component, adding a modal, adding CSS styles, or handling UI events.
openai-web-search
Search the web using GPT-5.2 and OpenAI's Responses API with the web_search tool. Use when the user wants to fetch real-time information from the internet via the terminal/CLI using OpenAI's API.
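A sketch of what such a call can look like; the tool and model names are taken from the description above, so verify them against current OpenAI docs before relying on this:

```python
# Responses API web search sketch. Tool/model names follow the skill
# description above; check current OpenAI docs, since tool types have varied.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5.2",  # model named in the description
    tools=[{"type": "web_search"}],
    input="What changed in the latest Python release?",
)
print(response.output_text)  # convenience accessor for the combined text output
```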
site-reliability-engineer
Docusaurus build health validation and deployment safety for Claude Skills showcase. Pre-commit MDX validation (Liquid syntax, angle brackets, prop mismatches), pre-build link checking, post-build health reports. Activate on 'build errors', 'commit hooks', 'deployment safety', 'site health', 'MDX validation'. NOT for general DevOps (use deployment-engineer), Kubernetes/cloud infrastructure (use kubernetes-architect), runtime monitoring (use observability-engineer), or non-Docusaurus projects.
code-review
Use when receiving code review feedback (especially if unclear or technically questionable), when completing tasks or major features that require review before proceeding, or before making any completion/success claims. Covers three practices: receiving feedback with technical rigor rather than performative agreement, requesting reviews via the code-reviewer subagent, and verification gates that require evidence before any status claims. Essential for subagent-driven development, pull requests, and preventing false completion claims.
requesting-code-review
Use when completing tasks, implementing major features, or before merging, to verify that work meets requirements. Dispatches the super:code-reviewer subagent to review the implementation against the plan or requirements before proceeding.
skill-creator
Guide for creating effective skills, adding skill references or scripts, and optimizing existing skills. Use this skill when users want to create a new skill (or update an existing one) that extends Claude's capabilities with specialized knowledge, workflows, framework/library/plugin usage, or API and tool integrations.
python-testing
Expert skill for writing production-grade Python tests using pytest with modern fixtures, parametrization, and coverage integration.
Unnamed Skill
Fine-tuning multimodal vision-language models (Llama 3.2 Vision, Qwen2.5 VL) using optimized vision layers (triggers: vision models, multimodal, Llama 3.2 Vision, Qwen2.5 VL, UnslothVisionDataCollator, finetune_vision_layers).
testing
Test development with pytest, fixtures, and integration testing. Use for writing tests, test patterns, coverage, parametrization, and debugging test failures.
task-decomposition
Break down complex tasks into actionable, atomic steps that can be executed by individual agents.
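One plausible shape for those atomic steps, shown as a sketch (field names are assumptions, not a defined schema): each step carries its prerequisites, and a scheduler hands out only the steps whose dependencies are done.

```python
# Illustrative data shape for atomic, agent-executable steps.
# Field names are assumptions for the sketch, not a defined schema.
from dataclasses import dataclass, field

@dataclass
class Step:
    id: str
    description: str
    depends_on: list[str] = field(default_factory=list)  # prerequisite step ids

def ready_steps(steps: list[Step], done: set[str]) -> list[Step]:
    """Return steps whose prerequisites are all complete, ready for an agent."""
    return [s for s in steps if s.id not in done and set(s.depends_on) <= done]
```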
codex-cli-adapter
OpenAI Codex CLI adapter. Serves as the default CLI for dual-ai-loop, providing installation, version checks, command patterns, and error handling.
parallel-agents
Orchestrate parallel development with multiple Claude Code agents from PRD specs. Use when asked to parallelize development, break down a PRD into agent tasks, coordinate multi-agent workflows, or scale development across independent workstreams.
jelly-multi-ai-code-review
Multi-AI code review orchestrator that coordinates Claude Code, Codex CLI, Gemini CLI, and Factory.ai Droid to perform comprehensive, validated, iterative code reviews with automated improvement cycles. Activated when users need multi-perspective AI code reviews or comprehensive code quality checks.
python-testing-standards
Comprehensive Python testing best practices, pytest conventions, test structure patterns (AAA, Given-When-Then), fixture usage, mocking strategies, code coverage standards, and common anti-patterns. Essential reference for code reviews, test writing, and ensuring high-quality Python test suites with pytest, unittest.mock, and pytest-cov.
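To make the named patterns concrete, here is an illustrative pytest test combining a fixture, parametrization, and Arrange-Act-Assert; slugify is a toy function under test, defined inline so the example runs:

```python
# Illustrative pytest example: fixture + parametrization + AAA structure.
# slugify is a toy function under test, defined inline for self-containment.
import pytest

def slugify(raw: str, sep: str = "-") -> str:
    return sep.join(raw.strip().lower().split())

@pytest.fixture
def separator() -> str:
    return "-"

@pytest.mark.parametrize(
    ("raw", "expected"),
    [("Hello World", "hello-world"), ("  spaced  ", "spaced")],
)
def test_slugify(raw: str, expected: str, separator: str) -> None:
    # Arrange: inputs come from parametrize, the separator from the fixture.
    # Act
    result = slugify(raw, sep=separator)
    # Assert
    assert result == expected
```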
Unnamed Skill
Use this in every interaction for validating implementation plans and todos, brainstorming before coding, creating TodoWrite todos for checklists, and reviewing work for completeness, clarity, and alignment with design goals. Describes Codex CLI usage, including non-interactive exec mode, prompting guardrails, and operational checklists for reliable Codex sessions.
taxasge-backend-dev
Backend patterns for a three-tier FastAPI architecture; complements DEV_AGENT with backend best practices.
agent-config-validator
Validate AgentConfig definitions for the Agent Framework. Use when creating or modifying agent configurations to ensure correct structure, valid tool references, and proper sub-agent composition. Validates TypeScript interfaces and Python Pydantic models.
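On the Python side, the validation might resemble this Pydantic v2 sketch; the field names and tool registry are hypothetical, not the Agent Framework's actual schema:

```python
# Hypothetical Pydantic v2 sketch of AgentConfig validation. Field names
# and the tool registry are illustrative, not the framework's real schema.
from pydantic import BaseModel, field_validator

class AgentConfig(BaseModel):
    name: str
    tools: list[str] = []
    sub_agents: list["AgentConfig"] = []  # sub-agent composition (recursive)

    @field_validator("tools")
    @classmethod
    def tools_must_be_known(cls, tools: list[str]) -> list[str]:
        known = {"search", "code_exec"}  # stand-in tool registry
        unknown = set(tools) - known
        if unknown:
            raise ValueError(f"unknown tool references: {sorted(unknown)}")
        return tools

AgentConfig.model_rebuild()  # resolve the recursive forward reference
```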
markdowntown-workbench
Use this when working on the Workbench UI, Workbench state/store, export/compile pipeline, or adapter/target behavior (agents-md, claude-code, github-copilot, etc.).