Continuous Integration/Deployment
writing-documentation-with-diataxis
Applies the Diataxis framework to create or improve technical documentation. Use when asked to write high-quality tutorials, how-to guides, reference docs, or explanations, when reviewing documentation quality, or when deciding what type of documentation to create. Helps identify documentation types using the action/cognition and acquisition/application dimensions.
testing-anti-patterns
Use when writing or changing tests, adding mocks, or when tempted to add test-only methods to production code. Prevents testing mock behaviour, polluting production code with test-only methods, and mocking dependencies without understanding them.
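A minimal Python sketch of the core anti-pattern this skill guards against, with the `Checkout` class and its mailer collaborator invented for illustration:

```python
from unittest.mock import Mock

# Anti-pattern: the test only verifies that the mock was called, so it
# passes even when there is no production logic under test at all.
def test_sends_receipt_antipattern():
    mailer = Mock()
    mailer.send("bob@example.com", "Receipt")
    mailer.send.assert_called_once()  # asserts the mock's behaviour, not ours

# Better: exercise the real unit and assert on its observable output,
# using the mock only as a boundary collaborator.
class Checkout:
    def __init__(self, mailer):
        self.mailer = mailer

    def complete(self, email):
        self.mailer.send(email, "Receipt")
        return "completed"

def test_sends_receipt():
    mailer = Mock()
    status = Checkout(mailer).complete("bob@example.com")
    assert status == "completed"
    mailer.send.assert_called_once_with("bob@example.com", "Receipt")
```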
skill-creator
Guide for creating effective Claude Skills. This skill should be used when users want to create (or update) a skill that extends Claude's capabilities with specialised knowledge, workflows, or tool integrations.
aws-strands-agents-agentcore
Use when working with AWS Strands Agents SDK or Amazon Bedrock AgentCore platform for building AI agents. Provides architecture guidance, implementation patterns, deployment strategies, observability, quality evaluations, multi-agent orchestration, and MCP server integration.
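A minimal sketch of the Strands Agents loop, assuming the SDK's documented `Agent` class and `@tool` decorator from the `strands` package; treat the import paths and the default Bedrock-hosted model as assumptions to verify against the current SDK docs:

```python
# Sketch assuming the Strands Agents SDK (`pip install strands-agents`).
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Pass tools so the model can call them during its reasoning loop; the
# default model provider is Bedrock unless configured otherwise.
agent = Agent(tools=[word_count])
result = agent("How many words are in 'hello agentic world'?")
print(result)
```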
shell-scripting
Practical bash scripting guidance emphasising defensive programming, ShellCheck compliance, and simplicity. Use when writing shell scripts that need to be reliable and maintainable.
claude-md-authoring
Creating and maintaining CLAUDE.md project memory files that provide non-obvious codebase context. Use when (1) creating a new CLAUDE.md for a project, (2) adding architectural patterns or design decisions to existing CLAUDE.md, (3) capturing project-specific conventions that aren't obvious from code inspection.
gitlab
Load before running any glab commands to ensure correct CLI syntax. Use when creating/viewing MRs, checking pipelines, managing issues, or any GitLab operations (when remote contains "gitlab").
narsil
Use narsil-mcp code intelligence tools effectively. Use when searching code, finding symbols, analyzing call graphs, scanning for security vulnerabilities, exploring dependencies, or performing static analysis on indexed repositories.
patch-diff-analyzer
Specialized in reverse-engineering compiled binaries (JARs, DLLs). Use this when the user asks to compare versions, find security fixes, or analyze binary patches.
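A common first step in this kind of analysis is diffing member checksums between two JAR versions to shortlist changed class files for decompilation. A stdlib-only sketch with hypothetical artifact names:

```python
import hashlib
import zipfile

def member_hashes(jar_path):
    """Map each archive member to a SHA-256 of its bytes."""
    with zipfile.ZipFile(jar_path) as jar:
        return {name: hashlib.sha256(jar.read(name)).hexdigest()
                for name in jar.namelist()}

# Hypothetical before/after artifacts of a security patch.
old, new = member_hashes("app-1.0.jar"), member_hashes("app-1.0.1.jar")

changed = sorted(n for n in old.keys() & new.keys() if old[n] != new[n])
added = sorted(new.keys() - old.keys())
removed = sorted(old.keys() - new.keys())

for label, names in [("changed", changed), ("added", added), ("removed", removed)]:
    for name in names:
        print(f"{label}: {name}")  # candidates for decompilation and review
```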
elixir-architect
Use when designing or architecting Elixir/Phoenix applications, creating comprehensive project documentation, planning OTP supervision trees, defining domain models with Ash Framework, structuring multi-app projects with path-based dependencies, or preparing handoff documentation for Director/Implementor AI collaboration
architecture-patterns
Provides guidance on software architecture patterns and design decisions. Use when designing systems, choosing patterns, structuring projects, or when asked about architectural approaches.
latex-rhythm-refiner
Post-process LaTeX project prose to improve readability through varied sentence and paragraph lengths. Removes filler phrases and unnecessary transitions while preserving all citations and semantic meaning.
arxiv-paper-writer
Write LaTeX ML/AI review articles for arXiv using the IEEEtran template and verified BibTeX citations.
sentencepiece
Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k sentences/sec), lightweight (6MB memory), deterministic vocabulary. Used by T5, ALBERT, XLNet, mBART. Train on raw text without pre-tokenization. Use when you need multilingual support, CJK languages, or reproducible tokenization.
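A minimal sketch of the raw-text train-then-encode flow (corpus path and vocabulary size are placeholders):

```python
import sentencepiece as spm

# Train directly on raw text; no pre-tokenization step is required.
spm.SentencePieceTrainer.train(
    input="corpus.txt",        # hypothetical raw-text corpus, one sentence per line
    model_prefix="m",          # writes m.model and m.vocab
    vocab_size=8000,
    model_type="unigram",      # or "bpe"
)

sp = spm.SentencePieceProcessor(model_file="m.model")
print(sp.encode("This is a test.", out_type=str))  # subword pieces
print(sp.encode("This is a test.", out_type=int))  # vocabulary ids
```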
llamaindex
Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.
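A minimal document Q&A sketch using the core ingest-index-query flow; assumes a local `./docs` folder and API credentials for the default embedding/LLM backends:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Ingest local files (one of the 300+ connectors) and build an
# in-memory vector index over them.
documents = SimpleDirectoryReader("./docs").load_data()  # hypothetical folder
index = VectorStoreIndex.from_documents(documents)

# The query engine retrieves relevant chunks and synthesizes an answer.
query_engine = index.as_query_engine()
response = query_engine.query("What does the design doc say about retries?")
print(response)
```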
ray-data
Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images. Integrates with Ray Train, PyTorch, TensorFlow. Scales from single machine to 100s of nodes. Use for batch inference, data preprocessing, multi-modal data loading, or distributed ETL pipelines.
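A minimal sketch of the streaming read-transform-write pattern; paths and the column name are hypothetical:

```python
import ray

# Streaming, lazy execution: the same script scales from a laptop to a
# multi-node Ray cluster unchanged.
ds = ray.data.read_parquet("s3://bucket/events/")  # hypothetical path

def normalize(batch):
    # batch arrives as a pandas DataFrame per batch_format below.
    batch["value"] = batch["value"] / batch["value"].max()
    return batch

# map_batches runs the function over batches in parallel across the cluster.
ds = ds.map_batches(normalize, batch_format="pandas")
ds.write_parquet("s3://bucket/events-normalized/")
```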
simpo-training
Simple Preference Optimization for LLM alignment. Reference-free alternative to DPO with better performance (+6.4 points on AlpacaEval 2.0). No reference model needed, more efficient than DPO. Use for preference alignment when you want simpler, faster training than DPO/PPO.
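SimPO replaces DPO's reference-model log-ratio with the policy's own length-normalized log-probability plus a target margin γ, which is why no frozen reference model is needed. A minimal PyTorch sketch of that loss, with toy tensors standing in for real sequence log-probs:

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, chosen_lens, rejected_lens,
               beta=2.0, gamma=1.0):
    """Reference-free SimPO objective.

    chosen_logps / rejected_logps: summed token log-probs per sequence
    under the current policy; lengths normalize them into implicit rewards.
    """
    reward_chosen = beta * chosen_logps / chosen_lens
    reward_rejected = beta * rejected_logps / rejected_lens
    # The margin gamma pushes the winner's reward above the loser's by a fixed gap.
    return -F.logsigmoid(reward_chosen - reward_rejected - gamma).mean()

# Toy values: a 10-token preferred response vs a 12-token rejected one.
loss = simpo_loss(torch.tensor([-20.0]), torch.tensor([-28.0]),
                  torch.tensor([10.0]), torch.tensor([12.0]))
print(loss)
```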
pytorch-fsdp
Expert guidance for Fully Sharded Data Parallel training with PyTorch FSDP - parameter sharding, mixed precision, CPU offloading, FSDP2
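A minimal sketch of classic FSDP wrapping with bf16 mixed precision (FSDP2's `fully_shard` follows the same ideas); assumes launch via torchrun, and the model dimensions are arbitrary:

```python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

# Assumes launch via `torchrun --nproc-per-node=N train.py`.
dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# Toy model; real use cases pass an auto-wrap policy so each
# transformer block becomes its own shard unit.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
).cuda()

# Shard parameters, gradients, and optimizer state across ranks,
# computing in bf16 while reducing gradients in fp32 for stability.
model = FSDP(
    model,
    mixed_precision=MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.float32,
        buffer_dtype=torch.bfloat16,
    ),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```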
langchain
Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.
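A minimal LCEL sketch of the prompt-model-parser chain; assumes OPENAI_API_KEY is set, and any supported chat provider can be swapped in:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer strictly from the provided context."),
    ("human", "Context: {context}\n\nQuestion: {question}"),
])

# LCEL pipes prompt -> model -> parser into one runnable chain.
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({
    "context": "Deploys run every Friday at 14:00 UTC.",
    "question": "When do deploys run?",
}))
```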
llamaguard
Meta's 7-8B specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, and criminal planning. 94-95% accuracy. Deploy with vLLM, Hugging Face, or SageMaker. Integrates with NeMo Guardrails.
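A hedged sketch of the Hugging Face deployment route; the gated model id is an assumption to verify on the hub, and Llama Guard checkpoints ship a chat template that wraps the dialogue in the moderation prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-Guard-2-8B"  # assumption: gated checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

chat = [{"role": "user", "content": "How do I pick a lock?"}]
# The bundled chat template formats the dialogue into the moderation
# prompt; the model replies "safe" or "unsafe" plus the violated category.
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=32,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```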