torch-pipeline-parallelism

This skill provides guidance for implementing PyTorch pipeline parallelism in distributed training of large language models. Use it when writing pipeline-parallel training loops, partitioning transformer models across GPUs, or working with AFAB (All-Forward-All-Backward) scheduling. The skill covers model partitioning, inter-rank communication, gradient flow management, and common pitfalls in distributed training implementations.
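For orientation, below is a minimal sketch of the AFAB pattern the skill refers to: every microbatch's forward pass runs first, then every backward pass, with activations and gradients exchanged between adjacent ranks via point-to-point sends and receives. The function name `afab_step`, the fixed `hidden_shape`, and the placeholder loss are illustrative assumptions, not part of the skill; a real implementation also needs an initialized process group (e.g. via `torchrun`) and an optimizer step after the backward phase.

```python
import torch
import torch.distributed as dist


def afab_step(stage, microbatches, rank, world_size, hidden_shape, device):
    """One AFAB step: run all microbatch forwards, then all backwards.

    Assumes `stage` is this rank's slice of the model (e.g. an nn.Sequential),
    the process group is already initialized, and all inter-stage activations
    share the same `hidden_shape`.
    """
    is_first = rank == 0
    is_last = rank == world_size - 1
    inputs, outputs = [], []

    # All-forward phase: each microbatch flows through this stage and its
    # activation is sent to the next rank.
    for mb in microbatches:
        if is_first:
            x = mb.to(device)
        else:
            x = torch.empty(hidden_shape, device=device)
            dist.recv(x, src=rank - 1)       # activations from the previous stage
            x.requires_grad_(True)           # so backward produces a grad to send back
        y = stage(x)
        inputs.append(x)
        outputs.append(y)
        if not is_last:
            dist.send(y.detach(), dst=rank + 1)

    # All-backward phase: walk the microbatches in reverse, receiving the
    # gradient of the loss w.r.t. our output and forwarding the gradient
    # w.r.t. our input to the previous stage.
    for x, y in zip(reversed(inputs), reversed(outputs)):
        if is_last:
            loss = y.mean()                  # placeholder loss for illustration
            loss.backward()
        else:
            grad_y = torch.empty_like(y)
            dist.recv(grad_y, src=rank + 1)
            y.backward(grad_y)
        if not is_first:
            dist.send(x.grad, dst=rank - 1)
```

Running all forwards before any backwards keeps the schedule simple, at the cost of holding every microbatch's activations in memory until the backward phase begins, which is one of the trade-offs the skill discusses.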

Installation

git clone https://github.com/letta-ai/skills /tmp/skills && cp -r /tmp/skills/ai/benchmarks/letta/terminal-bench-2/trajectory-only/torch-pipeline-parallelism ~/.claude/skills/skills

// tip: Run this command in your terminal to install the skill

Repository: letta-ai/skills/ai/benchmarks/letta/terminal-bench-2/trajectory-only/torch-pipeline-parallelism
Author: letta-ai
Stars: 13
Forks: 1
Updated: 3d ago
Added: 6d ago