quantization
Model quantization for efficient inference and training. Covers precision types (FP32, FP16, BF16, INT8, INT4), BitsAndBytes configuration, memory estimation, and performance tradeoffs.
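The memory-estimation idea mentioned above boils down to multiplying the parameter count by the byte width of the chosen precision. A minimal sketch in plain Python (the 7B parameter count and the weights-only scope are illustrative assumptions, not details taken from the skill itself):

```python
# Rough estimate of GPU memory needed just for model weights at each precision.
# Bytes per parameter: FP32=4, FP16/BF16=2, INT8=1, INT4=0.5 (packed).
BYTES_PER_PARAM = {"FP32": 4.0, "FP16": 2.0, "BF16": 2.0, "INT8": 1.0, "INT4": 0.5}

def weight_memory_gb(n_params: float, dtype: str) -> float:
    """Estimate weight memory in GiB for n_params parameters stored as dtype."""
    return n_params * BYTES_PER_PARAM[dtype] / 1024**3

n = 7e9  # assumed 7B-parameter model, for illustration only
for dtype in ("FP32", "FP16", "INT8", "INT4"):
    print(f"{dtype}: {weight_memory_gb(n, dtype):.1f} GiB")
```

Note this covers weights only; activations, optimizer state, and the KV cache add further memory on top of these figures.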
$ Installer
git clone https://github.com/atrawog/bazzite-ai-plugins /tmp/bazzite-ai-plugins && cp -r /tmp/bazzite-ai-plugins/bazzite-ai-jupyter/skills/quantization ~/.claude/skills/bazzite-ai-plugins/
Tip: Run this command in your terminal to install the skill.
Author: atrawog
Repository: atrawog/bazzite-ai-plugins/bazzite-ai-jupyter/skills/quantization
Stars: 0
Forks: 0
Updated: 6d ago
Added: 6d ago