
quantization

Model quantization for efficient inference and training. Covers precision types (FP32, FP16, BF16, INT8, INT4), BitsAndBytes configuration, memory estimation, and performance tradeoffs.
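As a rough illustration of the memory-estimation idea mentioned above: weight memory scales linearly with bits per parameter, so halving precision roughly halves the footprint. A minimal sketch (the 7B parameter count and the helper function below are illustrative assumptions, not part of the skill itself):

```python
def estimate_weight_memory_gib(num_params: float, bits_per_param: int) -> float:
    """Approximate weight memory: parameters * bytes per parameter, in GiB.

    Ignores activations, KV cache, and optimizer state, which can dominate
    during training or long-context inference.
    """
    return num_params * bits_per_param / 8 / 1024**3

# A hypothetical 7B-parameter model at each precision covered by the skill:
for name, bits in [("FP32", 32), ("FP16/BF16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name:9s} {estimate_weight_memory_gib(7e9, bits):5.1f} GiB")
```

For a 7B model this gives roughly 26 GiB at FP32 down to about 3 GiB at INT4, which is why 4-bit quantization makes large models fit on consumer GPUs.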

$ Install

git clone https://github.com/atrawog/bazzite-ai-plugins /tmp/bazzite-ai-plugins && cp -r /tmp/bazzite-ai-plugins/bazzite-ai-jupyter/skills/quantization ~/.claude/skills/bazzite-ai-plugins

// tip: Run this command in your terminal to install the skill