ollama

Ollama LLM inference server management via Podman Quadlet. Single-instance design with GPU acceleration for running local LLMs. Use when users need to install Ollama, pull models, run inference, or manage the Ollama server.
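
For context, here is a minimal sketch of the kind of Quadlet container unit such a setup manages. This is illustrative only: it assumes an NVIDIA GPU exposed via CDI (nvidia-container-toolkit) and the upstream `docker.io/ollama/ollama` image; the unit this skill actually installs may differ.

```ini
# ~/.config/containers/systemd/ollama.container -- hypothetical unit for illustration
[Unit]
Description=Ollama LLM inference server

[Container]
Image=docker.io/ollama/ollama:latest
# Ollama's API listens on port 11434 by default
PublishPort=11434:11434
# Persist downloaded models across container restarts
Volume=ollama-data:/root/.ollama
# Assumes an NVIDIA GPU exposed through CDI
AddDevice=nvidia.com/gpu=all

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the unit can be started with `systemctl --user start ollama`.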

Installation

git clone https://github.com/atrawog/bazzite-ai /tmp/bazzite-ai && cp -r /tmp/bazzite-ai/.claude/skills/ollama ~/.claude/skills/bazzite-ai

Tip: run this command in your terminal to install the skill.
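
Once the skill is installed and the Ollama server is running, typical usage looks like the following sketch; the model name `llama3.2` is only an example.

```bash
# Pull a model into the local store
ollama pull llama3.2

# Run a one-shot inference with the pulled model
ollama run llama3.2 "Summarize Podman Quadlet in one sentence."

# Or query the HTTP API the server exposes on port 11434
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello", "stream": false}'
```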