ollama
Ollama LLM inference server management via Podman Quadlet. Single-instance design with GPU acceleration for running local LLMs. Use when users need to configure Ollama, pull models, run inference, or manage the Ollama server.
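For context, a Podman Quadlet unit for Ollama typically looks like the minimal sketch below. This is an illustration only, not the exact unit this skill generates; the image tag, port, volume name, and CDI GPU device are assumptions (the GPU line presumes an NVIDIA card with nvidia-container-toolkit configured).

~/.config/containers/systemd/ollama.container:

    [Unit]
    Description=Ollama LLM inference server

    [Container]
    # Upstream Ollama image; pin a specific tag in practice (assumption)
    Image=docker.io/ollama/ollama:latest
    # Ollama's default API port
    PublishPort=11434:11434
    # Persist downloaded models across container restarts
    Volume=ollama-models:/root/.ollama
    # Expose all GPUs via CDI; adjust or remove for your hardware (assumption)
    AddDevice=nvidia.com/gpu=all

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target

After placing the file, `systemctl --user daemon-reload && systemctl --user start ollama` starts the generated service.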
Install
$ git clone https://github.com/atrawog/bazzite-ai-plugins /tmp/bazzite-ai-plugins && cp -r /tmp/bazzite-ai-plugins/bazzite-ai/skills/ollama ~/.claude/skills/bazzite-ai-plugins/
Tip: Run this command in your terminal to install the skill.
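Once the server is running, models can be pulled and queried with the standard Ollama CLI or its HTTP API; the model name below is just an example:

    $ ollama pull llama3.2        # download a model (example name)
    $ ollama run llama3.2 "Hello" # one-off prompt from the terminal
    $ curl http://localhost:11434/api/generate \
        -d '{"model": "llama3.2", "prompt": "Hello", "stream": false}'

The curl call hits Ollama's default API port (11434); the skill's managed server may expose a different port depending on its configuration.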
Repository: atrawog/bazzite-ai-plugins/bazzite-ai/skills/ollama
Author: atrawog
Stars: 0
Forks: 0
Updated: 6d ago
Added: 6d ago