llm-cost-optimization

Reduce LLM API costs without sacrificing quality. Covers prompt caching (Anthropic), local response caching, prompt compression, debouncing triggers, and cost analysis. Use when building LLM-powered features, analyzing API costs, optimizing prompts, or implementing caching strategies.
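As a quick illustration of the first technique, here is a minimal sketch of Anthropic prompt caching using the official TypeScript SDK (`@anthropic-ai/sdk`). The idea is to mark the large, stable portion of the prompt with `cache_control` so repeated requests reuse it at the cheaper cached-read rate. The model name, prompt text, and function name below are illustrative placeholders, not part of this skill.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Sketch: cache the large, stable prefix of the prompt so later requests
// within the cache window read it at the reduced cached-token rate.
// Model name and prompt contents are illustrative placeholders.
const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const LONG_SYSTEM_PROMPT = "…large, rarely-changing instructions or reference text…";

async function askWithCachedPrompt(userText: string) {
  const message = await client.messages.create({
    model: "claude-3-5-haiku-latest",
    max_tokens: 512,
    system: [
      {
        type: "text",
        text: LONG_SYSTEM_PROMPT,
        // Marks this block as a cache breakpoint; identical prefixes in
        // later calls are served from the prompt cache instead of being
        // re-billed at the full input-token rate.
        cache_control: { type: "ephemeral" },
      },
    ],
    messages: [{ role: "user", content: userText }],
  });
  return message;
}
```

Note that cache hits only apply once the cached prefix exceeds Anthropic's minimum cacheable length (on the order of a thousand tokens), so this pays off for long system prompts or reference documents rather than short ones.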

$ Install

git clone https://github.com/ebiyy/traylingo /tmp/traylingo && cp -r /tmp/traylingo/.claude/skills/llm-cost-optimization ~/.claude/skills/llm-cost-optimization

// tip: Run this command in your terminal to install the skill