llm-cost-optimization
Reduce LLM API costs without sacrificing quality. Covers prompt caching (Anthropic), local response caching, prompt compression, debouncing triggers, and cost analysis. Use when building LLM-powered features, analyzing API costs, optimizing prompts, or implementing caching strategies.
$ Install
git clone https://github.com/ebiyy/traylingo /tmp/traylingo && cp -r /tmp/traylingo/.claude/skills/llm-cost-optimization ~/.claude/skills/traylingo/
Tip: Run this command in your terminal to install the skill.
Repository: ebiyy/traylingo/.claude/skills/llm-cost-optimization
Author: ebiyy
Stars: 0
Forks: 0
Updated: 1w ago
Added: 1w ago