Marketplace

llm-integration

Integrate LLMs into applications - APIs, prompting, fine-tuning, and context management

$ Install

git clone https://github.com/pluginagentmarketplace/custom-plugin-ai-agents /tmp/custom-plugin-ai-agents && cp -r /tmp/custom-plugin-ai-agents/skills/llm-integration ~/.claude/skills/custom-plugin-ai-agents

// tip: Run this command in your terminal to install the skill


name: llm-integration
description: Integrate LLMs into applications - APIs, prompting, fine-tuning, and context management
sasmp_version: "1.3.0"
bonded_agent: 02-llm-integration
bond_type: PRIMARY_BOND
version: "2.0.0"

LLM Integration

Integrate Large Language Models with production-grade reliability.

When to Use This Skill

Invoke this skill when:

  • Connecting to Claude, OpenAI, or other LLM APIs
  • Designing effective prompts and system messages
  • Optimizing token usage and costs
  • Implementing streaming responses

Parameter Schema

| Parameter | Type | Required | Description | Default |
|---|---|---|---|---|
| provider | enum | Yes | anthropic, openai, google, local | - |
| task | string | Yes | Integration goal | - |
| streaming | bool | No | Enable streaming | true |
| max_tokens | int | No | Response token limit | 4096 |
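As a sketch, the schema above maps naturally onto a plain dataclass (the class and field names here are illustrative, not part of the skill's actual API):

```python
from dataclasses import dataclass

@dataclass
class LLMParams:
    provider: str           # required: "anthropic", "openai", "google", or "local"
    task: str               # required: the integration goal
    streaming: bool = True  # default per the schema
    max_tokens: int = 4096  # default per the schema

    def __post_init__(self):
        allowed = {"anthropic", "openai", "google", "local"}
        if self.provider not in allowed:
            raise ValueError(f"provider must be one of {allowed}")

params = LLMParams(provider="anthropic", task="summarize support tickets")
```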

Quick Start

# Anthropic Claude
from anthropic import Anthropic

client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)

# OpenAI
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)

Prompt Templates

System Prompt

SYSTEM = """You are {role}, an expert in {domain}.
Your task: {task}
Constraints: {constraints}
Output format: {format}"""

Chain-of-Thought

COT = """Think step by step:
1. Understand the problem
2. Break it down
3. Solve each part
4. Combine results"""

Cost Optimization

| Model | Input $/1M | Output $/1M | Best For |
|---|---|---|---|
| Claude Haiku | $0.25 | $1.25 | High volume |
| Claude Sonnet | $3 | $15 | Complex tasks |
| Claude Opus | $15 | $75 | Most demanding |
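A quick way to compare models is a per-request cost estimate from the table above (prices are hardcoded from the table; verify current pricing before relying on them):

```python
# $ per 1M tokens, taken from the table above.
PRICES = {
    "claude-haiku":  {"input": 0.25, "output": 1.25},
    "claude-sonnet": {"input": 3.00, "output": 15.00},
    "claude-opus":   {"input": 15.00, "output": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply on Sonnet
cost = estimate_cost("claude-sonnet", 2_000, 500)
```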

Troubleshooting

| Issue | Solution |
|---|---|
| 429 Rate Limited | Exponential backoff |
| Context overflow | Truncate/summarize |
| Poor output quality | Add examples, lower temperature |
| High costs | Use a cheaper model, cache responses |
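For the 429 case, a minimal exponential-backoff wrapper looks like this (the retryable exception type depends on your SDK; `RuntimeError` stands in here):

```python
import random
import time

def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry fn() with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # substitute your SDK's rate-limit error class
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```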

Best Practices

  • Always implement retry with backoff
  • Use streaming for better UX
  • Cache repeated queries
  • Monitor token usage

Related Skills

  • ai-agent-basics - Agent architecture
  • rag-systems - Retrieval augmentation
  • tool-calling - Function calling
