
using-ai-engineering

Route AI/ML tasks to correct Yzmir pack - frameworks, training, RL, LLMs, architectures, production

Install

git clone https://github.com/tachyon-beep/skillpacks /tmp/skillpacks && cp -r /tmp/skillpacks/plugins/yzmir-ai-engineering-expert/skills/using-ai-engineering ~/.claude/skills/skillpacks

// tip: Run this command in your terminal to install the skill


---
name: using-ai-engineering
description: Route AI/ML tasks to correct Yzmir pack - frameworks, training, RL, LLMs, architectures, production
---

Using AI Engineering

Overview

This meta-skill routes you to the right AI/ML engineering pack based on your task. Load this skill when you need ML/AI expertise but aren't sure which specific pack to use.

Core Principle: Problem type determines routing - clarify before guessing.

When to Use

Load this skill when:

  • Starting any AI/ML engineering task
  • User mentions: "neural network", "train a model", "RL agent", "fine-tune LLM", "deploy model"
  • You recognize ML/AI work but aren't sure which pack applies
  • Need to combine multiple domains (e.g., train RL + deploy)

How to Access Reference Sheets

IMPORTANT: All reference sheets are located in the SAME DIRECTORY as this SKILL.md file.

When this skill is loaded from: skills/using-ai-engineering/SKILL.md

Reference sheets are at: skills/using-ai-engineering/routing-examples.md

NOT at: skills/routing-examples.md ← WRONG PATH
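The same-directory rule can be sketched in Python: resolve reference sheets relative to the SKILL.md file's own directory, never relative to the skills root. The paths here are the example paths above; the snippet is illustrative, not part of the skill.

```python
from pathlib import Path

# Reference sheets live in the SAME directory as SKILL.md,
# so resolve them relative to the skill file's parent directory.
skill_md = Path("skills/using-ai-engineering/SKILL.md")
sheet = skill_md.parent / "routing-examples.md"

print(sheet.as_posix())  # skills/using-ai-engineering/routing-examples.md
```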


STOP - Mandatory Clarification Triggers

Before routing, if query contains ANY of these ambiguous patterns, ASK ONE clarifying question:

| Ambiguous Term | What to Ask | Why |
|---|---|---|
| "Model not working" | "What's not working - architecture, training, or deployment?" | Could be 3+ packs |
| "Improve performance" | "Performance in what sense - training speed, inference speed, or accuracy?" | Different domains |
| "Learning chatbot/agent" | "Fine-tuning language generation or optimizing dialogue policy?" | LLM vs RL vs both |
| "Train/deploy model" | "Both training AND deployment, or just one?" | May need multiple packs |
| Framework not mentioned | "What framework are you using?" | PyTorch-specific vs generic |

If you catch yourself about to guess the domain, STOP and clarify.


Routing by Problem Type

| Keywords/Signals | Route To | Why |
|---|---|---|
| PyTorch, CUDA, memory, distributed, tensor, GPU | pytorch-engineering | Foundation issues |
| NaN loss, converge, unstable, hyperparameters, gradients, LR | training-optimization | Training problems |
| Agent, policy, reward, environment, MDP, game, exploration | deep-rl | RL domain |
| LLM, fine-tune, RLHF, LoRA, GPT, prompt, instruction tuning | llm-specialist | Language models |
| Which architecture, CNN vs transformer, model selection | neural-architectures | Architecture choice |
| Deploy, serve, production, quantize, inference, latency, mobile | ml-production | Deployment |
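The keyword routing above can be sketched as a simple lookup. The pack names and keywords come from the table; `ROUTES` and `route` are hypothetical names for illustration only, and real routing should still apply the clarification triggers first.

```python
# Illustrative sketch of the keyword routing table; not part of the skill.
ROUTES = {
    "pytorch-engineering": ["pytorch", "cuda", "memory", "distributed", "tensor", "gpu"],
    "training-optimization": ["nan loss", "converge", "unstable", "hyperparameter", "gradient"],
    "deep-rl": ["agent", "policy", "reward", "environment", "mdp", "exploration"],
    "llm-specialist": ["llm", "fine-tune", "rlhf", "lora", "gpt", "prompt", "instruction tuning"],
    "neural-architectures": ["which architecture", "cnn", "transformer", "model selection"],
    "ml-production": ["deploy", "serve", "production", "quantize", "inference", "latency", "mobile"],
}

def route(query):
    """Return every pack whose keywords appear in the query (may be several)."""
    q = query.lower()
    return [pack for pack, keywords in ROUTES.items()
            if any(k in q for k in keywords)]

print(route("NaN loss when I fine-tune my LLM"))
# ['training-optimization', 'llm-specialist'] - cross-cutting, so both packs
```

Note how naive keyword matching illustrates the anchoring warning below: "transformer" fires for both architecture and LLM queries, so ambiguous matches should trigger a clarifying question rather than a guess.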

Cross-Cutting Scenarios

When task spans domains, route to ALL relevant packs in execution order:

| Query | Route To | Order |
|---|---|---|
| "Train RL agent and deploy" | deep-rl + ml-production | Train before deploy |
| "Fine-tune LLM with distributed training" | llm-specialist + pytorch-engineering | Domain first, then infrastructure |
| "LLM memory error during fine-tuning" | pytorch-engineering + llm-specialist | Foundation first |
| "RL training unstable" | training-optimization + deep-rl | General training first |

Principle: Load in order of dependency. Fix foundation before domain. Complete training before deployment.
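The default ordering in this principle can be sketched as a tier sort. The tier numbers are hypothetical, not defined by the Yzmir packs, and the "domain first, then infrastructure" row above shows the default can be overridden when the foundation pack supports rather than blocks the task.

```python
# Illustrative default load order: foundation -> training -> domain -> deployment.
# Tier numbers are hypothetical, not defined by the packs themselves.
TIER = {
    "pytorch-engineering": 0,    # foundation
    "training-optimization": 1,  # general training
    "deep-rl": 2,                # domain
    "llm-specialist": 2,         # domain
    "neural-architectures": 2,   # domain
    "ml-production": 3,          # deployment comes last
}

def order_packs(packs):
    """Sort selected packs into the default dependency order."""
    return sorted(packs, key=TIER.__getitem__)

print(order_packs(["ml-production", "deep-rl"]))
# ['deep-rl', 'ml-production'] - train before deploy
```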


Common Routing Mistakes

| Symptom | Wrong Route | Correct Route | Why |
|---|---|---|---|
| "Train agent faster" | deep-rl | training-optimization FIRST | Could be general training issue |
| "LLM memory error" | llm-specialist | pytorch-engineering FIRST | Foundation issue |
| "Deploy RL model" | deep-rl | ml-production | Deployment problem |
| "Transformer for chess" | neural-architectures | deep-rl FIRST | RL problem |
| "Chatbot learning" | llm-specialist | ASK FIRST | Could be LLM OR RL |

Pressure Resistance - Critical Discipline

Time/Emergency Pressure

| Rationalization | Reality Check | Correct Action |
|---|---|---|
| "Emergency means skip diagnostics" | Wrong diagnosis wastes MORE time | Fast systematic diagnosis IS emergency protocol |
| "Quick question means quick answer" | Wrong answer slower than 30-sec clarification | Ask ONE clarifying question |
| "Production down, no time for routing" | Wrong pack = longer outage | Correct routing (60 sec) prevents 20-min detour |

Emergency Protocol:

  1. Acknowledge urgency
  2. Fast clarification (30 sec)
  3. Route to correct pack
  4. Let pack provide emergency-appropriate approach

Authority/Hierarchy Pressure

| Rationalization | Reality Check | Correct Action |
|---|---|---|
| "PM/architect said use X" | Authority can be wrong about routing | Verify task type regardless |
| "Questioning authority is risky" | Professional duty = correct routing | Frame as verification |
| "They have more context" | Context ≠ correct technical routing | Route based on problem type |

Authority Protocol: "I see [authority] suggested X - to apply it correctly, let me verify problem type"

Sunk Cost Pressure

| Rationalization | Reality Check | Correct Action |
|---|---|---|
| "Already spent N hours in X, continue" | Sunk cost fallacy - wrong direction stays wrong | Cut losses immediately |
| "Redirecting invalidates their effort" | Correct routing validates effort by enabling success | Redirect now |
| "Too invested to change direction" | More investment in wrong direction = more waste | "Stop digging when in hole" |

Sunk Cost Protocol: "I see N hours invested - redirecting now prevents more wasted hours"

Keyword/Anchoring Pressure

| Rationalization | Reality Check | Correct Action |
|---|---|---|
| "They mentioned transformer" | Keywords mislead; problem type matters | "Transformer for what problem type?" |
| "LLM mentioned, must be llm-specialist" | LLM could have foundation issues | Check problem type first |
| "They asked to 'fix RL'" | User's framing can be wrong | Verify RL is correct approach |

Red Flags Checklist - STOP Immediately

Basic Routing Red Flags

  • ❌ "I'll guess this domain" → ASK clarifying question
  • ❌ "They probably mean X" → Verify, don't assume
  • ❌ "Just give generic advice" → Route to specific pack

Time/Emergency Red Flags

  • ❌ "Emergency means skip clarification" → Fast clarification IS emergency protocol
  • ❌ "Production issue means guess quickly" → Wrong guess = longer outage
  • ❌ "I'll skip asking to save time" → Clarifying (30 sec) faster than wrong route (5+ min)

Authority/Social Red Flags

  • ❌ "Authority figure suggested X, so route to X" → Verify task requirements
  • ❌ "PM/senior has more context, trust them" → Route based on problem type
  • ❌ "They're frustrated/exhausted, avoid redirect" → Continuing wrong path makes it worse

Sunk Cost Red Flags

  • ❌ "They invested N hours in X, continue there" → Sunk cost fallacy, cut losses
  • ❌ "Redirecting invalidates their effort" → Correct routing enables success
  • ❌ "Too much sunk cost to change direction" → More investment = more waste

Keyword/Anchoring Red Flags

  • ❌ "They mentioned transformer/CNN, discuss architecture" → Check problem type first
  • ❌ "LLM/RL mentioned, route to that domain" → Could be foundation issue
  • ❌ "Technical jargon means they know domain" → Vocabulary ≠ correct self-diagnosis

All of these mean: Either ASK ONE clarifying question, or reconsider your routing logic.


Comprehensive Rationalization Prevention Table

| Pressure Type | Rationalization | Counter-Narrative | Correct Action |
|---|---|---|---|
| Time | "Emergency means skip diagnostics" | Wrong diagnosis wastes MORE time | "Fast clarification ensures fastest fix" |
| Time | "Quick question means quick answer" | Wrong answer slower than clarification | "Quick clarification prevents wrong path" |
| Time | "Production down, no time for routing" | Wrong pack = longer outage | "60-second routing prevents 20-minute detour" |
| Authority | "PM/architect said use X pack" | Authority can be wrong | "To apply X correctly, let me verify" |
| Authority | "Senior colleague suggested X" | Seniority ≠ correct routing | "To use suggestion effectively: [verify]" |
| Sunk Cost | "Already spent 6 hours in pack X" | Sunk cost fallacy | "Redirecting now prevents more wasted hours" |
| Sunk Cost | "Redirecting invalidates effort" | Correct routing enables success | "Redirect so effort succeeds" |
| Keywords | "User mentioned transformers" | Keywords mislead | "Clarifying problem type first" |
| Keywords | "They said LLM, route to llm-specialist" | LLM could have foundation issues | "Memory error is foundation issue" |
| Anchoring | "They asked to 'fix RL'" | User's framing can be wrong | "Before fixing, verify RL is correct" |
| Complexity | "Too many domains, just pick one" | Cross-cutting needs multi-pack | Route to ALL relevant packs |
| Social | "They're frustrated, don't redirect" | Continuing wrong path increases frustration | "Redirecting prevents more frustration" |
| Demanding | "They said 'just tell me', skip questions" | Tone doesn't change routing needs | "To help effectively, I need: [question]" |

When NOT to Use Yzmir Skills

Skip AI/ML skills when:

  • Simple data processing without ML
  • Statistical analysis without neural networks
  • Data cleaning/ETL without model training

Red flag: If you're not training or deploying a neural network, or implementing ML algorithms, you probably don't need Yzmir.


Routing Summary Flowchart

User Query
    ↓
Is query ambiguous? → YES → ASK clarifying question
    ↓ NO
Identify problem type:
    - Framework error? → pytorch-engineering
    - Training not working? → training-optimization
    - RL problem? → deep-rl
    - LLM fine-tuning? → llm-specialist
    - Architecture choice? → neural-architectures
    - Production deployment? → ml-production
    ↓
Cross-cutting? → YES → Route to MULTIPLE packs (order by dependency)
    ↓ NO
Route to single pack

Examples

See routing-examples.md for detailed worked examples:

  • Ambiguous queries
  • Cross-cutting scenarios
  • Misleading keywords
  • Time pressure handling
  • Foundation issues disguised as domain issues
  • Emergency + authority pressure
  • Sunk cost + frustration
  • Multiple pressures combined

AI Engineering Plugin Router Catalog

This meta-router directs you to the appropriate Yzmir AI/ML plugin:

  1. yzmir-pytorch-engineering - PyTorch framework: CUDA, memory, distributed, tensor operations
  2. yzmir-training-optimization - Training problems: NaN losses, convergence, gradients, learning rate
  3. yzmir-deep-rl - Reinforcement learning: Agents, policies, rewards, environments, MDP
  4. yzmir-llm-specialist - Large language models: Fine-tuning, RLHF, LoRA, prompt engineering
  5. yzmir-neural-architectures - Architecture selection: CNN vs transformer, model design
  6. yzmir-ml-production - Production deployment: Serving, quantization, inference, MLOps

Remember: When in doubt, ASK. Clarification takes seconds, wrong routing takes minutes.

Repository

tachyon-beep/skillpacks/plugins/yzmir-ai-engineering-expert/skills/using-ai-engineering

Author: tachyon-beep