
evaluating-machine-learning-models

This skill allows Claude to evaluate machine learning models using a comprehensive suite of metrics. It should be used when the user requests model performance analysis, validation, or testing. Claude can use this skill to assess model accuracy, precision, recall, F1-score, and other relevant metrics. Trigger this skill when the user mentions "evaluate model", "model performance", "testing metrics", "validation results", or requests a comprehensive "model evaluation".
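For context, here is a minimal sketch of the kind of evaluation the skill performs, assuming scikit-learn is available; the labels and predictions below are illustrative placeholders, not part of the skill itself.

# Minimal sketch, assuming scikit-learn is installed; y_true and y_pred
# are illustrative placeholders for real labels and model predictions.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]   # ground-truth labels (example data)
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]   # model predictions (example data)

print(f"accuracy:  {accuracy_score(y_true, y_pred):.3f}")
print(f"precision: {precision_score(y_true, y_pred):.3f}")
print(f"recall:    {recall_score(y_true, y_pred):.3f}")
print(f"f1-score:  {f1_score(y_true, y_pred):.3f}")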

$ Install

git clone https://github.com/jeremylongshore/claude-code-plugins-nixtla /tmp/claude-code-plugins-nixtla && cp -r /tmp/claude-code-plugins-nixtla/archive/backups-20251108/skills-migration-20251108-070147/plugins/ai-ml/model-evaluation-suite/skills/model-evaluation-suite ~/.claude/skills/claude-code-plugins-nixtla

// tip: Run this command in your terminal to install the skill

Repository: jeremylongshore/claude-code-plugins-nixtla/archive/backups-20251108/skills-migration-20251108-070147/plugins/ai-ml/model-evaluation-suite/skills/model-evaluation-suite
Author: jeremylongshore
Stars: 2
Forks: 0
Updated: 3d ago
Added: 1w ago