neo-llm-security
AI security co-pilot for identifying, testing, and fixing vulnerabilities in LLM-powered applications.Use when: (1) Securing LLM applications or agents, (2) Generating security test suites with promptfoo,(3) Testing for prompt injection, jailbreaking, data exfiltration, (4) Hardening system prompts,(5) Compliance mapping for OWASP LLM Top 10, NIST AI RMF, CJIS, SOC2, (6) Threat modeling AI systems,(7) Analyzing security eval results, (8) Research on LLM attack/defense techniques.Triggers: "secure my LLM", "prompt injection", "jailbreak test", "AI security", "red team","system prompt hardening", "LLM vulnerability", "promptfoo", "OWASP LLM", "AI compliance".
$ インストール
git clone https://github.com/majiayu000/claude-skill-registry /tmp/claude-skill-registry && cp -r /tmp/claude-skill-registry/skills/security/neo-llm-security ~/.claude/skills/claude-skill-registry// tip: Run this command in your terminal to install the skill
Repository

majiayu000
Author
majiayu000/claude-skill-registry/skills/security/neo-llm-security
0
Stars
0
Forks
Updated5d ago
Added5d ago