ground

Verify external libraries, APIs, and frameworks against current documentation. Use when implementing external dependencies, when unsure if a pattern is current, when the user mentions "ground", "verify docs", "exa", or "current API".

Installation

git clone https://github.com/Mburdo/knowledge_and_vibes /tmp/knowledge_and_vibes && cp -r /tmp/knowledge_and_vibes/.claude/skills/ground ~/.claude/skills/ground

// tip: Run this command in your terminal to install the skill


---
name: ground
description: Verify external libraries, APIs, and frameworks against current documentation. Use when implementing external dependencies, when unsure if a pattern is current, when the user mentions "ground", "verify docs", "exa", or "current API".
---

Ground — External Documentation

Verify external dependencies against current documentation before implementation. Direct execution.

Design rationale: This skill executes directly rather than spawning subagents because grounding is a simple query + verification sequence (~300 tokens), not substantial analytical work. Per Lita research: "Simple agents achieve 97% of complex system performance with 15x less code."

When This Applies

| Signal | Action |
|--------|--------|
| About to write import for external lib | Ground first |
| Using API/SDK methods | Verify current syntax |
| Framework-specific patterns | Check version compatibility |
| Auth/security code | Always verify current best practices |
| User says "ground" or "verify" | Run full grounding check |
| New library or major version | Deep grounding |

Default: When uncertain about external APIs, ground.


Tool Reference

Exa MCP Tools

| Tool | Purpose |
|------|---------|
| `web_search_exa(query)` | General documentation search |
| `get_code_context_exa(query)` | Code examples from GitHub/tutorials |
| `crawling(url)` | Extract content from specific URLs |

Decision Tree

Where does truth live?

EXTERNAL DOCS ──► /ground (this skill)
                 "What's the current API for X?"

CODEBASE ───────► /explore (warp-grep)
                 "How does X work in our code?"

HISTORY ────────► /recall (cm + cass)
                 "How did we do this before?"

TASKS ──────────► bv --robot-*
                 "What should I work on?"

Execution Flow

Execute these steps directly. No subagents needed.

Step 1: Identify What Needs Grounding

Categories that need verification:

| Category | Why | Risk |
|----------|-----|------|
| Imports/Initialization | Syntax changes between versions | High |
| API Methods | Methods get renamed/deprecated | High |
| Configuration | Options/flags evolve | Medium |
| Async Patterns | Async APIs vary significantly | Medium |
| Auth/Security | Security best practices change | Critical |
| Data Validation | Validators/schemas evolve | Medium |

Step 2: Construct Query

Formula:

{library_name} {specific_feature} {version_if_known} 2024 2025

Examples:

FastAPI Pydantic v2 model_validator 2024 2025
Next.js 14 app router server components
React useOptimistic hook 2024
Prisma findMany where clause 2024

Strengthening queries:

  • Add version number if known
  • Add year for recency (2024, 2025)
  • Add "official" or "docs" for authoritative sources
  • Add "migration" if moving between versions
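The formula and strengthening rules above can be sketched as a small helper. This is illustrative only — `build_query` and its parameters are hypothetical names, not part of the Exa tools:

```python
def build_query(library, feature, version=None, years=("2024", "2025"),
                official=False, migration_from=None):
    """Assemble a grounding query per the formula:
    {library} {feature} {version} {years}, with optional strengtheners."""
    parts = [library, feature]
    if version:
        parts.append(version)  # pin results to your dependency's version
    if migration_from:
        parts.append(f"{migration_from} migration")  # moving between versions
    parts.extend(years)  # years bias results toward recent docs
    if official:
        parts.append("official docs")  # bias toward authoritative sources
    return " ".join(parts)

build_query("FastAPI", "Pydantic v2 model_validator")
# → "FastAPI Pydantic v2 model_validator 2024 2025"
```

The returned string is what you would pass to `web_search_exa`.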

Step 3: Execute Search

For documentation:

web_search_exa("{library} {feature} {version} 2024 2025")

For code examples:

get_code_context_exa("{library} {pattern} implementation example")

For specific page:

crawling("{url}")

Step 4: Verify Results

| Criterion | Pass If |
|-----------|---------|
| Source | Official docs or reputable repo |
| Freshness | Updated within 12 months |
| Version | Matches your dependency |
| Completeness | Full import + usage pattern |
| Status | Not deprecated |
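These criteria can be folded into a simple checklist function. A minimal sketch — the function name, field names, and the domain allowlist are assumptions for illustration:

```python
from datetime import date, timedelta

def verify_result(source_domain, last_updated, doc_version, dep_version,
                  has_full_example, deprecated,
                  official_domains=("docs.pydantic.dev", "react.dev")):
    """Apply the pass criteria from the table; return the failing criteria."""
    failures = []
    if source_domain not in official_domains:
        failures.append("source")        # not an official/reputable source
    if date.today() - last_updated > timedelta(days=365):
        failures.append("freshness")     # older than 12 months
    if doc_version != dep_version:
        failures.append("version")       # docs don't match your dependency
    if not has_full_example:
        failures.append("completeness")  # missing full import + usage pattern
    if deprecated:
        failures.append("status")
    return failures
```

An empty return means the result passes all five criteria; any entries tell you which row of the table to re-check.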

Step 5: Record Grounding Status

Track in your work:

## Grounding Status
| Pattern | Query | Source | Status |
|---------|-------|--------|--------|
| `@model_validator` | "Pydantic v2 2024" | docs.pydantic.dev | ✅ Verified |
| `useOptimistic` | "React 19 2024" | react.dev | ✅ Verified |

Status values:

  • ✅ Verified — Matches current docs
  • ⚠️ Changed — API changed, updated approach
  • ❌ Deprecated — Found alternative
  • ❓ Unverified — Couldn't confirm, flagged
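The grounding table above can be generated rather than hand-written. A sketch, assuming the status keys and `grounding_table` helper shown here (both hypothetical):

```python
STATUSES = {
    "verified": "✅ Verified",
    "changed": "⚠️ Changed",
    "deprecated": "❌ Deprecated",
    "unverified": "❓ Unverified",
}

def grounding_table(rows):
    """Render (pattern, query, source, status_key) tuples as the
    markdown grounding-status table used in this skill."""
    lines = ["| Pattern | Query | Source | Status |",
             "|---------|-------|--------|--------|"]
    for pattern, query, source, status in rows:
        lines.append(f'| `{pattern}` | "{query}" | {source} | {STATUSES[status]} |')
    return "\n".join(lines)

print(grounding_table([
    ("@model_validator", "Pydantic v2 2024", "docs.pydantic.dev", "verified"),
]))
```

Keeping the table machine-generated makes the "before commit" check (no remaining ❓ rows) a one-line scan over the status keys.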

Query Patterns

Current API Documentation

"{library} {method} documentation 2024"
"{library} {feature} API reference"
"{library} official docs {feature}"

Migration Between Versions

"{library} v{old} to v{new} migration"
"{library} {version} breaking changes"
"{library} upgrade guide {version}"

Code Examples

get_code_context_exa("{library} {pattern} implementation example")
get_code_context_exa("{library} {use_case} tutorial")

Security/Auth Patterns

"{auth_method} best practices 2024"
"{library} authentication {pattern} security"
"OAuth PKCE {language} 2024"

Error Resolution

"{library} {error_message} fix"
"{library} {error_type} troubleshooting"

Grounding Depth Levels

| Depth | When | What to Check |
|-------|------|---------------|
| Quick | Familiar pattern, just confirming | One query, verify method exists |
| Standard | Normal implementation | Query + check for deprecation warnings |
| Deep | Security/auth, new library, major version | Multiple queries, read changelog, check issues |

Version Sensitivity Signals

Ground more carefully when you see:

| Signal | Risk |
|--------|------|
| Major version in deps (v1 → v2) | Breaking changes likely |
| Library < 2 years old | API still evolving |
| "experimental" or "beta" in docs | May change without notice |
| Security-related code | Best practices evolve |
| AI training data gap | Libs released after training cutoff |

Failure Handling

| Issue | Response |
|-------|----------|
| No results | Broaden query, try alternate terms |
| Conflicting info | Official docs > GitHub > tutorials |
| Only outdated info | Mark ❓, proceed with caution, add TODO |
| Can't verify | Flag for human review |

Query Anti-Patterns

| Bad Query | Problem | Better Query |
|-----------|---------|--------------|
| "how to use {library}" | Too vague | "{library} {specific_feature} 2024" |
| "{library} tutorial" | May be outdated | "{library} {feature} official docs" |
| "best {library}" | Opinion, not docs | "{library} {pattern} documentation" |
| "{library}" alone | No specificity | Add feature + version + year |

Query Strengthening

If initial query returns poor results:

  1. Add version: "React 19 useOptimistic" vs "React useOptimistic"
  2. Add year: "FastAPI middleware 2024" vs "FastAPI middleware"
  3. Add "official": "Next.js official docs app router"
  4. Be more specific: "Prisma findMany where clause" vs "Prisma queries"
  5. Try alternate terms: "authentication" vs "auth" vs "login"
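The escalation steps above can be expressed as a fallback sequence: try each progressively stronger query until one returns usable results. A sketch — `strengthened_queries` is a hypothetical helper, and the commented loop stands in for whichever Exa tool you call:

```python
def strengthened_queries(library, feature, version=None):
    """Yield progressively stronger queries, following the
    strengthening steps: base, +version, +year, +"official"."""
    base = f"{library} {feature}"
    yield base
    if version:
        yield f"{library} {version} {feature}"  # 1. add version
    yield f"{base} 2024"                        # 2. add year for recency
    yield f"{library} official docs {feature}"  # 3. bias toward official docs

# Usage sketch: stop at the first query with usable results, e.g.
# for q in strengthened_queries("React", "useOptimistic", "19"):
#     results = web_search_exa(q)
#     if results: break
```

Steps 4 and 5 (more specific feature wording, alternate terms) change the inputs rather than the template, so they stay manual.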

Progressive Grounding

For large implementations:

  1. Start: Ground the core imports/setup
  2. As you go: Ground each new external method before using
  3. Before commit: Review grounding table, verify nothing ❓

Don't try to ground everything upfront — ground just-in-time as you encounter external deps.


Integration with Workflow

Before implementing (via /advance)

# Check external dependencies
web_search_exa("{library} {feature} 2024")

During implementation

# Just-in-time verification
get_code_context_exa("{specific_method} example")

Before commit

Review grounding table, ensure all ❓ are resolved or documented.


Requirements

Requires Exa API key configured:

claude mcp add exa -s user \
  -e EXA_API_KEY=your-key \
  -- npx -y exa-mcp-server

Quick Reference

# Documentation search
web_search_exa("{library} {feature} {version} 2024 2025")

# Code examples
get_code_context_exa("{library} {pattern} implementation example")

# Specific page
crawling("{url}")

Query formula:

{library} {feature} {version} 2024 2025

Anti-Patterns

| Don't | Why | Do Instead |
|-------|-----|------------|
| Skip grounding for external APIs | Training data may be stale | Ground before using |
| Use tutorials over docs | Tutorials get outdated | Prefer official docs |
| Ignore version mismatches | Breaking changes exist | Verify version compatibility |
| Ground everything upfront | Wastes time | Ground just-in-time |
| Skip grounding for "familiar" libs | APIs change | Quick verify is still worth it |

See Also

  • /recall — Past session patterns
  • /explore — Codebase search
  • /advance — Bead workflow (includes grounding step)