
doc-scraper

Scrape documentation websites into organized reference files. Use when converting docs sites to searchable references or building Claude skills.

Install

git clone https://github.com/jmagly/ai-writing-guide /tmp/ai-writing-guide && cp -r /tmp/ai-writing-guide/agentic/code/addons/doc-intelligence/skills/doc-scraper ~/.claude/skills/ai-writing-guide



---
name: doc-scraper
description: Scrape documentation websites into organized reference files. Use when converting docs sites to searchable references or building Claude skills.
tools: Read, Write, Bash, WebFetch
---

Documentation Scraper Skill

Purpose

Single responsibility: Convert documentation websites into organized, categorized reference files suitable for Claude skills or offline archives. (BP-4)

Grounding Checkpoint (Archetype 1 Mitigation)

Before executing, VERIFY:

  • Target URL is accessible (test with curl -I)
  • Documentation structure is identifiable (inspect page for content selectors)
  • Output directory is writable
  • Rate limiting requirements are known (check robots.txt)

DO NOT proceed without verification. Inspect before scraping.
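The robots.txt part of this checkpoint can be sketched programmatically. The parser below is a hypothetical helper, not part of skill-seekers: it matches the User-agent token exactly and ignores the full group-precedence rules of the Robots Exclusion Protocol, so treat it as a starting point for discovering a site's requested crawl delay.

```python
def crawl_delay(robots_txt, agent="*"):
    """Return the Crawl-delay (seconds) declared for `agent`, or None.

    Simplified: matches the User-agent token exactly; real robots.txt
    precedence (longest-match, wildcard fallback) is richer than this.
    """
    delay = None
    applies = False        # does the current group name our agent?
    seen_directive = True  # a directive after user-agent lines closes the group
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()
        if key == "user-agent":
            if seen_directive:       # first user-agent line starts a new group
                applies = False
                seen_directive = False
            if value == agent:
                applies = True
        else:
            seen_directive = True
            if key == "crawl-delay" and applies and delay is None:
                delay = float(value)
    return delay
```

Fetch the file first (e.g. `curl <base-url>/robots.txt`) and feed the text in; if the result is None, ask the user rather than assuming a safe rate.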

Uncertainty Escalation (Archetype 2 Mitigation)

ASK USER instead of guessing when:

  • Content selector is ambiguous (multiple <article> or <main> elements)
  • URL patterns unclear (can't determine include/exclude rules)
  • Category mapping uncertain (content doesn't fit predefined categories)
  • Rate limiting unknown (no robots.txt, unclear ToS)

NEVER substitute missing configuration with assumptions.

Context Scope (Archetype 3 Mitigation)

| Context Type | Included | Excluded |
|---|---|---|
| RELEVANT | Target URL, selectors, output path | Unrelated documentation |
| PERIPHERAL | Similar site examples for selector hints | Historical scrape data |
| DISTRACTOR | Other projects, unrelated URLs | Previous failed attempts |

Workflow Steps

Step 1: Verify Target (Grounding)

# Test URL accessibility
curl -I <target-url>

# Check robots.txt
curl <base-url>/robots.txt

# Inspect page structure (use browser dev tools or fetch sample)

Step 2: Create Configuration

Generate scraper config based on inspection:

{
  "name": "skill-name",
  "description": "When to use this skill",
  "base_url": "https://docs.example.com/",
  "selectors": {
    "main_content": "article",
    "title": "h1",
    "code_blocks": "pre code"
  },
  "url_patterns": {
    "include": ["/docs", "/guide", "/api"],
    "exclude": ["/blog", "/changelog", "/releases"]
  },
  "categories": {
    "getting_started": ["intro", "quickstart", "installation"],
    "api_reference": ["api", "reference", "methods"],
    "guides": ["guide", "tutorial", "how-to"]
  },
  "rate_limit": 0.5,
  "max_pages": 500
}
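Before committing to a long scrape, it can help to sanity-check the generated config. The sketch below is a hypothetical pre-flight check, not skill-seekers' own schema validation; `REQUIRED` and `DEFAULTS` simply mirror the fields shown in the example above.

```python
import json

REQUIRED = ("name", "base_url")
DEFAULTS = {"rate_limit": 0.5, "max_pages": 500}

def load_config(path):
    """Load config.json, reject missing required keys, fill in defaults."""
    with open(path) as f:
        cfg = json.load(f)
    missing = [k for k in REQUIRED if k not in cfg]
    if missing:
        raise ValueError(f"config missing required keys: {missing}")
    if not cfg["base_url"].startswith(("http://", "https://")):
        raise ValueError("base_url must be an absolute http(s) URL")
    return {**DEFAULTS, **cfg}  # user-supplied values override defaults
```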

Step 3: Execute Scraping

Option A: With skill-seekers (if installed)

# Verify skill-seekers is available
pip show skill-seekers

# Run scraper
skill-seekers scrape --config config.json

# For large docs, use async mode
skill-seekers scrape --config config.json --async --workers 8

Option B: Manual scraping guidance

  1. Use sitemap.xml or crawl starting URL
  2. Extract content using configured selectors
  3. Categorize pages based on URL patterns and keywords
  4. Save to organized directory structure
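Steps 2–3 of the manual path can be sketched as follows. `should_scrape` and `categorize` are illustrative names, not an existing API; they assume the include/exclude patterns and category keyword lists from the config format above.

```python
from urllib.parse import urlparse

def should_scrape(url, include, exclude):
    """Keep a URL only if its path matches an include pattern and no exclude."""
    path = urlparse(url).path
    if any(pat in path for pat in exclude):
        return False
    return any(pat in path for pat in include)

def categorize(url, title, categories, default="guides"):
    """Pick the first category whose keywords appear in the URL path or title."""
    text = (urlparse(url).path + " " + title).lower()
    for category, keywords in categories.items():
        if any(kw in text for kw in keywords):
            return category
    return default
```

A page that matches no category keywords falls into the default bucket; per the escalation rules above, a large default bucket is a signal to ask the user rather than guess new keywords.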

Step 4: Validate Output

# Check output structure
ls -la output/<skill-name>/

# Verify content quality
head -50 output/<skill-name>/references/index.md

# Count extracted pages
find output/<skill-name>_data/pages -name "*.json" | wc -l

Recovery Protocol (Archetype 4 Mitigation)

On error:

  1. PAUSE - Stop scraping, preserve already-fetched pages
  2. DIAGNOSE - Check error type:
    • Connection error → Verify URL, check network
    • Selector not found → Re-inspect page structure
    • Rate limited → Increase delay, reduce workers
    • Memory/disk → Reduce batch size, clear temp files
  3. ADAPT - Adjust configuration based on diagnosis
  4. RETRY - Resume from checkpoint (max 3 attempts)
  5. ESCALATE - Ask user for guidance
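The PAUSE → ADAPT → RETRY → ESCALATE loop can be sketched as a retry wrapper. `fetch_with_recovery` is a hypothetical helper: it backs off between attempts (the simplest ADAPT step) and raises after the limit so the caller can escalate to the user.

```python
import time

def fetch_with_recovery(fetch, url, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fetch(url), backing off and retrying up to max_attempts times."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch(url)
        except Exception as err:  # in practice, catch specific error types
            last_error = err
            if attempt < max_attempts:
                sleep(base_delay * attempt)  # grow the delay each retry
    # ESCALATE: all attempts failed; surface the last error to the user
    raise RuntimeError(f"giving up on {url} after {max_attempts} attempts") from last_error
```

Distinguishing error types (connection vs. selector vs. rate limit) would let the ADAPT step do more than back off, e.g. reduce workers on HTTP 429.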

Checkpoint Support

State saved to: .aiwg/working/checkpoints/doc-scraper/

Resume interrupted scrape:

skill-seekers scrape --config config.json --resume

Clear checkpoint and start fresh:

skill-seekers scrape --config config.json --fresh

Output Structure

output/<skill-name>/
├── SKILL.md              # Main skill description
├── references/           # Categorized documentation
│   ├── index.md          # Category index
│   ├── getting_started.md
│   ├── api_reference.md
│   └── guides.md
├── scripts/              # (empty, for user additions)
└── assets/               # (empty, for user additions)

output/<skill-name>_data/
├── pages/                # Raw scraped JSON (one per page)
└── summary.json          # Scrape statistics

Configuration Templates

Minimal Config

{
  "name": "myframework",
  "base_url": "https://docs.example.com/",
  "max_pages": 100
}

Full Config

{
  "name": "myframework",
  "description": "MyFramework documentation for building web apps",
  "base_url": "https://docs.example.com/",
  "selectors": {
    "main_content": "article, main, div[role='main']",
    "title": "h1, .title",
    "code_blocks": "pre code, .highlight code",
    "navigation": "nav, .sidebar"
  },
  "url_patterns": {
    "include": ["/docs/", "/api/", "/guide/"],
    "exclude": ["/blog/", "/changelog/", "/v1/", "/v2/"]
  },
  "categories": {
    "getting_started": ["intro", "quickstart", "install", "setup"],
    "concepts": ["concept", "overview", "architecture"],
    "api": ["api", "reference", "method", "function"],
    "guides": ["guide", "tutorial", "how-to", "example"],
    "advanced": ["advanced", "internals", "customize"]
  },
  "rate_limit": 0.5,
  "max_pages": 1000,
  "checkpoint": {
    "enabled": true,
    "interval": 100
  }
}

Troubleshooting

| Issue | Diagnosis | Solution |
|---|---|---|
| No content extracted | Selector mismatch | Inspect page, update main_content selector |
| Wrong pages scraped | URL pattern issue | Check include/exclude patterns |
| Rate limited | Too aggressive | Increase rate_limit to 1.0+ seconds |
| Memory issues | Too many pages | Add max_pages limit, enable checkpoints |
| Categories wrong | Keyword mismatch | Update category keywords in config |

References

Repository

Author: jmagly
Path: jmagly/ai-writing-guide/agentic/code/addons/doc-intelligence/skills/doc-scraper