perplexity-researcher-pro

Complex research requiring deeper analysis, multi-step reasoning, and sophisticated source evaluation for technical, academic, or specialized domain queries needing expert-level analysis, high-stakes decisions, or multi-layered problem solving.

$ Installer

git clone https://github.com/d-o-hub/rust-self-learning-memory /tmp/rust-self-learning-memory && cp -r /tmp/rust-self-learning-memory/.opencode/skill/perplexity-researcher-pro ~/.claude/skills/rust-self-learning-memory

// tip: Run this command in your terminal to install the skill


name: perplexity-researcher-pro
description: Complex research requiring deeper analysis, multi-step reasoning, and sophisticated source evaluation for technical, academic, or specialized domain queries needing expert-level analysis, high-stakes decisions, or multi-layered problem solving.

Perplexity Researcher Pro

Advanced research agent for complex queries requiring expert-level analysis, multi-step reasoning, and sophisticated source evaluation.

Purpose

Provide deep research and analysis for complex technical, academic, or specialized domain queries that require:

  • Multi-step logical analysis and inference
  • Cross-domain knowledge synthesis
  • Complex pattern recognition and trend analysis
  • Enhanced fact-checking with multiple source verification
  • Repository maintenance analysis (last commit frequency, issue handling, release activity)
  • Website source validation for 2025 relevance and freshness
  • Bias detection and balanced perspective presentation
  • Technical documentation analysis with code examples
  • Academic rigor with methodology evaluation
  • Source credibility assessment based on maintenance status

When to Use

Use this skill for:

  • Complex Technical Research: Architecture decisions, technology comparisons, API research
  • Academic Research: Literature review, methodology evaluation, theoretical analysis
  • Multi-Layered Problem Solving: Issues requiring multiple perspectives and deep analysis
  • High-Stakes Decisions: Strategic planning, architecture migrations, technology choices
  • Source Verification: Validating information across multiple sources with credibility assessment
  • Repository Analysis: Evaluating library health, maintenance status, community activity
  • Deep Technical Documentation: Analyzing complex APIs, protocols, specifications

Core Architecture

Task Planning

  • Break down complex queries into structured research tasks
  • Define clear success criteria and deliverables
  • Identify information gaps and research priorities
  • Plan sequential analysis with validation checkpoints
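
As a concrete illustration, the plan can be held in a small structured form like the sketch below; the field names (query, success_criteria, priority) are assumptions made for the example, not a fixed schema used by the skill.

# Minimal sketch of a structured research plan, assuming a simple in-memory model.
# Field names are illustrative, not a fixed schema used by the skill.
from dataclasses import dataclass, field

@dataclass
class ResearchTask:
    query: str                      # the sub-question this task answers
    success_criteria: list[str]     # what "done" looks like for this task
    priority: int = 1               # 1 = do first
    findings: list[str] = field(default_factory=list)

@dataclass
class ResearchPlan:
    topic: str
    tasks: list[ResearchTask] = field(default_factory=list)

    def next_task(self) -> "ResearchTask | None":
        """Return the highest-priority task with no findings yet (a validation checkpoint)."""
        open_tasks = [t for t in self.tasks if not t.findings]
        return min(open_tasks, key=lambda t: t.priority) if open_tasks else None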

File System Backend

  • Maintain persistent state management across research sessions
  • Track sources, findings, and analysis progress
  • Enable resumable research workflows
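
For example, session state could be persisted as JSON files in a working directory, roughly as sketched below; the directory name and state shape are assumptions for illustration.

# Minimal sketch of file-backed research state, assuming JSON files in a working
# directory. The directory name and state shape are illustrative assumptions.
import json
from pathlib import Path

STATE_DIR = Path(".research_state")   # hypothetical location

def save_state(session_id: str, state: dict) -> None:
    """Persist sources, findings, and progress so the session can be resumed."""
    STATE_DIR.mkdir(exist_ok=True)
    (STATE_DIR / f"{session_id}.json").write_text(json.dumps(state, indent=2))

def load_state(session_id: str) -> dict:
    """Reload a previous session, or start an empty one."""
    path = STATE_DIR / f"{session_id}.json"
    return json.loads(path.read_text()) if path.exists() else {"sources": [], "findings": []}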

Multi-Step Reasoning

  • Reflect on research process and self-correct
  • Re-evaluate findings as new information emerges
  • Identify contradictions and resolve through deeper investigation
  • Apply Bayesian reasoning for probability assessment
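
The Bayesian step can be as simple as the sketch below, which updates belief in a claim as supporting sources arrive; the likelihood values are assumptions chosen only to illustrate the update.

# Minimal sketch of Bayesian updating for weighing a claim against evidence.
# The likelihood values below are illustrative assumptions, not calibrated numbers.
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(claim | evidence) from a prior and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Example: start at 50% belief, then see two independent authoritative sources agree.
belief = 0.5
for _ in range(2):
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
print(round(belief, 2))   # ~0.88 after two supporting sources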

Comprehensive Memory

  • Cross-reference information across research sessions
  • Learn from previous research to improve efficiency
  • Track patterns in source quality and information reliability

Research Methodology

Phase 1: Planning

1. Analyze Research Query

  • Parse User Intent: What is being asked?
  • Identify Domain: Technical, academic, business, etc.
  • Determine Scope: How deep does the analysis need to be?
  • Assess Complexity: Simple, Standard, or Deep research required?
  • Set Time Constraints: Quick (15-20 min), Standard (30-45 min), or Deep (60-90 min)?

2. Define Success Criteria

  • Information Quality: Specific, accurate, current, well-sourced
  • Analysis Depth: Multi-layered, covers all perspectives
  • Credibility: Sources are authoritative and actively maintained
  • Actionability: Clear recommendations with implementation guidance

Phase 2: Information Gathering

1. Strategic Searches

# Progressive search methodology
# Round 1: Broad, orienting search
websearch query: "[topic] overview 2025"

# Round 2: Targeted, specific searches
websearch query: "[topic] technical implementation guide"
websearch query: "[topic] best practices 2025"

# Round 3: Deep dive searches
websearch query: "[topic] architecture comparison analysis"
websearch query: "[topic] detailed technical documentation"

2. Source Discovery

  • Official Documentation: Vendor docs, RFCs, specifications
  • Expert Blogs: Recognized industry experts, engineering teams
  • Academic Sources: Papers, conference proceedings, journals
  • Community Resources: GitHub issues, Stack Overflow, forums
  • Repositories: Source code with maintenance analysis

3. Source Evaluation Framework

Priority 1 ⭐⭐⭐ (Fetch First)
  • Official documentation from maintainers
  • GitHub issues/PRs from core contributors
  • Production case studies from reputable companies
  • Recent expert blog posts (within current year)
Priority 2 ⭐⭐ (Fetch If Needed)
  • Technical blogs from recognized experts
  • Stack Overflow with high votes (>50) and recent activity
  • Conference presentations from domain experts
  • Tutorial sites with technical depth
Priority 3 ⭐ (Skip Unless Critical)
  • Generic tutorials without author credentials
  • Posts older than 2-3 years for fast-moving tech
  • Forum discussions without clear resolution
  • Marketing/promotional content
Red Flags 🚫 (Avoid)
  • AI-generated content farms
  • Duplicate content aggregators
  • Paywalled content without abstracts
  • Sources contradicting official docs without justification
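
One way to apply this rubric mechanically is to score candidate sources before fetching, as in the sketch below; the category names, weights, and the two-year staleness cutoff are assumptions that mirror the tiers above rather than a fixed standard.

# Minimal sketch of turning the priority tiers into a fetch-ordering score.
# Category names, weights, and the staleness cutoff are illustrative assumptions.
def source_score(kind: str, age_years: float, fast_moving_tech: bool) -> int:
    tiers = {
        "official_docs": 3, "maintainer_issue": 3, "production_case_study": 3,
        "expert_blog": 2, "stackoverflow_high_votes": 2, "conference_talk": 2,
        "generic_tutorial": 1, "unresolved_forum_thread": 1,
    }
    score = tiers.get(kind, 0)          # unknown or red-flag sources score 0
    if fast_moving_tech and age_years > 2:
        score -= 1                      # stale sources drop a tier for fast-moving topics
    return max(score, 0)

candidates = [("official_docs", 0.5), ("generic_tutorial", 3.0), ("expert_blog", 1.0)]
fetch_order = sorted(candidates, key=lambda c: source_score(*c, fast_moving_tech=True), reverse=True)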

Phase 3: Content Analysis

1. Content Fetching

# Use WebFetch to retrieve full content
webfetch url: "https://official-docs-url"

# Analyze documentation structure
# Extract key sections, examples, code snippets
# Identify version information and dates

2. Repository Analysis

# Analyze repository health
# Check: Last commit frequency, recent activity
# Check: Open issues, issue handling responsiveness
# Check: Release frequency and versioning
# Check: Star/Fork count (GitHub), contributors

# Example repository health metrics
git -C /path/to/repo log --oneline -20
git -C /path/to/repo log -1 --format="%cd" --since="6 months ago"
gh repo view [owner/repo] --json stargazerCount,forkCount --jq '.stargazerCount, .forkCount'
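
To turn those raw metrics into a judgment, a small helper like the one below could classify a locally cloned repository by commit recency; the 90-day and one-year thresholds are illustrative assumptions, not an established standard.

# Minimal sketch of labeling repository health from commit recency, assuming the
# repository is already cloned locally. Thresholds are illustrative assumptions.
import subprocess
from datetime import datetime, timezone

def days_since_last_commit(repo_path: str) -> int:
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "-1", "--format=%ct"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    last = datetime.fromtimestamp(int(out), tz=timezone.utc)
    return (datetime.now(timezone.utc) - last).days

def maintenance_label(repo_path: str) -> str:
    days = days_since_last_commit(repo_path)
    if days <= 90:
        return "actively maintained"
    if days <= 365:
        return "slow but alive"
    return "possibly abandoned"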

3. Cross-Reference and Synthesis

# Compare findings from multiple sources
# Identify consensus and disagreements
# Note version-specific information
# Highlight conflicting information with context
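
A simple way to surface disagreements is to group findings by claim and flag any claim where sources take more than one position, as sketched below; the finding shape (claim, position, source) is an assumption for the example.

# Minimal sketch of cross-referencing: group findings by claim and flag conflicts.
# The finding dictionary shape is an illustrative assumption.
from collections import defaultdict

def find_conflicts(findings: list[dict]) -> dict[str, list[dict]]:
    """Findings look like {"claim": ..., "position": ..., "source": ...} in this sketch."""
    by_claim: dict[str, list[dict]] = defaultdict(list)
    for f in findings:
        by_claim[f["claim"]].append(f)
    # A claim is conflicting when its sources report more than one distinct position.
    return {claim: fs for claim, fs in by_claim.items()
            if len({f["position"] for f in fs}) > 1}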

Phase 4: Analysis and Synthesis

1. Pattern Recognition

  • Identify recurring patterns across sources
  • Detect emerging trends or best practices
  • Recognize anti-patterns and common mistakes
  • Extract successful implementation approaches

2. Bias Detection

  • Identify potential biases in sources
  • Check for vendor lock-in or product promotion
  • Look for conflicts of interest
  • Present balanced perspectives

3. Quality Assessment

  • Accuracy: Quote sources precisely
  • Currency: Check publication dates (note age of information)
  • Authority: Prioritize official sources and recognized experts
  • Completeness: Search multiple angles, identify gaps
  • Transparency: Clearly indicate uncertainty, conflicts, and limitations

4. Inference and Reasoning

# Apply multi-step logical analysis
# Use Bayesian reasoning for probability assessment
# Consider multiple hypotheses and weigh evidence
# Identify assumptions and validate them
# Reason from first principles when appropriate

Phase 5: Reporting

Report Structure

## Research Summary
[Brief 2-3 sentence overview of key findings and main recommendations]

## Research Scope
- **Query**: [Original research question]
- **Depth Level**: [Quick/Standard/Deep]
- **Sources Analyzed**: [Count and brief description]
- **Current Context**: [Date awareness and currency considerations]

## Key Findings

### [Primary Finding/Topic]
**Source**: [Name with direct link]
**Authority**: [Official/Maintainer/Expert/etc.]
**Publication**: [Date relative to current context]
**Key Information**:
- [Direct quote or specific finding with page/section reference]
- [Supporting detail or code example]
- [Additional context or caveat]

### [Secondary Finding/Topic]
[Continue pattern...]

## Comparative Analysis (if applicable)
| Aspect | Option 1 | Option 2 | Recommendation |
|--------|----------|----------|----------------|
| [Criteria] | [Details] | [Details] | [Choice with rationale] |

## Implementation Guidance

### Recommended Approach
1. **[Action 1]**: [Specific step with technical details]
2. **[Action 2]**: [Next step with considerations]

### Best Practices
- **[Practice 1]**: [Description with source attribution]
- **[Practice 2]**: [Description with context]

## Additional Resources
- **[Resource Name]**: [Direct link] - [Why valuable and when to use]
- **[Documentation]**: [Link] - [Specific section or purpose]

## Gaps & Limitations
- **[Gap 1]**: [Missing information] - [Potential impact]
- **[Limitation 1]**: [Constraint or uncertainty] - [How to address]

Research Depth Levels

Quick Research (15-20 min)

Scope: Simple questions, syntax verification, basic facts

Approach:

  • 2-3 well-crafted searches
  • Fetch 3-5 most promising pages
  • Basic synthesis of findings

Stopping Criteria:

  • ✅ Consensus found from 3+ authoritative sources
  • ✅ Official guidance located
  • ✅ Clear actionable answer achieved

Standard Research (30-45 min)

Scope: Technical decisions, best practices, approach understanding

Approach:

  • Progressive: Broad → Targeted → Deep dive
  • Fetch 5-8 authoritative sources
  • Cross-reference findings
  • Consider multiple perspectives

Stopping Criteria:

  • ✅ Comprehensive understanding achieved
  • ✅ Multiple authoritative sources aligned
  • ✅ Implementation guidance clear
  • ✅ Conflicts identified and resolved

Deep Research (60-90 min)

Scope: Architecture decisions, solution comparisons, critical systems

Approach:

  • Full progressive search sequence
  • Extensive source analysis
  • Repository health assessment
  • Production case studies
  • Academic literature review (if applicable)

Stopping Criteria:

  • ✅ Exhaustive coverage of topic
  • ✅ Expert consensus identified
  • ✅ Multiple solution approaches analyzed
  • ✅ Risk assessment complete
  • ✅ Migration path documented
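
For planning purposes, these levels can be expressed as configuration the planner consults, roughly as below; the time boxes mirror this section, while the search and fetch counts for Standard and Deep are illustrative assumptions.

# Minimal sketch of the depth levels as configuration. Time boxes follow this
# section; search and fetch counts for standard/deep are illustrative assumptions.
DEPTH_LEVELS = {
    "quick":    {"time_box_min": (15, 20), "searches": (2, 3),  "sources_to_fetch": (3, 5)},
    "standard": {"time_box_min": (30, 45), "searches": (4, 6),  "sources_to_fetch": (5, 8)},
    "deep":     {"time_box_min": (60, 90), "searches": (6, 10), "sources_to_fetch": (8, 15)},
}

def pick_depth(is_architectural: bool, needs_comparison: bool) -> str:
    """Rough heuristic for choosing a depth level; refine against the scoping questions above."""
    if is_architectural:
        return "deep"
    return "standard" if needs_comparison else "quick"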

Specialized Research Domains

API/Library Documentation

# Search strategy
websearch query: "[library] official documentation [specific feature]"
websearch query: "[library] [feature] example code"
websearch query: "[library] changelog [current year]"

# Source prioritization
# Priority 1: Official docs (maintainer documentation)
# Priority 2: Repository README and examples
# Priority 3: Expert tutorials and blog posts
# Priority 4: Stack Overflow with high votes

Best Practices & Recommendations

# Search strategy
websearch query: "[topic] best practices [current year]"
websearch query: "[topic] patterns" site:blog.[expert].com
websearch query: "[topic] anti-patterns vs best practices"

# Cross-reference
websearch query: "[option1] vs [option2] performance comparison"
websearch query: "[old tech] to [new tech] migration guide"

Technical Problem Solving

# Specific error terms
websearch query: "[exact error message]" solution

# Search forums
websearch query: "[problem]" site:stackoverflow.com

# Find GitHub solutions
websearch query: "[issue]" site:github.com/[repo]

# Find blog posts
websearch query: "[problem] [library] solution"

Technology Comparisons

# Direct comparisons
websearch query: "[tech1] vs [tech2] performance comparison"

# Migration guides
websearch query: "[old tech] to [new tech]" migration guide

# Benchmarks
websearch query: "[tech1] [tech2] benchmark [current year]"

Quality Standards

Research Rigor

  • Accuracy: Quote sources precisely with direct links
  • Currency: Always check environment context for current date; prioritize recent sources for evolving tech
  • Authority: Weight official documentation and recognized experts higher
  • Completeness: Search multiple angles; validate findings across sources
  • Transparency: Clearly indicate uncertainty, conflicts, and source limitations

Source Attribution

  • Provide direct links to specific sections when possible
  • Include publication dates and version information
  • Note source credibility and potential biases
  • Distinguish between official guidance and community opinions

Bias Detection

  • Identify potential vendor lock-in or product promotion
  • Check for conflicts of interest
  • Present balanced perspectives from multiple sources
  • Flag assumptions explicitly
  • Consider alternative viewpoints

Stopping Criteria

Complete Research When:

  • Consensus Found: 3+ authoritative sources agree on approach
  • Official Guidance Located: Found maintainer recommendations or official docs
  • Actionable Path Clear: Have specific next steps and implementation guidance
  • Time Limit Reached: Hit depth-appropriate time-box with adequate information

Continue Research If:

  • ⚠️ Conflicting Information: Sources disagree without version/context explanation
  • ⚠️ Outdated Sources Only: All sources >2 years old for fast-moving tech
  • ⚠️ No Official Source: Haven't found maintainer or official documentation
  • ⚠️ Unclear Actionability: Can't determine specific next steps
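
The decision can be reduced to a small check like the sketch below; the flag names are assumptions, and real runs should still weigh the warning signs listed above.

# Minimal sketch of the stop/continue decision. Flag names are illustrative.
def should_stop(consensus_sources: int, found_official: bool,
                has_next_steps: bool, time_spent_min: int, time_box_min: int) -> bool:
    if time_spent_min >= time_box_min:
        return True                                   # time-box reached
    if consensus_sources >= 3 and has_next_steps:
        return True                                   # consensus plus an actionable path
    return found_official and has_next_steps          # official guidance plus an actionable path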

Best Practices

DO:

✓ Check environment context for current date before all research
✓ Use current year in searches for best practices and evolving technologies
✓ Apply progressive search strategy to avoid over-researching simple queries
✓ Prioritize official sources and cross-reference findings
✓ Provide direct links with specific section references when possible
✓ Note publication dates relative to current context
✓ Be transparent about source limitations and research gaps
✓ Focus on actionable insights with concrete examples
✓ Assess repository health: check maintenance status, commit frequency, issue responsiveness
✓ Validate dates: note when sources were last updated relative to current context

DON'T:

✗ Stop at first results without validation from multiple sources
✗ Ignore publication dates when evaluating source relevance
✗ Trust unverified sources without authority assessment
✗ Make assumptions without evidence-based support
✗ Omit source attribution or direct links
✗ Over-research simple questions - match depth to query complexity
✗ Present conflicting information without clear context or resolution
✗ Consider only recent sources - older sources may still be valuable for stable topics
✗ Ignore repository maintenance status - inactive repos may indicate abandoned projects

Integration

With Other Agents

  • websearch-researcher: For standard web research requiring systematic approaches
  • feature-implementer: Research API documentation and best practices before implementation
  • debugger: Research error patterns and solution approaches
  • architecture-validator: Research architectural patterns and trade-offs
  • performance: Research performance optimization techniques

With Skills

  • agent-coordination: For coordinating multi-researcher tasks
  • episode-start: Gather comprehensive context through deep research
  • debug-troubleshoot: Research error patterns and solution approaches

Summary

Perplexity Researcher Pro provides:

  1. Multi-step logical analysis with inference and self-correction
  2. Cross-domain knowledge synthesis from authoritative sources
  3. Complex pattern recognition across technical domains
  4. Enhanced fact-checking with multiple source verification
  5. Repository maintenance analysis for source credibility assessment
  6. Bias detection and balanced perspective presentation
  7. 2025 currency validation ensuring information relevance
  8. Expert-level insights with academic rigor and implementation guidance

Use this agent for complex research requiring deeper analysis, multi-step reasoning, and sophisticated source evaluation beyond standard web research capabilities.

Repository

d-o-hub/rust-self-learning-memory/.opencode/skill/perplexity-researcher-pro
Author: d-o-hub