log-analysis

Log file analysis including reading, searching, filtering, and pattern matching. Use when investigating issues in logs, searching for errors, or analyzing application behavior.

allowed_tools: Bash, Read, Grep, Glob

Installation

git clone https://github.com/ShunsukeHayashi/miyabi-mcp-bundle /tmp/miyabi-mcp-bundle && cp -r /tmp/miyabi-mcp-bundle/.claude/skills/log-analysis ~/.claude/skills/log-analysis

Tip: Run this command in your terminal to install the skill.


name: log-analysis
description: Log file analysis including reading, searching, filtering, and pattern matching. Use when investigating issues in logs, searching for errors, or analyzing application behavior.
allowed-tools: Bash, Read, Grep, Glob
mcp_tools:

  • "log_read"
  • "log_search"
  • "log_tail"
  • "log_stats"
  • "log_errors"
  • "log_filter"
  • "log_aggregate"

Log Analysis Skill

Version: 1.0.0
Purpose: Log file analysis and pattern discovery


Triggers

Trigger  | Examples
Read     | "read logs", "view log", "ログ表示" (show logs)
Search   | "search logs", "find error", "エラー検索" (search errors)
Errors   | "show errors", "エラー一覧" (list errors)
Analyze  | "analyze logs", "ログ分析" (analyze logs)

Integrated MCP Tools

Tool          | Purpose
log_read      | Read log file contents
log_search    | Search for patterns
log_tail      | Last N lines of a log
log_stats     | Log statistics
log_errors    | Extract error entries
log_filter    | Filter by level/time
log_aggregate | Aggregate log entries

Workflow: Error Investigation

Phase 1: Initial Scan

Step 1.1: Check Recent Logs

Use log_tail with:
- file: Log file path
- lines: 100
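
Bash fallback (a sketch; app.log is a placeholder path, not part of the skill):

tail -n 100 app.log   # last 100 lines of the log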

Step 1.2: Get Error Summary

Use log_errors with:
- file: Log file path
- levels: ["ERROR", "FATAL", "CRITICAL"]

Phase 2: Pattern Search

Step 2.1: Search for Specific Error

Use log_search with:
- file: Log file path
- pattern: "Exception|Error|Failed"
- context: 3 (lines before/after)
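
Roughly equivalent with grep (app.log is a placeholder path):

grep -E -C 3 "Exception|Error|Failed" app.log   # -C 3 prints 3 lines of context before and after each match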

Step 2.2: Filter by Time

Use log_filter with:
- file: Log file path
- start_time: "2024-01-01 10:00"
- end_time: "2024-01-01 11:00"

Phase 3: Analysis

Step 3.1: Statistics

Use log_stats to get:
- Total entries
- Entries by level
- Error frequency
- Time distribution
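
A rough shell approximation of these statistics (assumes standard level names appear in each line; app.log is a placeholder path):

wc -l < app.log   # total entries
grep -oE "DEBUG|INFO|WARN|ERROR|FATAL" app.log | sort | uniq -c | sort -rn   # entries by level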

Step 3.2: Aggregate Patterns

Use log_aggregate to find:
- Repeated errors
- Common patterns
- Trending issues
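
A rough aggregation with standard tools, assuming the first two space-separated fields of each line are the timestamp (app.log is a placeholder path):

grep "ERROR" app.log | sed -E 's/^[^ ]+ [^ ]+ //' | sort | uniq -c | sort -rn | head -20   # top 20 repeated error messages, grouped after stripping the timestamp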

Common Log Formats

Apache/Nginx

IP - - [timestamp] "METHOD /path HTTP/1.1" status size
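
In the default Apache/Nginx access log the bracketed timestamp contains a space (date and timezone), so the status code is the 9th whitespace-separated field. A sketch (access.log is a placeholder path):

awk '$9 ~ /^5/' access.log   # requests that returned a 5xx status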

Application (JSON)

{"timestamp":"...","level":"ERROR","message":"..."}

Syslog

Mon DD HH:MM:SS hostname process[pid]: message

Search Patterns

Pattern              | Matches
"ERROR|WARN"         | Error or warning
"Exception.*"        | Exception with message
"\d{3}"              | HTTP status codes
"timeout|timed out"  | Timeout issues
"connection refused" | Connection failures
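
Note that \d is PCRE syntax; plain grep -E needs a POSIX class such as [0-9] (or grep -P where available). For example, to pull 5xx responses from an access log (access.log is a placeholder path):

grep -E '" 5[0-9]{2} ' access.log   # 5xx status code immediately after the quoted request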

Best Practices

✅ GOOD:

  • Start with recent logs
  • Filter by time first
  • Look for patterns, not just single errors
  • Check error frequency

❌ BAD:

  • Read entire large log files
  • Search without time bounds
  • Focus on single error instance
  • Ignore warning patterns

Checklist

  • Log file accessible
  • Time range identified
  • Error pattern found
  • Context lines reviewed
  • Frequency analyzed
  • Root cause identified