architecture-paradigm-pipeline
Compose processing stages using a pipes-and-filters model for ETL, media processing, or compiler-like workloads.
Install
$ git clone https://github.com/athola/claude-night-market /tmp/claude-night-market && cp -r /tmp/claude-night-market/plugins/archetypes/skills/architecture-paradigm-pipeline ~/.claude/skills/claude-night-market/
Tip: Run this command in your terminal to install the skill.
name: architecture-paradigm-pipeline
description: |
  Compose processing stages using a pipes-and-filters model for ETL, media processing, or compiler-like workloads.
  Triggers: pipeline architecture, pipes and filters, ETL, data transformation, stream processing, CI/CD pipeline, media processing, batch processing
  Use when: data flows through a fixed sequence of transformations, stages can be independently developed and tested, parallel processing of stages is beneficial
  DO NOT use when: selecting from multiple paradigms - use architecture-paradigms first. DO NOT use when: data flow isn't sequential or predictable. DO NOT use when: complex branching/merging logic dominates.
  Consult this skill when designing data pipelines or transformation workflows.
version: 1.0.0
category: architectural-pattern
tags: [architecture, pipeline, pipes-filters, ETL, streaming, data-processing]
dependencies: []
tools: [stream-processor, message-queue, data-validator]
usage_patterns:
  - paradigm-implementation
  - data-transformation
  - workflow-automation
complexity: medium
estimated_tokens: 700
The Pipeline (Pipes and Filters) Paradigm
When to Employ This Paradigm
- When data must flow through a fixed sequence of discrete transformations, such as in ETL jobs, streaming analytics, or CI/CD pipelines.
- When individual processing stages must be reusable across pipelines, or when bottleneck stages must be scaled independently of the others.
- When failure isolation between stages is a critical requirement.
Adoption Steps
- Define Filters: Design each stage (filter) to perform a single, well-defined transformation. Each filter must have a clear input and output data schema.
- Connect via Pipes: Connect the filters using "pipes," which can be implemented as streams, message queues, or in-memory channels. Validate that these pipes support back-pressure and buffering; the sketch after this list shows one way to get both from bounded queues.
- Maintain Stateless Filters: Where possible, design filters to be stateless. Any required state should be persisted externally or managed at the boundaries of the pipeline.
- Instrument Each Stage: Implement monitoring for each filter to track key metrics such as latency, throughput, and error rates.
- Orchestrate Deployments: Design the deployment strategy to allow each stage to be scaled horizontally and upgraded independently.
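The steps above can be sketched end to end with standard-library primitives. This is a minimal sketch, not a prescribed implementation: the stage names (parse, enrich), the record shapes, and the ad-hoc metrics dict are illustrative assumptions. Bounded queues act as the pipes, and a blocking put provides back-pressure.

```python
import queue
import threading
import time

SENTINEL = object()  # end-of-stream marker passed down the pipeline


def run_filter(name, transform, inbox, outbox, metrics):
    """Pull items from inbox, apply a stateless transform, push to outbox."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown downstream
            return
        start = time.perf_counter()
        try:
            outbox.put(transform(item))  # blocks when full: back-pressure
            metrics[name]["processed"] += 1
        except Exception:
            metrics[name]["errors"] += 1  # failure stays isolated to this stage
        metrics[name]["latency_s"] += time.perf_counter() - start


# Stateless filters with explicit input/output shapes.
def parse(line):
    return dict(zip(("id", "value"), line.split(",")))


def enrich(record):
    return {**record, "value": int(record["value"]) * 2}


stages = [("parse", parse), ("enrich", enrich)]
pipes = [queue.Queue(maxsize=100) for _ in range(len(stages) + 1)]
metrics = {name: {"processed": 0, "errors": 0, "latency_s": 0.0}
           for name, _ in stages}

threads = [threading.Thread(target=run_filter,
                            args=(name, fn, pipes[i], pipes[i + 1], metrics))
           for i, (name, fn) in enumerate(stages)]
for t in threads:
    t.start()

for line in ("1,10", "2,oops", "3,30"):  # "oops" exercises error isolation
    pipes[0].put(line)
pipes[0].put(SENTINEL)

for t in threads:
    t.join()

while True:  # drain the final pipe and report stage-level KPIs
    item = pipes[-1].get()
    if item is SENTINEL:
        break
    print("out:", item)
print("metrics:", metrics)
```

Because each filter only sees its inbox and outbox, swapping queue.Queue for a message broker leaves the filter code unchanged, which is what makes stages independently deployable and scalable.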
Key Deliverables
- An Architecture Decision Record (ADR) documenting the filters, the chosen pipe technology, the error-handling strategy, and the tools for replaying data.
- A suite of contract tests for each filter, plus integration tests that cover representative end-to-end pipeline executions (a contract-test sketch follows this list).
- Observability dashboards that visualize stage-level Key Performance Indicators (KPIs).
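A per-filter contract test can be a handful of asserts against the filter's input and output schema. This is a minimal sketch assuming the `enrich` filter from the pipeline sketch above; the key set and types checked here are illustrative, not a schema this skill prescribes.

```python
def test_enrich_contract():
    record_in = {"id": "1", "value": "10"}       # upstream output contract
    record_out = enrich(record_in)
    assert set(record_out) == {"id", "value"}    # output keys are stable
    assert isinstance(record_out["value"], int)  # downstream expects an int
    assert record_out["id"] == record_in["id"]   # identifiers pass through

test_enrich_contract()
```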
Risks & Mitigations
- Single-Stage Bottlenecks:
- Mitigation: Implement auto-scaling for individual filters. If a single filter remains a bottleneck, consider refactoring it into a more granular sub-pipeline.
- Schema Drift Between Stages:
- Mitigation: Centralize schema definitions in a shared repository and enforce compatibility tests as part of the CI/CD process to prevent breaking changes; see the compatibility-check sketch after this list.
- Back-Pressure Failures:
- Mitigation: Conduct rigorous load testing to simulate high-volume scenarios. Validate that buffering, retry logic, and back-pressure mechanisms behave as expected under stress.
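One way to catch schema drift in CI is to check each producer's declared output schema against its consumer's required input schema. The following is a hedged sketch: the in-memory registry dict and the parse/enrich/load stage names are hypothetical, and a real setup would load schemas from the shared repository instead.

```python
# Hypothetical central registry; in practice, loaded from a shared schema repo.
REGISTRY = {
    "parse.output":  {"id": str, "value": str},
    "enrich.input":  {"id": str, "value": str},
    "enrich.output": {"id": str, "value": int},
    "load.input":    {"id": str, "value": int},
}

def compatible(producer: str, consumer: str) -> bool:
    """Every field the consumer requires must be produced with a matching type."""
    out, required = REGISTRY[producer], REGISTRY[consumer]
    return all(out.get(field) == typ for field, typ in required.items())

# Run as a CI step; a failing pair blocks the change that introduced the drift.
assert compatible("parse.output", "enrich.input")
assert compatible("enrich.output", "load.input")
```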
Repository
https://github.com/athola/claude-night-market