quality-attributes-taxonomy

The "-ilities" framework for non-functional requirements. Use when defining NFRs, evaluating architecture trade-offs, or ensuring quality attributes are addressed in system design. Covers scalability, reliability, availability, performance, security, maintainability, and more.

allowed_tools: Read, Glob, Grep

Installer

git clone https://github.com/melodic-software/claude-code-plugins /tmp/claude-code-plugins && cp -r /tmp/claude-code-plugins/plugins/systems-design/skills/quality-attributes-taxonomy ~/.claude/skills/claude-code-plugins

Tip: Run this command in your terminal to install the skill.


name: quality-attributes-taxonomy
description: The "-ilities" framework for non-functional requirements. Use when defining NFRs, evaluating architecture trade-offs, or ensuring quality attributes are addressed in system design. Covers scalability, reliability, availability, performance, security, maintainability, and more.
allowed-tools: Read, Glob, Grep

Quality Attributes Taxonomy

This skill provides a comprehensive framework for understanding and applying quality attributes (non-functional requirements) in system design.

When to Use This Skill

Keywords: NFR, non-functional requirements, quality attributes, -ilities, scalability, reliability, availability, performance, security, maintainability, ISO 25010

Use this skill when:

  • Defining non-functional requirements for a system
  • Evaluating architectural trade-offs
  • Conducting architecture reviews
  • Preparing for system design interviews
  • Ensuring all quality dimensions are considered
  • Translating business needs to technical requirements

What Are Quality Attributes?

Quality attributes (QAs) describe HOW a system performs, not WHAT it does. They're often called:

  • Non-Functional Requirements (NFRs)
  • The "-ilities" (scalability, reliability, etc.)
  • Cross-cutting concerns
  • System qualities

Key insight: Functional requirements define features; quality attributes define how well those features work.

The Core Quality Attributes

Primary Attributes (The Big 6)

| Attribute | Definition | Key Question |
|---|---|---|
| Scalability | Handle growing load | Can we grow 10x? 100x? |
| Reliability | Consistent correct operation | Does it work correctly every time? |
| Availability | System uptime | Is it running when needed? |
| Performance | Speed and throughput | How fast is it? |
| Security | Protection from threats | Is it safe from attacks? |
| Maintainability | Ease of change | Can we update it easily? |

Secondary Attributes

| Attribute | Definition | Key Question |
|---|---|---|
| Testability | Ease of verification | Can we test it effectively? |
| Observability | System visibility | Can we see what's happening? |
| Operability | Ease of operation | Can we run it in production? |
| Portability | Platform independence | Can we move it? |
| Interoperability | System integration | Can it work with others? |
| Cost Efficiency | Resource optimization | Is it cost-effective? |

Detailed Quality Attribute Definitions

Scalability

Definition: The ability to handle increased load by adding resources.

| Type | Description | Example |
|---|---|---|
| Vertical | Add more power to existing machines | Upgrade to larger instance |
| Horizontal | Add more machines | Add more servers behind load balancer |
| Elastic | Automatic scaling based on load | Auto-scaling groups |

Measurement:

- Maximum concurrent users
- Requests per second at given latency
- Data volume supported
- Cost per transaction at scale

Trade-offs:

  • Scalability often conflicts with consistency (CAP theorem)
  • More scalability = more complexity
  • Horizontal scaling requires stateless design (see the sketch below)
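
For instance, a service can only scale horizontally if any instance can handle any request, which usually means moving session state out of process memory into a shared store. A minimal sketch, assuming the third-party redis-py client and illustrative connection details and key names:

import json
import redis  # assumption: the redis-py client is available

store = redis.Redis(host="localhost", port=6379)  # illustrative connection details

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    """Keep session state in a shared store, not in this process's memory."""
    store.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    """Because state is external, any instance behind the load balancer can serve the next request."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None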

Reliability

Definition: The probability of correct operation over time.

| Concept | Definition |
|---|---|
| MTBF | Mean Time Between Failures |
| MTTR | Mean Time To Recovery |
| Fault Tolerance | Continue despite component failures |
| Resilience | Recover from failures gracefully |

Measurement:

- Error rate (errors / total requests)
- Failure rate (failures / time period)
- Data accuracy percentage
- Successful transaction rate

Trade-offs:

  • Higher reliability = higher cost (redundancy)
  • Reliability vs performance (checksums, validation)
  • Reliability vs complexity (more failure modes to handle)

Availability

Definition: The proportion of time a system is operational.

| Uptime | Level | Downtime/Year | Downtime/Month |
|---|---|---|---|
| 99% | Two 9s | 3.65 days | 7.31 hours |
| 99.9% | Three 9s | 8.76 hours | 43.8 minutes |
| 99.99% | Four 9s | 52.6 minutes | 4.38 minutes |
| 99.999% | Five 9s | 5.26 minutes | 26.3 seconds |

Measurement:

Availability = Uptime / (Uptime + Downtime)
            = MTBF / (MTBF + MTTR)
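
As a worked example of the formula above (the MTBF and MTTR figures are illustrative, not targets):

MTBF = 720 hours   (on average, one failure per month)
MTTR = 0.5 hours   (30 minutes to restore service)

Availability = 720 / (720 + 0.5)
             ≈ 0.9993  (about 99.93%, slightly better than "three 9s")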

Trade-offs:

  • Each additional "9" is exponentially more expensive
  • Availability vs consistency (CAP theorem)
  • Planned maintenance affects availability

Performance

Definition: How fast and efficiently the system operates.

| Metric | Definition |
|---|---|
| Latency | Time to complete one request |
| Throughput | Requests processed per unit time |
| Response Time | Total time user waits |
| Utilization | Resource usage percentage |

Common Targets:

- Web page load: < 2 seconds
- API response: < 100 ms (p99)
- Database query: < 10 ms
- Batch job: < scheduled window

Trade-offs:

  • Performance vs cost (faster hardware costs more)
  • Latency vs throughput (batching improves throughput, hurts latency)
  • Performance vs consistency (caching improves speed, may serve stale data)

Security

Definition: Protection of data and systems from unauthorized access.

| Principle | Description |
|---|---|
| Confidentiality | Data accessible only to authorized users |
| Integrity | Data is accurate and unaltered |
| Availability | Systems accessible when needed |
| Non-repudiation | Actions are attributable |

Measurement:

- Time to detect breaches
- Number of vulnerabilities
- Compliance audit results
- Mean time to patch

Trade-offs:

  • Security vs usability (more security = more friction)
  • Security vs performance (encryption adds latency)
  • Security vs cost (security tools and expertise are expensive)

Maintainability

Definition: Ease of modifying the system over time.

| Aspect | Description |
|---|---|
| Modularity | Components can change independently |
| Reusability | Components can be repurposed |
| Analyzability | Easy to understand the system |
| Modifiability | Easy to make changes |
| Testability | Easy to verify changes |

Measurement:

- Time to implement typical change
- Defect injection rate per change
- Code complexity metrics
- Documentation coverage

Trade-offs:

  • Maintainability vs performance (abstractions add overhead)
  • Maintainability vs time-to-market (good design takes time)
  • Maintainability vs specialization (generic solutions are often slower than specialized ones)

Quality Attribute Scenarios

How to Specify Quality Attributes

Use this template to make QAs measurable:

Source:      [Who or what generates the stimulus?]
Stimulus:    [What event occurs?]
Artifact:    [What part of the system is affected?]
Environment: [Under what conditions?]
Response:    [How should the system respond?]
Measure:     [How do we know it succeeded?]

Example Scenarios

Scalability Scenario:

Source:      Marketing campaign
Stimulus:    10x traffic spike
Artifact:    Web application
Environment: Normal operation
Response:    Auto-scale to handle load
Measure:     Latency stays under 200ms at p99

Availability Scenario:

Source:      Hardware failure
Stimulus:    Database server dies
Artifact:    Order processing system
Environment: Peak business hours
Response:    Failover to replica
Measure:     Recovery in < 30 seconds, no data loss

Security Scenario:

Source:      External attacker
Stimulus:    SQL injection attempt
Artifact:    User authentication
Environment: Production
Response:    Block attack, alert security team
Measure:     Zero successful injections, alert within 5 minutes
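
The six-part template also maps naturally onto a structured record, which makes scenarios easy to review and track alongside the design. A minimal sketch in Python; the class and field names are illustrative, not part of any standard:

from dataclasses import dataclass

@dataclass
class QualityScenario:
    """One measurable quality attribute scenario (six-part template)."""
    source: str       # who or what generates the stimulus
    stimulus: str     # what event occurs
    artifact: str     # what part of the system is affected
    environment: str  # under what conditions
    response: str     # how the system should respond
    measure: str      # how we know it succeeded

# The availability scenario above, captured as data:
db_failover = QualityScenario(
    source="Hardware failure",
    stimulus="Database server dies",
    artifact="Order processing system",
    environment="Peak business hours",
    response="Failover to replica",
    measure="Recovery in < 30 seconds, no data loss",
)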

ISO 25010 Quality Model

The ISO 25010 standard defines 8 quality characteristics:

| Characteristic | Sub-characteristics |
|---|---|
| Functional Suitability | Completeness, correctness, appropriateness |
| Performance Efficiency | Time behavior, resource utilization, capacity |
| Compatibility | Co-existence, interoperability |
| Usability | Learnability, operability, accessibility |
| Reliability | Maturity, availability, fault tolerance, recoverability |
| Security | Confidentiality, integrity, non-repudiation, accountability |
| Maintainability | Modularity, reusability, analyzability, modifiability, testability |
| Portability | Adaptability, installability, replaceability |

Quality Attributes in System Design Interviews

How to Address QAs

  1. Ask about requirements: "What's the expected latency? Availability target?"
  2. State assumptions: "I'll assume we need 99.9% availability"
  3. Justify decisions: "I'm adding a cache here for performance"
  4. Acknowledge trade-offs: "This improves scalability but complicates consistency"

Common QA Trade-offs in Interviews

| Decision | Improves | Hurts |
|---|---|---|
| Add caching | Performance | Consistency, complexity |
| Add replication | Availability | Consistency, cost |
| Use async processing | Throughput | Latency, complexity |
| Shard database | Scalability | Cross-shard queries |
| Add encryption | Security | Performance |
| Use microservices | Maintainability, scalability | Latency, complexity |

QA Checklist for Design Reviews

Before finalizing a design, verify:

  • Scalability: Can handle 10x growth?
  • Reliability: Handles component failures?
  • Availability: Meets uptime target?
  • Performance: Meets latency/throughput targets?
  • Security: Protects data and access?
  • Maintainability: Easy to update and debug?
  • Cost: Within budget at scale?
  • Observability: Can monitor and troubleshoot?

Architectural Tactics by Quality Attribute

Scalability Tactics

| Tactic | Description |
|---|---|
| Horizontal scaling | Add more instances |
| Load balancing | Distribute traffic |
| Sharding | Partition data |
| Caching | Reduce repeated work |
| Async processing | Decouple components |
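
As a concrete illustration of the sharding tactic, routing a key to a partition can be as simple as hashing it with a stable hash; the shard names below are illustrative assumptions:

import hashlib

SHARDS = ["orders-db-0", "orders-db-1", "orders-db-2", "orders-db-3"]  # hypothetical shard names

def shard_for(key: str) -> str:
    """Map a key to a shard deterministically (SHA-256, not Python's randomized hash())."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]

print(shard_for("customer-42"))  # the same key always routes to the same shard

Note that a plain modulo scheme reshuffles most keys whenever the shard count changes; consistent hashing is the usual refinement when shards are added or removed often.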

Availability Tactics

| Tactic | Description |
|---|---|
| Redundancy | Multiple instances of components |
| Failover | Automatic switch to backup |
| Health checks | Detect failures early |
| Graceful degradation | Reduce functionality rather than fail completely |
| Geographic distribution | Survive datacenter failures |
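
To make the health-check and failover tactics concrete, the sketch below probes a primary endpoint and falls back to a replica when the check fails; the URLs and timeout are illustrative assumptions:

import urllib.request

PRIMARY = "https://primary.example.internal/healthz"  # hypothetical endpoints
REPLICA = "https://replica.example.internal/healthz"

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Health check: the endpoint is healthy if it answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors and timeouts
        return False

def choose_endpoint() -> str:
    """Failover: prefer the primary, switch to the replica if the primary looks down."""
    return PRIMARY if is_healthy(PRIMARY) else REPLICA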

Performance Tactics

| Tactic | Description |
|---|---|
| Caching | Reduce computation/IO |
| CDN | Serve content closer to users |
| Connection pooling | Reuse expensive connections |
| Compression | Reduce data transfer |
| Indexing | Speed up queries |
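
The caching tactic (and the staleness trade-off noted under Performance above) can be sketched with a simple time-bounded cache; the TTL and the fetch function are illustrative assumptions:

import time

_cache: dict[str, tuple[float, object]] = {}  # key -> (expiry timestamp, value)
TTL_SECONDS = 30.0  # illustrative: cached results may be up to 30 seconds stale

def cached_get(key: str, fetch):
    """Return a fresh cached value if available, otherwise fetch and store it."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and hit[0] > now:
        return hit[1]                      # fast path: skip the expensive fetch
    value = fetch(key)                     # slow path: hit the real data source
    _cache[key] = (now + TTL_SECONDS, value)
    return value

# Usage: cached_get("user:42", load_user_from_db), where load_user_from_db is a hypothetical loader.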

Security Tactics

| Tactic | Description |
|---|---|
| Encryption | Protect data at rest and in transit |
| Authentication | Verify identity |
| Authorization | Control access |
| Audit logging | Track actions |
| Input validation | Prevent injection attacks |
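
For the input-validation tactic, the standard defense against the SQL injection scenario described earlier is to keep user input out of the query text entirely. A minimal sketch using Python's built-in sqlite3 module; the table and column names are illustrative:

import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    """Parameterized query: the driver treats `username` as data, never as SQL."""
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),  # bound parameter, so input like "x'; DROP TABLE users; --" stays inert
    )
    return cur.fetchone()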

From Business Requirements to Quality Attributes

Translation Guide

| Business Requirement | Quality Attribute | Technical Implication |
|---|---|---|
| "Must handle Black Friday traffic" | Scalability | Auto-scaling, elastic capacity |
| "Cannot lose orders" | Reliability, durability | Replication, backups, transactions |
| "Always available" | Availability | Redundancy, failover, monitoring |
| "Fast checkout" | Performance | Caching, optimization, CDN |
| "Protect customer data" | Security | Encryption, access control, auditing |
| "Easy to add features" | Maintainability | Modular design, clean architecture |
| "Regulatory compliance" | Security, auditability | Logging, encryption, access control |
| "Global users" | Performance, availability | CDN, geographic distribution |

Related Skills

  • design-interview-methodology - Overall interview framework
  • estimation-techniques - Quantify capacity requirements
  • cap-theorem - Consistency/availability trade-offs (Phase 2)
  • trade-off-analysis - ATAM and decision frameworks (Phase 5)
  • architectural-tactics - Detailed tactics per attribute (Phase 5)

Related Commands

  • /sd:analyze-nfrs [scope] - Analyze quality attributes in code (Phase 5)
  • /sd:explain <concept> - Explain any quality attribute

Related Agents

  • trade-off-analyzer - Evaluate design trade-offs (Phase 2)
  • sre-persona - Reliability/observability perspective (Phase 5)
  • security-architect - Security implications (Phase 5)

Version History

  • v1.0.0 (2025-12-26): Initial release

Last Updated

Date: 2025-12-26
Model: claude-opus-4-5-20251101
