qa-testing-strategy

Test strategy, QA patterns, and automation practices across unit, integration, E2E, performance, BDD, and security testing with modern frameworks (Jest, Vitest, Playwright, k6, Cucumber).

Installation

git clone https://github.com/vasilyu1983/AI-Agents-public /tmp/AI-Agents-public && cp -r /tmp/AI-Agents-public/frameworks/claude-code-kit/framework/skills/qa-testing-strategy ~/.claude/skills/qa-testing-strategy

Tip: run this command in your terminal to install the skill.



Software Testing & Automation Skill — Quick Reference

Use this skill when the primary focus is how to test software effectively rather than how to implement features. This skill provides execution-ready patterns for building reliable, maintainable test suites across all testing layers.


When to Use This Skill

Invoke when users ask for:

  • Test strategy for a new service or feature
  • Unit testing with Jest or Vitest
  • Integration testing with databases, APIs, external services
  • E2E testing with Playwright or Cypress
  • Performance and load testing with k6
  • BDD with Cucumber and Gherkin
  • API contract testing with Pact
  • Visual regression testing
  • Test automation CI/CD integration
  • Test data management and fixtures
  • Security and accessibility testing
  • Test coverage analysis and improvement
  • Flaky test diagnosis and fixes
  • Mobile app testing (iOS/Android)

Quick Reference Table

| Test Type | Framework | Command | When to Use |
|---|---|---|---|
| Unit Tests | Vitest | vitest run | Pure functions, business logic (40-60% of tests) |
| Component Tests | React Testing Library | vitest --ui | React components, user interactions (20-30%) |
| Integration Tests | Supertest + Docker | vitest run integration.test.ts | API endpoints, database operations (15-25%) |
| E2E Tests | Playwright | playwright test | Critical user journeys, cross-browser (5-10%) |
| Performance Tests | k6 | k6 run load-test.js | Load testing, stress testing (nightly/pre-release) |
| API Contract Tests | Pact | pact test | Microservices, consumer-provider contracts |
| Visual Regression | Percy/Chromatic | percy snapshot | UI consistency, design system validation |
| Security Tests | OWASP ZAP | zap-baseline.py | Vulnerability scanning (every PR) |
| Accessibility Tests | axe-core | vitest run a11y.test.ts | WCAG compliance (every component) |
| Mutation Tests | Stryker | stryker run | Test quality validation (weekly) |

Decision Tree: Test Strategy

Need to test: [Feature Type]
    │
    ├─ Pure business logic?
    │   └─ Unit tests (Jest/Vitest) — Fast, isolated, AAA pattern
    │       ├─ Has dependencies? → Mock them
    │       ├─ Complex calculations? → Property-based testing (fast-check)
    │       └─ State machine? → State transition tests
    │
    ├─ UI Component?
    │   ├─ Isolated component?
    │   │   └─ Component tests (React Testing Library)
    │   │       ├─ User interactions → fireEvent/userEvent
    │   │       └─ Accessibility → axe-core integration
    │   │
    │   └─ User journey?
    │       └─ E2E tests (Playwright)
    │           ├─ Critical path → Always test
    │           ├─ Edge cases → Selective E2E
    │           └─ Visual → Percy/Chromatic
    │
    ├─ API Endpoint?
    │   ├─ Single service?
    │   │   └─ Integration tests (Supertest + test DB)
    │   │       ├─ CRUD operations → Test all verbs
    │   │       ├─ Auth/permissions → Test unauthorized paths
    │   │       └─ Error handling → Test error responses
    │   │
    │   └─ Microservices?
    │       └─ Contract tests (Pact) + integration tests
    │           ├─ Consumer defines expectations
    │           └─ Provider verifies contracts
    │
    ├─ Performance-critical?
    │   ├─ Load capacity?
    │   │   └─ k6 load testing (ramp-up, stress, spike)
    │   │
    │   └─ Response time?
    │       └─ k6 performance benchmarks (SLO validation)
    │
    └─ External dependency?
        ├─ Mock it (unit tests) → Use test doubles
        └─ Real implementation (integration) → Docker containers (Testcontainers)
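The tree above points complex calculations at property-based testing. In practice a library like fast-check handles input generation and shrinking; the dependency-free sketch below (all names hypothetical) just shows the core idea: generate many seeded-random inputs and assert an invariant instead of checking fixed examples.

```typescript
// Simple seeded PRNG (mulberry32) so a failing run is reproducible.
function mulberry32(seed: number): () => number {
  return () => {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// System under test: clamp a value into [min, max].
export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

// Property: for ANY input, the result stays within [min, max].
export function checkClampProperty(runs = 1000, seed = 42): boolean {
  const rand = mulberry32(seed);
  for (let i = 0; i < runs; i++) {
    const min = rand() * 100;
    const max = min + rand() * 100;          // guarantee max >= min
    const value = (rand() - 0.5) * 1000;     // values far outside the range
    const result = clamp(value, min, max);
    if (result < min || result > max) return false; // property violated
  }
  return true;
}
```

With fast-check the same property is a one-liner over `fc.property`, and the library shrinks any counterexample to a minimal failing input.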

Decision Tree: Choosing Test Framework

What are you testing?
    │
    ├─ JavaScript/TypeScript?
    │   ├─ New project? → Vitest (faster, modern)
    │   ├─ Existing Jest project? → Keep Jest
    │   └─ Browser-specific? → Playwright component testing
    │
    ├─ Python?
    │   ├─ General testing? → pytest
    │   ├─ Django? → pytest-django
    │   └─ FastAPI? → pytest + httpx
    │
    ├─ Go?
    │   ├─ Unit tests? → testing package
    │   ├─ Mocking? → gomock or testify
    │   └─ Integration? → testcontainers-go
    │
    ├─ Rust?
    │   ├─ Unit tests? → Built-in #[test]
    │   └─ Property-based? → proptest
    │
    └─ E2E (any language)?
        ├─ Web app? → Playwright (recommended)
        ├─ API only? → k6 or Postman/Newman
        └─ Mobile? → Detox (RN), XCUITest (iOS), Espresso (Android)

Decision Tree: Flaky Test Diagnosis

Test is flaky?
    │
    ├─ Timing-related?
    │   ├─ Race condition? → Add proper waits (not sleep)
    │   ├─ Animation? → Disable animations in test mode
    │   └─ Network timeout? → Increase timeout, add retry
    │
    ├─ Data-related?
    │   ├─ Shared state? → Isolate test data
    │   ├─ Random data? → Use seeded random
    │   └─ Order-dependent? → Fix test isolation
    │
    ├─ Environment-related?
    │   ├─ CI-only failures? → Check resource constraints
    │   ├─ Timezone issues? → Use UTC in tests
    │   └─ Locale issues? → Set consistent locale
    │
    └─ External dependency?
        ├─ Third-party API? → Mock it
        └─ Database? → Use test containers
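"Proper waits, not sleep" means re-checking a condition until it passes or a deadline expires. Frameworks ship equivalents (Playwright auto-waits, Testing Library's waitFor); this hand-rolled sketch shows the mechanics for code outside those frameworks. All names here are illustrative.

```typescript
// Poll a condition instead of sleeping a fixed amount: returns as soon as
// the probe succeeds, and fails with the last error only at the deadline.
export async function waitFor<T>(
  probe: () => Promise<T> | T,                 // condition to re-check
  { timeoutMs = 5000, intervalMs = 50 } = {},
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let lastError: unknown;
  for (;;) {
    try {
      return await probe();                    // success: return immediately
    } catch (err) {
      lastError = err;                         // remember why we are waiting
    }
    if (Date.now() >= deadline) {
      throw new Error(`waitFor timed out after ${timeoutMs}ms: ${lastError}`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

A sleep-based test wastes its full delay on every run and still fails when the system is slower than expected; polling is both faster on the happy path and more tolerant of CI resource variance.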

Test Pyramid

                    /\
                   /  \
                  / E2E \         5-10% - Critical user journeys
                 /--------\       - Slow, expensive, high confidence
                /Integration\     15-25% - API, database, services
               /--------------\   - Medium speed, good coverage
              /     Unit       \  40-60% - Functions, components
             /------------------\ - Fast, cheap, foundation

Target coverage by layer:

| Layer | Coverage | Speed | Confidence |
|---|---|---|---|
| Unit | 80%+ | ~1000/sec | Low (isolated) |
| Integration | 70%+ | ~10/sec | Medium |
| E2E | Critical paths | ~1/sec | High |

Core Capabilities

Unit Testing

  • Frameworks: Vitest, Jest, pytest, Go testing
  • Patterns: AAA (Arrange-Act-Assert), Given-When-Then
  • Mocking: Dependency injection, test doubles
  • Coverage: Line, branch, function coverage
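Dependency injection is what makes test doubles cheap: the unit under test depends on an interface, so a test can pass a fake instead of a real database. A minimal hand-rolled sketch (all types and names hypothetical; Vitest/Jest mocks do the same with less boilerplate):

```typescript
// The service depends on an abstraction, not a concrete database client.
interface UserRepo {
  findEmail(id: string): Promise<string | undefined>;
}

export class WelcomeService {
  constructor(private readonly repo: UserRepo) {}

  async welcomeMessage(id: string): Promise<string> {
    const email = await this.repo.findEmail(id);
    return email ? `Welcome, ${email}!` : 'Welcome, guest!';
  }
}

// Test double: canned data plus call recording, so tests can assert both
// the returned value and the interaction with the dependency.
export function makeStubRepo(data: Record<string, string>) {
  const calls: string[] = [];
  const repo: UserRepo = {
    async findEmail(id) {
      calls.push(id);            // record every lookup for later assertions
      return data[id];
    },
  };
  return { repo, calls };
}
```

A test then arranges `makeStubRepo({ u1: 'a@b.com' })`, acts via `new WelcomeService(repo).welcomeMessage('u1')`, and asserts on both the message and `calls`.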

Integration Testing

  • Database: Testcontainers, in-memory DBs
  • API: Supertest, httpx, REST-assured
  • Services: Docker Compose, localstack
  • Fixtures: Factory patterns, seeders

E2E Testing

  • Web: Playwright, Cypress
  • Mobile: Detox, XCUITest, Espresso
  • API: k6, Postman/Newman
  • Patterns: Page Object Model, test locators

Performance Testing

  • Load: k6, Locust, Gatling
  • Profiling: Browser DevTools, Lighthouse
  • Monitoring: Real User Monitoring (RUM)
  • Benchmarks: Response time, throughput, error rate
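The benchmarks above (response time, error rate) are usually gated on percentiles, e.g. a k6 threshold like "p(95) < 500ms". The sketch below (plain TypeScript, function names hypothetical) shows how those two metrics fall out of raw samples, using the nearest-rank percentile method.

```typescript
// Nearest-rank percentile: the smallest sample that covers p% of
// observations. p95 answers "what latency do 95% of requests beat?"
export function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Error rate: fraction of responses with a 4xx/5xx status code.
export function errorRate(statuses: number[]): number {
  if (statuses.length === 0) return 0;
  const failures = statuses.filter((s) => s >= 400).length;
  return failures / statuses.length;
}
```

Gating on p95/p99 rather than the mean matters because load-test latency is long-tailed: a healthy average can hide a tail that breaks the SLO.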

Common Patterns

AAA Pattern (Arrange-Act-Assert)

describe('calculateDiscount', () => {
  it('should apply 10% discount for orders over $100', () => {
    // Arrange
    const order = { total: 150, customerId: 'user-1' };

    // Act
    const result = calculateDiscount(order);

    // Assert
    expect(result.discount).toBe(15);
    expect(result.finalTotal).toBe(135);
  });
});
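The test above implies a simple pricing rule: 10% off orders whose total exceeds $100. A minimal implementation consistent with that test might look like this (the threshold and rate are assumptions read off the assertions, not a known spec):

```typescript
interface Order {
  total: number;
  customerId?: string;
}

// Hypothetical implementation matching the AAA test above:
// 10% discount over $100, otherwise no discount.
export function calculateDiscount(order: Order) {
  const rate = order.total > 100 ? 0.1 : 0;
  const discount = order.total * rate;
  return { discount, finalTotal: order.total - discount };
}
```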

Page Object Model (E2E)

// pages/login.page.ts
import { expect, type Page } from '@playwright/test';

export class LoginPage {
  constructor(private readonly page: Page) {}

  async login(email: string, password: string) {
    await this.page.fill('[data-testid="email"]', email);
    await this.page.fill('[data-testid="password"]', password);
    await this.page.click('[data-testid="submit"]');
  }

  async expectLoggedIn() {
    await expect(this.page.locator('[data-testid="dashboard"]')).toBeVisible();
  }
}

// tests/login.spec.ts
import { test } from '@playwright/test';
import { LoginPage } from '../pages/login.page';

test('user can login with valid credentials', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.login('user@example.com', 'password');
  await loginPage.expectLoggedIn();
});

Test Data Factory

// factories/user.factory.ts
import { faker } from '@faker-js/faker';

export const createUser = (overrides = {}) => ({
  id: faker.string.uuid(),
  email: faker.internet.email(),
  name: faker.person.fullName(),
  createdAt: new Date(),
  ...overrides,
});

// Usage in tests
const admin = createUser({ role: 'admin' });
const guest = createUser({ role: 'guest', email: 'guest@test.com' });

CI/CD Integration

GitHub Actions Example

name: Test Suite
on: [push, pull_request]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run test:unit -- --coverage
      - uses: codecov/codecov-action@v3

  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:integration

  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npm run test:e2e

Quality Gates

| Gate | Threshold | Action on Failure |
|---|---|---|
| Unit test coverage | 80% | Block merge |
| All tests pass | 100% | Block merge |
| No new critical bugs | 0 | Block merge |
| Performance regression | <10% | Warning |
| Security vulnerabilities | 0 critical | Block deploy |
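The coverage gate can be enforced in the test runner itself rather than in CI scripting. Assuming Vitest with the v8 coverage provider, a sketch (thresholds are the example numbers from the table above):

```typescript
// vitest.config.ts — the run exits non-zero (failing the CI gate) when
// coverage drops below these thresholds.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        lines: 80,
        branches: 80,
        functions: 80,
        statements: 80,
      },
    },
  },
});
```

Enforcing the threshold in the runner keeps the gate identical on laptops and in CI, so developers see the failure before pushing.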

Anti-Patterns to Avoid

| Anti-Pattern | Problem | Solution |
|---|---|---|
| Testing implementation | Breaks on refactor | Test behavior, not internals |
| Shared mutable state | Flaky tests | Isolate test data |
| sleep() in tests | Slow, unreliable | Use proper waits/assertions |
| Testing everything E2E | Slow, expensive | Use test pyramid |
| No test data cleanup | Test pollution | Reset state between tests |
| Ignoring flaky tests | False confidence | Fix or quarantine immediately |
| Copy-paste tests | Hard to maintain | Use factories and helpers |
| Testing third-party code | Wasted effort | Trust libraries, test integration |

AI-Assisted Testing (2025 Trend)

Industry surveys suggest that most teams are now exploring AI-driven testing workflows. Key patterns:

| Tool | Use Case | Example |
|---|---|---|
| GitHub Copilot | Generate unit tests | "Write tests for this function" in editor |
| Playwright + MCP | AI-generated E2E | Model Context Protocol enables AI agents to create/execute tests |
| Visual AI | Smart visual regression | Applitools, Percy AI ignore irrelevant changes |
| Test Generation | Edge case discovery | AI analyzes code paths for missing coverage |

When to use AI testing:

  • Generating boilerplate test scaffolding
  • Suggesting edge cases from code analysis
  • Visual regression with intelligent diffing
  • Test data generation from schemas

When NOT to use AI testing:

  • Critical business logic (review manually)
  • Security-sensitive assertions
  • Performance benchmarks (needs human baseline)
