
    A comprehensive guide to implementing effective unit testing strategies within continuous integration and continuous deployment workflows

    Unit Testing in CI/CD Pipelines

    You've spent hours writing clean, modular code. You've followed SOLID principles. You've used dependency injection to make your components testable. Then you push to production and discover a bug that should have been caught weeks ago. The problem isn't your code quality. It's that your unit tests aren't running automatically in your CI/CD pipeline.

    Unit testing in CI/CD pipelines isn't optional. It's the difference between shipping confidence and shipping accidents. When tests run automatically with every code change, you catch regressions before users do. When they don't, you're flying blind.

    Why Unit Testing Matters in CI/CD

    Unit tests verify individual components in isolation. They test functions, methods, and classes with mocked dependencies. They're fast, focused, and repeatable. In a CI/CD pipeline, they serve as a quality gate. Every pull request must pass all tests before merging.

    Without automated testing, you rely on manual QA cycles. These are slow, error-prone, and often skipped under deadline pressure. You might catch bugs in staging, but production bugs still slip through. The cost of fixing a bug grows sharply the later you find it. A bug caught by a unit test in CI costs minutes to fix. The same bug caught in production costs hours of incident response, customer communication, and reputation damage.

    Test Coverage Goals

    Coverage metrics are useful but not sacred. A 100% coverage claim often hides poorly written tests that exercise every line of code without testing behavior. Focus on testing critical paths, edge cases, and error conditions.

    Aim for 70-80% coverage of business logic. This is achievable without excessive test-writing. Below 70%, you're likely missing important scenarios. Above 90%, you're probably testing implementation details rather than behavior.
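Jest can enforce these targets directly, so a drop below the band fails the run the same way locally and in CI. A minimal config sketch (the exact numbers are illustrative):

```javascript
// jest.config.js - fail the run when coverage drops below the target band
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 70,
      functions: 70,
      branches: 70,
      statements: 70,
    },
  },
};
```

With this in place, `jest --coverage` exits non-zero whenever global coverage falls below 70%, with no extra CI scripting required.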

    Writing Effective Unit Tests

    The AAA Pattern

    Structure every test with the Arrange-Act-Assert pattern:

    // Arrange - Set up test data and dependencies
    const mockLogger = jest.fn();
    const service = new MyService(mockLogger);
     
    // Act - Execute the code under test
    const result = service.processData(input);
     
    // Assert - Verify the outcome
    expect(result).toEqual(expected);
    expect(mockLogger).toHaveBeenCalledWith('processed');

    This pattern makes tests readable and predictable. Each test has a clear beginning, middle, and end.

    Test Isolation

    Each test must run independently. No shared mutable state between tests. Use fresh test data for each test case. Mock external dependencies like databases, APIs, and file systems.

    // BAD - Shared state between tests
    const globalService = new MyService(); // created once for the whole file
     
    test('test 1', () => {
      globalService.doSomething();
    });
     
    test('test 2', () => {
      globalService.doSomethingElse(); // May fail if test 1 changed state
    });
     
    // GOOD - Each test creates its own instance
    test('test 1', () => {
      const service = new MyService();
      service.doSomething();
    });
     
    test('test 2', () => {
      const service = new MyService();
      service.doSomethingElse();
    });

    Meaningful Test Names

    Your test names should describe what they test and why. Avoid generic names like test1, test2, or shouldWork. Use descriptive names that explain the scenario:

    // BAD
    test('calculateTotal', () => { ... });
     
    // GOOD
    test('calculates total with multiple items', () => { ... });
    test('throws error when quantity is negative', () => { ... });

    Testing Edge Cases

    Don't just test happy paths. Test invalid inputs, boundary conditions, and error scenarios:

    describe('calculateDiscount', () => {
      test('applies 10% discount for orders over $100', () => {
        const result = calculateDiscount(150);
        expect(result).toBe(15);
      });
     
      test('applies 20% discount for orders over $500', () => {
        const result = calculateDiscount(600);
        expect(result).toBe(120);
      });
     
      test('applies no discount for orders under $100', () => {
        const result = calculateDiscount(50);
        expect(result).toBe(0);
      });
     
      test('throws error for negative order total', () => {
        expect(() => calculateDiscount(-50)).toThrow('Order total must be positive');
      });
    });
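The suite above pins down the discount rules precisely. One implementation that satisfies those tests might look like this (a sketch: the tiers and error message come from the tests, while the exact handling of the $100 and $500 boundaries is an assumption):

```javascript
// Tiered discount: 20% over $500, 10% over $100, none otherwise.
function calculateDiscount(orderTotal) {
  if (orderTotal < 0) {
    throw new Error('Order total must be positive');
  }
  if (orderTotal > 500) {
    return orderTotal * 0.2;
  }
  if (orderTotal > 100) {
    return orderTotal * 0.1;
  }
  return 0;
}

module.exports = { calculateDiscount };
```

Writing the tests first, as here, turns the requirements ("over $100", "over $500") into executable checks before any implementation exists.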

    Integrating Tests into CI/CD

    Pipeline Structure

    A typical CI/CD pipeline has stages: build, test, deploy. Tests run after the build stage and before deployment:

    # GitHub Actions example
    name: CI/CD Pipeline
     
    on: [push, pull_request]
     
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
     
          - name: Setup Node.js
            uses: actions/setup-node@v3
            with:
              node-version: '18'
     
          - name: Install dependencies
            run: npm ci
     
          - name: Run linter
            run: npm run lint
     
          - name: Run type check
            run: npm run typecheck
     
          - name: Run unit tests
            run: npm run test:unit
     
          - name: Generate coverage report
            run: npm run test:coverage
     
          - name: Upload coverage to Codecov
            uses: codecov/codecov-action@v3

    Test Execution Commands

    Configure npm scripts for different test types:

    {
      "scripts": {
        "test": "jest",
        "test:watch": "jest --watch",
        "test:coverage": "jest --coverage",
        "test:ci": "jest --ci --coverage --maxWorkers=2",
        "test:unit": "jest --testPathPattern=unit",
        "test:integration": "jest --testPathPattern=integration"
      }
    }

    The test:ci script is tuned for CI environments: --ci fails instead of writing new snapshots, --coverage generates a coverage report, and --maxWorkers=2 caps parallelism for resource-constrained runners.

    Coverage Thresholds

    Enforce minimum coverage thresholds in CI:

    - name: Check coverage thresholds
      run: |
        npm run test:coverage
        npx codecov --token=$CODECOV_TOKEN
        # Or use a coverage tool like nyc
        npx nyc check-coverage --lines 70 --functions 70 --branches 70

    If coverage falls below thresholds, the pipeline fails. This prevents regression in test coverage over time.

    Test Quality Gates

    Failing Fast

    Fail fast on test failures. GitHub Actions already halts a job when a step fails (continue-on-error defaults to false), and Jest's --bail flag stops the test run itself after the first failing suite, so you see the most critical issue first:

    - name: Run tests
      run: npm run test:ci -- --bail
      continue-on-error: false

    Parallel Test Execution

    Run tests in parallel to reduce pipeline time. Most test frameworks support parallel execution:

    # Jest with maxWorkers
    npm run test:ci -- --maxWorkers=4
     
    # Jest with --detectOpenHandles to catch resource leaks
    npm run test:ci -- --detectOpenHandles

    Test Retries

    Some tests are flaky: they pass on one run and fail on the next, often because of timing, test ordering, or environmental differences. Configure retries for transient failures, but treat retries as a stopgap while you fix the root cause:

    - name: Run tests with retries
      uses: nick-fields/retry@v2
      with:
        timeout_minutes: 5
        max_attempts: 3
        command: npm run test:ci

    Common Pitfalls

    Tests That Pass But Don't Test Anything

    // BAD - This test passes but doesn't verify behavior
    test('should do something', () => {
      const service = new MyService();
      service.doSomething();
      expect(true).toBe(true);
    });
     
    // GOOD - Verify actual behavior
    test('calls the logger with the correct message', () => {
      const mockLogger = jest.fn();
      const service = new MyService(mockLogger);
      service.doSomething();
      expect(mockLogger).toHaveBeenCalledWith('something happened');
    });

    Tests That Are Too Slow

    Unit tests should run in milliseconds. If a test takes seconds, it's probably testing integration points. Move it to an integration test suite.

    // BAD - This test is too slow (hits a real database)
    test('saves user to database', async () => {
      const user = { name: 'John' };
      await userRepository.create(user);
      const saved = await userRepository.findById(user.id);
      expect(saved).toEqual(user);
    });
     
    // GOOD - This test is fast (uses a mock)
    test('saves user to repository', async () => {
      const mockRepo = {
        create: jest.fn().mockResolvedValue({ id: 1, name: 'John' })
      };
      const service = new UserService(mockRepo);
      await service.createUser({ name: 'John' });
      expect(mockRepo.create).toHaveBeenCalledWith({ name: 'John' });
    });

    Tests That Are Too Broad

    // BAD - Tests everything at once
    test('handles all user operations', () => {
      const service = new UserService();
      service.createUser();
      service.updateUser();
      service.deleteUser();
      service.login();
      service.logout();
      // Hard to know which test failed
    });
     
    // GOOD - Tests one thing at a time
    test('creates a user', () => { ... });
    test('updates a user', () => { ... });
    test('deletes a user', () => { ... });
    test('logs in a user', () => { ... });
    test('logs out a user', () => { ... });

    Advanced Patterns

    Property-Based Testing

    Instead of testing specific inputs, test properties that should hold for all inputs:

    import { test, fc } from '@fast-check/jest';
     
    test.prop([fc.integer({ min: 0, max: 1000 })])(
      'calculateDiscount always returns a non-negative value',
      (orderTotal) => {
        const discount = calculateDiscount(orderTotal);
        expect(discount).toBeGreaterThanOrEqual(0);
      }
    );

    This finds edge cases you might not have thought of.

    Snapshot Testing

    For UI components and complex objects, snapshots capture the expected output:

    import renderer from 'react-test-renderer';
     
    test('renders user profile correctly', () => {
      const user = { name: 'John', email: 'john@example.com' };
      const tree = renderer.create(<UserProfile user={user} />);
      expect(tree.toJSON()).toMatchSnapshot();
    });

    Update snapshots when the component changes intentionally. Don't update them blindly.

    Contract Testing

    For APIs and microservices, contract tests verify that an implementation matches the contract its consumers depend on. Dedicated tools such as Pact formalize this; a lightweight version asserts the response structure directly:

    test('user API returns correct response structure', async () => {
      const response = await fetch('/api/users/1');
      const data = await response.json();
     
      expect(response.status).toBe(200);
      expect(data).toHaveProperty('id');
      expect(data).toHaveProperty('name');
      expect(data).toHaveProperty('email');
    });

    Monitoring Test Health

    Test Execution Time

    Track test execution time over time. If tests are getting slower, investigate:

    - name: Report test execution time
      run: |
        npm run test:ci -- --verbose --json > test-results.json
        node scripts/analyze-test-times.js test-results.json

    Flaky Test Rate

    Monitor the percentage of flaky tests: tests that pass or fail intermittently across runs without any code change. High flakiness indicates unstable tests or environmental issues.
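Detecting flakiness requires recording outcomes per test across runs. A hypothetical helper (the history shape is an assumption, not a standard format) that flags tests with mixed outcomes:

```javascript
// history maps a test name to its recent outcomes, e.g. ['pass', 'fail', 'pass'].
// A test is flaky when the same code produced both passes and failures;
// a consistently failing test is broken, not flaky, so it is excluded.
function flakyTests(history) {
  return Object.entries(history)
    .filter(([, runs]) => runs.includes('pass') && runs.includes('fail'))
    .map(([name]) => name);
}

module.exports = { flakyTests };
```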

    Coverage Trends

    Track coverage over time. Declining coverage is a warning sign that tests are being removed or skipped.

    Best Practices Summary

    • Run tests in every CI/CD pipeline
    • Fail fast on test failures
    • Use parallel test execution
    • Enforce coverage thresholds
    • Write isolated, fast unit tests
    • Test edge cases and error conditions
    • Use meaningful test names
    • Structure tests with AAA pattern
    • Avoid shared test state
    • Don't test implementation details
    • Monitor test health metrics

    Conclusion

    Unit testing in CI/CD pipelines is non-negotiable for modern software development. It catches bugs early, provides confidence in refactoring, and serves as living documentation. The investment in test quality pays dividends in reduced debugging time, faster deployments, and happier teams.

    Platforms like ServerlessBase make it easy to deploy applications with automated testing integrated into your workflow. By treating tests as first-class citizens in your CI/CD pipeline, you build software that's reliable, maintainable, and ready for production.

    The next time you write code, write a test for it first. Your future self will thank you when you're debugging a different feature instead of chasing down a regression you could have caught with a simple unit test.
