Testing Guide

This guide covers the testing strategy for the Gunicorn Prometheus Exporter, which follows the Test Pyramid.

Test Pyramid

The project uses a three-tier testing approach:

┌─────────────────────────────────────┐
│  e2e/ (End-to-End Tests)            │  ← Slow, expensive, few tests
│  • Docker deployment                │
│  • Kubernetes orchestration         │
│  • Production-like environments     │
├─────────────────────────────────────┤
│  integration/ (Integration Tests)   │  ← Medium speed, moderate cost
│  • Exporter + Gunicorn + Storage    │
│  • Component interaction            │
│  • No containers                    │
├─────────────────────────────────────┤
│  tests/ (Unit Tests)                │  ← Fast, cheap, many tests
│  • Individual functions             │
│  • pytest-based                     │
│  • Mocked dependencies              │
└─────────────────────────────────────┘

Unit Tests (tests/)

Purpose

Unit tests verify individual functions and methods in isolation.

Characteristics

  • Fast: Run in seconds
  • Isolated: Use mocks for dependencies
  • Focused: Test single units of code
  • Coverage: Aim for high coverage (80%+)

Running Unit Tests

# Run all unit tests
pytest

# Run with coverage
pytest --cov=src/gunicorn_prometheus_exporter --cov-report=html

# Run specific test file
pytest tests/test_metrics.py

# Run specific test function
pytest tests/test_metrics.py::test_worker_requests

# Run in parallel (requires the pytest-xdist plugin)
pytest -n auto

Using tox

# Run tests across all configured Python versions
tox

# Run specific Python version
tox -e py312

# Run with coverage
tox -e py312 -- --cov=gunicorn_prometheus_exporter --cov-report=html

Writing Unit Tests

import pytest
from gunicorn_prometheus_exporter.metrics import WorkerMetrics

def test_worker_metrics_initialization():
    """Test WorkerMetrics initializes correctly."""
    # Arrange
    worker_id = 1

    # Act
    metrics = WorkerMetrics(worker_id)

    # Assert
    assert metrics.worker_id == worker_id
    assert metrics.requests_total.describe() is not None

Best Practices

  1. Use descriptive test names that explain what is being tested
  2. Follow AAA pattern: Arrange, Act, Assert
  3. Mock external dependencies (Redis, files, network)
  4. Use fixtures for common setup (both are sketched after this list)
  5. Test edge cases and error conditions
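
For example, a fixture can centralize setup while a mock stands in for Redis. This is a minimal sketch: read_counter and fake_redis are illustrative stand-ins, not part of the project's API.

import pytest
from unittest.mock import MagicMock

def read_counter(client, key):
    """Illustrative unit under test: reads a counter via an injected client."""
    raw = client.get(key)
    return int(raw) if raw is not None else 0

@pytest.fixture
def fake_redis():
    """Fixture providing a mock in place of a real Redis connection."""
    client = MagicMock()
    client.get.return_value = b"42"
    return client

def test_read_counter_returns_stored_value(fake_redis):
    # Act: the unit only ever sees the mock, never a live server
    value = read_counter(fake_redis, "worker_1_requests_total")

    # Assert both the result and the interaction
    assert value == 42
    fake_redis.get.assert_called_once_with("worker_1_requests_total")

def test_read_counter_handles_missing_key(fake_redis):
    # Edge case: the key does not exist
    fake_redis.get.return_value = None
    assert read_counter(fake_redis, "missing_key") == 0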

Integration Tests (integration/)

Purpose

Integration tests verify that components work together correctly without containers.

Characteristics

  • Medium speed: Run in 30-60 seconds
  • Real dependencies: Use actual Redis, Gunicorn
  • No containers: Run directly on host
  • Component interaction: Test exporter + Gunicorn + storage

Running Integration Tests

cd e2e

# File-based storage integration test
make integration-test-file-storage          # Full test
make integration-test-file-storage-quick    # Quick test

# Redis integration test
make integration-test-redis-full            # Redis integration test (auto-starts Redis)
make integration-test-redis-quick           # Requires Redis running
make integration-test-redis-ci              # CI-optimized

# YAML configuration integration test
make integration-test-yaml-config           # Full test
make integration-test-yaml-config-quick     # Quick test

Available Integration Tests

1. File-Based Storage (integration/test_file_storage_integration.sh)

Tests the exporter with file-based multiprocess storage:

  • Gunicorn worker startup
  • Metrics collection in files
  • Multi-worker coordination
  • Metrics endpoint exposure
  • Graceful shutdown

Requirements: None (no Redis needed)
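
A quick manual spot-check of this storage mode, assuming the exporter follows prometheus_client's standard multiprocess layout (an assumption; the directory, port, and myapp:app module below are placeholders):

# Hypothetical spot-check of file-based multiprocess storage
export PROMETHEUS_MULTIPROC_DIR=/tmp/prom_multiproc
mkdir -p "$PROMETHEUS_MULTIPROC_DIR"

gunicorn -c gunicorn.conf.py myapp:app &   # placeholder app module
sleep 3
curl -s http://127.0.0.1:8000/ > /dev/null

ls "$PROMETHEUS_MULTIPROC_DIR"             # expect one .db file per worker
kill %1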

2. Redis Storage (integration/test_redis_integration.sh)

Tests the exporter with Redis-based storage:

  • Redis connection and storage
  • Multi-worker metrics sharing
  • Redis key management and TTL
  • Prometheus scraping from Redis
  • Graceful cleanup

Requirements: Redis (auto-started or use --no-redis)

3. YAML Configuration (integration/test_yaml_config_integration.sh)

Tests YAML-based configuration:

  • Configuration file parsing
  • Environment variable overrides
  • Validation and error handling
  • Multiple worker types

Requirements: None

Writing Integration Tests

Integration tests are bash scripts that:

  1. Start dependencies (Redis, if needed)
  2. Run Gunicorn with the exporter
  3. Generate test requests
  4. Verify metrics
  5. Clean up processes

See integration/README.md for detailed guidelines.
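
A minimal skeleton of that five-step flow, for orientation only; the ports, metric prefix, and myapp:app module are placeholders, not the project's actual fixtures:

#!/usr/bin/env bash
set -euo pipefail

# 1. Start dependencies (Redis, if needed)
redis-server --daemonize yes --port 6379

# 2. Run Gunicorn with the exporter in the background
gunicorn -c gunicorn.conf.py myapp:app &
GUNICORN_PID=$!
sleep 3  # give workers time to boot

# 3. Generate test requests
for _ in $(seq 1 20); do curl -sf http://127.0.0.1:8000/ > /dev/null; done

# 4. Verify metrics (endpoint port and metric prefix are placeholders)
curl -sf http://127.0.0.1:9091/metrics | grep -q gunicorn_

# 5. Clean up processes
kill "$GUNICORN_PID"
redis-cli shutdown nosave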

End-to-End Tests (e2e/)

Purpose

E2E tests verify the complete system in production-like environments.

Characteristics

  • Slow: Run in 2-5 minutes
  • Full stack: Docker containers, Kubernetes clusters
  • Production-like: Network policies, service discovery
  • Complete flow: Build → Deploy → Test → Verify

Running E2E Tests

cd e2e

# Docker tests
bash docker/test_docker_compose.sh        # Docker Compose stack
bash docker/test_sidecar_redis.sh         # Sidecar with Redis
bash docker/test_standalone_images.sh     # Image validation

# Kubernetes tests
bash kubernetes/test_sidecar_deployment.sh    # Sidecar pattern
bash kubernetes/test_daemonset_deployment.sh  # DaemonSet pattern

# Or use Make targets
make docker-test        # Run Docker tests via workflows

Available E2E Tests

Docker Tests (e2e/docker/)

  1. Docker Compose (test_docker_compose.sh):
     • Multi-container orchestration
     • Sidecar pattern validation
     • Prometheus + Grafana integration
     • Complete monitoring stack

  2. Sidecar with Redis (test_sidecar_redis.sh):
     • Container networking
     • Redis connectivity
     • Sidecar communication

  3. Standalone Images (test_standalone_images.sh):
     • Image build validation
     • Entrypoint modes testing
     • Health checks

Kubernetes Tests (e2e/kubernetes/)

  1. Sidecar Deployment (test_sidecar_deployment.sh):
     • Standard K8s deployment
     • Service discovery
     • Pod communication

  2. DaemonSet Deployment (test_daemonset_deployment.sh):
     • Multi-node Kind cluster
     • DaemonSet rollout to all nodes
     • Node-level monitoring
     • Comprehensive metrics validation

Writing E2E Tests

E2E tests are bash scripts that:

  1. Set up infrastructure (Kind cluster, Docker network)
  2. Build and load images
  3. Deploy manifests or Compose files
  4. Wait for readiness
  5. Generate traffic
  6. Verify metrics and behavior
  7. Clean up infrastructure

See e2e/README.md for detailed guidelines.
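
A condensed sketch of that flow for the Kubernetes case; the image, deployment, and port names are placeholders:

#!/usr/bin/env bash
set -euo pipefail

# 1. Set up infrastructure (a Kind cluster in this sketch)
kind create cluster --name e2e-test

# 2. Build and load the image into the cluster
docker build -t exporter-e2e:test .
kind load docker-image exporter-e2e:test --name e2e-test

# 3. Deploy manifests and 4. wait for readiness
kubectl apply -f deployment.yaml
kubectl rollout status deployment/exporter-e2e --timeout=120s

# 5. Generate traffic and 6. verify metrics (port is a placeholder)
kubectl port-forward deployment/exporter-e2e 9091:9091 &
PF_PID=$!
sleep 2
curl -sf http://127.0.0.1:9091/metrics | grep -q gunicorn_
kill "$PF_PID"

# 7. Clean up infrastructure
kind delete cluster --name e2e-test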

CI/CD Testing

GitHub Actions Workflows

The project uses four main testing workflows:

  1. Unit Tests (.github/workflows/ci.yml):
     • Runs on every push/PR
     • Tests Python 3.9-3.12
     • Checks linting and formatting
     • Generates coverage reports

  2. Integration Tests (.github/workflows/system-test.yml):
     • Tests Redis integration
     • Tests file-based storage
     • Tests YAML configuration
     • Runs in Docker containers

  3. Smoke Tests - Docker (.github/workflows/docker-test.yml):
     • Tests Docker image builds
     • Tests Docker Compose
     • Tests sidecar functionality
     • Validates Kubernetes manifests

  4. Smoke Tests - Kubernetes (.github/workflows/kubernetes-test.yml):
     • Tests Kubernetes deployments
     • Tests the DaemonSet pattern
     • Tests the Sidecar pattern
     • Validates metrics collection

Running CI Tests Locally

# Unit tests (same as CI)
tox

# Integration tests (Docker-based, same as CI)
cd e2e
docker build -f fixtures/dockerfiles/default.Dockerfile -t test ..
docker run --rm test

# E2E tests (requires Docker/Kind)
bash e2e/docker/test_docker_compose.sh
bash e2e/kubernetes/test_daemonset_deployment.sh

Test Coverage

Checking Coverage

# Unit test coverage
pytest --cov=src/gunicorn_prometheus_exporter --cov-report=html
open htmlcov/index.html   # macOS; use xdg-open on Linux

# With tox
tox -e py312 -- --cov=gunicorn_prometheus_exporter --cov-report=html

Coverage Goals

  • Unit tests: 80%+ coverage
  • Integration tests: Cover all storage backends
  • E2E tests: Cover all deployment patterns
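
The 80% unit-test floor can be enforced automatically with pytest-cov's --cov-fail-under flag, which fails the run when total coverage drops below the threshold:

# Fail the test run if total coverage drops below 80%
pytest --cov=src/gunicorn_prometheus_exporter --cov-fail-under=80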

Testing Best Practices

General

  1. Follow the Test Pyramid: Many unit tests, fewer integration tests, even fewer E2E tests
  2. Keep tests fast: Unit tests should run in seconds
  3. Keep tests isolated: Tests should not depend on each other
  4. Keep tests deterministic: Tests should always produce the same result
  5. Keep tests readable: Use descriptive names and clear assertions

Unit Tests

  • Test one thing at a time
  • Use mocks for external dependencies
  • Test both success and failure cases
  • Use parameterized tests for multiple scenarios (see the sketch below)
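
A parameterized test covers several scenarios with one test function. In this sketch, parse_worker_id is a hypothetical unit, not a project function:

import pytest

def parse_worker_id(raw):
    """Hypothetical unit: extract the numeric id from names like 'worker_3'."""
    prefix, _, num = raw.partition("_")
    if prefix != "worker" or not num.isdigit():
        raise ValueError(f"invalid worker id: {raw!r}")
    return int(num)

@pytest.mark.parametrize(
    ("raw", "expected"),
    [("worker_0", 0), ("worker_3", 3), ("worker_12", 12)],
)
def test_parse_worker_id_success(raw, expected):
    assert parse_worker_id(raw) == expected

@pytest.mark.parametrize("raw", ["worker_", "w_3", "worker_x"])
def test_parse_worker_id_rejects_bad_input(raw):
    with pytest.raises(ValueError):
        parse_worker_id(raw)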

Integration Tests

  • Use real dependencies
  • Clean up after tests (see the trap sketch after this list)
  • Test error recovery
  • Verify complete workflows
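
A common shell pattern for reliable cleanup is an EXIT trap, so background processes are killed even when an assertion fails under set -e. A minimal sketch (variable names are illustrative):

# Kill background processes on any exit path, including failures
cleanup() {
  kill "${GUNICORN_PID:-}" 2>/dev/null || true
  redis-cli shutdown nosave 2>/dev/null || true
}
trap cleanup EXIT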

E2E Tests

  • Test production-like scenarios
  • Validate complete deployments
  • Check metrics thoroughly
  • Clean up infrastructure

Debugging Tests

Unit Tests

# Run with verbose output
pytest -v -s

# Run with pdb on failure
pytest --pdb

# Run specific test with print statements
pytest -v -s tests/test_metrics.py::test_worker_requests

Integration Tests

# Enable verbose mode
export VERBOSE=1
cd e2e
bash ../integration/test_redis_integration.sh

# Check logs
cat prometheus.log

E2E Tests

# Docker logs
docker logs <container-name>

# Kubernetes logs
kubectl logs <pod-name>
kubectl describe pod <pod-name>

# Kind cluster logs
kind export logs /tmp/kind-logs