Test Coverage Summary

Comprehensive overview of test coverage across the Bindy codebase.

Test Coverage Goals

This project maintains high test coverage standards as required in our regulated banking environment. All code must be thoroughly tested before merging.

Test Organization

Tests are organized into separate *_tests.rs files, following the project's convention:

src/
├── main.rs
├── main_tests.rs           # Tests for main.rs
├── bind9.rs
├── bind9_tests.rs          # Tests for bind9.rs
├── reconcilers/
│   ├── dnszone.rs
│   └── dnszone_tests.rs    # Tests for dnszone.rs
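
Assuming each test module is declared from the file it covers behind #[cfg(test)] (a sketch of one common wiring, not confirmed from the codebase), the test files are compiled only for test builds:

// In src/bind9.rs: pull in the sibling src/bind9_tests.rs for test builds only.
#[cfg(test)]
#[path = "bind9_tests.rs"]
mod bind9_tests;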

Test Categories

Unit Tests

Located in *_tests.rs files alongside source code:

  • Core Logic Tests: Business logic and algorithms
  • Validation Tests: Input validation and error handling (example below)
  • State Transition Tests: Reconciler state machine testing
  • Helper Function Tests: Utility and helper functions
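
As an illustration of the validation category, a test might look like the sketch below; validate_zone_name is a hypothetical helper, not necessarily part of Bindy's API, and it is shown next to its tests so the example stands alone (in the real layout it would live in the sibling source file, with the tests in the matching *_tests.rs):

// Hypothetical helper, shown inline so the sketch compiles on its own.
fn validate_zone_name(name: &str) -> Result<(), String> {
    if name.is_empty() || !name.ends_with('.') {
        return Err(format!("invalid zone name: {name:?}"));
    }
    Ok(())
}

#[test]
fn accepts_fully_qualified_zone_name() {
    assert!(validate_zone_name("example.com.").is_ok());
}

#[test]
fn rejects_empty_zone_name() {
    assert!(validate_zone_name("").is_err());
}

#[test]
fn rejects_name_without_trailing_dot() {
    assert!(validate_zone_name("example.com").is_err());
}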

Integration Tests

Located in the /tests/ directory:

  • CRD Tests: Custom Resource Definition validation
  • Reconciliation Tests: End-to-end reconciliation workflows
  • Multi-Resource Tests: Cross-resource interaction testing
  • Cleanup Tests: Finalizer and deletion logic
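
A minimal integration-test skeleton under /tests/ might look like the following sketch; it assumes the crate exports a DnsZone custom resource type and that a cluster is reachable, neither of which is confirmed here:

// tests/dnszone_smoke.rs -- hedged skeleton, not the project's actual test.
use kube::{api::ListParams, Api, Client};

// Assumption: the crate exposes a `DnsZone` CRD type; adjust the path as needed.
use bindy::DnsZone;

#[tokio::test]
#[ignore] // needs a live cluster; run with `cargo test -- --ignored`
async fn dnszones_can_be_listed() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::try_default().await?;
    let zones: Api<DnsZone> = Api::namespaced(client, "default");
    let list = zones.list(&ListParams::default()).await?;
    println!("found {} DnsZone objects in the default namespace", list.items.len());
    Ok(())
}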

Coverage by Module

Core Modules

Module                    Coverage   Test Status
src/main.rs               High       Unit
src/bind9.rs              High       Unit
src/bind9_resources.rs    High       Unit
src/crd.rs                Medium     Unit + Integration
src/labels.rs             High       Unit

Reconcilers

Reconciler                  Coverage   Test Status
clusterbind9provider.rs     High       Unit + Integration
bind9cluster.rs             High       Unit + Integration
bind9instance.rs            High       Unit + Integration
dnszone.rs                  High       Unit + Integration
records.rs                  High       Unit + Integration

Running Tests

# Run all tests
cargo test

# Run tests for a specific module
cargo test --lib bind9_tests

# Run integration tests only
cargo test --test '*'

# Run with output
cargo test -- --nocapture

# Run with coverage (if tarpaulin installed)
cargo tarpaulin --out Html

Test Requirements

All new code must include:

  1. Unit Tests: Test all public functions and key internal logic
  2. Success Cases: Verify expected behavior works
  3. Failure Cases: Test error handling for all error types
  4. Edge Cases: Test boundary conditions and special cases
  5. State Transitions: Verify correct reconciliation state changes (see the sketch below)
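
For item 5, a state-transition test can pin down the reconciler's expected moves without touching a cluster; ZonePhase and next_phase below are illustrative stand-ins, not Bindy's actual types:

// Hypothetical phase machine, shown inline so the sketch compiles on its own.
#[derive(Debug, PartialEq)]
enum ZonePhase {
    Pending,
    Provisioning,
    Ready,
}

fn next_phase(current: &ZonePhase, zone_loaded: bool) -> ZonePhase {
    match (current, zone_loaded) {
        (ZonePhase::Pending, _) => ZonePhase::Provisioning,
        (ZonePhase::Provisioning, true) => ZonePhase::Ready,
        (ZonePhase::Provisioning, false) => ZonePhase::Provisioning,
        (ZonePhase::Ready, _) => ZonePhase::Ready,
    }
}

#[test]
fn provisioning_moves_to_ready_once_zone_is_loaded() {
    assert_eq!(next_phase(&ZonePhase::Provisioning, true), ZonePhase::Ready);
}

#[test]
fn provisioning_waits_until_zone_is_loaded() {
    assert_eq!(next_phase(&ZonePhase::Provisioning, false), ZonePhase::Provisioning);
}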

See Testing Guide for detailed requirements.

Continuous Integration

Tests run automatically on:

  • Every pull request
  • Every commit to main
  • Nightly builds

CI enforces:

  • ✅ All tests must pass
  • ✅ No compiler warnings (cargo clippy)
  • ✅ Code formatting (cargo fmt)
  • ✅ Security audit (cargo audit)

Test-Driven Development (TDD)

This project follows TDD practices:

  1. RED: Write failing tests that define expected behavior
  2. GREEN: Implement minimum code to make tests pass
  3. REFACTOR: Improve code while keeping tests green
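
A compact illustration of the cycle, using a hypothetical record_fqdn helper rather than real Bindy code:

// GREEN: the minimal implementation written after the failing test below.
fn record_fqdn(record: &str, zone: &str) -> String {
    format!("{record}.{zone}")
}

// RED: written first, defining the expected behavior before any implementation.
#[test]
fn joins_record_and_zone_with_a_dot() {
    assert_eq!(record_fqdn("www", "example.com."), "www.example.com.");
}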

See Testing Guide for TDD workflow details.

Mocking and Fixtures

Tests use:

  • kube::Client mocking: For Kubernetes API interactions
  • Test fixtures: Pre-defined CR objects for consistent testing
  • Helper functions: Shared test utilities in *_tests.rs files
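
A shared fixture helper might look like the sketch below, which sticks to kube's ObjectMeta; the real fixtures presumably build complete CR objects, but those types are not shown here:

// Hedged sketch of a fixture helper for consistent test metadata.
use kube::core::ObjectMeta;

/// Pre-defined metadata used by CR fixtures across *_tests.rs files.
fn test_metadata(name: &str) -> ObjectMeta {
    ObjectMeta {
        name: Some(name.to_string()),
        namespace: Some("default".to_string()),
        ..ObjectMeta::default()
    }
}

#[test]
fn fixture_metadata_is_namespaced() {
    let meta = test_metadata("example-zone");
    assert_eq!(meta.namespace.as_deref(), Some("default"));
}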

Test Metrics

Current test statistics (updated automatically):

  • Total Tests: 500+ (across unit and integration tests)
  • Test Execution Time: ~2 minutes (full test suite)
  • Coverage Target: 80%+ line coverage
  • Current Coverage: Measured with cargo tarpaulin

Known Gaps

Areas needing additional test coverage:

  • Edge cases in RNDC communication
  • Complex multi-region failure scenarios
  • Performance under high load (addressed in load testing)

See Load Testing Roadmap for performance testing plans.

Contributing Tests

When contributing code:

  1. Write tests FIRST (TDD approach)
  2. Ensure all new public functions have tests
  3. Test both success and failure paths
  4. Run cargo test before committing
  5. Update this summary if adding significant test coverage

See Contributing Guide for detailed requirements.