Testing Guide¶
CalcBridge maintains high test coverage to ensure reliability for financial calculations. This guide covers our testing philosophy, test categories, and how to write effective tests.
Testing Philosophy¶
Core Principles¶
- Test behavior, not implementation - Tests should verify what the code does, not how it does it
- Use real dependencies when practical - Prefer testcontainers over mocks for integration tests
- Every bug gets a test - When fixing bugs, write a test that reproduces the issue first
- Fast feedback - Unit tests should run in milliseconds; integration tests in seconds
- Coverage is a guide, not a goal - Aim for meaningful coverage, not 100% line coverage
Coverage Requirements¶
| Category | Minimum Coverage | Target Coverage |
|---|---|---|
| Overall | 80% | 90% |
| Core calculations | 95% | 100% |
| API routes | 85% | 95% |
| Repositories | 80% | 90% |
Test Categories¶
Unit Tests¶
Unit tests verify individual functions and classes in isolation. They run without external dependencies.
Location: tests/unit/
Characteristics:
- No database, cache, or network calls
- Fast execution (< 100ms per test)
- Test edge cases and error conditions
- Use fixtures for test data
# Run unit tests
PYTHONPATH=. pytest tests/unit/ -v
# Run specific test file
PYTHONPATH=. pytest tests/unit/test_parser.py -v
# Run specific test
PYTHONPATH=. pytest tests/unit/test_parser.py::TestFormulaParser::test_parse_sum -v
Integration Tests¶
Integration tests verify that components work together correctly with real infrastructure.
Location: tests/integration/
Characteristics:
- Use testcontainers for PostgreSQL and Valkey
- Test API endpoints end-to-end
- Verify database operations
- Test multi-component workflows
# Run integration tests (requires Docker)
PYTHONPATH=. pytest tests/integration/ -v
# Skip testcontainers (use local services)
USE_TESTCONTAINERS=0 \
TEST_DATABASE_URL=postgresql+psycopg://calcbridge:calcbridge_dev@localhost:5433/calcbridge \
PYTHONPATH=. pytest tests/integration/ -v
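The testcontainers-backed fixtures referenced above are defined in tests/conftest.py. A minimal sketch of how such an engine fixture could be wired up (illustrative only; the container image, driver swap, and scope are assumptions, not the project's actual code):
# Illustrative sketch of a testcontainers-backed engine fixture (not the real conftest.py)
import pytest_asyncio
from sqlalchemy.ext.asyncio import create_async_engine
from testcontainers.postgres import PostgresContainer

@pytest_asyncio.fixture(scope="session")
async def db_engine():
    """Spin up a disposable PostgreSQL container and yield an async engine."""
    with PostgresContainer("postgres:16") as pg:
        # Swap the sync driver for psycopg so the async engine can use it
        url = pg.get_connection_url().replace("postgresql+psycopg2", "postgresql+psycopg")
        engine = create_async_engine(url)
        try:
            yield engine
        finally:
            await engine.dispose()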
Security Tests¶
Security tests verify protection against common vulnerabilities.
Location: tests/security/
Characteristics:
- Test authentication and authorization
- Verify input validation
- Check for injection vulnerabilities
- Test tenant isolation
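For example, a minimal authentication check using the unauthenticated client fixture might look like this (the 401 status is an assumption about how the API rejects missing credentials):
import pytest
from httpx import AsyncClient

@pytest.mark.asyncio
async def test_list_workbooks_requires_auth(client: AsyncClient):
    """Requests without credentials should be rejected."""
    response = await client.get("/api/v1/workbooks")
    assert response.status_code == 401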
Load Tests¶
Load tests verify system performance under stress.
Location: tests/load/
Tool: Locust
# Run load tests (requires running API)
cd tests/load
locust -f scenarios/smoke_test.py --headless -u 10 -r 5 --run-time 1m
# Interactive mode
locust -f scenarios/load_test.py
# Open http://localhost:8089
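A scenario file is an ordinary Locust user class. The sketch below is illustrative only; the login endpoint, payload, and credentials are placeholders rather than the project's actual smoke_test.py:
# tests/load/scenarios/smoke_test.py (illustrative sketch)
from locust import HttpUser, between, task

class WorkbookUser(HttpUser):
    """Simulated user issuing lightweight read requests."""
    wait_time = between(1, 3)

    def on_start(self):
        # Placeholder login flow; adjust to the real auth endpoint and payload
        response = self.client.post(
            "/api/v1/auth/login",
            json={"email": "load@example.com", "password": "load-test-password"},
        )
        self.token = response.json().get("access_token", "")

    @task
    def list_workbooks(self):
        self.client.get(
            "/api/v1/workbooks",
            headers={"Authorization": f"Bearer {self.token}"},
        )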
Running Tests¶
Full Test Suite¶
# Run all tests
PYTHONPATH=. pytest tests/ -v
# With coverage report
PYTHONPATH=. pytest tests/ --cov=src --cov-report=html --cov-report=term
# View coverage report
open htmlcov/index.html
Test Selection¶
# Run tests matching a pattern
PYTHONPATH=. pytest tests/ -k "parser" -v
# Run tests with a specific marker
PYTHONPATH=. pytest tests/ -m "slow" -v
# Exclude slow tests
PYTHONPATH=. pytest tests/ -m "not slow" -v
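Markers such as slow are applied with a decorator; registering them in the pytest configuration avoids unknown-marker warnings. A minimal sketch (the test body is a placeholder):
import pytest

@pytest.mark.slow
def test_large_workbook_recalculation():
    """Excluded from quick runs via -m "not slow"."""
    ...  # placeholder body; a real test would recalculate a large workbook end-to-end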
Test Options¶
| Option | Description |
|---|---|
| -v | Verbose output |
| -vv | More verbose output |
| -x | Stop on first failure |
| --pdb | Drop into debugger on failure |
| -k "pattern" | Run tests matching pattern |
| -m "marker" | Run tests with marker |
| --timeout=N | Set timeout per test (seconds) |
| --tb=short | Shorter traceback format |
Writing Unit Tests¶
Test Structure (AAA Pattern)¶
Follow the Arrange-Act-Assert pattern:
import pytest
from src.calculations.functions.vectorized import vectorized_if
import numpy as np
import pandas as pd
class TestVectorizedIf:
    """Tests for the vectorized_if function."""

    def test_basic_condition(self):
        """Test basic true/false condition evaluation."""
        # Arrange
        condition = pd.Series([True, False, True, False])
        value_if_true = 100
        value_if_false = 0

        # Act
        result = vectorized_if(condition, value_if_true, value_if_false)

        # Assert
        expected = np.array([100, 0, 100, 0])
        np.testing.assert_array_equal(result, expected)

    def test_with_series_values(self):
        """Test with Series for both true and false values."""
        # Arrange
        condition = pd.Series([True, False, True])
        true_values = pd.Series([10, 20, 30])
        false_values = pd.Series([1, 2, 3])

        # Act
        result = vectorized_if(condition, true_values, false_values)

        # Assert
        expected = np.array([10, 2, 30])
        np.testing.assert_array_equal(result, expected)

    def test_empty_input(self):
        """Test handling of empty input arrays."""
        # Arrange
        condition = pd.Series([], dtype=bool)

        # Act
        result = vectorized_if(condition, 1, 0)

        # Assert
        assert len(result) == 0
Parametrized Tests¶
Use pytest.mark.parametrize for testing multiple inputs:
import pytest
from src.calculations.functions.vectorized import safe_division
class TestSafeDivision:
    """Tests for safe division with zero handling."""

    @pytest.mark.parametrize(
        "numerator,denominator,expected",
        [
            (10, 2, 5.0),
            (10, 0, 0.0),  # Division by zero returns 0
            (0, 5, 0.0),
            (100, 4, 25.0),
            (-10, 2, -5.0),
        ],
        ids=["normal", "div_by_zero", "zero_numerator", "exact_division", "negative"],
    )
    def test_safe_division(self, numerator, denominator, expected):
        """Test safe division with various inputs."""
        result = safe_division(numerator, denominator)
        assert result == expected
Testing Exceptions¶
import pytest
from src.calculations.parser import parse_formula, FormulaParseError
class TestFormulaParser:
    """Tests for formula parsing."""

    def test_invalid_syntax_raises_error(self):
        """Test that invalid syntax raises FormulaParseError."""
        with pytest.raises(FormulaParseError) as exc_info:
            parse_formula("=SUM(A1:A10")  # Missing closing paren
        assert "Unexpected end of formula" in str(exc_info.value)

    def test_unsupported_function_raises_error(self):
        """Test that unsupported functions raise appropriate error."""
        with pytest.raises(FormulaParseError, match="Unknown function"):
            parse_formula("=UNSUPPORTED_FUNC(A1)")
Writing Integration Tests¶
Database Tests with Fixtures¶
import pytest
from httpx import AsyncClient
@pytest.mark.asyncio
async def test_create_workbook(authenticated_client: AsyncClient):
    """Test workbook creation via API."""
    # Act
    response = await authenticated_client.post(
        "/api/v1/workbooks",
        json={
            "name": "Test Workbook",
            "description": "Integration test workbook",
        },
    )

    # Assert
    assert response.status_code == 201
    data = response.json()
    assert data["name"] == "Test Workbook"
    assert "id" in data
    assert "created_at" in data


@pytest.mark.asyncio
async def test_workbook_not_found(authenticated_client: AsyncClient):
    """Test 404 response for non-existent workbook."""
    # Act
    response = await authenticated_client.get(
        "/api/v1/workbooks/00000000-0000-0000-0000-000000000000"
    )

    # Assert
    assert response.status_code == 404
    data = response.json()
    assert data["detail"]["code"] == "NOT_FOUND"
Testing with Test Database¶
import pytest
from sqlalchemy.ext.asyncio import AsyncSession
from src.db.models import Workbook
from src.db.repositories.workbook import WorkbookRepository
@pytest.mark.asyncio
async def test_repository_create(db_session: AsyncSession, test_tenant_id: str):
    """Test repository create operation."""
    # Arrange
    repo = WorkbookRepository(db_session)

    # Act
    workbook = await repo.create(
        tenant_id=test_tenant_id,
        name="Test Workbook",
        description="Test description",
    )

    # Assert
    assert workbook.id is not None
    assert workbook.name == "Test Workbook"
    assert workbook.tenant_id == test_tenant_id

    # Verify in database
    fetched = await repo.get_by_id(workbook.id)
    assert fetched is not None
    assert fetched.name == "Test Workbook"
Test Fixtures¶
Available Fixtures¶
CalcBridge provides several fixtures in tests/conftest.py:
| Fixture | Scope | Description |
|---|---|---|
| db_engine | session | Async SQLAlchemy engine with testcontainer |
| db_session | function | Database session with rollback |
| client | function | AsyncClient for API testing |
| authenticated_client | function | Client with auth headers |
| registered_user | function | User with tokens |
| test_workbook | function | Pre-created workbook |
| sample_excel_content | function | Excel file bytes |
| test_tenant_id | function | UUID for tenant |
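The rollback behaviour of db_session is typically achieved by binding the session to a transaction that is rolled back when the test finishes. A rough sketch, not the project's actual implementation:
# Illustrative sketch of a per-test session that rolls back automatically
import pytest_asyncio
from sqlalchemy.ext.asyncio import AsyncSession

@pytest_asyncio.fixture
async def db_session(db_engine):
    """Yield a session bound to a transaction that is rolled back after each test."""
    async with db_engine.connect() as conn:
        trans = await conn.begin()
        session = AsyncSession(bind=conn, expire_on_commit=False)
        try:
            yield session
        finally:
            await session.close()
            await trans.rollback()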
Custom Fixtures¶
# tests/conftest.py or test file
@pytest.fixture
def sample_portfolio_data() -> dict:
    """Sample portfolio data for testing calculations."""
    return {
        "positions": [
            {"cusip": "123456789", "value": 1000000, "weight": 0.10},
            {"cusip": "987654321", "value": 2000000, "weight": 0.20},
            {"cusip": "456789123", "value": 7000000, "weight": 0.70},
        ],
        "total_value": 10000000,
    }


@pytest.fixture
def compliance_thresholds() -> dict:
    """Standard compliance thresholds for testing."""
    return {
        "max_single_obligor": 0.05,
        "min_diversity_score": 40,
        "max_ccc_concentration": 0.075,
    }
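Fixtures are consumed by naming them as test parameters. For instance, a test exercising sample_portfolio_data could check internal consistency (illustrative only):
import pytest

def test_weights_match_values(sample_portfolio_data):
    """Each position weight should equal value / total_value."""
    total = sample_portfolio_data["total_value"]
    for position in sample_portfolio_data["positions"]:
        assert position["value"] / total == pytest.approx(position["weight"])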
Mocking Guidelines¶
When to Mock¶
Mock external services and non-deterministic behavior:
- External API calls (Geneva, JPM)
- Current time (datetime.now()), as shown in the sketch after this list
- Random values
- File system operations (when appropriate)
- Email/notification sending
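For the current-time case, patch datetime in the namespace of the module under test rather than in the standard library. The module path, function, and attribute below are hypothetical stand-ins:
from datetime import datetime, timezone
from unittest.mock import patch

from src.services.reports import generate_daily_report  # hypothetical module and function

def test_report_timestamp_is_deterministic():
    """Freeze the clock by patching datetime where the module imports it."""
    frozen = datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc)
    with patch("src.services.reports.datetime") as mock_datetime:
        mock_datetime.now.return_value = frozen
        report = generate_daily_report()
    assert report.generated_at == frozen  # hypothetical attribute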
When NOT to Mock¶
Prefer real implementations for:
- Database operations (use testcontainers)
- Cache operations (use testcontainers)
- Internal service interactions
- Calculation functions
Mocking Example¶
from unittest.mock import AsyncMock, patch
import pytest
@pytest.mark.asyncio
async def test_send_notification(authenticated_client):
    """Test notification endpoint with mocked email sender."""
    with patch("src.services.notifications.email_sender") as mock_sender:
        mock_sender.send_async = AsyncMock(return_value=True)

        response = await authenticated_client.post(
            "/api/v1/notifications/send",
            json={
                "recipient": "user@example.com",
                "subject": "Test",
                "body": "Test message",
            },
        )

        assert response.status_code == 200
        mock_sender.send_async.assert_called_once()
Performance Testing¶
Benchmark Tests¶
Use pytest-benchmark for performance regression testing:
import pytest
from src.calculations.parser import parse_formula
def test_parse_performance(benchmark):
    """Benchmark formula parsing performance."""
    formula = "=SUM(A1:A100) + IF(B1>0, VLOOKUP(C1, D:E, 2, FALSE), 0)"

    result = benchmark(parse_formula, formula)

    assert result is not None
    # Verify performance (adjust threshold as needed)
    assert benchmark.stats["mean"] < 0.001  # < 1ms
Running Benchmarks¶
# Run benchmarks
PYTHONPATH=. pytest tests/benchmarks/ -v --benchmark-only
# Compare against baseline
PYTHONPATH=. pytest tests/benchmarks/ --benchmark-compare
Debugging Tests¶
Using pdb¶
# Drop into debugger on failure
PYTHONPATH=. pytest tests/unit/test_parser.py -x --pdb
# Set breakpoint in code
def test_something():
    import pdb; pdb.set_trace()
    result = function_under_test()
Verbose Output¶
# Show print statements
PYTHONPATH=. pytest tests/ -v -s
# Show local variables on failure
PYTHONPATH=. pytest tests/ -v --tb=long -l
CI/CD Integration¶
Tests run automatically on every pull request:
# .github/workflows/test.yml (excerpt)
- name: Run tests
  run: |
    PYTHONPATH=. pytest tests/ \
      --cov=src \
      --cov-report=xml \
      --cov-fail-under=80 \
      -v
Required Checks¶
Pull requests require:
- All tests passing
- Coverage >= 80%
- No security test failures
- Ruff check passing
Summary Checklist¶
Before submitting tests:
- Tests follow AAA pattern (Arrange-Act-Assert)
- Test names describe the behavior being tested
- Edge cases are covered (empty input, null values, etc.)
- Error conditions are tested
- No hardcoded test data that could become stale
- Integration tests clean up after themselves
- Tests run in isolation (no order dependency)
- Performance-sensitive code has benchmark tests