# Testing Guide

Testing practices and guidelines for mockapi-server.
## Quick Start

```bash
make test        # Tests with coverage
make test-fast   # Fast (no coverage)
make benchmark   # Performance benchmarks
```
## Test Structure

```
tests/
├── conftest.py      # Shared fixtures
├── fixtures/        # Test data models
├── test_*.py        # Unit tests
└── benchmarks/      # Performance tests
```
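Shared fixtures live in `conftest.py`. A minimal sketch of what such a fixture might look like — the fixture name and the sample model are illustrative, not the project's actual fixtures:

```python
# tests/conftest.py -- illustrative sketch, not the project's actual fixtures
import pytest

SAMPLE_MODEL = '''
from pydantic import BaseModel

class User(BaseModel):
    email: str
    age: int
    active: bool
'''


@pytest.fixture
def sample_model_file(tmp_path):
    """Write a small Pydantic model to a temp file and return its path."""
    path = tmp_path / "models.py"
    path.write_text(SAMPLE_MODEL)
    return path
```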
## Test Naming

Use descriptive names that state the unit under test, the condition, and the expected behavior:

```python
def test_parser_with_valid_models_extracts_all_fields():
    """Parser should extract all fields from valid Pydantic models."""
    pass


def test_generator_with_email_field_creates_valid_email():
    """Generator should create valid email for fields named 'email'."""
    pass
```
## Writing Tests

### Basic Structure

```python
def test_feature():
    """Test description."""
    # Arrange
    parser = SchemaParser()

    # Act
    result = parser.parse_file("test.py")

    # Assert
    assert result is not None
    assert "User" in result
```
### Parametrized Tests

```python
import pytest


@pytest.mark.parametrize("field_name,field_type", [
    ("email", str),
    ("age", int),
    ("active", bool),
])
def test_field_types(field_name, field_type):
    """Test field type extraction."""
    # Test implementation
    pass
```
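One way the body might be filled in, reusing the `sample_model_file` fixture sketched above. This assumes `parse_file()` returns a mapping of model names to `{field name: type}` dicts — an assumption about the return shape, not the documented API:

```python
import pytest


@pytest.mark.parametrize("field_name,field_type", [
    ("email", str),
    ("age", int),
    ("active", bool),
])
def test_field_types_extracted(field_name, field_type, sample_model_file):
    """Each field on the sample User model maps to its Python type."""
    # Assumes parse_file() returns {model_name: {field_name: type}};
    # adjust the assertion to the parser's real return shape.
    result = SchemaParser().parse_file(str(sample_model_file))
    assert result["User"][field_name] is field_type
```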
### Testing Exceptions

```python
import pytest


def test_parser_with_missing_file_raises_error():
    """Parser should raise FileNotFoundError."""
    parser = SchemaParser()

    with pytest.raises(FileNotFoundError):
        parser.parse_file("nonexistent.py")
```
## Running Tests

### Make Commands

```bash
make test        # Tests with coverage
make test-fast   # Fast (no coverage)
make test-unit   # Unit tests only
make test-cov    # View HTML coverage report
make benchmark   # Run benchmarks
```
## Coverage

### Targets
| Component | Target |
|---|---|
| Parser | 95% |
| Generator | 90% |
| Store | 100% |
| Router | 95% |
| Overall | 90% |
### Checking Coverage

```bash
pytest --cov=mock_api --cov-report=term-missing
pytest --cov=mock_api --cov-report=html
open htmlcov/index.html
```
## Benchmarks

We use pytest-benchmark to track performance across 53 tests covering the parser, generator, store, router, and server components.

```bash
make benchmark          # Run all benchmarks (~45s)
make benchmark-compare  # Compare with baseline (fails if >20% slower)
make benchmark-save     # Save new baseline after optimizations
make benchmark-report   # Generate histogram in .benchmarks/
```

Performance targets:

- Parser: <10ms per model
- Generator: <20ms for 100 records
- Store CRUD: <10μs per operation
- API requests: <2ms

CI integration: benchmarks run on the develop and main branches, posting results as commit comments and failing PRs that show a >20% performance regression.
### Writing Benchmarks

```python
def test_parser_performance(benchmark):
    """Benchmark: parser-simple."""
    parser = SchemaParser()

    result = benchmark(parser.parse_file, "tests/fixtures/models.py")

    assert len(result) == 10
```
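For micro-benchmarks such as the store CRUD target (<10μs per operation), pytest-benchmark's `pedantic` mode gives explicit control over rounds and iterations. A minimal sketch, assuming a hypothetical `InMemoryStore` with a `create` method (both names are placeholders, not the project's actual API):

```python
def test_store_create_performance(benchmark):
    """Benchmark: store-create (InMemoryStore is an illustrative placeholder)."""
    store = InMemoryStore()

    # pedantic mode runs the target a fixed number of rounds/iterations
    # instead of letting pytest-benchmark calibrate automatically.
    benchmark.pedantic(
        store.create,
        args=("users", {"email": "a@example.com"}),
        rounds=1000,
        iterations=10,
    )
```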
## Best Practices

### DO
- Write tests first (TDD for bugs)
- Test one thing per test
- Use descriptive names
- Keep tests fast (< 100ms)
- Test edge cases
### DON'T
- Test implementation details
- Create test dependencies
- Skip tests without reason
- Use sleep() for timing
## CI Integration

Tests run on every push and pull request. The CI pipeline includes:
- Format check
- Linting
- Type checking
- Tests with coverage
- Coverage threshold (90%)