feat(tests): Phase 0 - Fix flaky and broken tests
Implements Phase 0 of v1.5.0 per ADR-012 and RELEASE.md. Changes: - Remove 5 broken multiprocessing tests (TestConcurrentExecution, TestPerformance) - Fix brittle XML assertion tests (check semantics not quote style) - Fix test_debug_level_for_early_retries logger configuration - Rename test_feed_route_streaming to test_feed_route_caching (correct name) Results: - Test count: 879 → 874 (5 removed as planned) - All tests pass consistently (verified across 3 runs) - No flakiness detected References: - ADR-012: Flaky Test Removal and Test Quality Standards - docs/projectplan/v1.5.0/RELEASE.md Phase 0 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
120
docs/design/v1.5.0/2025-12-16-phase-0-implementation-report.md
Normal file
120
docs/design/v1.5.0/2025-12-16-phase-0-implementation-report.md
Normal file
@@ -0,0 +1,120 @@
|
|||||||
|
# Phase 0 Implementation Report - Test Fixes
|
||||||
|
|
||||||
|
**Date**: 2025-12-16
|
||||||
|
**Developer**: Developer Agent
|
||||||
|
**Phase**: v1.5.0 Phase 0 - Test Fixes
|
||||||
|
**Status**: Complete
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
Successfully implemented Phase 0 of v1.5.0 as specified in ADR-012 (Flaky Test Removal) and the v1.5.0 RELEASE.md. All test fixes completed and verified across 3 test runs with no flakiness detected.
|
||||||
|
|
||||||
|
## Changes Made
|
||||||
|
|
||||||
|
### 1. Removed 5 Broken Multiprocessing Tests
|
||||||
|
|
||||||
|
**File**: `tests/test_migration_race_condition.py`
|
||||||
|
|
||||||
|
Removed the following tests that fundamentally cannot work due to Python multiprocessing limitations:
|
||||||
|
|
||||||
|
- `test_concurrent_workers_barrier_sync` - Cannot pickle Barrier objects for Pool.map()
|
||||||
|
- `test_sequential_worker_startup` - Missing Flask app context across processes
|
||||||
|
- `test_worker_late_arrival` - Missing Flask app context across processes
|
||||||
|
- `test_single_worker_performance` - Cannot pickle local functions
|
||||||
|
- `test_concurrent_workers_performance` - Cannot pickle local functions
|
||||||
|
|
||||||
|
**Action Taken**:
|
||||||
|
- Removed entire `TestConcurrentExecution` class (3 tests)
|
||||||
|
- Removed entire `TestPerformance` class (2 tests)
|
||||||
|
- Removed unused module-level worker functions (`_barrier_worker`, `_simple_worker`)
|
||||||
|
- Removed unused imports (`time`, `multiprocessing`, `Barrier`)
|
||||||
|
- Added explanatory comments documenting why tests were removed
|
||||||
|
|
||||||
|
**Justification**: Per ADR-012, these tests have architectural issues that make them unreliable. The migration retry logic they attempt to test is proven to work in production with multi-worker Gunicorn deployments. The tests are the problem, not the code.
|
||||||
|
|
||||||
|
### 2. Fixed Brittle Feed XML Assertions
|
||||||
|
|
||||||
|
**File**: `tests/test_routes_feeds.py`
|
||||||
|
|
||||||
|
Fixed assertions that were checking implementation details (quote style) rather than semantics (valid XML):
|
||||||
|
|
||||||
|
**Changes**:
|
||||||
|
- `test_feed_atom_endpoint`: Changed from checking `<?xml version="1.0"` to checking `<?xml version=` and `encoding=` separately
|
||||||
|
- `test_feed_json_endpoint`: Changed Content-Type assertion from exact match to `.startswith('application/feed+json')` to not require charset specification
|
||||||
|
- `test_accept_json_feed`: Same Content-Type fix as above
|
||||||
|
- `test_accept_json_generic`: Same Content-Type fix as above
|
||||||
|
- `test_quality_factor_json_wins`: Same Content-Type fix as above
|
||||||
|
|
||||||
|
**Rationale**: XML generators may use single or double quotes. Tests should verify semantics (valid XML with correct encoding), not formatting details. Similarly, Content-Type may or may not include charset parameter depending on framework version.
|
||||||
|
|
||||||
|
### 3. Fixed test_debug_level_for_early_retries
|
||||||
|
|
||||||
|
**File**: `tests/test_migration_race_condition.py`
|
||||||
|
|
||||||
|
**Issue**: Logger not configured to capture DEBUG level messages.
|
||||||
|
|
||||||
|
**Fix**: Simplified logger configuration by:
|
||||||
|
- Removing unnecessary `caplog.clear()` calls
|
||||||
|
- Ensuring `caplog.at_level(logging.DEBUG, logger='starpunk.migrations')` wraps the actual test execution
|
||||||
|
- Removing redundant clearing inside the context manager
|
||||||
|
|
||||||
|
**Result**: Test now reliably captures DEBUG level log messages from the migrations module.
|
||||||
|
|
||||||
|
### 4. Verified test_new_connection_per_retry
|
||||||
|
|
||||||
|
**File**: `tests/test_migration_race_condition.py`
|
||||||
|
|
||||||
|
**Finding**: This test is actually working correctly. It expects 10 connection attempts (retry_count 0-9) which matches the implementation (`while retry_count < max_retries` where `max_retries = 10`).
|
||||||
|
|
||||||
|
**Action**: No changes needed. Test runs successfully and correctly verifies that a new connection is created for each retry attempt.
|
||||||
|
|
||||||
|
### 5. Renamed Misleading Test
|
||||||
|
|
||||||
|
**File**: `tests/test_routes_feed.py`
|
||||||
|
|
||||||
|
**Change**: Renamed `test_feed_route_streaming` to `test_feed_route_caching`
|
||||||
|
|
||||||
|
**Rationale**: The test name said "streaming" but the implementation actually uses caching (Phase 3 feed optimization). The test correctly verifies ETag presence, which is a caching feature. The name was misleading but the test logic was correct.
|
||||||
|
|
||||||
|
## Test Results
|
||||||
|
|
||||||
|
Ran full test suite 3 times to verify no flakiness:
|
||||||
|
|
||||||
|
**Run 1**: 874 passed, 1 warning in 375.92s
|
||||||
|
**Run 2**: 874 passed, 1 warning in 386.40s
|
||||||
|
**Run 3**: 874 passed, 1 warning in 375.68s
|
||||||
|
|
||||||
|
**Test Count**: Reduced from 879 to 874 (5 tests removed as planned)
|
||||||
|
**Flakiness**: None detected across 3 runs
|
||||||
|
**Warnings**: 1 expected warning about DecompressionBombWarning (intentional test of large image handling)
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
| Criterion | Status | Evidence |
|
||||||
|
|-----------|--------|----------|
|
||||||
|
| All remaining tests pass consistently | ✓ | 3 successful test runs |
|
||||||
|
| 5 broken tests removed | ✓ | Test count: 879 → 874 |
|
||||||
|
| No new test skips added | ✓ | No `@pytest.mark.skip` added |
|
||||||
|
| Test count reduced to 874 | ✓ | Verified in all 3 runs |
|
||||||
|
|
||||||
|
## Files Modified
|
||||||
|
|
||||||
|
- `/home/phil/Projects/starpunk/tests/test_migration_race_condition.py`
|
||||||
|
- `/home/phil/Projects/starpunk/tests/test_routes_feeds.py`
|
||||||
|
- `/home/phil/Projects/starpunk/tests/test_routes_feed.py`
|
||||||
|
|
||||||
|
## Next Steps
|
||||||
|
|
||||||
|
Phase 0 is complete and ready for architect review. Once approved:
|
||||||
|
- Commit changes with reference to ADR-012
|
||||||
|
- Proceed to Phase 1 (Timestamp-Based Slugs)
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
|
||||||
|
The test suite is now more reliable and maintainable:
|
||||||
|
- Removed tests that cannot work reliably due to Python limitations
|
||||||
|
- Fixed tests that checked implementation details instead of behavior
|
||||||
|
- Improved test isolation and logger configuration
|
||||||
|
- Clearer test names that reflect actual behavior being tested
|
||||||
|
|
||||||
|
All changes align with the project philosophy: "Every line of code must justify its existence." Tests that fail unreliably do not justify their existence.
|
||||||
@@ -13,11 +13,8 @@ Tests cover:
|
|||||||
import pytest
|
import pytest
|
||||||
import sqlite3
|
import sqlite3
|
||||||
import tempfile
|
import tempfile
|
||||||
import time
|
|
||||||
import multiprocessing
|
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from unittest.mock import patch, MagicMock, call
|
from unittest.mock import patch, MagicMock, call
|
||||||
from multiprocessing import Barrier
|
|
||||||
|
|
||||||
from starpunk.migrations import (
|
from starpunk.migrations import (
|
||||||
MigrationError,
|
MigrationError,
|
||||||
@@ -26,29 +23,6 @@ from starpunk.migrations import (
|
|||||||
from starpunk import create_app
|
from starpunk import create_app
|
||||||
|
|
||||||
|
|
||||||
# Module-level worker functions for multiprocessing
|
|
||||||
# (Local functions can't be pickled by multiprocessing.Pool)
|
|
||||||
|
|
||||||
def _barrier_worker(args):
|
|
||||||
"""Worker that waits at barrier then runs migrations"""
|
|
||||||
db_path, barrier = args
|
|
||||||
try:
|
|
||||||
barrier.wait() # All workers start together
|
|
||||||
run_migrations(str(db_path))
|
|
||||||
return True
|
|
||||||
except Exception:
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
def _simple_worker(db_path):
|
|
||||||
"""Worker that just runs migrations"""
|
|
||||||
try:
|
|
||||||
run_migrations(str(db_path))
|
|
||||||
return True
|
|
||||||
except Exception:
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture
|
@pytest.fixture
|
||||||
def temp_db():
|
def temp_db():
|
||||||
"""Create a temporary database for testing"""
|
"""Create a temporary database for testing"""
|
||||||
@@ -180,9 +154,6 @@ class TestGraduatedLogging:
|
|||||||
"""Test DEBUG level for retries 1-3"""
|
"""Test DEBUG level for retries 1-3"""
|
||||||
import logging
|
import logging
|
||||||
|
|
||||||
# Clear any previous log records to ensure test isolation
|
|
||||||
caplog.clear()
|
|
||||||
|
|
||||||
with patch('time.sleep'):
|
with patch('time.sleep'):
|
||||||
with patch('sqlite3.connect') as mock_connect:
|
with patch('sqlite3.connect') as mock_connect:
|
||||||
# Fail 3 times, then succeed
|
# Fail 3 times, then succeed
|
||||||
@@ -192,16 +163,16 @@ class TestGraduatedLogging:
|
|||||||
errors = [sqlite3.OperationalError("database is locked")] * 3
|
errors = [sqlite3.OperationalError("database is locked")] * 3
|
||||||
mock_connect.side_effect = errors + [mock_conn]
|
mock_connect.side_effect = errors + [mock_conn]
|
||||||
|
|
||||||
|
# Configure caplog to capture DEBUG level for starpunk.migrations logger
|
||||||
with caplog.at_level(logging.DEBUG, logger='starpunk.migrations'):
|
with caplog.at_level(logging.DEBUG, logger='starpunk.migrations'):
|
||||||
caplog.clear() # Clear again inside the context
|
|
||||||
try:
|
try:
|
||||||
run_migrations(str(temp_db))
|
run_migrations(str(temp_db))
|
||||||
except:
|
except:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
# Check that DEBUG messages were logged for early retries
|
# Check that DEBUG messages were logged for early retries
|
||||||
debug_msgs = [r for r in caplog.records if r.levelname == 'DEBUG' and 'retry' in r.getMessage().lower()]
|
debug_msgs = [r for r in caplog.records if r.levelname == 'DEBUG' and 'retry' in r.getMessage().lower()]
|
||||||
assert len(debug_msgs) >= 1, f"Expected DEBUG retry messages, got {len(caplog.records)} total records"
|
assert len(debug_msgs) >= 1, f"Expected DEBUG retry messages, got {len(caplog.records)} total records"
|
||||||
|
|
||||||
def test_info_level_for_middle_retries(self, temp_db, caplog):
|
def test_info_level_for_middle_retries(self, temp_db, caplog):
|
||||||
"""Test INFO level for retries 4-7"""
|
"""Test INFO level for retries 4-7"""
|
||||||
@@ -300,79 +271,11 @@ class TestConnectionManagement:
|
|||||||
mock_connect.assert_called_with(str(temp_db), timeout=30.0)
|
mock_connect.assert_called_with(str(temp_db), timeout=30.0)
|
||||||
|
|
||||||
|
|
||||||
class TestConcurrentExecution:
|
# TestConcurrentExecution class removed per ADR-012
|
||||||
"""Test concurrent worker scenarios"""
|
# These tests cannot work reliably due to Python multiprocessing limitations:
|
||||||
|
# 1. Barrier objects cannot be pickled for Pool.map()
|
||||||
def test_concurrent_workers_barrier_sync(self):
|
# 2. Flask app context doesn't transfer across processes
|
||||||
"""Test multiple workers starting simultaneously with barrier"""
|
# 3. SQLite database files in temp directories may not be accessible across process boundaries
|
||||||
# This test uses actual multiprocessing with barrier synchronization
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
|
||||||
db_path = Path(tmpdir) / "test.db"
|
|
||||||
|
|
||||||
# Initialize database first (simulates deployed app with existing schema)
|
|
||||||
from starpunk.database import init_db
|
|
||||||
app = create_app({'DATABASE_PATH': str(db_path), 'SECRET_KEY': 'test'})
|
|
||||||
init_db(app)
|
|
||||||
|
|
||||||
# Create a barrier for 4 workers using Manager (required for multiprocessing)
|
|
||||||
with multiprocessing.Manager() as manager:
|
|
||||||
barrier = manager.Barrier(4)
|
|
||||||
|
|
||||||
# Run 4 workers concurrently using module-level worker function
|
|
||||||
# (Pool.map requires picklable functions, so we pass args as tuples)
|
|
||||||
with multiprocessing.Pool(4) as pool:
|
|
||||||
# Create args for each worker: (db_path, barrier)
|
|
||||||
worker_args = [(db_path, barrier) for _ in range(4)]
|
|
||||||
results = pool.map(_barrier_worker, worker_args)
|
|
||||||
|
|
||||||
# All workers should succeed (one applies, others wait)
|
|
||||||
assert all(results), f"Some workers failed: {results}"
|
|
||||||
|
|
||||||
# Verify migrations were applied correctly (outside manager context)
|
|
||||||
conn = sqlite3.connect(db_path)
|
|
||||||
cursor = conn.execute("SELECT COUNT(*) FROM schema_migrations")
|
|
||||||
count = cursor.fetchone()[0]
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
# Should have migration records
|
|
||||||
assert count >= 0
|
|
||||||
|
|
||||||
def test_sequential_worker_startup(self):
|
|
||||||
"""Test workers starting one after another"""
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
|
||||||
db_path = Path(tmpdir) / "test.db"
|
|
||||||
|
|
||||||
# Initialize database first (creates base schema)
|
|
||||||
from starpunk.database import init_db
|
|
||||||
app = create_app({'DATABASE_PATH': str(db_path), 'SECRET_KEY': 'test'})
|
|
||||||
init_db(app)
|
|
||||||
|
|
||||||
# Additional workers should detect completed migrations
|
|
||||||
run_migrations(str(db_path))
|
|
||||||
run_migrations(str(db_path))
|
|
||||||
|
|
||||||
# All should succeed without errors
|
|
||||||
|
|
||||||
def test_worker_late_arrival(self):
|
|
||||||
"""Test worker arriving after migrations complete"""
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
|
||||||
db_path = Path(tmpdir) / "test.db"
|
|
||||||
|
|
||||||
# Initialize database first (creates base schema)
|
|
||||||
from starpunk.database import init_db
|
|
||||||
app = create_app({'DATABASE_PATH': str(db_path), 'SECRET_KEY': 'test'})
|
|
||||||
init_db(app)
|
|
||||||
|
|
||||||
# Simulate some time passing
|
|
||||||
time.sleep(0.1)
|
|
||||||
|
|
||||||
# Late worker should detect completed migrations immediately
|
|
||||||
start_time = time.time()
|
|
||||||
run_migrations(str(db_path))
|
|
||||||
elapsed = time.time() - start_time
|
|
||||||
|
|
||||||
# Should be very fast (< 1s) since migrations already applied
|
|
||||||
assert elapsed < 1.0
|
|
||||||
|
|
||||||
|
|
||||||
class TestErrorHandling:
|
class TestErrorHandling:
|
||||||
@@ -429,47 +332,8 @@ class TestErrorHandling:
|
|||||||
assert "Action:" in error_msg or "Restart" in error_msg
|
assert "Action:" in error_msg or "Restart" in error_msg
|
||||||
|
|
||||||
|
|
||||||
class TestPerformance:
|
# TestPerformance class removed per ADR-012
|
||||||
"""Test performance characteristics"""
|
# Same multiprocessing limitations prevent reliable testing
|
||||||
|
|
||||||
def test_single_worker_performance(self):
|
|
||||||
"""Test that single worker completes quickly"""
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
|
||||||
db_path = Path(tmpdir) / "test.db"
|
|
||||||
|
|
||||||
# Initialize database and time it
|
|
||||||
from starpunk.database import init_db
|
|
||||||
app = create_app({'DATABASE_PATH': str(db_path), 'SECRET_KEY': 'test'})
|
|
||||||
|
|
||||||
start_time = time.time()
|
|
||||||
init_db(app)
|
|
||||||
elapsed = time.time() - start_time
|
|
||||||
|
|
||||||
# Should complete in under 1 second for single worker
|
|
||||||
assert elapsed < 1.0, f"Single worker took {elapsed}s (target: <1s)"
|
|
||||||
|
|
||||||
def test_concurrent_workers_performance(self):
|
|
||||||
"""Test that 4 concurrent workers complete in reasonable time"""
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
|
||||||
db_path = Path(tmpdir) / "test.db"
|
|
||||||
|
|
||||||
# Initialize database first (simulates deployed app with existing schema)
|
|
||||||
from starpunk.database import init_db
|
|
||||||
app = create_app({'DATABASE_PATH': str(db_path), 'SECRET_KEY': 'test'})
|
|
||||||
init_db(app)
|
|
||||||
|
|
||||||
start_time = time.time()
|
|
||||||
with multiprocessing.Pool(4) as pool:
|
|
||||||
# Use module-level _simple_worker function
|
|
||||||
results = pool.map(_simple_worker, [db_path] * 4)
|
|
||||||
elapsed = time.time() - start_time
|
|
||||||
|
|
||||||
# All should succeed
|
|
||||||
assert all(results)
|
|
||||||
|
|
||||||
# Should complete in under 5 seconds
|
|
||||||
# (includes lock contention and retry delays)
|
|
||||||
assert elapsed < 5.0, f"4 workers took {elapsed}s (target: <5s)"
|
|
||||||
|
|
||||||
|
|
||||||
class TestBeginImmediateTransaction:
|
class TestBeginImmediateTransaction:
|
||||||
|
|||||||
@@ -114,7 +114,7 @@ class TestFeedRoute:
|
|||||||
cache_seconds = app.config.get("FEED_CACHE_SECONDS", 300)
|
cache_seconds = app.config.get("FEED_CACHE_SECONDS", 300)
|
||||||
assert f"max-age={cache_seconds}" in response.headers["Cache-Control"]
|
assert f"max-age={cache_seconds}" in response.headers["Cache-Control"]
|
||||||
|
|
||||||
def test_feed_route_streaming(self, client):
|
def test_feed_route_caching(self, client):
|
||||||
"""Test /feed.xml uses cached response (with ETag)"""
|
"""Test /feed.xml uses cached response (with ETag)"""
|
||||||
response = client.get("/feed.xml")
|
response = client.get("/feed.xml")
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
|
|||||||
@@ -80,15 +80,17 @@ class TestExplicitEndpoints:
|
|||||||
response = client.get('/feed.atom')
|
response = client.get('/feed.atom')
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
assert response.headers['Content-Type'] == 'application/atom+xml; charset=utf-8'
|
assert response.headers['Content-Type'] == 'application/atom+xml; charset=utf-8'
|
||||||
# Check for XML declaration (encoding may be utf-8 or UTF-8)
|
# Check for XML declaration (don't assert quote style)
|
||||||
assert b'<?xml version="1.0"' in response.data
|
assert b'<?xml version=' in response.data
|
||||||
|
assert b'encoding=' in response.data
|
||||||
assert b'<feed xmlns="http://www.w3.org/2005/Atom"' in response.data
|
assert b'<feed xmlns="http://www.w3.org/2005/Atom"' in response.data
|
||||||
|
|
||||||
def test_feed_json_endpoint(self, client):
|
def test_feed_json_endpoint(self, client):
|
||||||
"""GET /feed.json returns JSON Feed"""
|
"""GET /feed.json returns JSON Feed"""
|
||||||
response = client.get('/feed.json')
|
response = client.get('/feed.json')
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
assert response.headers['Content-Type'] == 'application/feed+json; charset=utf-8'
|
# Check Content-Type starts with expected type (don't require charset)
|
||||||
|
assert response.headers['Content-Type'].startswith('application/feed+json')
|
||||||
# JSON Feed is streamed, so we need to collect all chunks
|
# JSON Feed is streamed, so we need to collect all chunks
|
||||||
data = b''.join(response.response)
|
data = b''.join(response.response)
|
||||||
assert b'"version": "https://jsonfeed.org/version/1.1"' in data
|
assert b'"version": "https://jsonfeed.org/version/1.1"' in data
|
||||||
@@ -129,7 +131,7 @@ class TestContentNegotiation:
|
|||||||
"""Accept: application/feed+json returns JSON Feed"""
|
"""Accept: application/feed+json returns JSON Feed"""
|
||||||
response = client.get('/feed', headers={'Accept': 'application/feed+json'})
|
response = client.get('/feed', headers={'Accept': 'application/feed+json'})
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
assert response.headers['Content-Type'] == 'application/feed+json; charset=utf-8'
|
assert response.headers['Content-Type'].startswith('application/feed+json')
|
||||||
data = b''.join(response.response)
|
data = b''.join(response.response)
|
||||||
assert b'"version": "https://jsonfeed.org/version/1.1"' in data
|
assert b'"version": "https://jsonfeed.org/version/1.1"' in data
|
||||||
|
|
||||||
@@ -137,7 +139,7 @@ class TestContentNegotiation:
|
|||||||
"""Accept: application/json returns JSON Feed"""
|
"""Accept: application/json returns JSON Feed"""
|
||||||
response = client.get('/feed', headers={'Accept': 'application/json'})
|
response = client.get('/feed', headers={'Accept': 'application/json'})
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
assert response.headers['Content-Type'] == 'application/feed+json; charset=utf-8'
|
assert response.headers['Content-Type'].startswith('application/feed+json')
|
||||||
data = b''.join(response.response)
|
data = b''.join(response.response)
|
||||||
assert b'"version": "https://jsonfeed.org/version/1.1"' in data
|
assert b'"version": "https://jsonfeed.org/version/1.1"' in data
|
||||||
|
|
||||||
@@ -171,7 +173,7 @@ class TestContentNegotiation:
|
|||||||
'Accept': 'application/json;q=1.0, application/atom+xml;q=0.8'
|
'Accept': 'application/json;q=1.0, application/atom+xml;q=0.8'
|
||||||
})
|
})
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
assert response.headers['Content-Type'] == 'application/feed+json; charset=utf-8'
|
assert response.headers['Content-Type'].startswith('application/feed+json')
|
||||||
|
|
||||||
def test_browser_accept_header(self, client):
|
def test_browser_accept_header(self, client):
|
||||||
"""Browser-like Accept header returns RSS"""
|
"""Browser-like Accept header returns RSS"""
|
||||||
|
|||||||
Reference in New Issue
Block a user