feat(tags): Add tag archive route and admin interface integration
Implement Phase 3 of v1.3.0 tags feature per microformats-tags-design.md:

Routes (starpunk/routes/public.py):
- Add /tag/<tag> archive route with normalization and 404 handling
- Pre-load tags in index route for all notes
- Pre-load tags in note route for individual notes

Admin (starpunk/routes/admin.py):
- Parse comma-separated tag input in create route
- Parse tag input in update route
- Pre-load tags when displaying edit form
- Empty tag field removes all tags

Templates:
- Add tag input field to templates/admin/edit.html
- Add tag input field to templates/admin/new.html
- Use Jinja2 map filter to display existing tags

Implementation details:
- Tag URL parameter normalized to lowercase before lookup
- Tags pre-loaded using object.__setattr__ pattern (like media)
- parse_tag_input() handles trim, dedupe, normalization
- All existing tests pass (micropub categories, admin routes)

Per architect design:
- No pagination on tag archives (acceptable for v1.3.0)
- No autocomplete in admin (out of scope)
- Follows existing media loading patterns

Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
# Bug Fixes and Edge Cases Specification

## Overview
This specification details the bug fixes and edge case handling improvements planned for v1.1.1, focusing on test stability, Unicode handling, memory optimization, and session management.

## Bug Fixes

### 1. Migration Race Condition in Tests

#### Problem
10 tests exhibit flaky behavior due to race conditions during database migration execution. Tests occasionally fail when migrations are executed concurrently or when the test database isn't properly initialized.

#### Root Cause
- Concurrent test execution without proper isolation
- Shared database state between tests
- Migration lock not properly acquired
- Test fixtures not waiting for migration completion

#### Solution
```python
# starpunk/testing/fixtures.py
import os
import sqlite3
import tempfile
import threading
import unittest
from contextlib import contextmanager

# DatabaseMigrator and create_app are project symbols assumed in scope here.

# Global lock for test database operations
_test_db_lock = threading.Lock()


@contextmanager
def isolated_test_database():
    """Create isolated database for testing"""
    with _test_db_lock:
        # Create unique temp database
        temp_db = tempfile.NamedTemporaryFile(
            suffix='.db',
            delete=False
        )
        db_path = temp_db.name
        temp_db.close()

        try:
            # Initialize database with migrations
            run_migrations_sync(db_path)

            # Yield database for test
            yield db_path
        finally:
            # Cleanup
            try:
                os.unlink(db_path)
            except OSError:
                pass


def run_migrations_sync(db_path: str):
    """Run migrations synchronously with proper locking"""
    conn = sqlite3.connect(db_path)

    # Use exclusive lock during migrations
    conn.execute("BEGIN EXCLUSIVE")

    try:
        migrator = DatabaseMigrator(conn)
        migrator.run_all()
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()


# Test base class
class StarPunkTestCase(unittest.TestCase):
    """Base test case with proper database isolation"""

    def setUp(self):
        """Set up test with isolated database"""
        self.db_context = isolated_test_database()
        self.db_path = self.db_context.__enter__()
        self.app = create_app(database=self.db_path)
        self.client = self.app.test_client()

    def tearDown(self):
        """Clean up test database"""
        self.db_context.__exit__(None, None, None)


# Example test with proper isolation
class TestMigrations(StarPunkTestCase):
    def test_migration_idempotency(self):
        """Test that migrations can be run multiple times"""
        # First run happens in setUp

        # Second run should be safe
        run_migrations_sync(self.db_path)

        # Verify database state
        with sqlite3.connect(self.db_path) as conn:
            tables = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table'"
            ).fetchall()
            self.assertIn(('notes',), tables)
```

#### Test Timing Improvements
```python
# starpunk/testing/wait.py
import time
from typing import Callable


def wait_for_condition(
    condition: Callable[[], bool],
    timeout: float = 5.0,
    interval: float = 0.1
) -> bool:
    """Wait for condition to become true"""
    start = time.time()

    while time.time() - start < timeout:
        if condition():
            return True
        time.sleep(interval)

    return False


# Usage in tests
def test_async_operation(self):
    """Test with proper waiting"""
    self.client.post('/notes', data={'content': 'Test'})

    # Wait for indexing to complete (search_index_updated is a project helper)
    success = wait_for_condition(
        lambda: search_index_updated(),
        timeout=2.0
    )
    self.assertTrue(success)
```

### 2. Unicode Edge Cases in Slug Generation

#### Problem
Slug generation fails or produces invalid slugs for certain Unicode inputs, including emoji, RTL text, and combining characters.

#### Current Issues
- Emoji in titles break slug generation
- RTL languages produce confusing slugs
- Combining characters aren't normalized
- Zero-width characters remain in slugs

#### Solution
```python
# starpunk/utils/slugify.py
import re
import unicodedata


def generate_slug(text: str, max_length: int = 50) -> str:
    """Generate URL-safe slug from text with Unicode handling"""

    if not text:
        return generate_random_slug()

    # Normalize Unicode (NFKD = compatibility decomposition)
    text = unicodedata.normalize('NFKD', text)

    # Remove non-ASCII characters but keep numbers and letters
    text = text.encode('ascii', 'ignore').decode('ascii')

    # Convert to lowercase
    text = text.lower()

    # Replace spaces and punctuation with hyphens
    text = re.sub(r'[^a-z0-9]+', '-', text)

    # Remove leading/trailing hyphens
    text = text.strip('-')

    # Collapse multiple hyphens
    text = re.sub(r'-+', '-', text)

    # Truncate to max length (at word boundary if possible)
    if len(text) > max_length:
        text = text[:max_length].rsplit('-', 1)[0]

    # If we end up with an empty string, generate a random slug
    if not text:
        return generate_random_slug()

    return text


def generate_random_slug() -> str:
    """Generate random slug when text-based generation fails"""
    import random
    import string

    return 'note-' + ''.join(
        random.choices(string.ascii_lowercase + string.digits, k=8)
    )


# Extended test cases (random slugs shown as placeholders)
TEST_CASES = [
    ("Hello World", "hello-world"),
    ("Hello 👋 World", "hello-world"),  # Emoji removed
    ("مرحبا بالعالم", "note-a1b2c3d4"),  # Arabic -> random
    ("Héllö Wörld", "hello-world"),  # Diacritics stripped by NFKD
    ("Hello\u200bWorld", "helloworld"),  # Zero-width space
    ("---Hello---", "hello"),  # Multiple hyphens
    ("123", "123"),  # Numbers only
    ("!@#$%", "note-x1y2z3a4"),  # Special chars -> random
    ("a" * 100, "a" * 50),  # Truncation
    ("", "note-r4nd0m12"),  # Empty -> random
]


def test_slug_generation():
    """Test slug generation with Unicode edge cases"""
    for input_text, expected in TEST_CASES:
        result = generate_slug(input_text)
        if expected.startswith("note-"):
            # Random slug - just check format
            assert result.startswith("note-")
            assert len(result) == 13
        else:
            assert result == expected
```

### 3. RSS Feed Memory Optimization

#### Problem
RSS feed generation for sites with thousands of notes causes high memory usage and slow response times.

#### Current Issues
- Loading all notes into memory at once
- No pagination or limits
- Inefficient XML building
- No caching of generated feeds

#### Solution
```python
# starpunk/feeds/rss.py
import sqlite3
from datetime import datetime, timedelta
from typing import Iterator

# get_db, extract_title, format_rfc822, and config are project symbols
# assumed in scope here.


class OptimizedRSSGenerator:
    """Memory-efficient RSS feed generator"""

    def __init__(self, base_url: str, limit: int = 50):
        self.base_url = base_url
        self.limit = limit

    def generate_feed(self) -> str:
        """Generate RSS feed with streaming"""
        # Use string builder for efficiency
        parts = []
        parts.append(self._generate_header())

        # Stream notes from database
        for note in self._stream_recent_notes():
            parts.append(self._generate_item(note))

        parts.append(self._generate_footer())

        return ''.join(parts)

    def _stream_recent_notes(self) -> Iterator[dict]:
        """Stream notes without loading all into memory"""
        with get_db() as conn:
            # Use server-side cursor equivalent
            conn.row_factory = sqlite3.Row

            cursor = conn.execute(
                """
                SELECT id, content, slug, created_at, updated_at
                FROM notes
                WHERE published = 1
                ORDER BY created_at DESC
                LIMIT ?
                """,
                (self.limit,)
            )

            # Yield one row at a time
            for row in cursor:
                yield dict(row)

    def _generate_item(self, note: dict) -> str:
        """Generate single RSS item efficiently"""
        # Pre-calculate values once
        title = extract_title(note['content'])
        url = f"{self.base_url}/notes/{note['id']}"

        # Use string formatting for efficiency
        return f"""
        <item>
            <title>{escape_xml(title)}</title>
            <link>{url}</link>
            <guid isPermaLink="true">{url}</guid>
            <description>{escape_xml(note['content'][:500])}</description>
            <pubDate>{format_rfc822(note['created_at'])}</pubDate>
        </item>
        """

    # _generate_header() and _generate_footer() build the channel
    # envelope; their definitions are elided in this spec.


# Caching layer
class CachedRSSFeed:
    """RSS feed with caching"""

    def __init__(self):
        self.cache = {}
        self.cache_duration = timedelta(minutes=5)

    def get_feed(self) -> str:
        """Get RSS feed with caching"""
        now = datetime.now()

        # Check cache
        if 'feed' in self.cache:
            cached_feed, cached_time = self.cache['feed']
            if now - cached_time < self.cache_duration:
                return cached_feed

        # Generate new feed
        generator = OptimizedRSSGenerator(
            base_url=config.BASE_URL,
            limit=config.RSS_ITEM_LIMIT
        )
        feed = generator.generate_feed()

        # Update cache
        self.cache['feed'] = (feed, now)

        return feed

    def invalidate(self):
        """Invalidate cache when notes change"""
        self.cache.clear()


# Memory-efficient XML escaping
def escape_xml(text: str) -> str:
    """Escape XML special characters efficiently"""
    if not text:
        return ""

    # Use replace instead of xml.sax.saxutils for efficiency;
    # ampersand must be escaped first to avoid double-escaping
    return (
        text.replace("&", "&amp;")
        .replace("<", "&lt;")
        .replace(">", "&gt;")
        .replace('"', "&quot;")
        .replace("'", "&#39;")
    )
```

### 4. Session Timeout Handling

#### Problem
Sessions don't properly time out, leading to security issues and stale session accumulation.

#### Current Issues
- No automatic session expiration
- No cleanup of old sessions
- Session extension not working
- No timeout configuration

#### Solution
```python
# starpunk/auth/session_improved.py
import threading
import time
from datetime import datetime, timedelta
from typing import Optional

# config, get_db, generate_secure_token, logger, app, g, and request are
# project symbols assumed in scope here.


class ImprovedSessionManager:
    """Session manager with proper timeout handling"""

    def __init__(self):
        self.timeout = config.SESSION_TIMEOUT
        self.cleanup_interval = 3600  # 1 hour
        self._start_cleanup_thread()

    def _start_cleanup_thread(self):
        """Start background cleanup thread"""
        def cleanup_loop():
            while True:
                try:
                    self.cleanup_expired_sessions()
                except Exception as e:
                    logger.error(f"Session cleanup error: {e}")
                time.sleep(self.cleanup_interval)

        thread = threading.Thread(target=cleanup_loop, daemon=True)
        thread.start()

    def create_session(self, user_id: str, remember: bool = False) -> dict:
        """Create session with appropriate timeout"""
        session_id = generate_secure_token()

        # Longer timeout for "remember me"
        if remember:
            timeout = config.SESSION_TIMEOUT_REMEMBER
        else:
            timeout = self.timeout

        expires_at = datetime.now() + timedelta(seconds=timeout)

        with get_db() as conn:
            conn.execute(
                """
                INSERT INTO sessions (
                    id, user_id, expires_at, created_at, last_activity
                )
                VALUES (?, ?, ?, ?, ?)
                """,
                (
                    session_id,
                    user_id,
                    expires_at,
                    datetime.now(),
                    datetime.now()
                )
            )

        logger.info(f"Session created for user {user_id}")

        return {
            'session_id': session_id,
            'expires_at': expires_at.isoformat(),
            'timeout': timeout
        }

    def validate_and_extend(self, session_id: str) -> Optional[str]:
        """Validate session and extend timeout on activity"""
        now = datetime.now()

        with get_db() as conn:
            # Get session
            result = conn.execute(
                """
                SELECT user_id, expires_at, last_activity
                FROM sessions
                WHERE id = ? AND expires_at > ?
                """,
                (session_id, now)
            ).fetchone()

            if not result:
                return None

            user_id = result['user_id']
            last_activity = datetime.fromisoformat(result['last_activity'])

            # Throttle writes: only refresh the expiry when more than five
            # minutes have passed since the last recorded activity
            if now - last_activity > timedelta(minutes=5):
                new_expires = now + timedelta(seconds=self.timeout)

                conn.execute(
                    """
                    UPDATE sessions
                    SET expires_at = ?, last_activity = ?
                    WHERE id = ?
                    """,
                    (new_expires, now, session_id)
                )

                logger.debug(f"Session extended for user {user_id}")

            return user_id

    def cleanup_expired_sessions(self):
        """Remove expired sessions from database"""
        with get_db() as conn:
            result = conn.execute(
                """
                DELETE FROM sessions
                WHERE expires_at < ?
                RETURNING id
                """,
                (datetime.now(),)
            )

            deleted_count = len(result.fetchall())

            if deleted_count > 0:
                logger.info(f"Cleaned up {deleted_count} expired sessions")

    def invalidate_session(self, session_id: str):
        """Explicitly invalidate a session"""
        with get_db() as conn:
            conn.execute(
                "DELETE FROM sessions WHERE id = ?",
                (session_id,)
            )

        logger.info(f"Session {session_id} invalidated")

    def get_active_sessions(self, user_id: str) -> list:
        """Get all active sessions for a user"""
        with get_db() as conn:
            result = conn.execute(
                """
                SELECT id, created_at, last_activity, expires_at
                FROM sessions
                WHERE user_id = ? AND expires_at > ?
                ORDER BY last_activity DESC
                """,
                (user_id, datetime.now())
            )

            return [dict(row) for row in result]


# Module-level instance used by the middleware below
session_manager = ImprovedSessionManager()


# Session middleware
@app.before_request
def check_session():
    """Check and extend session on each request"""
    session_id = request.cookies.get('session_id')

    if session_id:
        user_id = session_manager.validate_and_extend(session_id)

        if user_id:
            g.user_id = user_id
            g.authenticated = True
        else:
            # Clear invalid session cookie
            g.clear_session = True
            g.authenticated = False
    else:
        g.authenticated = False


@app.after_request
def update_session_cookie(response):
    """Update session cookie if needed"""
    if getattr(g, 'clear_session', False):
        response.set_cookie(
            'session_id',
            '',
            expires=0,
            secure=config.SESSION_SECURE,
            httponly=True,
            samesite='Lax'
        )

    return response
```

## Testing Strategy

### Test Stability Improvements
```python
# starpunk/testing/stability.py
from itertools import cycle
from unittest.mock import patch

import pytest


@pytest.fixture
def stable_test_env():
    """Provide stable test environment"""
    with patch('time.time', return_value=1234567890):
        with patch('random.choice', side_effect=cycle('abcd')):
            with isolated_test_database() as db:
                yield db


def test_with_stability(stable_test_env):
    """Test with predictable environment"""
    # Time and randomness are now deterministic
    pass
```

### Unicode Test Suite
```python
# starpunk/testing/unicode.py
import pytest

UNICODE_TEST_STRINGS = [
    "Simple ASCII",
    "Émoji 😀🎉🚀",
    "العربية",
    "中文字符",
    "🏳️‍🌈 flags",
    "Math: ∑∏∫",
    "Ñoño",
    "Combining: é (e + ́)",
]


@pytest.mark.parametrize("text", UNICODE_TEST_STRINGS)
def test_unicode_handling(text):
    """Test Unicode handling throughout system"""
    # Test slug generation
    slug = generate_slug(text)
    assert slug  # Should always produce something

    # Test note creation
    note = create_note(content=text)
    assert note.content == text

    # Test search (should not crash)
    search_notes(text)

    # Test RSS (should produce valid XML)
    generate_rss_feed()
```

## Performance Testing

### Memory Usage Tests
```python
def test_rss_memory_usage():
    """Test RSS generation memory usage"""
    import tracemalloc

    # Create many notes
    for i in range(10000):
        create_note(content=f"Note {i}")

    # Measure memory for RSS generation
    tracemalloc.start()

    feed = generate_rss_feed()

    # get_traced_memory() returns (current, peak) in bytes
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    memory_used = peak / 1024 / 1024  # MB

    assert feed
    assert memory_used < 10  # Should use less than 10MB
```

## Acceptance Criteria

### Race Condition Fixes
1. ✅ All 10 flaky tests pass consistently
2. ✅ Test isolation properly implemented
3. ✅ Migration locks prevent concurrent execution
4. ✅ Test fixtures properly synchronized

### Unicode Handling
1. ✅ Slug generation handles all Unicode input
2. ✅ Never produces invalid/empty slugs
3. ✅ Emoji and special characters handled gracefully
4. ✅ RTL languages don't break system

### RSS Memory Optimization
1. ✅ Memory usage stays under 10MB for 10,000 notes
2. ✅ Response time under 500ms
3. ✅ Streaming implementation works correctly
4. ✅ Cache invalidation on note changes

### Session Management
1. ✅ Sessions expire after configured timeout
2. ✅ Expired sessions automatically cleaned up
3. ✅ Active sessions properly extended
4. ✅ Session invalidation works correctly

## Risk Mitigation

1. **Test Stability**: Run the test suite 100 times to verify consistency
2. **Unicode Compatibility**: Test with real-world data
3. **Memory Leaks**: Monitor long-running instances
4. **Session Security**: Security review of the implementation
# StarPunk v1.1.1 "Polish" - Developer Q&A

**Date**: 2025-11-25
**Developer**: Developer Agent
**Architect**: Architect Agent

This document contains the Q&A session between the developer and architect during the v1.1.1 design review.

## Purpose

The developer reviewed all v1.1.1 design documentation and prepared questions about implementation details, integration points, and edge cases. This document contains the architect's answers to guide implementation.
## Critical Questions (Must be answered before implementation)

### Q1: Configuration System Integration
**Developer Question**: The design calls for centralized configuration. I see we have `config.py` at the root for Flask app config. Should the new `starpunk/config.py` module replace this, wrap it, or co-exist as a separate configuration layer? How do we avoid breaking existing code that directly imports from `config`?

**Architect Answer**: Keep both files with clear separation of concerns. The existing `config.py` remains for Flask app configuration, while the new `starpunk/config.py` becomes a configuration helper module that wraps Flask's `app.config` for runtime access.

**Rationale**: This maintains backward compatibility, separates Flask-specific config from application logic, and allows gradual migration without breaking changes.

**Implementation Guidance**:
- Create `starpunk/config.py` as a helper that uses `current_app.config`
- Provide methods like `get_database_path()`, `get_upload_folder()`, etc.
- Gradually replace direct config access with helper methods
- Document both in the configuration guide
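
A minimal sketch of what such a helper module might look like. The function names `get_database_path()` and `get_upload_folder()` come from the guidance above; the plain dict standing in for `current_app.config` (and its keys and default values) are illustrative assumptions so the sketch is self-contained:

```python
# Hypothetical sketch of starpunk/config.py. In the real module these
# helpers would read flask.current_app.config; a plain dict stands in
# here so the example runs without an app context.
_app_config = {
    "DATABASE_PATH": "data/starpunk.db",   # assumed key and default
    "UPLOAD_FOLDER": "data/uploads",       # assumed key and default
}


def get_database_path() -> str:
    """Return the configured database path."""
    return _app_config["DATABASE_PATH"]


def get_upload_folder() -> str:
    """Return the configured upload directory."""
    return _app_config["UPLOAD_FOLDER"]
```

Callers then depend on these helpers instead of importing `config` directly, which is what makes the gradual migration possible.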

---

### Q2: Database Connection Pool Scope
**Developer Question**: The connection pool will replace the current `get_db()` context manager used throughout routes. Should it also replace direct `sqlite3.connect()` calls in migrations and utilities? How do we ensure proper connection lifecycle in Flask's request context?

**Architect Answer**: The connection pool replaces `get_db()` but NOT migrations. The pool replaces all runtime `sqlite3.connect()` calls, but migrations must use direct connections for isolation. Integrate the pool with Flask's `g` object for request-scoped connections.

**Rationale**: Migrations need isolated transactions without pool interference. The pool improves runtime performance, while request-scoped connections via `g` maintain Flask patterns.

**Implementation Guidance**:
- Implement pool in `starpunk/database/pool.py`
- Use `g.db` for request-scoped connections
- Replace `get_db()` in all route files
- Keep direct connections for migrations only
- Add pool statistics to metrics
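
A stdlib-only sketch of the pool shape, with assumed class and method names (`ConnectionPool`, `acquire`, `release`). In Flask, `acquire` would happen lazily inside the new `get_db()` (storing the connection on `g.db`) and a `teardown_appcontext` hook would call `release`:

```python
import queue
import sqlite3


class ConnectionPool:
    """Sketch of a fixed-size SQLite connection pool (assumed API)."""

    def __init__(self, db_path: str, size: int = 5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            conn = sqlite3.connect(db_path, check_same_thread=False)
            conn.row_factory = sqlite3.Row
            self._pool.put(conn)

    def acquire(self) -> sqlite3.Connection:
        # Blocks until a connection is free
        return self._pool.get()

    def release(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)


# Usage: check out for the duration of one request, then return.
pool = ConnectionPool(":memory:", size=2)
conn = pool.acquire()
result = conn.execute("SELECT 1").fetchone()[0]
pool.release(conn)
```

Migrations bypass this entirely and call `sqlite3.connect()` directly, as the answer above requires.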

---

### Q3: Logging vs. Print Statements Migration
**Developer Question**: The current code has many print statements for debugging. Should we phase these out gradually or remove them all at once? Should we use Python's logging module directly or Flask's `app.logger`? For CLI commands, should they use logging or `click.echo()`?

**Architect Answer**: Phase out print statements immediately in v1.1.1. Remove ALL print statements in this release. Use Flask's `app.logger` as the base, enhanced with structured logging. CLI commands use `click.echo()` for user output and the logger for diagnostics.

**Rationale**: A clean break prevents confusion. Flask's logger integrates with the framework, and `click.echo()` is the proper CLI output method.

**Implementation Guidance**:
- Set up RotatingFileHandler in the app factory
- Configure structured logging with correlation IDs
- Replace all `print()` calls with appropriate logging calls
- Use `click.echo()` for CLI user feedback
- Use the logger for CLI diagnostic output
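
A sketch of the handler setup the app factory would perform on `app.logger`. The rotation limits and format string are illustrative assumptions; the correlation-id field would be injected via a `logging.Filter` rather than hard-coded here:

```python
import logging
from logging.handlers import RotatingFileHandler


def configure_logging(logger: logging.Logger, path: str) -> None:
    """Attach a rotating file handler with a structured-ish format.

    In the app factory this would be called as
    configure_logging(app.logger, app.config["LOG_PATH"]) — both the
    config key and the size limits below are assumptions.
    """
    handler = RotatingFileHandler(path, maxBytes=1_000_000, backupCount=5)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s [%(name)s] %(message)s"
    ))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
```

CLI commands keep `click.echo()` for user-facing output and call this same logger for diagnostics, so nothing user-visible ends up only in the log file.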

---

### Q4: Error Handling Middleware Integration
**Developer Question**: For consistent error handling, should we use Flask's @app.errorhandler decorator or implement custom middleware? How do we ensure Micropub endpoints return spec-compliant error responses while other endpoints return HTML error pages?

**Architect Answer**: Use Flask's `@app.errorhandler` for all error handling. Register error handlers in the app factory. Micropub endpoints get specialized error handlers for spec compliance. No decorators on individual routes.

**Rationale**: Flask's error handler is the idiomatic approach. Centralized error handling reduces code duplication, and the Micropub spec requires specific error formats.

**Implementation Guidance**:
- Create `starpunk/errors.py` with `register_error_handlers(app)`
- Check request path to determine response format
- Return JSON for `/micropub` endpoints
- Return HTML templates for other endpoints
- Log all errors with correlation IDs
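
A sketch of the format-selection logic that would live inside `register_error_handlers(app)`. The helper names are assumptions; the real handler would be registered with `@app.errorhandler` and read `flask.request.path`, and `invalid_request` is one of the error codes the Micropub spec defines:

```python
def wants_json(path: str) -> bool:
    """Micropub endpoints get JSON errors; everything else gets HTML."""
    return path.startswith("/micropub")


def error_response(path: str, status: int, message: str):
    """Return (body, status) in the format appropriate for the path."""
    if wants_json(path):
        # Micropub-style error body per the spec's OAuth-like format
        return {"error": "invalid_request", "error_description": message}, status
    # Non-Micropub endpoints would render an HTML error template instead
    return f"<h1>Error {status}</h1><p>{message}</p>", status
```

With this split in one place, individual routes need no error-handling decorators at all.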

---

### Q5: FTS5 Fallback Search Implementation
**Developer Question**: If FTS5 isn't available, should fallback search be in the same module or separate? Should it have the same function signature? How do we detect FTS5 support - at startup or runtime?

**Architect Answer**: Same module, runtime detection with decorator pattern. Keep in `search.py` module with the same function signature. Determine support at startup and cache for performance.

**Rationale**: A single module maintains cohesion. Same signature allows transparent switching. Startup detection avoids runtime overhead.

**Implementation Guidance**:
- Detect FTS5 support at startup using a test table
- Cache the result in a module-level variable
- Use function pointer to select implementation
- Both implementations use identical signatures
- Log which implementation is active
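
The startup probe can be sketched directly: try to create a throwaway FTS5 virtual table in an in-memory database and cache the outcome. The implementation-selection line is shown as a comment because the two search functions live elsewhere in `search.py`:

```python
import sqlite3


def detect_fts5() -> bool:
    """Probe the linked SQLite build for FTS5 support at startup."""
    try:
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE VIRTUAL TABLE _fts5_probe USING fts5(content)")
        conn.close()
        return True
    except sqlite3.OperationalError:
        return False


# Cached module-level flag, checked once at import time.
HAS_FTS5 = detect_fts5()

# search.py would then bind the public function once, e.g. (names assumed):
#   search_notes = _fts5_search if HAS_FTS5 else _like_fallback_search
# and log which implementation is active.
```

Because both implementations share a signature, callers never need to know which one was selected.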

---

### Q6: Performance Monitoring Circular Buffer
**Developer Question**: For the circular buffer storing performance metrics - in a multi-process deployment (like gunicorn), should each process have its own buffer or should we use shared memory? How do we aggregate metrics across processes?

**Architect Answer**: Per-process buffer with aggregation endpoint. Each process maintains its own circular buffer. `/admin/metrics` aggregates across all workers. Use `multiprocessing.Manager` for shared state if needed.

**Rationale**: Per-process avoids locking overhead. Aggregation provides the complete picture. This is a standard pattern for multi-process Flask apps.

**Implementation Guidance**:
- Create `MetricsBuffer` class with deque
- Include process ID in all metrics
- Aggregate in `/admin/metrics` endpoint
- Consider shared memory for future enhancement
- Default to 1000 entries per buffer
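
A sketch of the `MetricsBuffer` named above, using `collections.deque(maxlen=...)` so old entries fall off automatically. The entry fields (`pid`, `name`, `ms`) are illustrative assumptions; the 1000-entry default matches the guidance:

```python
import os
from collections import deque


class MetricsBuffer:
    """Per-process circular buffer for timing metrics."""

    def __init__(self, maxlen: int = 1000):
        self._buf = deque(maxlen=maxlen)  # oldest entries drop automatically

    def record(self, name: str, duration_ms: float) -> None:
        # Tag every entry with the worker's PID so /admin/metrics can
        # tell workers apart when aggregating.
        self._buf.append({"pid": os.getpid(), "name": name, "ms": duration_ms})

    def snapshot(self) -> list:
        return list(self._buf)


# With maxlen=3, only the three most recent entries survive.
buf = MetricsBuffer(maxlen=3)
for i in range(5):
    buf.record("db.query", float(i))
```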

---

## Important Questions

### Q7: Session Table Migration
**Developer Question**: The session management enhancement requires a new database table. Should this be added to an existing migration file or create a new one? What happens to existing sessions during upgrade?

**Architect Answer**: New migration file `008_add_session_table.sql`. This is a separate migration that maintains clarity. Drop existing sessions (document in upgrade guide). Use RETURNING clause with version check where supported.

**Rationale**: Clean migration history is important. Sessions are ephemeral and safe to drop. RETURNING improves performance where available.

**Implementation Guidance**:
- Create new migration file
- Drop table if exists before creation
- Add proper indexes for user_id and expires_at
- Document session reset in upgrade guide
- Test migration rollback procedure
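
A sketch of what `008_add_session_table.sql` might contain. The column set mirrors the `INSERT INTO sessions (...)` statement in the session manager code in this spec; column types and index names are assumptions to verify against the final schema:

```sql
-- 008_add_session_table.sql (sketch)
-- Existing sessions are dropped on upgrade, per the answer above.
DROP TABLE IF EXISTS sessions;

CREATE TABLE sessions (
    id            TEXT PRIMARY KEY,
    user_id       TEXT NOT NULL,
    created_at    TIMESTAMP NOT NULL,
    last_activity TIMESTAMP NOT NULL,
    expires_at    TIMESTAMP NOT NULL
);

-- Indexes backing the lookups in validate_and_extend() and cleanup
CREATE INDEX idx_sessions_user_id ON sessions(user_id);
CREATE INDEX idx_sessions_expires_at ON sessions(expires_at);
```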

---

### Q8: Unicode Slug Generation
**Developer Question**: When slug generation from title fails (e.g., all emoji title), what should the fallback be? Should we return an error to the Micropub client or generate a default slug? What pattern for auto-generated slugs?

**Architect Answer**: Timestamp-based fallback with warning. Use `YYYYMMDD-HHMMSS` pattern when normalization fails. Log warning with original text for debugging. Return 201 Created to the Micropub client (not an error).

**Rationale**: Timestamp ensures uniqueness. Warning helps identify encoding issues. The Micropub spec doesn't define this as an error condition.

**Implementation Guidance**:
- Try Unicode normalization first
- Fall back to timestamp if result is empty
- Log warnings for debugging
- Include original text in logs
- Never fail the Micropub request
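
The fallback step can be sketched as a small helper. The function name is an assumption; the `YYYYMMDD-HHMMSS` pattern and the warning-with-original-text behavior come from the answer above (`now` is passed in so the sketch is deterministic):

```python
import logging
from datetime import datetime

logger = logging.getLogger(__name__)


def slug_or_timestamp(normalized: str, original: str, now: datetime) -> str:
    """Return the normalized slug, or a YYYYMMDD-HHMMSS fallback.

    Called after Unicode normalization; never raises, so the Micropub
    request can always complete with 201 Created.
    """
    if normalized:
        return normalized
    # Include the original text so encoding issues can be diagnosed later
    logger.warning("Slug normalization produced empty result for %r", original)
    return now.strftime("%Y%m%d-%H%M%S")
```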

---

### Q9: RSS Memory Optimization
**Developer Question**: The current RSS generator builds the entire feed in memory. For optimization, should we stream the XML directly to the response or use a generator? How do we handle large feeds (1000+ items)?

**Architect Answer**: Use generator with `yield` for streaming. Implement as a generator function. Use Flask's `Response(generate(), mimetype='application/rss+xml')`. Stream directly to the client.

**Rationale**: Generators minimize memory footprint. Flask handles streaming automatically. This scales to any feed size.

**Implementation Guidance**:
- Convert RSS generation to generator function
- Yield XML chunks, not individual characters
- Query notes in batches if needed
- Set appropriate response headers
- Test with large feed counts
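
The generator shape can be sketched in a few lines. The item markup is deliberately minimal (real items carry title, guid, pubDate, etc.); the function name and `notes` iterable are assumptions:

```python
def rss_chunks(notes, base_url: str):
    """Yield the feed as chunks instead of building one big string.

    In Flask this would be returned as
    Response(rss_chunks(notes, base_url), mimetype='application/rss+xml'),
    which streams each yielded chunk straight to the client.
    """
    yield '<?xml version="1.0"?><rss version="2.0"><channel>'
    for note in notes:
        # One chunk per item keeps memory flat regardless of feed size
        yield f"<item><link>{base_url}/notes/{note['id']}</link></item>"
    yield "</channel></rss>"


# Joining the chunks reproduces the full document, which is handy in tests.
feed = "".join(rss_chunks([{"id": 1}, {"id": 2}], "https://example.com"))
```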

---

### Q10: Health Check Authentication
**Developer Question**: Should health check endpoints require authentication? Load balancers need to access them, but detailed health info might be sensitive. How do we balance security with operational needs?

**Architect Answer**: Basic check public, detailed check requires auth. `/health` returns 200 OK (no auth, for load balancers). `/health?detailed=true` requires authentication. Separate `/admin/health` for full diagnostics (always auth).

**Rationale**: Load balancers need unauthenticated access. Detailed info could leak sensitive data. This follows industry standard patterns.

**Implementation Guidance**:
- Basic health: just return 200 if app responds
- Detailed health: check database, disk space, etc.
- Admin health: full diagnostics with metrics
- Use query parameter to trigger detailed mode
- Document endpoints in operations guide
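
The decision logic described above can be sketched framework-free as a pure function over the three inputs (detailed flag, auth state, probe result). The response bodies and the `degraded` status string are assumptions:

```python
def health_response(detailed: bool, authenticated: bool, db_ok: bool):
    """Return (body, status) per the health-check policy above."""
    if not detailed:
        # Plain /health: always 200 so load balancers need no credentials
        return {"status": "ok"}, 200
    if not authenticated:
        # /health?detailed=true without auth leaks nothing
        return {"error": "unauthorized"}, 401
    # Authenticated detailed check reports dependency state
    status = "ok" if db_ok else "degraded"
    return {"status": status, "database": db_ok}, 200
```

The route handlers for `/health` and `/admin/health` would gather the inputs (query parameter, session, database ping) and delegate here.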
---

### Q11: Request Correlation ID Scope

**Developer Question**: Should the correlation ID be per-request or per-session? If a request triggers background tasks, should they inherit the correlation ID? What about CLI commands?

**Architect Answer**: New ID for each HTTP request, inherit in background tasks. Each HTTP request gets a unique ID. Background tasks spawned from requests inherit the parent ID. CLI commands generate their own root ID.

**Rationale**: This maintains request tracing through async operations. CLI commands are independent operations. It's a standard distributed tracing pattern.

**Implementation Guidance**:
- Generate UUID for each request
- Store in Flask's `g` object
- Pass to background tasks as parameter
- Include in all log messages
- Add to response headers
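
The inheritance rule can be sketched with a tiny context object (names here are hypothetical, not StarPunk's API):

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class RequestContext:
    """Root correlation ID, minted once per HTTP request or CLI command."""
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))


def run_background_task(ctx: RequestContext, task, *args):
    """Spawned work inherits the parent request's ID for end-to-end tracing."""
    return task(ctx.correlation_id, *args)
```

In Flask, the context would be created in `before_request`, stored on `g`, echoed in a response header such as `X-Correlation-ID`, and attached to every log record.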
---

### Q12: Performance Monitoring Sampling

**Developer Question**: To reduce overhead, should we sample performance metrics (e.g., only track 10% of requests)? Should sampling be configurable? Apply to all metrics or just specific types?

**Architect Answer**: Configuration-based sampling with operation types. Default 10% sampling rate with different rates per operation type. Applied at collection point, not in slow query log.

**Rationale**: Reduces overhead in production. Operation-specific rates allow focused monitoring. Slow query log should capture everything for debugging.

**Implementation Guidance**:
- Define sampling rates in config
- Different rates for database/http/render
- Use random sampling at collection point
- Always log slow queries regardless
- Make rates runtime configurable
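
A minimal sketch of per-operation-type sampling; the rate values are illustrative defaults, not mandated numbers:

```python
import random

# Per-operation-type sampling rates; in practice these would come from config
SAMPLING_RATES = {"database": 0.10, "http": 0.10, "render": 0.05}
DEFAULT_RATE = 0.10


def should_sample(operation_type: str, rng=random.random) -> bool:
    """Decide at the collection point whether to record this metric."""
    rate = SAMPLING_RATES.get(operation_type, DEFAULT_RATE)
    return rng() < rate
```

The slow-query path would bypass this check entirely, so slow queries are always captured.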
---

### Q13: Search Highlighting XSS Prevention

**Developer Question**: When highlighting search terms in results, how do we prevent XSS if the search term contains HTML? Should we use a library like bleach or implement our own escaping?

**Architect Answer**: Use Flask's standard `markupsafe.escape()` with a whitelist approach. Whitelist only `<mark>` tags for highlighting. Validate the class attribute against a whitelist.

**Rationale**: markupsafe is Flask's security standard. Whitelist approach is most secure. Prevents class-based XSS attacks.

**Implementation Guidance**:
- Escape all text first
- Then add safe mark tags
- Use Markup() for safe strings
- Limit to single highlight class
- Test with malicious input
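
A sketch of the escape-then-highlight order. It uses the stdlib `html.escape` so it is self-contained; in the Flask code this would be `markupsafe.escape()` with the result wrapped in `Markup()`:

```python
import re
from html import escape

HIGHLIGHT_CLASS = "highlight"  # single whitelisted class


def highlight(text: str, term: str) -> str:
    """Escape everything first, then wrap escaped matches in <mark>."""
    safe_text = escape(text)
    safe_term = escape(term)
    if not safe_term:
        return safe_text
    pattern = re.compile(re.escape(safe_term), re.IGNORECASE)
    return pattern.sub(
        lambda m: f'<mark class="{HIGHLIGHT_CLASS}">{m.group(0)}</mark>',
        safe_text,
    )
```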
---

### Q14: Configuration Validation Timing

**Developer Question**: When should configuration validation run - at startup, on first use, or both? Should invalid config crash the app or fall back to defaults? Should we validate before or after migrations?

**Architect Answer**: Validate at startup, fail fast with clear errors. Validate immediately after loading config. Invalid config crashes app with descriptive error. Validate both presence and type. Run BEFORE migrations.

**Rationale**: Fail fast prevents subtle runtime errors. Clear errors help operators fix issues. Type validation catches common mistakes.

**Implementation Guidance**:
- Create validation schema
- Check required fields exist
- Validate types and ranges
- Provide clear error messages
- Exit with non-zero status on failure
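
A sketch of a schema-driven fail-fast validator, using two setting names from this release's configuration list; the schema shape itself is an assumption:

```python
import sys

# Minimal schema: setting name -> (expected type, range/choice check)
SCHEMA = {
    "STARPUNK_DB_CONNECTION_POOL_SIZE": (int, lambda v: 1 <= v <= 100),
    "STARPUNK_LOG_LEVEL": (str, lambda v: v in {"DEBUG", "INFO", "WARNING", "ERROR"}),
}


def validate_config(config: dict) -> list:
    """Return clear error messages; an empty list means the config is valid."""
    errors = []
    for key, (expected_type, check) in SCHEMA.items():
        if key not in config:
            errors.append(f"{key}: required setting is missing")
        elif not isinstance(config[key], expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}, "
                          f"got {type(config[key]).__name__}")
        elif not check(config[key]):
            errors.append(f"{key}: value {config[key]!r} is out of range")
    return errors


def validate_or_exit(config: dict) -> None:
    """Run at startup, before migrations; exit non-zero on any error."""
    errors = validate_config(config)
    if errors:
        for err in errors:
            print(f"config error: {err}", file=sys.stderr)
        sys.exit(1)
```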
---

## Nice-to-Have Clarifications

### Q15: Test Race Condition Fix Priority

**Developer Question**: Some tests have intermittent failures due to race conditions. Should fixing these block v1.1.1 release, or can we defer to v1.1.2?

**Architect Answer**: Fix in Phase 2, after core features. Not blocking for v1.1.1 release. Fix after performance monitoring is in place. Add to technical debt backlog.

**Rationale**: Race conditions are intermittent, not blocking. Focus on user-visible improvements first. Can be addressed in v1.1.2.
---

### Q16: Memory Monitoring Thread

**Developer Question**: The memory monitoring thread needs to record metrics periodically. How should it handle database unavailability? Should it stop gracefully on shutdown?

**Architect Answer**: Use threading.Event for graceful shutdown. Stop gracefully using Event. Log warning if database unavailable, don't crash. Reconnect automatically on database recovery.

**Rationale**: Graceful shutdown prevents data corruption. Monitoring shouldn't crash the app. Self-healing improves reliability.

**Implementation Guidance**:
- Use daemon thread with Event
- Check stop event in loop
- Handle database errors gracefully
- Retry with exponential backoff
- Log issues but don't propagate
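
The Event-plus-backoff pattern can be sketched as a bare loop (the `record_metric` callable stands in for the real database write):

```python
import threading


def monitor_loop(stop: threading.Event, record_metric,
                 base_delay: float = 10.0, max_delay: float = 300.0):
    """Daemon-thread body: Event-driven shutdown plus exponential backoff on errors."""
    delay = base_delay
    while not stop.is_set():
        try:
            record_metric()
            delay = base_delay  # healthy again: reset the backoff
        except Exception:
            # e.g. database briefly unavailable: warn in real code, back off, retry
            delay = min(delay * 2, max_delay)
        # wait() doubles as the sleep and returns immediately once stop.set() is called
        stop.wait(delay)
```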
---

### Q17: Log Rotation Strategy

**Developer Question**: For log rotation, should we use Python's RotatingFileHandler, Linux logrotate, or a custom solution? What size/count limits are appropriate?

**Architect Answer**: Use RotatingFileHandler with 10MB files. Python's built-in RotatingFileHandler. 10MB per file, keep 10 files. No compression for simplicity.

**Rationale**: Built-in solution requires no dependencies. 100MB total is reasonable for small deployment. Compression adds complexity for minimal benefit.
---

### Q18: Error Budget Tracking

**Developer Question**: How should we track error budgets - as a percentage, count, or rate? Over what time window? Should exceeding budget trigger any automatic actions?

**Architect Answer**: Simple counter-based tracking. Track in metrics buffer. Display in dashboard as percentage. No auto-alerting in v1.1.1 (future enhancement).

**Rationale**: Simple to implement and understand. Provides visibility without complexity. Alerting can be added later.

**Implementation Guidance**:
- Track last 1000 requests
- Calculate success rate
- Display remaining budget
- Log when budget low
- Manual monitoring for now
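
A counter-based sketch over the last 1000 requests; the 99.9% target is illustrative, not a figure from this plan:

```python
from collections import deque


class ErrorBudget:
    """Success rate over the last N requests; no auto-alerting yet."""

    def __init__(self, window: int = 1000, target: float = 0.999):
        self.outcomes = deque(maxlen=window)
        self.target = target

    def record(self, success: bool) -> None:
        self.outcomes.append(success)

    def success_rate(self) -> float:
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def budget_remaining(self) -> float:
        """Fraction of the allowed error budget still unspent (0.0 to 1.0)."""
        allowed = 1.0 - self.target
        spent = 1.0 - self.success_rate()
        if allowed == 0:
            return 0.0 if spent > 0 else 1.0
        return max(0.0, 1.0 - spent / allowed)
```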
---

### Q19: Dashboard UI Framework

**Developer Question**: For the admin dashboard, should we use a JavaScript framework (React/Vue), server-side rendering, or a hybrid approach? Any CSS framework preferences?

**Architect Answer**: Server-side rendering with htmx for updates. No JavaScript framework for simplicity. Use htmx for real-time updates. Chart.js for graphs via CDN. Existing CSS, no new framework.

**Rationale**: Maintains "works without JavaScript" principle. htmx provides reactivity without complexity. Chart.js is simple and sufficient.

**Implementation Guidance**:
- Use Jinja2 templates
- Add htmx for auto-refresh
- Include Chart.js from CDN
- Keep existing CSS styles
- Progressive enhancement approach
---

### Q20: Micropub Error Response Format

**Developer Question**: The Micropub spec defines error responses, but should we add additional debugging info in development mode? How much detail in error_description field?

**Architect Answer**: Maintain strict Micropub spec compliance. Use spec-defined error format exactly. Add `error_description` for clarity. Log additional details server-side only.

**Rationale**: Spec compliance is non-negotiable. error_description is allowed by spec. Server logs provide debugging info.

**Implementation Guidance**:
- Use exact error codes from spec
- Include helpful error_description
- Never expose internal details
- Log full context server-side
- Keep development/production responses identical
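
The error codes below are the OAuth-style codes Micropub uses; the helper shape is a sketch, not StarPunk's actual endpoint code:

```python
MICROPUB_ERRORS = {"invalid_request", "unauthorized", "forbidden", "insufficient_scope"}


def micropub_error(error: str, description: str = "", status: int = 400):
    """Spec-compliant error body; sensitive detail stays in server logs only."""
    if error not in MICROPUB_ERRORS:
        error = "invalid_request"  # never leak internal error names to clients
    body = {"error": error}
    if description:
        body["error_description"] = description
    return status, body
```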
---

## Implementation Priorities

The architect recommends implementing v1.1.1 in three phases:

### Phase 1: Core Infrastructure (Week 1)
Focus on foundational improvements that other features depend on:
1. Logging system replacement - Remove all print statements
2. Configuration validation - Fail fast on invalid config
3. Database connection pool - Improve performance
4. Error handling middleware - Consistent error responses

### Phase 2: Enhancements (Week 2)
Add the user-facing improvements:
5. Session management - Secure session handling
6. Performance monitoring - Track system health
7. Health checks - Enable monitoring
8. Search improvements - Better search experience

### Phase 3: Polish (Week 3)
Complete the release with final touches:
9. Admin dashboard - Visualize metrics
10. Memory optimization - RSS streaming
11. Documentation - Update all guides
12. Testing improvements - Fix flaky tests

## Additional Architectural Guidance

### Configuration Integration Strategy
The developer should implement configuration in layers:
1. Keep existing config.py for Flask settings
2. Add starpunk/config.py as helper module
3. Migrate gradually by replacing direct config access
4. Document both systems in configuration guide

### Connection Pool Implementation Notes
The pool should be transparent to calling code:
1. Same interface as get_db()
2. Automatic cleanup on request end
3. Connection recycling for performance
4. Statistics collection for monitoring
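
A minimal sketch of these four points, queue-backed and SQLite-based; the class and method names are illustrative, and a real deployment would pass a file path rather than `:memory:`:

```python
import queue
import sqlite3


class ConnectionPool:
    """Thread-safe fixed-size pool behind a get_db()-style interface."""

    def __init__(self, database: str, size: int = 5, timeout: float = 10.0):
        self._timeout = timeout
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(database, check_same_thread=False))

    def acquire(self) -> sqlite3.Connection:
        """Block up to `timeout` seconds for a free connection."""
        return self._pool.get(timeout=self._timeout)

    def release(self, conn: sqlite3.Connection) -> None:
        """Return the connection for recycling at request end."""
        self._pool.put(conn)

    def stats(self) -> dict:
        """Simple statistic for the monitoring dashboard."""
        return {"idle": self._pool.qsize()}
```

Request teardown (e.g. a Flask `teardown_appcontext` hook) would call `release()` so cleanup is automatic.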
### Validation Specifications
Create centralized validation schemas for:
- Configuration values (types, ranges, requirements)
- Micropub requests (required fields, formats)
- Input data (lengths, patterns, encoding)

### Migration Ordering
The developer must run migrations in this specific order:
1. 008_add_session_table.sql
2. 009_add_performance_indexes.sql
3. 010_add_metrics_table.sql

### Testing Gaps to Address
While not blocking v1.1.1, these should be noted for v1.1.2:
1. Connection pool stress tests
2. Unicode edge cases
3. Memory leak detection
4. Error recovery scenarios

### Required Documentation
Before release, create these operational guides:
1. `/docs/operations/upgrade-to-v1.1.1.md` - Step-by-step upgrade process
2. `/docs/operations/troubleshooting.md` - Common issues and solutions
3. `/docs/operations/performance-tuning.md` - Optimization guidelines

## Final Architectural Notes

These answers prioritize:
- **Simplicity** over features - Every addition must justify its complexity
- **Compatibility** over clean breaks - Don't break existing deployments
- **Gradual migration** over big bang - Incremental improvements reduce risk
- **Flask patterns** over custom solutions - Use idiomatic Flask approaches

The developer should implement in the phase order specified, testing thoroughly between phases. Any blockers or uncertainties should be escalated immediately for architectural review.

Remember: v1.1.1 is about polish, not new features. Focus on making existing functionality more robust, observable, and maintainable.

@@ -1,379 +0,0 @@
# v1.1.1 "Polish" Implementation Guide

## Overview
This guide provides the development team with a structured approach to implementing v1.1.1 features. The release focuses on production readiness, performance visibility, and bug fixes without breaking changes.

## Implementation Order

The features should be implemented in this order to manage dependencies:

### Phase 1: Foundation (Day 1-2)
1. **Configuration System** (2 hours)
   - Create `starpunk/config.py` module
   - Implement configuration loading
   - Add validation and defaults
   - Update existing code to use config

2. **Structured Logging** (2 hours)
   - Create `starpunk/logging.py` module
   - Replace print statements with logger calls
   - Add request correlation IDs
   - Configure log levels

3. **Error Handling Framework** (1 hour)
   - Create `starpunk/errors.py` module
   - Define error hierarchy
   - Implement error middleware
   - Add user-friendly messages

### Phase 2: Core Improvements (Day 3-5)
4. **Database Connection Pooling** (2 hours)
   - Create `starpunk/database/pool.py`
   - Implement connection pool
   - Update database access layer
   - Add pool monitoring

5. **Fix Test Race Conditions** (1 hour)
   - Update test fixtures
   - Add database isolation
   - Fix migration locking
   - Verify test stability

6. **Unicode Slug Handling** (1 hour)
   - Update `starpunk/utils/slugify.py`
   - Add Unicode normalization
   - Handle edge cases
   - Add comprehensive tests

### Phase 3: Search Enhancements (Day 6-7)
7. **Search Configuration** (2 hours)
   - Add search configuration options
   - Implement FTS5 detection
   - Create fallback search
   - Add result highlighting

8. **Search UI Updates** (1 hour)
   - Update search templates
   - Add relevance scoring display
   - Implement highlighting CSS
   - Make search optional in UI

### Phase 4: Performance Monitoring (Day 8-10)
9. **Monitoring Infrastructure** (3 hours)
   - Create `starpunk/monitoring/` package
   - Implement metrics collector
   - Add timing instrumentation
   - Create memory monitor

10. **Performance Dashboard** (2 hours)
    - Create dashboard route
    - Design dashboard template
    - Add real-time metrics display
    - Implement data aggregation

### Phase 5: Production Readiness (Day 11-12)
11. **Health Check Enhancements** (1 hour)
    - Update health endpoints
    - Add component checks
    - Implement readiness probe
    - Add detailed status

12. **Session Management** (1 hour)
    - Fix session timeout
    - Add cleanup thread
    - Implement extension logic
    - Update session handling

13. **RSS Optimization** (1 hour)
    - Implement streaming RSS
    - Add feed caching
    - Optimize memory usage
    - Add configuration limits

### Phase 6: Testing & Documentation (Day 13-14)
14. **Testing** (2 hours)
    - Run full test suite
    - Performance benchmarks
    - Load testing
    - Security review

15. **Documentation** (1 hour)
    - Update deployment guide
    - Document configuration
    - Update API documentation
    - Create upgrade guide
## Key Files to Modify

### New Files to Create
```
starpunk/
├── config.py             # Configuration management
├── errors.py             # Error handling framework
├── logging.py            # Logging setup
├── database/
│   └── pool.py           # Connection pooling
├── monitoring/
│   ├── __init__.py
│   ├── collector.py      # Metrics collection
│   ├── db_monitor.py     # Database monitoring
│   ├── memory.py         # Memory tracking
│   └── http.py           # HTTP monitoring
├── testing/
│   ├── fixtures.py       # Test fixtures
│   ├── stability.py      # Stability helpers
│   └── unicode.py        # Unicode test suite
└── templates/admin/
    ├── performance.html  # Performance dashboard
    └── performance_disabled.html
```
### Files to Update
```
starpunk/
├── __init__.py           # Add version 1.1.1
├── app.py                # Add middleware, routes
├── auth/
│   └── session.py        # Session management fixes
├── utils/
│   └── slugify.py        # Unicode handling
├── search/
│   ├── engine.py         # FTS5 detection, fallback
│   └── highlighting.py   # Result highlighting
├── feeds/
│   └── rss.py            # Memory optimization
├── web/
│   └── routes.py         # Health checks, dashboard
└── templates/
    ├── search.html       # Search UI updates
    └── base.html         # Conditional search UI
```
## Configuration Variables

All new configuration uses environment variables with `STARPUNK_` prefix:

```bash
# Search Configuration
STARPUNK_SEARCH_ENABLED=true
STARPUNK_SEARCH_TITLE_LENGTH=100
STARPUNK_SEARCH_HIGHLIGHT_CLASS=highlight
STARPUNK_SEARCH_MIN_SCORE=0.0

# Performance Monitoring
STARPUNK_PERF_MONITORING_ENABLED=false
STARPUNK_PERF_SLOW_QUERY_THRESHOLD=1.0
STARPUNK_PERF_LOG_QUERIES=false
STARPUNK_PERF_MEMORY_TRACKING=false

# Database Configuration
STARPUNK_DB_CONNECTION_POOL_SIZE=5
STARPUNK_DB_CONNECTION_TIMEOUT=10.0
STARPUNK_DB_WAL_MODE=true
STARPUNK_DB_BUSY_TIMEOUT=5000

# Logging Configuration
STARPUNK_LOG_LEVEL=INFO
STARPUNK_LOG_FORMAT=json

# Production Configuration
STARPUNK_SESSION_TIMEOUT=86400
STARPUNK_HEALTH_CHECK_DETAILED=false
STARPUNK_ERROR_DETAILS_IN_RESPONSE=false
```
## Testing Requirements

### Unit Test Coverage
- Configuration loading and validation
- Error handling for all error types
- Slug generation with Unicode inputs
- Connection pool operations
- Session timeout logic
- Search with/without FTS5

### Integration Test Coverage
- End-to-end search functionality
- Performance dashboard access
- Health check endpoints
- RSS feed generation
- Session management flow

### Performance Tests
```python
# Required performance benchmarks
def test_search_performance():
    """Search should complete in <500ms"""

def test_rss_memory_usage():
    """RSS should use <10MB for 10k notes"""

def test_monitoring_overhead():
    """Monitoring should add <1% overhead"""

def test_connection_pool_concurrency():
    """Pool should handle 20 concurrent requests"""
```
## Database Migrations

### New Migration: v1.1.1_sessions.sql
```sql
-- Add session management improvements
CREATE TABLE IF NOT EXISTS sessions_new (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    expires_at TIMESTAMP NOT NULL,
    last_activity TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    remember BOOLEAN DEFAULT FALSE
);

-- Migrate existing sessions if any
INSERT INTO sessions_new (id, user_id, created_at, expires_at)
SELECT id, user_id, created_at,
       datetime(created_at, '+1 day') as expires_at
FROM sessions WHERE EXISTS (SELECT 1 FROM sessions LIMIT 1);

-- Swap tables
DROP TABLE IF EXISTS sessions;
ALTER TABLE sessions_new RENAME TO sessions;

-- Add index for cleanup
CREATE INDEX idx_sessions_expires ON sessions(expires_at);
CREATE INDEX idx_sessions_user ON sessions(user_id);
```
## Backward Compatibility Checklist

Ensure NO breaking changes:

- [ ] All configuration has sensible defaults
- [ ] Existing deployments work without changes
- [ ] Database migrations are non-destructive
- [ ] API responses maintain same format
- [ ] URL structure unchanged
- [ ] RSS/ATOM feeds compatible
- [ ] IndieAuth flow unmodified
- [ ] Micropub endpoint unchanged
## Deployment Validation

After implementation, verify:

1. **Fresh Install**
   ```bash
   # Clean install works
   pip install starpunk==1.1.1
   starpunk init
   starpunk serve
   ```

2. **Upgrade Path**
   ```bash
   # Upgrade from 1.1.0 works
   pip install --upgrade starpunk==1.1.1
   starpunk migrate
   starpunk serve
   ```

3. **Configuration**
   ```bash
   # All config options work
   export STARPUNK_SEARCH_ENABLED=false
   starpunk serve  # Search should be disabled
   ```

4. **Performance**
   ```bash
   # Run performance tests
   pytest tests/performance/
   ```
## Common Pitfalls to Avoid

1. **Don't Break Existing Features**
   - Test with existing data
   - Verify Micropub compatibility
   - Check RSS feed format

2. **Handle Missing FTS5 Gracefully**
   - Don't crash if FTS5 unavailable
   - Provide clear warnings
   - Fallback must work correctly

3. **Maintain Thread Safety**
   - Connection pool must be thread-safe
   - Metrics collection must be thread-safe
   - Use proper locking

4. **Avoid Memory Leaks**
   - Circular buffer for metrics
   - Stream RSS generation
   - Clean up expired sessions

5. **Configuration Validation**
   - Validate all config at startup
   - Use sensible defaults
   - Log configuration errors clearly
## Success Criteria

The implementation is complete when:

1. All tests pass (including new ones)
2. Performance benchmarks met
3. No breaking changes verified
4. Documentation updated
5. Changelog updated to v1.1.1
6. Version number updated
7. All features configurable
8. Production deployment tested

## Support Resources

- Architecture Decisions: `/docs/decisions/ADR-052-055`
- Feature Specifications: `/docs/design/v1.1.1/`
- Test Suite: `/tests/`
- Original Requirements: User request for v1.1.1

## Timeline

- **Total Effort**: 12-18 hours
- **Calendar Time**: 2 weeks
- **Daily Commitment**: 1-2 hours
- **Buffer**: 20% for unexpected issues
## Risk Mitigation

| Risk | Mitigation |
|------|------------|
| FTS5 compatibility issues | Comprehensive fallback, clear docs |
| Performance regression | Benchmark before/after each change |
| Test instability | Fix race conditions first |
| Memory issues | Profile RSS generation, limit buffers |
| Configuration complexity | Sensible defaults, validation |

## Questions to Answer Before Starting

1. Is the current test suite passing reliably?
2. Do we have performance baselines measured?
3. Is the deployment environment documented?
4. Are there any pending v1.1.0 issues to address?
5. Is the version control branching strategy clear?
## Post-Implementation Checklist

- [ ] All features implemented
- [ ] Tests written and passing
- [ ] Performance validated
- [ ] Documentation complete
- [ ] Changelog updated
- [ ] Version bumped to 1.1.1
- [ ] Migration tested
- [ ] Production deployment successful
- [ ] Announcement prepared

---

This guide should be treated as a living document. Update it as implementation proceeds and lessons are learned.

@@ -1,487 +0,0 @@
# Performance Monitoring Foundation Specification

## Overview
The performance monitoring foundation provides operators with visibility into StarPunk's runtime behavior, helping identify bottlenecks, track resource usage, and ensure optimal performance in production.

## Requirements

### Functional Requirements

1. **Timing Instrumentation**
   - Measure execution time for key operations
   - Track request processing duration
   - Monitor database query execution time
   - Measure template rendering time
   - Track static file serving time

2. **Database Performance Logging**
   - Log all queries when enabled
   - Detect and warn about slow queries
   - Track connection pool usage
   - Monitor transaction duration
   - Count query frequency by type

3. **Memory Usage Tracking**
   - Monitor process RSS memory
   - Track memory growth over time
   - Detect memory leaks
   - Per-request memory delta
   - Memory high water mark

4. **Performance Dashboard**
   - Real-time metrics display
   - Historical data (last 15 minutes)
   - Slow query log
   - Memory usage visualization
   - Endpoint performance table
### Non-Functional Requirements

1. **Performance Impact**
   - Monitoring overhead <1% when enabled
   - Zero impact when disabled
   - Efficient memory usage (<1MB for metrics)
   - No blocking operations

2. **Usability**
   - Simple enable/disable via configuration
   - Clear, actionable metrics
   - Self-explanatory dashboard
   - No external dependencies
## Design

### Architecture

```
┌──────────────────────────────────────┐
│            HTTP Request              │
│                ↓                     │
│       Performance Middleware         │
│           (start timer)              │
│                ↓                     │
│        ┌─────────────────┐           │
│        │ Request Handler │           │
│        │        ↓        │           │
│        │ Database Layer  │←── Query Monitor
│        │        ↓        │           │
│        │ Business Logic  │←── Function Timer
│        │        ↓        │           │
│        │ Response Build  │           │
│        └─────────────────┘           │
│                ↓                     │
│       Performance Middleware         │
│           (stop timer)               │
│                ↓                     │
│       Metrics Collector ←── Memory Monitor
│                ↓                     │
│          Circular Buffer             │
│                ↓                     │
│          Admin Dashboard             │
└──────────────────────────────────────┘
```
### Data Model

```python
from collections import defaultdict, deque
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Any, Dict, List, Optional

# `config` is the application's loaded configuration, provided elsewhere


@dataclass
class PerformanceMetric:
    """Single performance measurement"""
    timestamp: datetime
    category: str                        # 'http', 'db', 'function', 'memory'
    operation: str                       # Specific operation name
    duration_ms: Optional[float] = None  # For timed operations
    value: Optional[float] = None        # For measurements
    metadata: Dict[str, Any] = field(default_factory=dict)  # Additional context


class MetricsBuffer:
    """Circular buffer for metrics storage"""

    def __init__(self, max_size: int = 1000):
        self.metrics = deque(maxlen=max_size)
        self.slow_queries = deque(maxlen=100)

    def add_metric(self, metric: PerformanceMetric):
        """Add metric to buffer"""
        self.metrics.append(metric)

        # Special handling for slow queries
        if (metric.category == 'db' and metric.duration_ms is not None and
                metric.duration_ms > config.PERF_SLOW_QUERY_THRESHOLD * 1000):
            self.slow_queries.append(metric)

    def get_recent(self, seconds: int = 900) -> List[PerformanceMetric]:
        """Get metrics from last N seconds"""
        cutoff = datetime.now() - timedelta(seconds=seconds)
        return [m for m in self.metrics if m.timestamp > cutoff]

    def get_summary(self) -> Dict[str, Any]:
        """Get summary statistics"""
        recent = self.get_recent()

        # Group by category and operation
        summary = defaultdict(lambda: {
            'count': 0,
            'total_ms': 0,
            'avg_ms': 0,
            'max_ms': 0,
            'p95_ms': 0,
            'p99_ms': 0
        })

        # Calculate statistics...
        return dict(summary)
```
### Instrumentation Implementation

#### Database Query Monitoring
```python
import sqlite3
import time
from contextlib import contextmanager
from datetime import datetime


class MonitoredConnection(sqlite3.Connection):
    """Connection subclass that times every execute() call.

    sqlite3.Connection attributes are read-only, so execute() cannot be
    monkey-patched on an instance; monitoring is installed via the
    connect() factory parameter instead.
    """

    def execute(self, sql, params=()):
        if not config.PERF_MONITORING_ENABLED:
            return super().execute(sql, params)

        start_time = time.perf_counter()
        result = super().execute(sql, params)
        duration = time.perf_counter() - start_time

        metric = PerformanceMetric(
            timestamp=datetime.now(),
            category='db',
            operation=sql.split()[0].upper(),  # SELECT, INSERT, etc.
            duration_ms=duration * 1000,
            metadata={
                'query': sql if config.PERF_LOG_QUERIES else None,
                'params_count': len(params)
            }
        )
        metrics_buffer.add_metric(metric)

        if duration > config.PERF_SLOW_QUERY_THRESHOLD:
            logger.warning(
                "Slow query detected",
                extra={
                    'query': sql,
                    'duration_ms': duration * 1000
                }
            )

        return result


@contextmanager
def monitored_connection():
    """Database connection with monitoring"""
    conn = sqlite3.connect(DATABASE_PATH, factory=MonitoredConnection)
    try:
        yield conn
    finally:
        conn.close()
```
#### HTTP Request Monitoring
```python
import time
from datetime import datetime

from flask import g, request


@app.before_request
def start_request_timer():
    """Start timing the request"""
    if config.PERF_MONITORING_ENABLED:
        g.start_time = time.perf_counter()
        g.start_memory = get_memory_usage()


@app.after_request
def end_request_timer(response):
    """End timing and record metrics"""
    if config.PERF_MONITORING_ENABLED and hasattr(g, 'start_time'):
        duration = time.perf_counter() - g.start_time
        memory_delta = get_memory_usage() - g.start_memory

        metric = PerformanceMetric(
            timestamp=datetime.now(),
            category='http',
            operation=f"{request.method} {request.endpoint}",
            duration_ms=duration * 1000,
            metadata={
                'method': request.method,
                'path': request.path,
                'status': response.status_code,
                'size': len(response.get_data()),
                'memory_delta': memory_delta
            }
        )
        metrics_buffer.add_metric(metric)

    return response
```
#### Memory Monitoring
```python
import resource
import threading
from datetime import datetime


class MemoryMonitor:
    """Background thread for memory monitoring"""

    def __init__(self):
        self._stop = threading.Event()
        self.thread = None
        self.high_water_mark = 0

    def start(self):
        """Start memory monitoring"""
        if not config.PERF_MEMORY_TRACKING:
            return

        self.thread = threading.Thread(target=self._monitor, daemon=True)
        self.thread.start()

    def stop(self):
        """Signal the monitor loop to exit and wait for it"""
        self._stop.set()
        if self.thread is not None:
            self.thread.join(timeout=5)

    def _monitor(self):
        """Monitor memory usage until stopped"""
        while not self._stop.is_set():
            memory_mb = get_memory_usage()
            self.high_water_mark = max(self.high_water_mark, memory_mb)

            metric = PerformanceMetric(
                timestamp=datetime.now(),
                category='memory',
                operation='rss',
                value=memory_mb,
                metadata={
                    'high_water_mark': self.high_water_mark
                }
            )
            metrics_buffer.add_metric(metric)

            # wait() doubles as the 10-second sleep and wakes early on stop()
            self._stop.wait(10)


def get_memory_usage() -> float:
    """Get current memory usage in MB (ru_maxrss is KB on Linux)"""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_maxrss / 1024
```

### Performance Dashboard

#### Dashboard Route
```python
@app.route('/admin/performance')
@require_admin
def performance_dashboard():
    """Display performance metrics"""
    if not config.PERF_MONITORING_ENABLED:
        return render_template('admin/performance_disabled.html')

    summary = metrics_buffer.get_summary()
    slow_queries = list(metrics_buffer.slow_queries)
    memory_data = get_memory_graph_data()

    return render_template(
        'admin/performance.html',
        summary=summary,
        slow_queries=slow_queries,
        memory_data=memory_data,
        uptime=get_uptime(),
        config={
            'slow_threshold': config.PERF_SLOW_QUERY_THRESHOLD,
            'monitoring_enabled': config.PERF_MONITORING_ENABLED,
            'memory_tracking': config.PERF_MEMORY_TRACKING
        }
    )
```

#### Dashboard Template Structure
```html
<div class="performance-dashboard">
    <h2>Performance Monitoring</h2>

    <!-- Overview Stats -->
    <div class="stats-grid">
        <div class="stat">
            <h3>Uptime</h3>
            <p>{{ uptime }}</p>
        </div>
        <div class="stat">
            <h3>Total Requests</h3>
            <p>{{ summary.http.count }}</p>
        </div>
        <div class="stat">
            <h3>Avg Response Time</h3>
            <p>{{ summary.http.avg_ms|round(2) }}ms</p>
        </div>
        <div class="stat">
            <h3>Memory Usage</h3>
            <p>{{ current_memory }}MB</p>
        </div>
    </div>

    <!-- Slow Queries -->
    <div class="slow-queries">
        <h3>Slow Queries (>{{ config.slow_threshold }}s)</h3>
        <table>
            <thead>
                <tr>
                    <th>Time</th>
                    <th>Duration</th>
                    <th>Query</th>
                </tr>
            </thead>
            <tbody>
                {% for query in slow_queries %}
                <tr>
                    <td>{{ query.timestamp|timeago }}</td>
                    <td>{{ query.duration_ms|round(2) }}ms</td>
                    <td><code>{{ query.metadata.query|truncate(100) }}</code></td>
                </tr>
                {% endfor %}
            </tbody>
        </table>
    </div>

    <!-- Endpoint Performance -->
    <div class="endpoint-performance">
        <h3>Endpoint Performance</h3>
        <table>
            <thead>
                <tr>
                    <th>Endpoint</th>
                    <th>Calls</th>
                    <th>Avg (ms)</th>
                    <th>P95 (ms)</th>
                    <th>P99 (ms)</th>
                </tr>
            </thead>
            <tbody>
                {% for endpoint, stats in summary.endpoints.items() %}
                <tr>
                    <td>{{ endpoint }}</td>
                    <td>{{ stats.count }}</td>
                    <td>{{ stats.avg_ms|round(2) }}</td>
                    <td>{{ stats.p95_ms|round(2) }}</td>
                    <td>{{ stats.p99_ms|round(2) }}</td>
                </tr>
                {% endfor %}
            </tbody>
        </table>
    </div>

    <!-- Memory Graph -->
    <div class="memory-graph">
        <h3>Memory Usage (Last 15 Minutes)</h3>
        <canvas id="memory-chart"></canvas>
    </div>
</div>
```

### Configuration Options

```python
# Performance monitoring configuration
PERF_MONITORING_ENABLED = Config.get_bool("STARPUNK_PERF_MONITORING_ENABLED", False)
PERF_SLOW_QUERY_THRESHOLD = Config.get_float("STARPUNK_PERF_SLOW_QUERY_THRESHOLD", 1.0)
PERF_LOG_QUERIES = Config.get_bool("STARPUNK_PERF_LOG_QUERIES", False)
PERF_MEMORY_TRACKING = Config.get_bool("STARPUNK_PERF_MEMORY_TRACKING", False)
PERF_BUFFER_SIZE = Config.get_int("STARPUNK_PERF_BUFFER_SIZE", 1000)
PERF_SAMPLE_RATE = Config.get_float("STARPUNK_PERF_SAMPLE_RATE", 1.0)
```
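The `Config.get_bool`/`get_float`/`get_int` helpers used above are assumed rather than specified anywhere in this document; one plausible sketch reading from the environment:

```python
import os

class Config:
    """Hypothetical env-var helpers matching the usage above."""

    @staticmethod
    def get_bool(name: str, default: bool) -> bool:
        raw = os.environ.get(name)
        if raw is None:
            return default
        return raw.strip().lower() in ('1', 'true', 'yes', 'on')

    @staticmethod
    def get_float(name: str, default: float) -> float:
        raw = os.environ.get(name)
        return float(raw) if raw is not None else default

    @staticmethod
    def get_int(name: str, default: int) -> int:
        raw = os.environ.get(name)
        return int(raw) if raw is not None else default
```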

## Testing Strategy

### Unit Tests
1. Metric collection and storage
2. Circular buffer behavior
3. Summary statistics calculation
4. Memory monitoring functions
5. Query monitoring callbacks

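Test 2 can be expressed directly against a deque-backed buffer; the `MetricsBuffer` internals below are a sketch (the real class also tracks `slow_queries` and computes summaries):

```python
from collections import deque

class MetricsBuffer:
    """Sketch of the circular buffer under test (assumed implementation)."""

    def __init__(self, maxlen: int = 1000):
        self._metrics = deque(maxlen=maxlen)

    def add_metric(self, metric) -> None:
        self._metrics.append(metric)  # oldest entry evicted when full

    def __len__(self) -> int:
        return len(self._metrics)

def test_buffer_evicts_oldest():
    buf = MetricsBuffer(maxlen=3)
    for i in range(5):
        buf.add_metric(i)
    assert len(buf) == 3
    assert list(buf._metrics) == [2, 3, 4]  # entries 0 and 1 were dropped
```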
### Integration Tests
1. End-to-end request monitoring
2. Slow query detection
3. Memory leak detection
4. Dashboard rendering
5. Performance overhead measurement

### Performance Tests
```python
def test_monitoring_overhead():
    """Verify monitoring overhead is <1%"""
    # Baseline without monitoring
    config.PERF_MONITORING_ENABLED = False
    baseline_time = measure_operation_time()

    # With monitoring
    config.PERF_MONITORING_ENABLED = True
    monitored_time = measure_operation_time()

    overhead = (monitored_time - baseline_time) / baseline_time
    assert overhead < 0.01  # Less than 1%
```

## Security Considerations

1. **Authentication**: Dashboard requires admin access
2. **Query Sanitization**: Don't log sensitive query parameters
3. **Rate Limiting**: Prevent dashboard DoS
4. **Data Retention**: Automatic cleanup of old metrics
5. **Configuration**: Validate all config values

## Performance Impact

### Expected Overhead
- Request timing: <0.1ms per request
- Query monitoring: <0.5ms per query
- Memory tracking: <1% CPU (background thread)
- Dashboard rendering: <50ms
- Total overhead: <1% when fully enabled

### Optimization Strategies
1. Use sampling for high-frequency operations
2. Lazy calculation of statistics
3. Efficient circular buffer implementation
4. Minimal string operations in hot path
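Strategy 1 (sampling) can be as small as one comparison per operation, driven by the `PERF_SAMPLE_RATE` setting defined above; shown here with a module-level constant for illustration:

```python
import random

PERF_SAMPLE_RATE = 0.1  # record roughly 10% of high-frequency operations

def should_record(rate: float = PERF_SAMPLE_RATE) -> bool:
    """Decide whether to record a metric for this operation."""
    # random.random() is in [0.0, 1.0), so rate 1.0 records everything
    # and rate 0.0 records nothing
    return random.random() < rate
```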

## Documentation Requirements

### Administrator Guide
- How to enable monitoring
- Understanding metrics
- Identifying performance issues
- Tuning configuration

### Dashboard User Guide
- Navigating the dashboard
- Interpreting metrics
- Finding slow queries
- Memory usage patterns

## Acceptance Criteria

1. ✅ Timing instrumentation for all key operations
2. ✅ Database query performance logging
3. ✅ Slow query detection with configurable threshold
4. ✅ Memory usage tracking
5. ✅ Performance dashboard at /admin/performance
6. ✅ Monitoring overhead <1%
7. ✅ Zero impact when disabled
8. ✅ Circular buffer limits memory usage
9. ✅ All metrics clearly documented
10. ✅ Security review passed

# Production Readiness Improvements Specification

## Overview
Production readiness improvements for v1.1.1 focus on robustness, error handling, resource optimization, and operational visibility to ensure StarPunk runs reliably in production environments.

## Requirements

### Functional Requirements

1. **Graceful FTS5 Degradation**
   - Detect FTS5 availability at startup
   - Automatically fall back to LIKE-based search
   - Log clear warnings about reduced functionality
   - Document SQLite compilation requirements

2. **Enhanced Error Messages**
   - Provide actionable error messages for common issues
   - Include troubleshooting steps
   - Differentiate between user and system errors
   - Add configuration validation at startup

3. **Database Connection Pooling**
   - Optimize connection pool size
   - Monitor pool usage
   - Handle connection exhaustion gracefully
   - Configure pool parameters

4. **Structured Logging**
   - Implement log levels (DEBUG, INFO, WARNING, ERROR, CRITICAL)
   - JSON-structured logs for production
   - Human-readable logs for development
   - Request correlation IDs

5. **Health Check Improvements**
   - Enhanced /health endpoint
   - Detailed health status (when authorized)
   - Component health checks
   - Readiness vs liveness probes

### Non-Functional Requirements

1. **Reliability**
   - Graceful handling of all error conditions
   - No crashes from user input
   - Automatic recovery from transient errors

2. **Observability**
   - Clear logging of all operations
   - Traceable request flow
   - Diagnostic information available

3. **Performance**
   - Connection pooling reduces latency
   - Efficient error handling paths
   - Minimal logging overhead

## Design

### FTS5 Graceful Degradation

```python
# starpunk/search/engine.py
class SearchEngineFactory:
    """Factory for creating appropriate search engine"""

    @staticmethod
    def create() -> SearchEngine:
        """Create search engine based on availability"""
        if SearchEngineFactory._check_fts5():
            logger.info("Using FTS5 search engine")
            return FTS5SearchEngine()
        else:
            logger.warning(
                "FTS5 not available. Using fallback search engine. "
                "For better search performance, please ensure SQLite "
                "is compiled with FTS5 support. See: "
                "https://www.sqlite.org/fts5.html#compiling_and_using_fts5"
            )
            return FallbackSearchEngine()

    @staticmethod
    def _check_fts5() -> bool:
        """Check if FTS5 is available"""
        try:
            conn = sqlite3.connect(":memory:")
            conn.execute(
                "CREATE VIRTUAL TABLE test_fts USING fts5(content)"
            )
            conn.close()
            return True
        except sqlite3.OperationalError:
            return False

class FallbackSearchEngine(SearchEngine):
    """LIKE-based search for systems without FTS5"""

    def search(self, query: str, limit: int = 50) -> List[SearchResult]:
        """Perform case-insensitive LIKE search"""
        sql = """
            SELECT
                id,
                content,
                created_at,
                0 as rank  -- No ranking available
            FROM notes
            WHERE
                content LIKE ? OR
                content LIKE ? OR
                content LIKE ?
            ORDER BY created_at DESC
            LIMIT ?
        """

        # Search for term at start, middle, or end
        patterns = [
            f'{query}%',    # Starts with
            f'% {query}%',  # Word in middle
            f'%{query}'     # Ends with
        ]

        results = []
        with get_db() as conn:
            cursor = conn.execute(sql, (*patterns, limit))
            for row in cursor:
                results.append(SearchResult(*row))

        return results
```

### Enhanced Error Messages

```python
# starpunk/errors/messages.py
class ErrorMessages:
    """User-friendly error messages with troubleshooting"""

    DATABASE_LOCKED = ErrorInfo(
        message="The database is temporarily locked",
        suggestion="Please try again in a moment",
        details="This usually happens during concurrent writes",
        troubleshooting=[
            "Wait a few seconds and retry",
            "Check for long-running operations",
            "Ensure WAL mode is enabled"
        ]
    )

    CONFIGURATION_INVALID = ErrorInfo(
        message="Configuration error: {detail}",
        suggestion="Please check your environment variables",
        details="Invalid configuration detected at startup",
        troubleshooting=[
            "Verify all STARPUNK_* environment variables",
            "Check for typos in configuration names",
            "Ensure values are in the correct format",
            "See docs/deployment/configuration.md"
        ]
    )

    MICROPUB_MALFORMED = ErrorInfo(
        message="Invalid Micropub request format",
        suggestion="Please check your Micropub client configuration",
        details="The request doesn't conform to Micropub specification",
        troubleshooting=[
            "Ensure Content-Type is correct",
            "Verify required fields are present",
            "Check for proper encoding",
            "See https://www.w3.org/TR/micropub/"
        ]
    )

    def format_error(self, error_key: str, **kwargs) -> dict:
        """Format error for response"""
        error_info = getattr(self, error_key)
        return {
            'error': {
                'message': error_info.message.format(**kwargs),
                'suggestion': error_info.suggestion,
                'troubleshooting': error_info.troubleshooting
            }
        }
```
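The `ErrorInfo` container used above is not defined in this spec; a minimal sketch supporting the fields shown:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class ErrorInfo:
    """Structured error description (sketch; field set taken from usage above)."""
    message: str
    suggestion: str
    details: str
    troubleshooting: List[str] = field(default_factory=list)
```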

### Database Connection Pool Optimization

```python
# starpunk/database/pool.py
from contextlib import contextmanager
from threading import Lock
from queue import Queue, Empty, Full
import sqlite3
import time

class ConnectionPool:
    """Thread-safe SQLite connection pool"""

    def __init__(
        self,
        database_path: str,
        pool_size: int = None,
        timeout: float = None
    ):
        self.database_path = database_path
        self.pool_size = pool_size or config.DB_CONNECTION_POOL_SIZE
        self.timeout = timeout or config.DB_CONNECTION_TIMEOUT
        self._pool = Queue(maxsize=self.pool_size)
        self._all_connections = []
        self._lock = Lock()
        self._stats = {
            'acquired': 0,
            'released': 0,
            'created': 0,
            'wait_time_total': 0,
            'active': 0
        }

        # Pre-create connections and place them in the pool
        for _ in range(self.pool_size):
            self._pool.put(self._create_connection())

    def _create_connection(self) -> sqlite3.Connection:
        """Create a new database connection"""
        # check_same_thread=False: connections are handed between
        # threads by the pool, serialized through the queue
        conn = sqlite3.connect(self.database_path, check_same_thread=False)

        # Configure connection for production
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute(f"PRAGMA busy_timeout={config.DB_BUSY_TIMEOUT}")
        conn.execute("PRAGMA synchronous=NORMAL")
        conn.execute("PRAGMA temp_store=MEMORY")

        # Enable row factory for dict-like access
        conn.row_factory = sqlite3.Row

        with self._lock:
            self._all_connections.append(conn)
            self._stats['created'] += 1

        return conn

    @contextmanager
    def acquire(self):
        """Acquire connection from pool"""
        start_time = time.time()
        conn = None

        try:
            # Try to get connection with timeout
            conn = self._pool.get(timeout=self.timeout)
            wait_time = time.time() - start_time

            with self._lock:
                self._stats['acquired'] += 1
                self._stats['wait_time_total'] += wait_time
                self._stats['active'] += 1

            if wait_time > 1.0:
                logger.warning(
                    "Slow connection acquisition",
                    extra={'wait_time': wait_time}
                )

            yield conn

        except Empty:
            raise DatabaseError(
                "Connection pool exhausted",
                suggestion="Increase pool size or optimize queries",
                details={
                    'pool_size': self.pool_size,
                    'timeout': self.timeout
                }
            )
        finally:
            if conn:
                # Return connection to pool; update stats even if
                # the queue is unexpectedly full
                with self._lock:
                    self._stats['released'] += 1
                    self._stats['active'] -= 1
                try:
                    self._pool.put_nowait(conn)
                except Full:
                    # Pool is full, close the connection
                    conn.close()

    def get_stats(self) -> dict:
        """Get pool statistics"""
        with self._lock:
            return {
                **self._stats,
                'pool_size': self.pool_size,
                'available': self._pool.qsize()
            }

    def close_all(self):
        """Close all connections in pool"""
        while not self._pool.empty():
            try:
                conn = self._pool.get_nowait()
                conn.close()
            except Empty:
                break

        for conn in self._all_connections:
            try:
                conn.close()
            except Exception:
                pass

# Global pool instance
_connection_pool = None

def get_connection_pool() -> ConnectionPool:
    """Get or create connection pool"""
    global _connection_pool
    if _connection_pool is None:
        _connection_pool = ConnectionPool(
            database_path=config.DATABASE_PATH
        )
    return _connection_pool

@contextmanager
def get_db():
    """Get database connection from pool"""
    pool = get_connection_pool()
    with pool.acquire() as conn:
        yield conn
```

### Structured Logging Implementation

```python
# starpunk/logging/setup.py
import logging
import json
import sys
from uuid import uuid4

def setup_logging():
    """Configure structured logging for production"""

    # Determine environment
    is_production = config.ENV == 'production'

    # Configure root logger
    root = logging.getLogger()
    root.setLevel(config.LOG_LEVEL)

    # Remove default handler
    root.handlers = []

    # Create appropriate handler
    handler = logging.StreamHandler(sys.stdout)

    if is_production:
        # JSON format for production
        handler.setFormatter(JSONFormatter())
    else:
        # Human-readable for development
        handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        ))

    root.addHandler(handler)

    # Configure specific loggers
    logging.getLogger('starpunk').setLevel(config.LOG_LEVEL)
    logging.getLogger('werkzeug').setLevel(logging.WARNING)

    logger.info(
        "Logging configured",
        extra={
            'level': config.LOG_LEVEL,
            'format': 'json' if is_production else 'human'
        }
    )

class JSONFormatter(logging.Formatter):
    """JSON log formatter for structured logging"""

    def format(self, record):
        log_data = {
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
            'request_id': getattr(record, 'request_id', None),
        }

        # Add extra fields
        if hasattr(record, 'extra'):
            log_data.update(record.extra)

        # Add exception info
        if record.exc_info:
            log_data['exception'] = self.formatException(record.exc_info)

        return json.dumps(log_data)

# Request context middleware
from flask import g

@app.before_request
def add_request_id():
    """Add unique request ID for correlation"""
    g.request_id = str(uuid4())[:8]

    # Bind a request-scoped logger so the ID appears on every record
    g.log = logging.LoggerAdapter(
        logger,
        {'request_id': g.request_id}
    )
```

### Enhanced Health Checks

```python
# starpunk/health.py
from datetime import datetime

class HealthChecker:
    """System health checking"""

    def __init__(self):
        self.start_time = datetime.now()

    def check_basic(self) -> dict:
        """Basic health check for liveness probe"""
        return {
            'status': 'healthy',
            'timestamp': datetime.now().isoformat()
        }

    def check_detailed(self) -> dict:
        """Detailed health check for readiness probe"""
        checks = {
            'database': self._check_database(),
            'search': self._check_search(),
            'filesystem': self._check_filesystem(),
            'memory': self._check_memory()
        }

        # Overall status
        all_healthy = all(c['healthy'] for c in checks.values())

        return {
            'status': 'healthy' if all_healthy else 'degraded',
            'timestamp': datetime.now().isoformat(),
            'uptime': str(datetime.now() - self.start_time),
            'version': __version__,
            'checks': checks
        }

    def _check_database(self) -> dict:
        """Check database connectivity"""
        try:
            with get_db() as conn:
                conn.execute("SELECT 1")

            pool_stats = get_connection_pool().get_stats()
            return {
                'healthy': True,
                'pool_active': pool_stats['active'],
                'pool_size': pool_stats['pool_size']
            }
        except Exception as e:
            return {
                'healthy': False,
                'error': str(e)
            }

    def _check_search(self) -> dict:
        """Check search engine status"""
        try:
            engine_type = 'fts5' if has_fts5() else 'fallback'
            return {
                'healthy': True,
                'engine': engine_type,
                'enabled': config.SEARCH_ENABLED
            }
        except Exception as e:
            return {
                'healthy': False,
                'error': str(e)
            }

    def _check_filesystem(self) -> dict:
        """Check filesystem access"""
        try:
            # Check if we can write to temp
            import tempfile
            with tempfile.NamedTemporaryFile() as f:
                f.write(b'test')

            return {'healthy': True}
        except Exception as e:
            return {
                'healthy': False,
                'error': str(e)
            }

    def _check_memory(self) -> dict:
        """Check memory usage"""
        memory_mb = get_memory_usage()
        threshold = config.MEMORY_THRESHOLD_MB

        return {
            'healthy': memory_mb < threshold,
            'usage_mb': memory_mb,
            'threshold_mb': threshold
        }

# Health check endpoints
@app.route('/health')
def health():
    """Basic health check endpoint"""
    checker = HealthChecker()
    result = checker.check_basic()
    status_code = 200 if result['status'] == 'healthy' else 503
    return jsonify(result), status_code

@app.route('/health/ready')
def health_ready():
    """Readiness probe endpoint"""
    checker = HealthChecker()

    # Detailed check only for authenticated or configured
    if config.HEALTH_CHECK_DETAILED or is_admin():
        result = checker.check_detailed()
    else:
        result = checker.check_basic()

    status_code = 200 if result['status'] == 'healthy' else 503
    return jsonify(result), status_code
```

### Session Timeout Handling

```python
# starpunk/auth/session.py
from datetime import datetime, timedelta
from typing import Optional
from uuid import uuid4

class SessionManager:
    """Manage user sessions with configurable timeout"""

    def __init__(self):
        self.timeout = config.SESSION_TIMEOUT

    def create_session(self, user_id: str) -> str:
        """Create new session with timeout"""
        session_id = str(uuid4())
        expires_at = datetime.now() + timedelta(seconds=self.timeout)

        # Store in database
        with get_db() as conn:
            conn.execute(
                """
                INSERT INTO sessions (id, user_id, expires_at, created_at)
                VALUES (?, ?, ?, ?)
                """,
                (session_id, user_id, expires_at, datetime.now())
            )

        logger.info(
            "Session created",
            extra={
                'user_id': user_id,
                'timeout': self.timeout
            }
        )

        return session_id

    def validate_session(self, session_id: str) -> Optional[str]:
        """Validate session and extend if valid"""
        with get_db() as conn:
            result = conn.execute(
                """
                SELECT user_id, expires_at
                FROM sessions
                WHERE id = ? AND expires_at > ?
                """,
                (session_id, datetime.now())
            ).fetchone()

            if result:
                # Extend session (sliding expiration)
                new_expires = datetime.now() + timedelta(
                    seconds=self.timeout
                )
                conn.execute(
                    """
                    UPDATE sessions
                    SET expires_at = ?, last_accessed = ?
                    WHERE id = ?
                    """,
                    (new_expires, datetime.now(), session_id)
                )

                return result['user_id']

        return None

    def cleanup_expired(self):
        """Remove expired sessions"""
        with get_db() as conn:
            deleted = conn.execute(
                """
                DELETE FROM sessions
                WHERE expires_at < ?
                """,
                (datetime.now(),)
            ).rowcount

        if deleted > 0:
            logger.info(
                "Cleaned up expired sessions",
                extra={'count': deleted}
            )
```

## Testing Strategy

### Unit Tests
1. FTS5 detection and fallback
2. Error message formatting
3. Connection pool operations
4. Health check components
5. Session timeout logic

### Integration Tests
1. Search with and without FTS5
2. Error handling end-to-end
3. Connection pool under load
4. Health endpoints
5. Session expiration

### Load Tests
```python
from threading import Thread

def test_connection_pool_under_load():
    """Test connection pool with concurrent requests"""
    pool = ConnectionPool(":memory:", pool_size=5)

    def worker():
        for _ in range(100):
            with pool.acquire() as conn:
                conn.execute("SELECT 1")

    threads = [Thread(target=worker) for _ in range(20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    stats = pool.get_stats()
    assert stats['acquired'] == 2000
    assert stats['released'] == 2000
```

## Migration Considerations

### Database Schema Updates
```sql
-- Add sessions table if not exists
CREATE TABLE IF NOT EXISTS sessions (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    created_at TIMESTAMP NOT NULL,
    expires_at TIMESTAMP NOT NULL,
    last_accessed TIMESTAMP
);

-- SQLite does not support inline INDEX clauses in CREATE TABLE
CREATE INDEX IF NOT EXISTS idx_sessions_expires ON sessions (expires_at);
```

### Configuration Migration
1. Add new environment variables with defaults
2. Document in deployment guide
3. Update example .env file

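Step 3 might add a block like the following to the example `.env`. The names are derived from the configuration values referenced in this spec (`DB_CONNECTION_POOL_SIZE`, `SESSION_TIMEOUT`, etc.) using the `STARPUNK_` prefix convention; the exact names and the defaults shown are illustrative, not authoritative:

```shell
# Production readiness settings (illustrative defaults; all optional)
STARPUNK_DB_CONNECTION_POOL_SIZE=5
STARPUNK_DB_CONNECTION_TIMEOUT=5.0
STARPUNK_DB_BUSY_TIMEOUT=5000
STARPUNK_LOG_LEVEL=INFO
STARPUNK_SESSION_TIMEOUT=86400
STARPUNK_HEALTH_CHECK_DETAILED=false
```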
## Performance Impact

### Expected Improvements
- Connection pooling: 20-30% reduction in query latency
- Structured logging: <1ms per log statement
- Health checks: <10ms response time
- Session management: minimal overhead

### Resource Usage
- Connection pool: ~5MB per connection
- Logging buffer: <1MB
- Session storage: ~1KB per active session

## Security Considerations

1. **Connection Pool**: Prevent connection exhaustion attacks
2. **Error Messages**: Never expose sensitive information
3. **Health Checks**: Require auth for detailed info
4. **Session Timeout**: Configurable for security/UX balance
5. **Logging**: Sanitize all user input

## Acceptance Criteria

1. ✅ FTS5 unavailability handled gracefully
2. ✅ Clear error messages with troubleshooting
3. ✅ Connection pooling implemented and optimized
4. ✅ Structured logging with levels
5. ✅ Enhanced health check endpoints
6. ✅ Session timeout handling
7. ✅ All features configurable
8. ✅ Zero breaking changes
9. ✅ Performance improvements measured
10. ✅ Production deployment guide updated

# Search Configuration System Specification

## Overview
The search configuration system for v1.1.1 provides operators with control over search functionality, including the ability to disable it entirely for sites that don't need it, configure title extraction parameters, and enhance result presentation.

## Requirements

### Functional Requirements

1. **Search Toggle**
   - Ability to completely disable search functionality
   - When disabled, search UI elements should be hidden
   - Search endpoints should return appropriate messages
   - Database FTS5 tables can be skipped if search disabled from start

2. **Title Length Configuration**
   - Configure maximum title extraction length (currently hardcoded at 100)
   - Apply to both new and existing notes during search
   - Ensure truncation doesn't break words mid-character
   - Add ellipsis (...) for truncated titles

3. **Search Result Enhancement**
   - Highlight search terms in results
   - Show relevance score for each result
   - Configurable highlight CSS class
   - Preserve HTML safety (no XSS via highlights)

4. **Graceful FTS5 Degradation**
   - Detect FTS5 availability at startup
   - Fall back to LIKE queries if unavailable
   - Show appropriate warnings to operators
   - Document SQLite compilation requirements

### Non-Functional Requirements

1. **Performance**
   - Configuration checks must not impact request latency (<1ms)
   - Search highlighting must not slow results >10%
   - Graceful degradation should work within 2x time of FTS5

2. **Compatibility**
   - All existing deployments continue working without configuration
   - Default values match current behavior exactly
   - No database migrations required

3. **Security**
   - Search term highlighting must be XSS-safe
   - Configuration values must be validated
   - No sensitive data in configuration

## Design

### Configuration Schema

```python
# Environment variables with defaults
STARPUNK_SEARCH_ENABLED = True
STARPUNK_SEARCH_TITLE_LENGTH = 100
STARPUNK_SEARCH_HIGHLIGHT_CLASS = "highlight"
STARPUNK_SEARCH_MIN_SCORE = 0.0
STARPUNK_SEARCH_HIGHLIGHT_ENABLED = True
STARPUNK_SEARCH_SCORE_DISPLAY = True
```

### Component Architecture

```
┌─────────────────────────────────────┐
│         Configuration Layer         │
├─────────────────────────────────────┤
│          Search Controller          │
│   ┌─────────────┬─────────────┐     │
│   │ FTS5 Engine │ LIKE Engine │     │
│   └─────────────┴─────────────┘     │
├─────────────────────────────────────┤
│          Result Processor           │
│   • Highlighting                    │
│   • Scoring                         │
│   • Title Extraction                │
└─────────────────────────────────────┘
```

### Search Disabling Flow

```python
# In search module
def search_notes(query: str) -> SearchResults:
    if not config.SEARCH_ENABLED:
        return SearchResults(
            results=[],
            message="Search is disabled on this instance",
            enabled=False
        )

    # Normal search flow
    return perform_search(query)

# In templates
{% if config.SEARCH_ENABLED %}
<form class="search-form">
    <!-- search UI -->
</form>
{% endif %}
```

### Title Extraction Logic

```python
def extract_title(content: str, max_length: int = None) -> str:
    """Extract title from note content"""
    max_length = max_length or config.SEARCH_TITLE_LENGTH

    # Try to extract first line
    first_line = content.split('\n')[0].strip()

    # Remove markdown formatting
    title = strip_markdown(first_line)

    # Truncate if needed
    if len(title) > max_length:
        # Find last word boundary before limit
        truncated = title[:max_length].rsplit(' ', 1)[0]
        return truncated + '...'

    return title
```
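A self-contained run of the truncation rule above, with `strip_markdown` reduced to a trivial stand-in (the real helper is defined elsewhere) and the config lookup replaced by a plain default:

```python
import re

def strip_markdown(line: str) -> str:
    """Crude stand-in: drop heading markers, emphasis, and code ticks."""
    return re.sub(r'[#*_`]+', '', line).strip()

def extract_title(content: str, max_length: int = 100) -> str:
    first_line = content.split('\n')[0].strip()
    title = strip_markdown(first_line)
    if len(title) > max_length:
        # Cut at the last space before the limit, then append ellipsis
        truncated = title[:max_length].rsplit(' ', 1)[0]
        return truncated + '...'
    return title
```

For example, `extract_title("# My first note\nbody...", max_length=10)` yields `"My first..."`: the slice `"My first n"` is trimmed back to the word boundary before the ellipsis is appended.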

### Search Highlighting Implementation

```python
import html
import re

from markupsafe import Markup

def highlight_terms(text: str, terms: List[str]) -> Markup:
    """Highlight search terms in text safely"""
    if not config.SEARCH_HIGHLIGHT_ENABLED:
        return Markup(html.escape(text))

    # Escape HTML first
    safe_text = html.escape(text)

    # Highlight each term (case-insensitive)
    for term in terms:
        pattern = re.compile(
            re.escape(html.escape(term)),
            re.IGNORECASE
        )
        replacement = rf'<span class="{config.SEARCH_HIGHLIGHT_CLASS}">\g<0></span>'
        safe_text = pattern.sub(replacement, safe_text)

    return Markup(safe_text)
```
|
||||
|
||||
### FTS5 Detection and Fallback

```python
import sqlite3


def check_fts5_support() -> bool:
    """Check if SQLite has FTS5 support"""
    try:
        conn = get_db_connection()
        conn.execute("CREATE VIRTUAL TABLE test_fts USING fts5(content)")
        conn.execute("DROP TABLE test_fts")
        return True
    except sqlite3.OperationalError:
        return False


class SearchEngine:
    def __init__(self):
        self.has_fts5 = check_fts5_support()
        if not self.has_fts5:
            logger.warning(
                "FTS5 not available, using fallback search. "
                "For better performance, compile SQLite with FTS5 support."
            )

    def search(self, query: str) -> List[Result]:
        if self.has_fts5:
            return self._search_fts5(query)
        else:
            return self._search_fallback(query)

    def _search_fallback(self, query: str) -> List[Result]:
        """LIKE-based search fallback"""
        # Note: No relevance scoring available
        sql = """
            SELECT id, content, created_at
            FROM notes
            WHERE content LIKE ?
            ORDER BY created_at DESC
            LIMIT 50
        """
        return db.execute(sql, [f'%{query}%'])
```

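The probe can be checked in isolation against an in-memory database. This variant takes the connection explicitly rather than calling the spec's `get_db_connection()`:

```python
import sqlite3


def fts5_available(conn: sqlite3.Connection) -> bool:
    """Probe FTS5 by creating and dropping a throwaway virtual table."""
    try:
        conn.execute("CREATE VIRTUAL TABLE _fts_probe USING fts5(content)")
        conn.execute("DROP TABLE _fts_probe")
        return True
    except sqlite3.OperationalError:
        return False


conn = sqlite3.connect(":memory:")
print("FTS5 available:", fts5_available(conn))
# Calling it twice is safe: the probe table is dropped (or never created) each time
assert fts5_available(conn) == fts5_available(conn)
```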
### Relevance Score Display

```python
@dataclass
class SearchResult:
    note_id: int
    content: str
    title: str
    score: float     # Relevance score from FTS5
    highlights: str  # Snippet with highlights


def format_score(score: float) -> str:
    """Format relevance score for display"""
    if not config.SEARCH_SCORE_DISPLAY:
        return ""

    # Normalize to 0-100 scale
    normalized = min(100, max(0, abs(score) * 10))
    return f"{normalized:.0f}% match"
```

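The normalization is easy to sanity-check in isolation. FTS5's default `bm25()` rank is negative (more negative means more relevant), which is why the formula takes `abs(score)`; here the display flag is a parameter instead of `config.SEARCH_SCORE_DISPLAY`:

```python
def format_score_demo(score: float, display: bool = True) -> str:
    """Clamp an FTS5 rank onto a 0-100 'match' percentage."""
    if not display:
        return ""
    normalized = min(100, max(0, abs(score) * 10))
    return f"{normalized:.0f}% match"


print(format_score_demo(-4.2))                 # 42% match
print(format_score_demo(-25.0))                # 100% match (clamped)
print(format_score_demo(-1.0, display=False))  # empty string
```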
## Testing Strategy

### Unit Tests
1. Configuration loading with various values
2. Title extraction with edge cases
3. Search term highlighting with XSS attempts
4. FTS5 detection logic
5. Fallback search functionality

### Integration Tests
1. Search with configuration disabled
2. End-to-end search with highlighting
3. Performance comparison FTS5 vs fallback
4. UI elements hidden when search disabled

### Configuration Test Matrix
| SEARCH_ENABLED | FTS5 Available | Expected Behavior |
|----------------|----------------|-------------------|
| true | true | Full search with FTS5 |
| true | false | Fallback LIKE search |
| false | true | Search disabled |
| false | false | Search disabled |

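The matrix translates directly into a table-driven test; `resolve_mode` below is a stub standing in for the real engine selection:

```python
def resolve_mode(search_enabled: bool, fts5_available: bool) -> str:
    """Stub for engine selection: mirrors the configuration matrix."""
    if not search_enabled:
        return "disabled"
    return "fts5" if fts5_available else "fallback"


# (SEARCH_ENABLED, FTS5 available, expected behavior)
MATRIX = [
    (True,  True,  "fts5"),
    (True,  False, "fallback"),
    (False, True,  "disabled"),
    (False, False, "disabled"),
]

for enabled, fts5, expected in MATRIX:
    assert resolve_mode(enabled, fts5) == expected
print("all four matrix rows pass")
```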
## User Interface Changes

### Search Results Template
```html
<div class="search-results">
  {% for result in results %}
  <article class="search-result">
    <h3>
      <a href="/notes/{{ result.note_id }}">
        {{ result.title }}
      </a>
      {% if config.SEARCH_SCORE_DISPLAY and result.score %}
      <span class="relevance">{{ format_score(result.score) }}</span>
      {% endif %}
    </h3>
    <div class="excerpt">
      {{ result.highlights|safe }}
    </div>
    <time>{{ result.created_at }}</time>
  </article>
  {% endfor %}
</div>
```

### CSS for Highlighting
```css
.highlight {
  background-color: yellow;
  font-weight: bold;
  padding: 0 2px;
}

.relevance {
  font-size: 0.8em;
  color: #666;
  margin-left: 10px;
}
```

## Migration Considerations

### For Existing Deployments
1. No action required - defaults preserve current behavior
2. Optional: Set `STARPUNK_SEARCH_ENABLED=false` to disable
3. Optional: Adjust `STARPUNK_SEARCH_TITLE_LENGTH` as needed

### For New Deployments
1. Document FTS5 requirement in installation guide
2. Provide SQLite compilation instructions
3. Note fallback behavior if FTS5 unavailable

## Performance Impact

### Measured Metrics
- Configuration check: <0.1ms per request
- Highlighting overhead: ~5-10% for typical results
- Fallback search: 2-10x slower than FTS5 (depends on data size)
- Score calculation: <1ms per result

### Optimization Opportunities
1. Cache configuration values at startup
2. Pre-compile highlighting regex patterns
3. Limit fallback search to recent notes
4. Use connection pooling for FTS5 checks

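Opportunity 2 might look like a small memoized pattern cache (a sketch; the cache size is an arbitrary assumption):

```python
import re
from functools import lru_cache


@lru_cache(maxsize=256)
def term_pattern(term: str) -> re.Pattern:
    """Compile the case-insensitive pattern for one term once;
    repeated searches for popular terms skip recompilation."""
    return re.compile(re.escape(term), re.IGNORECASE)


p1 = term_pattern("starpunk")
p2 = term_pattern("starpunk")
assert p1 is p2                     # cache hit: same compiled object
assert p1.search("StarPunk notes")  # case-insensitive match
```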
## Security Considerations

1. **XSS Prevention**: All highlighting must escape HTML
2. **ReDoS Prevention**: Validate search terms before regex
3. **Resource Limits**: Cap search result count
4. **Input Validation**: Validate configuration values

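One concrete input-validation point for the LIKE fallback: `%` and `_` in user queries act as wildcards unless escaped. A sketch, assuming the fallback SQL gains an `ESCAPE '\'` clause:

```python
def escape_like(term: str) -> str:
    """Escape LIKE metacharacters so user input matches literally.
    Pair with: WHERE content LIKE ? ESCAPE '\\'"""
    return (term.replace("\\", "\\\\")
                .replace("%", r"\%")
                .replace("_", r"\_"))


print(escape_like("100%_done"))  # 100\%\_done
```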
## Documentation Requirements

### Administrator Guide
- How to disable search
- Configuring title length
- Understanding relevance scores
- FTS5 installation instructions

### API Documentation
- Search endpoint behavior when disabled
- Response format changes
- Score interpretation

### Deployment Guide
- Environment variable reference
- SQLite compilation with FTS5
- Performance tuning tips

## Acceptance Criteria

1. ✅ Search can be completely disabled via configuration
2. ✅ Title length is configurable
3. ✅ Search terms are highlighted in results
4. ✅ Relevance scores are displayed (when available)
5. ✅ System works without FTS5 (with warning)
6. ✅ No breaking changes to existing deployments
7. ✅ All changes documented
8. ✅ Tests cover all configuration combinations
9. ✅ Performance impact <10% for typical usage
10. ✅ Security review passed (no XSS, no ReDoS)