Developer Q&A for StarPunk v1.1.2 "Syndicate" - Final Answers
Architect: StarPunk Architect
Developer: StarPunk Fullstack Developer
Date: 2025-11-25
Status: Final answers provided
Document Overview
This document provides definitive answers to all 30 developer questions about v1.1.2 implementation. Each answer follows the principle of simplicity over features and provides clear implementation direction.
Critical Questions (Must be answered before implementation)
CQ1: Database Instrumentation Integration
Answer: Wrap connections at the pool level by modifying get_connection() to return MonitoredConnection instances.
Rationale: This approach requires minimal changes to existing code. The pool already manages connection lifecycle, so wrapping at this level ensures all database operations are monitored without touching query code throughout the application.
Implementation Guidance:
# In starpunk/database/pool.py
def get_connection(self):
    conn = self._get_raw_connection()  # existing logic
    if self.metrics_collector:  # passed during pool init
        return MonitoredConnection(conn, self.metrics_collector)
    return conn
Pass the metrics collector during pool initialization in app.py:
db_pool = ConnectionPool(
    database_path=config.DATABASE_PATH,
    metrics_collector=app.metrics_collector  # new parameter
)
CQ2: Metrics Collector Lifecycle and Initialization
Answer: Initialize during Flask app factory and store as app.metrics_collector.
Rationale: Flask's application factory pattern is the standard place for component initialization. Storing on the app object provides clean access throughout the application via current_app.
Implementation Guidance:
# In app.py create_app() function
def create_app(config_object=None):
    app = Flask(__name__)

    # Initialize metrics collector early
    from starpunk.monitoring import MetricsCollector
    app.metrics_collector = MetricsCollector(
        slow_query_threshold=config.METRICS_SLOW_QUERY_THRESHOLD
    )

    # Pass to components that need it
    app.db_pool = ConnectionPool(
        database_path=config.DATABASE_PATH,
        metrics_collector=app.metrics_collector
    )

    # Register middleware
    from starpunk.monitoring.middleware import HTTPMetricsMiddleware
    app.wsgi_app = HTTPMetricsMiddleware(app.wsgi_app, app.metrics_collector)

    return app
Access in route handlers: current_app.metrics_collector
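For example, a route handler could reach the collector through current_app (a minimal sketch; record_metric and Note.get_by_slug are hypothetical names, since Phase 1 only promises "simple business metric helpers"):

from flask import current_app, render_template

@bp.route('/notes/<slug>')
def view_note(slug):
    note = Note.get_by_slug(slug)  # hypothetical accessor
    # record_metric is a hypothetical helper; use whatever the collector exposes
    current_app.metrics_collector.record_metric('note_view', 1)
    return render_template('note.html', note=note)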
CQ3: Content Negotiation vs. Explicit Format Endpoints
Answer: Implement BOTH for maximum compatibility. Primary endpoint is /feed with content negotiation. Keep /feed.xml for backward compatibility and add /feed.atom, /feed.json for explicit access.
Rationale: Content negotiation is the standards-compliant approach, but explicit endpoints provide better user experience for manual access and debugging. This dual approach is common in well-designed APIs.
Implementation Guidance:
# In routes/public.py
@bp.route('/feed')
def feed_content_negotiated():
    """Primary endpoint with content negotiation"""
    negotiator = ContentNegotiator(request.headers.get('Accept'))
    format = negotiator.get_best_format()
    return generate_feed(format)

@bp.route('/feed.xml')
@bp.route('/feed.rss')  # alias
def feed_rss():
    """Explicit RSS endpoint (backward compatible)"""
    return generate_feed('rss')

@bp.route('/feed.atom')
def feed_atom():
    """Explicit ATOM endpoint"""
    return generate_feed('atom')

@bp.route('/feed.json')
def feed_json():
    """Explicit JSON Feed endpoint"""
    return generate_feed('json')
CQ4: Cache Checksum Calculation Strategy
Answer: Base checksum on the notes that WOULD appear in the feed (first N notes matching the limit), not all notes.
Rationale: This prevents unnecessary cache invalidation. If the feed shows 50 items and note #51 is published, the feed content doesn't change, so the cache should remain valid. This dramatically improves cache hit rates.
Implementation Guidance:
def calculate_cache_checksum(format, limit=50):
    # Get only the notes that would appear in the feed
    notes = Note.get_published(limit=limit, order='desc')
    if not notes:
        return "empty"

    # Checksum based on visible notes only
    latest_timestamp = notes[0].published.isoformat()
    note_ids = ",".join(str(n.id) for n in notes)
    data = f"{format}:{latest_timestamp}:{note_ids}:{config.FEED_TITLE}"
    return hashlib.md5(data.encode()).hexdigest()
CQ5: Memory Monitor Thread Lifecycle
Answer: Start thread after Flask app initialized with daemon=True. Store reference in app.memory_monitor. Skip thread in test mode.
Rationale: Daemon threads automatically terminate when the main process exits, providing clean shutdown. Skipping in test mode prevents thread pollution during testing.
Implementation Guidance:
# In app.py create_app()
def create_app(config_object=None):
    app = Flask(__name__)
    # ... other initialization ...

    # Start memory monitor (skip in testing)
    if not app.config.get('TESTING', False):
        from starpunk.monitoring.memory import MemoryMonitor
        app.memory_monitor = MemoryMonitor(
            metrics_collector=app.metrics_collector,
            interval=30
        )
        app.memory_monitor.start()
    # Cleanup handler (optional - the daemon thread terminates with the process;
    # teardown_appcontext fires per request, so register a process-exit hook instead)
    import atexit

    @atexit.register
    def cleanup():
        if hasattr(app, 'memory_monitor') and app.memory_monitor.is_alive():
            app.memory_monitor.stop()

    return app
CQ6: Feed Generator Streaming Implementation
Answer: Implement BOTH methods like the existing RSS implementation: generate() returns complete string for caching, generate_streaming() yields chunks for memory efficiency.
Rationale: You cannot cache a generator, only concrete strings. Having both methods provides flexibility: use generate() when caching is needed, use generate_streaming() for large feeds or when caching is disabled.
Implementation Guidance:
class AtomFeedGenerator:
    def generate(self) -> str:
        """Generate complete feed as string (for caching)"""
        return ''.join(self.generate_streaming())

    def generate_streaming(self) -> Iterator[str]:
        """Generate feed in chunks (memory efficient)"""
        yield '<?xml version="1.0" encoding="utf-8"?>\n'
        yield '<feed xmlns="http://www.w3.org/2005/Atom">\n'
        # Yield metadata
        yield f'  <title>{escape(self.title)}</title>\n'
        # Yield entries one at a time
        for note in self.notes:
            yield self._generate_entry(note)
        yield '</feed>\n'
Use pattern:
- With cache: cached_content = generator.generate(); cache.set(key, cached_content)
- Without cache: return Response(generator.generate_streaming(), mimetype='application/atom+xml')
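Putting CQ3, CQ4, and this answer together, the generate_feed() helper referenced by the route handlers could pick between the two methods depending on whether caching is enabled. This is only a sketch: FEED_CACHE_ENABLED, get_generator(), and get_mimetype() are assumed names, not part of the spec.

from flask import Response

def generate_feed(format, limit=50):
    notes = Note.get_published(limit=limit)
    generator = get_generator(format, notes)  # assumed factory for RSS/ATOM/JSON generators

    if config.FEED_CACHE_ENABLED:  # assumed flag
        cache_key = f"feed:{format}:{calculate_cache_checksum(format, limit)}"
        content = cache.get(cache_key)
        if content is None:
            content = generator.generate()  # concrete string, safe to cache
            cache.set(cache_key, content, ttl=config.FEED_CACHE_TTL)
        return Response(content, mimetype=get_mimetype(format))

    # Caching disabled: stream entries as they are produced
    return Response(generator.generate_streaming(), mimetype=get_mimetype(format))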
CQ7: Content Negotiation Default Format
Answer: Default to RSS if enabled, otherwise the first enabled format alphabetically (atom, json, rss). Validate at startup that at least one format is enabled. Return 406 Not Acceptable if no formats match and all are disabled.
Rationale: RSS is the most universally supported format, making it the sensible default. Alphabetical fallback provides predictable behavior. Startup validation prevents misconfiguration.
Implementation Guidance:
# In content_negotiator.py
def get_best_format(self, available_formats):
    if not available_formats:
        raise ValueError("No formats enabled")

    # Try negotiation first
    best = self._negotiate(available_formats)
    if best:
        return best

    # Default strategy
    if 'rss' in available_formats:
        return 'rss'

    # Alphabetical fallback
    return sorted(available_formats)[0]

# In config.py validate_config()
def validate_config():
    enabled_formats = []
    if config.FEED_RSS_ENABLED:
        enabled_formats.append('rss')
    if config.FEED_ATOM_ENABLED:
        enabled_formats.append('atom')
    if config.FEED_JSON_ENABLED:
        enabled_formats.append('json')
    if not enabled_formats:
        raise ValueError("At least one feed format must be enabled")
CQ8: OPML Generator Endpoint Location
Answer: Make /feeds.opml a public endpoint with no authentication required. Place in routes/public.py.
Rationale: OPML only exposes feed URLs that are already public. There's no sensitive information, and public access allows feed readers to discover all available formats easily.
Implementation Guidance:
# In routes/public.py
@bp.route('/feeds.opml')
def feeds_opml():
    """Export OPML with all available feed formats"""
    generator = OPMLGenerator(
        title=config.FEED_TITLE,
        owner_name=config.FEED_AUTHOR_NAME,
        owner_email=config.FEED_AUTHOR_EMAIL
    )

    # Add enabled formats
    base_url = request.url_root.rstrip('/')
    if config.FEED_RSS_ENABLED:
        generator.add_feed(f"{base_url}/feed.rss", "RSS Feed")
    if config.FEED_ATOM_ENABLED:
        generator.add_feed(f"{base_url}/feed.atom", "Atom Feed")
    if config.FEED_JSON_ENABLED:
        generator.add_feed(f"{base_url}/feed.json", "JSON Feed")

    return Response(
        generator.generate(),
        mimetype='application/xml',
        headers={'Content-Disposition': 'attachment; filename="feeds.opml"'}
    )
CQ9: Feed Entry Ordering
Question: What order should entries appear in all feed formats?
Answer: Newest first (reverse chronological order) for RSS, ATOM, and JSON Feed. This is the industry standard and user expectation.
Rationale:
- RSS 2.0: Industry standard is newest first
- ATOM 1.0: RFC 4287 recommends newest first
- JSON Feed 1.1: Specification convention is newest first
- User Expectation: Feed readers expect newest content at the top
Implementation Guidance:
# Database already returns notes in DESC order (newest first)
notes = Note.list_notes(limit=50)  # Returns newest first

# Feed generators should maintain this order
# DO NOT use reversed() on the notes list!
for note in notes[:limit]:  # Correct - maintains DESC order
    yield generate_entry(note)

# WRONG - this would flip to oldest first
# for note in reversed(notes[:limit]):  # DO NOT DO THIS
Testing Requirements: All feed formats MUST be tested for correct ordering:
def test_feed_order_newest_first():
    """Test feed shows newest entries first"""
    old_note = create_note(created_at=yesterday)
    new_note = create_note(created_at=today)
    feed = generate_feed([new_note, old_note])
    items = parse_feed_items(feed)
    assert items[0].date > items[1].date  # Newest first
Critical Note: There is currently a bug in RSS feed generation (lines 100 and 198 of feed.py) where reversed() is incorrectly applied. This MUST be fixed in Phase 2 before implementing ATOM and JSON feeds.
Important Questions (Should be answered for Phase 1)
IQ1: Database Query Pattern Detection Accuracy
Answer: Keep it simple with basic regex patterns. Return "unknown" for complex queries. Document the limitation clearly.
Rationale: A SQL parser adds unnecessary complexity for minimal gain. The 90% case (simple SELECT/INSERT/UPDATE/DELETE) provides sufficient insight for monitoring.
Implementation Guidance:
def _extract_table_name(self, query):
    """Extract table name from query (best effort)"""
    query_lower = query.lower().strip()

    # Simple patterns that cover 90% of cases
    patterns = [
        (r'from\s+(\w+)', 'select'),
        (r'update\s+(\w+)', 'update'),
        (r'insert\s+into\s+(\w+)', 'insert'),
        (r'delete\s+from\s+(\w+)', 'delete')
    ]
    for pattern, operation in patterns:
        match = re.search(pattern, query_lower)
        if match:
            return match.group(1)

    # Complex queries (JOINs, subqueries, CTEs)
    return "unknown"
Add comment: # Note: Complex queries return "unknown". This covers 90% of queries accurately.
IQ2: HTTP Metrics Request ID Generation
Answer: Generate UUID for each request, store in g.request_id, add X-Request-ID response header in all modes (not just debug).
Rationale: Request IDs are invaluable for debugging production issues. The minor overhead is worth the debugging capability. This is standard practice in production systems.
Implementation Guidance:
# In HTTPMetricsMiddleware
def process_request(self, environ):
    request_id = str(uuid.uuid4())
    environ['starpunk.request_id'] = request_id
    # g is not available here (no request context yet); copy the ID onto g
    # from a before_request hook that reads environ['starpunk.request_id']
def process_response(self, status, headers, exc_info=None):
    # Add to response headers
    headers.append(('X-Request-ID', g.request_id))
    # Include in logs
    if exc_info:
        logger.error(f"Request {g.request_id} failed", exc_info=exc_info)
IQ3: Slow Query Threshold Configuration
Answer: Single configurable threshold (1 second default) for v1.1.2. Query-type-specific thresholds are overengineering at this stage.
Rationale: Start simple. If monitoring reveals that different query types need different thresholds, we can add that complexity in v1.2 based on real data.
Implementation Guidance:
# In config.py
METRICS_SLOW_QUERY_THRESHOLD = float(os.environ.get('STARPUNK_METRICS_SLOW_QUERY_THRESHOLD', '1.0'))

# In MonitoredConnection
def __init__(self, connection, metrics_collector):
    self.connection = connection
    self.metrics_collector = metrics_collector
    self.slow_threshold = current_app.config['METRICS_SLOW_QUERY_THRESHOLD']
IQ4: Feed Cache Invalidation Timing
Answer: Rely purely on checksum-based keys and TTL expiration. No manual invalidation needed.
Rationale: The checksum changes when content changes, naturally creating new cache entries. TTL handles expiration. Manual invalidation adds complexity with no benefit since checksums already handle content changes.
Implementation Guidance:
# Simple cache usage - no invalidation hooks needed
def get_feed(format, limit=50):
    checksum = calculate_cache_checksum(format, limit)
    cache_key = f"feed:{format}:{checksum}"

    # Try cache
    cached = cache.get(cache_key)
    if cached:
        return cached

    # Generate and cache with TTL
    feed = generator.generate()
    cache.set(cache_key, feed, ttl=300)  # 5 minutes
    return feed
No hooks in note create/update/delete operations. Much simpler.
IQ5: Statistics Dashboard Chart Library
Answer: Use Chart.js as specified. It's lightweight, well-documented, and requires no build process.
Rationale: Chart.js is the simplest charting solution that meets our needs. No need to check existing admin UI - if we need charts elsewhere later, we'll already have Chart.js available.
Implementation Guidance:
<!-- In syndication dashboard template -->
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.0/dist/chart.umd.min.js"></script>
<script>
  // Simple line chart for request rates
  new Chart(ctx, {
    type: 'line',
    data: {
      labels: timestamps,
      datasets: [{
        label: 'Requests/min',
        data: rates,
        borderColor: 'rgb(75, 192, 192)'
      }]
    }
  });
</script>
IQ6: ATOM Content Type Selection Logic
Answer: For v1.1.2, only implement type="text" and type="html". Skip type="xhtml" entirely.
Rationale: XHTML content type adds complexity with no clear benefit. Text and HTML cover all real-world use cases. XHTML can be added later if needed.
Implementation Guidance:
def _generate_content_element(self, note):
    if note.html:
        # HTML content (escaped)
        return f'<content type="html">{escape(note.html)}</content>'
    else:
        # Plain text (escaped)
        return f'<content type="text">{escape(note.content)}</content>'
Document: # Note: type="xhtml" not implemented. Use type="html" with escaping instead.
IQ7: JSON Feed Custom Extensions Scope
Answer: Keep minimal for v1.1.2 - only permalink_path and word_count as shown in spec.
Rationale: Start with the minimum viable extension. We can always add fields based on user feedback. Adding fields later is backward compatible; removing them is not.
Implementation Guidance:
# In JSON Feed generator
"_starpunk": {
    "permalink_path": f"/notes/{note.slug}",
    "word_count": len(note.content.split())
}
Document in README: "The _starpunk extension currently includes permalink_path and word_count. Additional fields may be added in future versions based on user needs."
IQ8: Memory Monitor Baseline Timing
Answer: Wait 5 seconds as specified. Don't wait for first request - keep it simple.
Rationale: 5 seconds is sufficient for Flask initialization. Waiting for first request adds complexity and the baseline will quickly adjust after a few requests anyway.
Implementation Guidance:
def run(self):
    # Wait for app initialization
    time.sleep(5)

    # Set baseline
    self.baseline_memory = psutil.Process().memory_info().rss

    # Start monitoring loop
    while not self.stop_flag:
        self._collect_metrics()
        time.sleep(self.interval)
IQ9: Feed Validation Integration
Answer: Implement validators for testing only. Add optional admin endpoint /admin/validate-feeds for manual validation. Skip validation in production feed generation.
Rationale: Validation adds overhead with no benefit in production. Tests ensure correctness. Admin endpoint provides debugging capability when needed.
Implementation Guidance:
# In tests only
def test_atom_feed_valid():
    generator = AtomFeedGenerator(notes)
    feed = generator.generate()
    validator = AtomFeedValidator()
    assert validator.validate(feed) == True

# Optional admin endpoint
@admin_bp.route('/validate-feeds')
@require_admin
def validate_feeds():
    results = {}
    for format in ['rss', 'atom', 'json']:
        if is_format_enabled(format):
            feed = generate_feed(format)
            validator = get_validator(format)
            results[format] = validator.validate(feed)
    return jsonify(results)
IQ10: Syndication Statistics Retention
Answer: Use time-bucketed in-memory structure with hourly buckets. Implement simple cleanup that removes buckets older than 7 days.
Rationale: Time bucketing enables efficient pruning without scanning all data. Hourly granularity provides good balance between memory usage and statistics precision.
Implementation Guidance:
class SyndicationStats:
    def __init__(self):
        self.hourly_buckets = {}  # {hour_timestamp: stats}
        self.max_age_hours = 7 * 24  # 7 days

    def record_request(self, format, user_agent):
        hour = int(time.time() // 3600) * 3600
        if hour not in self.hourly_buckets:
            self.hourly_buckets[hour] = self._new_bucket()
            self._cleanup_old_buckets()
        self.hourly_buckets[hour]['requests'][format] += 1

    def _cleanup_old_buckets(self):
        cutoff = time.time() - (self.max_age_hours * 3600)
        self.hourly_buckets = {
            ts: stats for ts, stats in self.hourly_buckets.items()
            if ts > cutoff
        }
Nice-to-Have Clarifications (Can defer if needed)
NH1: Performance Benchmark Automation
Answer: Create benchmark suite with @pytest.mark.benchmark, run manually or optionally in CI. Don't block merges.
Rationale: Benchmarks are valuable but shouldn't block development. Optional execution prevents CI slowdown.
Implementation Guidance:
# Run benchmarks: pytest -m benchmark
@pytest.mark.benchmark
def test_atom_generation_performance():
    notes = Note.get_published(limit=100)
    generator = AtomFeedGenerator(notes)

    start = time.time()
    feed = generator.generate()
    duration = time.time() - start

    assert duration < 0.5  # Should complete in 500ms
NH2: Feed Format Feature Parity
Answer: Leverage format strengths. Don't limit to lowest common denominator.
Rationale: Each format exists because it offers different capabilities. Users choose formats based on their needs.
Implementation Guidance:
- RSS: Basic fields only (title, description, link, pubDate)
- ATOM: Include author objects, updated dates, categories
- JSON: Include custom extensions, attachments, author details
Document differences in user documentation.
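For illustration, "leverage format strengths" could look like the following per-format field mapping (a sketch only; base_url, format_rfc822(), and the note.updated field are assumptions):

# Each generator maps the same note to format-appropriate fields
# rather than a shared lowest-common-denominator dict.
def rss_item_fields(note):
    return {
        "description": note.html or note.content,
        "link": f"{base_url}/notes/{note.slug}",
        "pubDate": format_rfc822(note.published),  # assumed helper
    }

def atom_entry_fields(note):
    return {
        "content": note.html or note.content,
        "updated": (note.updated or note.published).isoformat(),  # assumed field
        "author": {"name": config.FEED_AUTHOR_NAME},
    }

def json_item_fields(note):
    return {
        "id": f"{base_url}/notes/{note.slug}",
        "content_html": note.html,
        "authors": [{"name": config.FEED_AUTHOR_NAME}],
        "_starpunk": {
            "permalink_path": f"/notes/{note.slug}",
            "word_count": len(note.content.split()),
        },
    }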
NH3: Content Negotiation Quality Factor Scoring
Answer: Keep the simple algorithm as specified. Log decisions in debug mode for troubleshooting.
Rationale: The simple algorithm handles 99% of real-world cases. Complex edge cases can be addressed if they actually occur.
Implementation Guidance: Use the algorithm exactly as specified in the spec. Add debug logging:
if app.debug:
    app.logger.debug(f"Content negotiation: Accept={accept_header}, Chosen={format}")
NH4: Cache Statistics Persistence
Answer: Keep stats in-memory only for v1.1.2. Document that stats reset on restart.
Rationale: Persistence adds complexity. In-memory stats are sufficient for operational monitoring. Can add persistence in v1.2 if users need historical analysis.
Implementation Guidance: Add to documentation: "Note: Statistics are stored in memory and reset when the application restarts. For persistent metrics, consider using external monitoring tools."
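In practice this can be as small as a pair of counters (a sketch; the class and method names are illustrative):

class CacheStats:
    """In-memory only; counters reset to zero when the process restarts."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record_hit(self):
        self.hits += 1

    def record_miss(self):
        self.misses += 1

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0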
NH5: Feed Reader User Agent Detection Patterns
Answer: Start with regex patterns as specified. Log unknown user agents for future pattern updates.
Rationale: Regex is simple and sufficient. A library adds dependency for marginal benefit.
Implementation Guidance:
def normalize_user_agent(self, ua_string):
    # Try patterns
    for pattern, name in self.patterns:
        if re.search(pattern, ua_string, re.I):
            return name

    # Log unknown for analysis
    if app.debug:
        app.logger.info(f"Unknown user agent: {ua_string}")
    return "unknown"
NH6: OPML Multiple Feed Organization
Answer: Flat list for v1.1.2. No grouping needed for just 3 feeds.
Rationale: YAGNI (You Aren't Gonna Need It). Three feeds don't need categorization.
Implementation Guidance: Generate simple flat outline as shown in spec.
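A flat outline could be produced along these lines (a sketch under assumed names; self.feeds is assumed to hold the (url, title) pairs collected by add_feed() in CQ8, and the exact outline attributes should follow the spec):

from xml.sax.saxutils import escape, quoteattr

def generate(self) -> str:
    # Flat body: one <outline> per feed, no nested categories
    outlines = "\n".join(
        f'    <outline type="rss" text={quoteattr(title)} xmlUrl={quoteattr(url)}/>'
        for url, title in self.feeds
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<opml version="2.0">\n'
        f'  <head><title>{escape(self.title)}</title></head>\n'
        '  <body>\n'
        f'{outlines}\n'
        '  </body>\n'
        '</opml>\n'
    )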
NH7: Streaming Chunk Size Optimization
Answer: Don't enforce byte-level chunking. Let generators yield semantic units (complete entries).
Rationale: Semantic chunking (whole entries) is simpler and more correct than arbitrary byte boundaries that might split XML/JSON incorrectly.
Implementation Guidance:
def generate_streaming(self):
    # Yield complete semantic units
    yield self._generate_header()
    for note in self.notes:
        yield self._generate_entry(note)  # Complete entry
    yield self._generate_footer()
NH8: Error Handling for Feed Generation Failures
Answer: Validate before streaming. If error occurs mid-stream, log and truncate (client gets partial feed).
Rationale: Once streaming starts, we're committed. Pre-validation catches most errors. Mid-stream errors are rare and indicate serious issues (database failure).
Implementation Guidance:
def generate_feed_streaming(format, notes):
    # Validate before starting stream
    if not notes:
        abort(404, "No content available")

    try:
        generator = get_generator(format, notes)
        return Response(
            generator.generate_streaming(),
            mimetype=get_mimetype(format)
        )
    except Exception as e:
        # Can't change status after streaming starts
        app.logger.error(f"Feed generation failed: {e}")
        # Stream will be truncated - client gets partial feed
        raise
NH9: Metrics Dashboard Auto-Refresh
Answer: No auto-refresh for v1.1.2. Manual refresh is sufficient for admin monitoring.
Rationale: Auto-refresh adds JavaScript complexity for minimal benefit in an admin interface.
Implementation Guidance: Static dashboard. Users press F5 to refresh. Simple.
NH10: Configuration Validation for Feed Settings
Answer: Add validation to validate_config() with the checks you proposed.
Rationale: Fail-fast configuration validation prevents runtime surprises and improves developer experience.
Implementation Guidance:
def validate_feed_config():
    # At least one format enabled
    enabled = [
        config.FEED_RSS_ENABLED,
        config.FEED_ATOM_ENABLED,
        config.FEED_JSON_ENABLED
    ]
    if not any(enabled):
        raise ValueError("At least one feed format must be enabled")

    # Positive integers
    if config.FEED_CACHE_SIZE <= 0:
        raise ValueError("FEED_CACHE_SIZE must be positive")
    if config.FEED_CACHE_TTL <= 0:
        raise ValueError("FEED_CACHE_TTL must be positive")

    # Warnings for unusual values
    if config.FEED_CACHE_TTL < 60:
        logger.warning("FEED_CACHE_TTL < 60s may cause excessive regeneration")
    if config.FEED_CACHE_TTL > 3600:
        logger.warning("FEED_CACHE_TTL > 1h may serve stale content")
Summary
Key Decisions Made
- Integration Strategy: Minimal invasive changes - wrap at existing boundaries (connection pool, WSGI middleware)
- Simplicity First: No manual cache invalidation, no complex SQL parsing, no auto-refresh
- Dual Approaches: Both content negotiation AND explicit endpoints for maximum compatibility
- Streaming + Caching: Both methods implemented for flexibility
- Standards Compliance: Follow specs exactly, skip complex features like XHTML
- Fail-Fast: Validate configuration at startup
- Production Focus: Skip validation in production, benchmarks optional
Implementation Order
Phase 1: Start with CQ1 (database monitoring) and CQ2 (metrics collector initialization) as they form the foundation.
Phase 2: Implement feed generation with both CQ3 (endpoints) and CQ6 (streaming) patterns.
Phase 3: Add caching with CQ4 (checksum strategy) and monitoring with CQ5 (memory monitor).
Philosophy Applied
Every decision follows StarPunk principles:
- Simplicity: Choose simple solutions (regex over SQL parser, in-memory over persistent)
- Explicit: Clear behavior (both negotiation and explicit endpoints)
- Tested: Validation in tests, not production
- Standards: Follow specs exactly (content negotiation, feed formats)
- No Premature Optimization: Single threshold, simple caching, basic patterns
Ready to Implement
With these answers, you have clear direction for all implementation decisions. Start with Phase 1 (Metrics Instrumentation) using the integration patterns specified. The "use simple approach" theme throughout means you can avoid overengineering and focus on delivering working features.
Remember: When in doubt during implementation, choose the simpler approach. You can always add complexity later based on real-world usage.
Document Version: 1.0.0
Last Updated: 2025-11-25
Status: Ready for implementation