# Developer Q&A for StarPunk v1.1.2 "Syndicate" - Final Answers
**Architect**: StarPunk Architect

**Developer**: StarPunk Fullstack Developer

**Date**: 2025-11-25

**Status**: Final answers provided
## Document Overview
This document provides definitive answers to all 30 developer questions about v1.1.2 implementation. Each answer follows the principle of simplicity over features and provides clear implementation direction.

---
## Critical Questions (Must be answered before implementation)
### C2: Feed Generator Module Structure

**Question**: How should we organize the feed generator code as we add ATOM and JSON formats?

1. Keep single file: Add ATOM and JSON to existing `feed.py`
2. Split by format: Create `feed/rss.py`, `feed/atom.py`, `feed/json.py`
3. Hybrid: Keep RSS in `feed.py`, new formats in `feed/` subdirectory
**Answer**: **Option 2 - Split by format into separate modules** (`feed/rss.py`, `feed/atom.py`, `feed/json.py`).
**Rationale**: This provides the cleanest separation of concerns and follows the single responsibility principle. Each feed format has distinct specifications, escaping rules, and structure. Separate files prevent the code from becoming unwieldy and make it easier to maintain each format independently. This also aligns with the existing pattern where distinct functionality gets its own module.

**Implementation Guidance**:

```
starpunk/feeds/
├── __init__.py           # Exports main interface functions
├── rss.py                # RSSFeedGenerator class
├── atom.py               # AtomFeedGenerator class
├── json.py               # JSONFeedGenerator class
├── opml.py               # OPMLGenerator class
├── cache.py              # FeedCache class
├── content_negotiator.py # ContentNegotiator class
└── validators.py         # Feed validators (test use only)
```

In `feeds/__init__.py`:

```python
from .rss import RSSFeedGenerator
from .atom import AtomFeedGenerator
from .json import JSONFeedGenerator
from .cache import FeedCache
from .content_negotiator import ContentNegotiator


def generate_feed(format, notes, config):
    """Factory function to generate feed in specified format"""
    generators = {
        'rss': RSSFeedGenerator,
        'atom': AtomFeedGenerator,
        'json': JSONFeedGenerator
    }

    generator_class = generators.get(format)
    if not generator_class:
        raise ValueError(f"Unknown feed format: {format}")

    return generator_class(notes, config).generate()
```

Move existing RSS code to `feeds/rss.py` during Phase 2.0.

---
### CQ1: Database Instrumentation Integration
**Answer**: Wrap connections at the pool level by modifying `get_connection()` to return `MonitoredConnection` instances.

**Rationale**: This approach requires minimal changes to existing code. The pool already manages connection lifecycle, so wrapping at this level ensures all database operations are monitored without touching query code throughout the application.

**Implementation Guidance**:

```python
# In starpunk/database/pool.py
def get_connection(self):
    conn = self._get_raw_connection()  # existing logic
    if self.metrics_collector:  # passed during pool init
        return MonitoredConnection(conn, self.metrics_collector)
    return conn
```

Pass the metrics collector during pool initialization in `app.py`:

```python
db_pool = ConnectionPool(
    database_path=config.DATABASE_PATH,
    metrics_collector=app.metrics_collector  # new parameter
)
```
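
For reference, `MonitoredConnection` itself can stay small. A minimal sketch, assuming the collector exposes a `record_query(query, duration)` method (the class internals and that method name are assumptions, not settled API):

```python
# Minimal sketch: wrap a DB connection and time every execute() call.
import time


class MonitoredConnection:
    def __init__(self, connection, metrics_collector):
        self._connection = connection
        self._metrics = metrics_collector

    def execute(self, query, parameters=()):
        start = time.perf_counter()
        try:
            return self._connection.execute(query, parameters)
        finally:
            # record_query() is an assumed collector method
            self._metrics.record_query(query, time.perf_counter() - start)

    def __getattr__(self, name):
        # Delegate commit, close, cursor, etc. to the wrapped connection
        return getattr(self._connection, name)
```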
---
### CQ2: Metrics Collector Lifecycle and Initialization
**Answer**: Initialize during Flask app factory and store as `app.metrics_collector`.

**Rationale**: Flask's application factory pattern is the standard place for component initialization. Storing on the app object provides clean access throughout the application via `current_app`.

**Implementation Guidance**:

```python
# In app.py create_app() function
def create_app(config_object=None):
    app = Flask(__name__)

    # Initialize metrics collector early
    from starpunk.monitoring import MetricsCollector
    app.metrics_collector = MetricsCollector(
        slow_query_threshold=config.METRICS_SLOW_QUERY_THRESHOLD
    )

    # Pass to components that need it
    app.db_pool = ConnectionPool(
        database_path=config.DATABASE_PATH,
        metrics_collector=app.metrics_collector
    )

    # Register middleware
    from starpunk.monitoring.middleware import HTTPMetricsMiddleware
    app.wsgi_app = HTTPMetricsMiddleware(app.wsgi_app, app.metrics_collector)

    return app
```

Access in route handlers: `current_app.metrics_collector`

---
### CQ3: Content Negotiation vs. Explicit Format Endpoints
**Answer**: Implement BOTH for maximum compatibility. Primary endpoint is `/feed` with content negotiation. Keep `/feed.xml` for backward compatibility and add `/feed.atom`, `/feed.json` for explicit access.

**Rationale**: Content negotiation is the standards-compliant approach, but explicit endpoints provide better user experience for manual access and debugging. This dual approach is common in well-designed APIs.

**Implementation Guidance**:

```python
# In routes/public.py

@bp.route('/feed')
def feed_content_negotiated():
    """Primary endpoint with content negotiation"""
    negotiator = ContentNegotiator(request.headers.get('Accept'))
    format = negotiator.get_best_format()
    return generate_feed(format)

@bp.route('/feed.xml')
@bp.route('/feed.rss')  # alias
def feed_rss():
    """Explicit RSS endpoint (backward compatible)"""
    return generate_feed('rss')

@bp.route('/feed.atom')
def feed_atom():
    """Explicit ATOM endpoint"""
    return generate_feed('atom')

@bp.route('/feed.json')
def feed_json():
    """Explicit JSON Feed endpoint"""
    return generate_feed('json')
```

---
### CQ4: Cache Checksum Calculation Strategy
**Answer**: Base checksum on the notes that WOULD appear in the feed (first N notes matching the limit), not all notes.

**Rationale**: This prevents unnecessary cache invalidation. If the feed shows 50 items and note #51 is published, the feed content doesn't change, so the cache should remain valid. This dramatically improves cache hit rates.

**Implementation Guidance**:

```python
import hashlib


def calculate_cache_checksum(format, limit=50):
    # Get only the notes that would appear in the feed
    notes = Note.get_published(limit=limit, order='desc')

    if not notes:
        return "empty"

    # Checksum based on visible notes only
    latest_timestamp = notes[0].published.isoformat()
    note_ids = ",".join(str(n.id) for n in notes)

    data = f"{format}:{latest_timestamp}:{note_ids}:{config.FEED_TITLE}"
    # md5 is used as a cache key here, not for security
    return hashlib.md5(data.encode()).hexdigest()
```

---
### CQ5: Memory Monitor Thread Lifecycle
**Answer**: Start thread after Flask app is initialized, with `daemon=True`. Store reference in `app.memory_monitor`. Skip thread in test mode.

**Rationale**: Daemon threads automatically terminate when the main process exits, providing clean shutdown. Skipping in test mode prevents thread pollution during testing.

**Implementation Guidance**:

```python
# In app.py create_app()
def create_app(config_object=None):
    app = Flask(__name__)

    # ... other initialization ...

    # Start memory monitor (skip in testing)
    if not app.config.get('TESTING', False):
        from starpunk.monitoring.memory import MemoryMonitor
        app.memory_monitor = MemoryMonitor(
            metrics_collector=app.metrics_collector,
            interval=30
        )
        app.memory_monitor.start()

        # Optional explicit cleanup at process exit; the daemon thread
        # auto-terminates anyway. (teardown_appcontext would be the wrong
        # hook here - it fires after every request, not at shutdown.)
        import atexit
        atexit.register(app.memory_monitor.stop)

    return app
```
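
For completeness, a minimal sketch of the monitor thread that satisfies the `start()`/`stop()`/`is_alive()` usage above (a `threading.Thread` subclass; the `record_memory()` collector method is an assumption, and IQ8 below shows the baseline-delay variant of the loop):

```python
# Minimal sketch, assuming psutil is available and the collector exposes
# a record_memory(rss_bytes) method.
import threading

import psutil


class MemoryMonitor(threading.Thread):
    def __init__(self, metrics_collector, interval=30):
        super().__init__(daemon=True)  # daemon thread dies with the process
        self.metrics_collector = metrics_collector
        self.interval = interval
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            rss = psutil.Process().memory_info().rss
            self.metrics_collector.record_memory(rss)
            self._stop_event.wait(self.interval)

    def stop(self):
        self._stop_event.set()
```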
---
### CQ6: Feed Generator Streaming Implementation
**Answer**: Implement BOTH methods like the existing RSS implementation: `generate()` returns a complete string for caching, `generate_streaming()` yields chunks for memory efficiency.

**Rationale**: You cannot cache a generator, only concrete strings. Having both methods provides flexibility: use `generate()` when caching is needed, use `generate_streaming()` for large feeds or when caching is disabled.

**Implementation Guidance**:

```python
from typing import Iterator
from xml.sax.saxutils import escape


class AtomFeedGenerator:
    def generate(self) -> str:
        """Generate complete feed as string (for caching)"""
        return ''.join(self.generate_streaming())

    def generate_streaming(self) -> Iterator[str]:
        """Generate feed in chunks (memory efficient)"""
        yield '<?xml version="1.0" encoding="utf-8"?>\n'
        yield '<feed xmlns="http://www.w3.org/2005/Atom">\n'

        # Yield metadata
        yield f'  <title>{escape(self.title)}</title>\n'

        # Yield entries one at a time
        for note in self.notes:
            yield self._generate_entry(note)

        yield '</feed>\n'
```

Use pattern:
- With cache: `cached_content = generator.generate(); cache.set(key, cached_content)`
- Without cache: `return Response(generator.generate_streaming(), mimetype='application/atom+xml')`

---
### CQ7: Content Negotiation Default Format
**Answer**: Default to RSS if enabled, otherwise the first enabled format alphabetically (atom, json, rss). Validate at startup that at least one format is enabled. Return 406 Not Acceptable only in the degenerate case where nothing can be served (all formats disabled, which startup validation should already prevent).

**Rationale**: RSS is the most universally supported format, making it the sensible default. Alphabetical fallback provides predictable behavior. Startup validation prevents misconfiguration.

**Implementation Guidance**:

```python
# In content_negotiator.py
def get_best_format(self, available_formats):
    if not available_formats:
        raise ValueError("No formats enabled")

    # Try negotiation first
    best = self._negotiate(available_formats)
    if best:
        return best

    # Default strategy
    if 'rss' in available_formats:
        return 'rss'

    # Alphabetical fallback
    return sorted(available_formats)[0]

# In config.py validate_config()
def validate_config():
    enabled_formats = []
    if config.FEED_RSS_ENABLED:
        enabled_formats.append('rss')
    if config.FEED_ATOM_ENABLED:
        enabled_formats.append('atom')
    if config.FEED_JSON_ENABLED:
        enabled_formats.append('json')

    if not enabled_formats:
        raise ValueError("At least one feed format must be enabled")
```
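
The `_negotiate` helper is not spelled out here. A minimal quality-factor (q-value) matcher, assuming the MIME-type-to-format mapping below and an `accept_header` attribute set in the constructor (both are illustrative assumptions):

```python
# Minimal sketch of Accept-header matching with quality factors.
MIME_TO_FORMAT = {
    'application/rss+xml': 'rss',
    'application/atom+xml': 'atom',
    'application/feed+json': 'json',
    'application/json': 'json',
}

def _negotiate(self, available_formats):
    candidates = []  # (q, format) pairs
    for part in (self.accept_header or '').split(','):
        mime, _, params = part.strip().partition(';')
        q = 1.0
        for param in params.split(';'):
            name, _, value = param.strip().partition('=')
            if name == 'q':
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        format = MIME_TO_FORMAT.get(mime.strip())
        if format in available_formats and q > 0:
            candidates.append((q, format))
    return max(candidates)[1] if candidates else None  # highest q wins
```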
---
### CQ8: OPML Generator Endpoint Location
**Answer**: Make `/feeds.opml` a public endpoint with no authentication required. Place in `routes/public.py`.

**Rationale**: OPML only exposes feed URLs that are already public. There's no sensitive information, and public access allows feed readers to discover all available formats easily.

**Implementation Guidance**:

```python
# In routes/public.py
@bp.route('/feeds.opml')
def feeds_opml():
    """Export OPML with all available feed formats"""
    generator = OPMLGenerator(
        title=config.FEED_TITLE,
        owner_name=config.FEED_AUTHOR_NAME,
        owner_email=config.FEED_AUTHOR_EMAIL
    )

    # Add enabled formats
    base_url = request.url_root.rstrip('/')
    if config.FEED_RSS_ENABLED:
        generator.add_feed(f"{base_url}/feed.rss", "RSS Feed")
    if config.FEED_ATOM_ENABLED:
        generator.add_feed(f"{base_url}/feed.atom", "Atom Feed")
    if config.FEED_JSON_ENABLED:
        generator.add_feed(f"{base_url}/feed.json", "JSON Feed")

    return Response(
        generator.generate(),
        mimetype='application/xml',
        headers={'Content-Disposition': 'attachment; filename="feeds.opml"'}
    )
```
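
`OPMLGenerator` itself can stay tiny. A sketch using the stdlib, mirroring the ElementTree choice made for ATOM in I3 (the exact head/outline attributes are implementation choices, not spec requirements):

```python
# Minimal OPML 2.0 sketch; attributes shown are a reasonable subset.
import xml.etree.ElementTree as ET


class OPMLGenerator:
    def __init__(self, title, owner_name=None, owner_email=None):
        self.title = title
        self.owner_name = owner_name
        self.owner_email = owner_email
        self.feeds = []  # list of (url, text) pairs

    def add_feed(self, url, text):
        self.feeds.append((url, text))

    def generate(self) -> str:
        opml = ET.Element('opml', version='2.0')
        head = ET.SubElement(opml, 'head')
        ET.SubElement(head, 'title').text = self.title
        if self.owner_name:
            ET.SubElement(head, 'ownerName').text = self.owner_name
        if self.owner_email:
            ET.SubElement(head, 'ownerEmail').text = self.owner_email
        body = ET.SubElement(opml, 'body')
        for url, text in self.feeds:
            ET.SubElement(body, 'outline', type='rss', text=text, xmlUrl=url)
        return ET.tostring(opml, encoding='unicode', xml_declaration=True)
```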
---
### CQ9: Feed Entry Ordering
**Question**: What order should entries appear in all feed formats?

**Answer**: **Newest first (reverse chronological order)** for RSS, ATOM, and JSON Feed. This is the industry standard and user expectation.

**Rationale**:
- RSS 2.0: Industry standard is newest first
- ATOM 1.0: RFC 4287 recommends newest first
- JSON Feed 1.1: Specification convention is newest first
- User Expectation: Feed readers expect newest content at the top

**Implementation Guidance**:

```python
# Database already returns notes in DESC order (newest first)
notes = Note.list_notes(limit=50)  # Returns newest first

# Feed generators should maintain this order
# DO NOT use reversed() on the notes list!
for note in notes[:limit]:  # Correct - maintains DESC order
    yield generate_entry(note)

# WRONG - this would flip to oldest first
# for note in reversed(notes[:limit]):  # DO NOT DO THIS
```

**Testing Requirements**:

All feed formats MUST be tested for correct ordering:

```python
def test_feed_order_newest_first():
    """Test feed shows newest entries first"""
    old_note = create_note(created_at=yesterday)
    new_note = create_note(created_at=today)

    feed = generate_feed([new_note, old_note])
    items = parse_feed_items(feed)

    assert items[0].date > items[1].date  # Newest first
```

**Critical Note**: There is currently a bug in RSS feed generation (lines 100 and 198 of feed.py) where `reversed()` is incorrectly applied. This MUST be fixed in Phase 2 before implementing ATOM and JSON feeds.
### C1: RSS Fix Testing Strategy
**Question**: How should we test the RSS ordering fix?

1. Minimal: Single test verifying newest-first order
2. Comprehensive: Multiple tests covering edge cases
3. Cross-format: Shared test helper for all 3 formats

**Answer**: **Option 3 - Cross-format shared test helper** that will be used for RSS now and ATOM/JSON later.

**Rationale**: The ordering requirement is identical across all feed formats (newest first). Creating a shared test helper now ensures consistency and prevents duplicating test logic. This minimal extra effort now saves time and prevents bugs when implementing ATOM and JSON formats.

**Implementation Guidance**:

```python
# In tests/test_feeds.py
import json


def assert_feed_ordering_newest_first(feed_content, format):
    """Shared helper to verify feed items are in newest-first order"""
    if format == 'rss':
        items = parse_rss_items(feed_content)
        dates = [item.pubDate for item in items]
    elif format == 'atom':
        items = parse_atom_entries(feed_content)
        dates = [item.published for item in items]
    elif format == 'json':
        items = json.loads(feed_content)['items']
        dates = [item['date_published'] for item in items]

    # Verify descending order (newest first); >= tolerates the
    # identical-timestamp edge case listed below
    for i in range(len(dates) - 1):
        assert dates[i] >= dates[i + 1], f"Item {i} should not be older than item {i+1}"

    return True


# Test for RSS fix in Phase 2.0
def test_rss_feed_newest_first():
    """Verify RSS feed shows newest entries first (regression test)"""
    old_note = create_test_note(published=yesterday)
    new_note = create_test_note(published=today)

    generator = RSSFeedGenerator([new_note, old_note], config)
    feed = generator.generate()

    assert_feed_ordering_newest_first(feed, 'rss')
```
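
The `parse_rss_items` / `parse_atom_entries` helpers are left to the implementer. A sketch for the RSS case, parsing `pubDate` into real datetimes so the comparison above is chronological rather than lexical (names and fields here are assumptions):

```python
# Minimal sketch using only the stdlib.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime
from types import SimpleNamespace


def parse_rss_items(feed_content):
    root = ET.fromstring(feed_content)
    items = []
    for item in root.findall('./channel/item'):
        items.append(SimpleNamespace(
            title=item.findtext('title'),
            # RFC 822 date string -> datetime, so >= compares correctly
            pubDate=parsedate_to_datetime(item.findtext('pubDate')),
        ))
    return items
```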
Also create edge case tests:
- Empty feed
- Single item
- Items with identical timestamps
- Items spanning months/years

---
## Important Questions (Should be answered for Phase 1)
### IQ1: Database Query Pattern Detection Accuracy

**Answer**: Keep it simple with basic regex patterns. Return "unknown" for complex queries. Document the limitation clearly.

**Rationale**: A SQL parser adds unnecessary complexity for minimal gain. The 90% case (simple SELECT/INSERT/UPDATE/DELETE) provides sufficient insight for monitoring.

**Implementation Guidance**:

```python
import re


def _extract_table_name(self, query):
    """Extract table name from query (best effort)"""
    query_lower = query.lower().strip()

    # Simple patterns that cover 90% of cases
    patterns = [
        (r'from\s+(\w+)', 'select'),
        (r'update\s+(\w+)', 'update'),
        (r'insert\s+into\s+(\w+)', 'insert'),
        (r'delete\s+from\s+(\w+)', 'delete')
    ]

    for pattern, operation in patterns:
        match = re.search(pattern, query_lower)
        if match:
            return match.group(1)

    # Complex queries (JOINs, subqueries, CTEs)
    return "unknown"
```

Add comment: `# Note: Complex queries return "unknown". This covers 90% of queries accurately.`

---
### IQ2: HTTP Metrics Request ID Generation
**Answer**: Generate UUID for each request, store in `g.request_id`, add `X-Request-ID` response header in all modes (not just debug).

**Rationale**: Request IDs are invaluable for debugging production issues. The minor overhead is worth the debugging capability. This is standard practice in production systems.

**Implementation Guidance**:

```python
# In starpunk/monitoring/middleware.py
import logging
import uuid

logger = logging.getLogger(__name__)


class HTTPMetricsMiddleware:
    def __init__(self, wsgi_app, metrics_collector):
        self.wsgi_app = wsgi_app
        self.metrics_collector = metrics_collector

    def __call__(self, environ, start_response):
        request_id = str(uuid.uuid4())
        environ['starpunk.request_id'] = request_id

        def start_response_with_id(status, headers, exc_info=None):
            # Add to response headers
            headers.append(('X-Request-ID', request_id))
            # Include in logs
            if exc_info:
                logger.error(f"Request {request_id} failed", exc_info=exc_info)
            return start_response(status, headers, exc_info)

        return self.wsgi_app(environ, start_response_with_id)

# Expose the ID to Flask code via g (in app.py or a blueprint):
@app.before_request
def attach_request_id():
    g.request_id = request.environ.get('starpunk.request_id')
```

---
### IQ3: Slow Query Threshold Configuration
**Answer**: Single configurable threshold (1 second default) for v1.1.2. Query-type-specific thresholds are overengineering at this stage.

**Rationale**: Start simple. If monitoring reveals that different query types need different thresholds, we can add that complexity in v1.2 based on real data.

**Implementation Guidance**:

```python
# In config.py
METRICS_SLOW_QUERY_THRESHOLD = float(
    os.environ.get('STARPUNK_METRICS_SLOW_QUERY_THRESHOLD', '1.0')
)

# In MonitoredConnection
def __init__(self, connection, metrics_collector):
    self.connection = connection
    self.metrics_collector = metrics_collector
    self.slow_threshold = current_app.config['METRICS_SLOW_QUERY_THRESHOLD']
```

---
### IQ4: Feed Cache Invalidation Timing
**Answer**: Rely purely on checksum-based keys and TTL expiration. No manual invalidation needed.

**Rationale**: The checksum changes when content changes, naturally creating new cache entries. TTL handles expiration. Manual invalidation adds complexity with no benefit since checksums already handle content changes.

**Implementation Guidance**:

```python
# Simple cache usage - no invalidation hooks needed
def get_feed(format, limit=50):
    checksum = calculate_cache_checksum(format, limit)
    cache_key = f"feed:{format}:{checksum}"

    # Try cache
    cached = cache.get(cache_key)
    if cached:
        return cached

    # Generate and cache with TTL
    feed = generator.generate()
    cache.set(cache_key, feed, ttl=300)  # 5 minutes
    return feed
```

No hooks in note create/update/delete operations. Much simpler.
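
The `FeedCache` class from the C2 module layout is not specified here. A minimal in-memory TTL cache matching the `get`/`set` calls above might look like the following (the size bound and eviction policy are assumptions):

```python
# Minimal sketch: TTL entries in an OrderedDict, oldest-inserted evicted first.
import time
from collections import OrderedDict


class FeedCache:
    def __init__(self, max_size=100):
        self.max_size = max_size
        self._entries = OrderedDict()  # key -> (expires_at, content)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        expires_at, content = entry
        if time.time() > expires_at:
            del self._entries[key]  # expired
            return None
        return content

    def set(self, key, content, ttl):
        if len(self._entries) >= self.max_size:
            self._entries.popitem(last=False)  # evict oldest entry
        self._entries[key] = (time.time() + ttl, content)
```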
---
### IQ5: Statistics Dashboard Chart Library
**Answer**: Use Chart.js as specified. It's lightweight, well-documented, and requires no build process.

**Rationale**: Chart.js is the simplest charting solution that meets our needs. No need to check existing admin UI - if we need charts elsewhere later, we'll already have Chart.js available.

**Implementation Guidance**:

```html
<!-- In syndication dashboard template -->
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.0/dist/chart.umd.min.js"></script>
<script>
// Simple line chart for request rates
new Chart(ctx, {
    type: 'line',
    data: {
        labels: timestamps,
        datasets: [{
            label: 'Requests/min',
            data: rates,
            borderColor: 'rgb(75, 192, 192)'
        }]
    }
});
</script>
```

---
### IQ6: ATOM Content Type Selection Logic
**Answer**: For v1.1.2, only implement `type="text"` and `type="html"`. Skip `type="xhtml"` entirely.

**Rationale**: XHTML content type adds complexity with no clear benefit. Text and HTML cover all real-world use cases. XHTML can be added later if needed.

**Implementation Guidance**:

```python
from xml.sax.saxutils import escape


def _generate_content_element(self, note):
    if note.html:
        # HTML content (escaped)
        return f'<content type="html">{escape(note.html)}</content>'
    else:
        # Plain text (escaped)
        return f'<content type="text">{escape(note.content)}</content>'
```

Document: `# Note: type="xhtml" not implemented. Use type="html" with escaping instead.`

---
### IQ7: JSON Feed Custom Extensions Scope
**Answer**: Keep minimal for v1.1.2 - only `permalink_path` and `word_count` as shown in the spec.

**Rationale**: Start with the minimum viable extension. We can always add fields based on user feedback. Adding fields later is backward compatible; removing them is not.

**Implementation Guidance**:

```python
# In JSON Feed generator
"_starpunk": {
    "permalink_path": f"/notes/{note.slug}",
    "word_count": len(note.content.split())
}
```

Document in README: "The `_starpunk` extension currently includes permalink_path and word_count. Additional fields may be added in future versions based on user needs."

---
### IQ8: Memory Monitor Baseline Timing
**Answer**: Wait 5 seconds as specified. Don't wait for first request - keep it simple.

**Rationale**: 5 seconds is sufficient for Flask initialization. Waiting for the first request adds complexity, and the baseline will quickly adjust after a few requests anyway.

**Implementation Guidance**:

```python
def run(self):
    # Wait for app initialization
    time.sleep(5)

    # Set baseline
    self.baseline_memory = psutil.Process().memory_info().rss

    # Start monitoring loop
    while not self.stop_flag:
        self._collect_metrics()
        time.sleep(self.interval)
```

---
### IQ9: Feed Validation Integration
**Answer**: Implement validators for testing only. Add optional admin endpoint `/admin/validate-feeds` for manual validation. Skip validation in production feed generation.

**Rationale**: Validation adds overhead with no benefit in production. Tests ensure correctness. Admin endpoint provides debugging capability when needed.

**Implementation Guidance**:

```python
# In tests only
def test_atom_feed_valid():
    generator = AtomFeedGenerator(notes)
    feed = generator.generate()
    validator = AtomFeedValidator()
    assert validator.validate(feed) == True

# Optional admin endpoint
@admin_bp.route('/validate-feeds')
@require_admin
def validate_feeds():
    results = {}
    for format in ['rss', 'atom', 'json']:
        if is_format_enabled(format):
            feed = generate_feed(format)
            validator = get_validator(format)
            results[format] = validator.validate(feed)
    return jsonify(results)
```

---
### IQ10: Syndication Statistics Retention
**Answer**: Use time-bucketed in-memory structure with hourly buckets. Implement simple cleanup that removes buckets older than 7 days.

**Rationale**: Time bucketing enables efficient pruning without scanning all data. Hourly granularity provides good balance between memory usage and statistics precision.

**Implementation Guidance**:

```python
import time
from collections import defaultdict


class SyndicationStats:
    def __init__(self):
        self.hourly_buckets = {}  # {hour_timestamp: stats}
        self.max_age_hours = 7 * 24  # 7 days

    def _new_bucket(self):
        # Per-bucket counters, keyed by feed format
        return {'requests': defaultdict(int)}

    def record_request(self, format, user_agent):
        hour = int(time.time() // 3600) * 3600
        if hour not in self.hourly_buckets:
            self.hourly_buckets[hour] = self._new_bucket()
            self._cleanup_old_buckets()

        self.hourly_buckets[hour]['requests'][format] += 1

    def _cleanup_old_buckets(self):
        cutoff = time.time() - (self.max_age_hours * 3600)
        self.hourly_buckets = {
            ts: stats for ts, stats in self.hourly_buckets.items()
            if ts > cutoff
        }
```

### I1: Business Metrics Integration Timing

**Question**: When should we integrate business metrics into feed generation?

1. During Phase 2.0 RSS fix (add to existing feed.py)
2. During Phase 2.1 when creating new feed structure
3. Deferred to Phase 3

**Answer**: **Option 2 - During Phase 2.1 when creating the new feed structure**.

**Rationale**: Adding metrics to the old `feed.py` that we're about to refactor is throwaway work. Since you're creating the new `feeds/` module structure in Phase 2.1, integrate metrics properly from the start. This avoids refactoring metrics code immediately after adding it.

**Implementation Guidance**:

```python
# In feeds/rss.py (and similarly for atom.py, json.py)
import time


class RSSFeedGenerator:
    def __init__(self, notes, config, metrics_collector=None):
        self.notes = notes
        self.config = config
        self.metrics_collector = metrics_collector

    def generate(self):
        start_time = time.time()
        feed_content = ''.join(self.generate_streaming())

        if self.metrics_collector:
            self.metrics_collector.record_business_metric(
                'feed_generated',
                {
                    'format': 'rss',
                    'item_count': len(self.notes),
                    'duration': time.time() - start_time
                }
            )

        return feed_content
```

For Phase 2.0, focus solely on fixing the RSS ordering bug. Keep changes minimal.
### I2: Streaming vs Non-Streaming for ATOM/JSON
**Question**: Should we implement both streaming and non-streaming methods for ATOM/JSON like RSS?

1. Implement both methods like RSS
2. Implement streaming only
3. Implement non-streaming only

**Answer**: **Option 1 - Implement both methods** (streaming and non-streaming) for consistency.

**Rationale**: This matches the existing RSS pattern established in CQ6. The non-streaming method (`generate()`) is required for caching, while the streaming method (`generate_streaming()`) provides memory efficiency for large feeds. Consistency across all feed formats simplifies maintenance and usage.

**Implementation Guidance**:

```python
from typing import Iterator

# Pattern for all feed generators
class AtomFeedGenerator:
    def generate(self) -> str:
        """Generate complete feed for caching"""
        return ''.join(self.generate_streaming())

    def generate_streaming(self) -> Iterator[str]:
        """Generate feed in chunks for memory efficiency"""
        yield '<?xml version="1.0" encoding="utf-8"?>\n'
        yield '<feed xmlns="http://www.w3.org/2005/Atom">\n'
        # ... yield chunks ...

# Usage in routes
if cache_enabled:
    content = generator.generate()  # Full string for caching
    cache.set(key, content)
    return Response(content, mimetype='application/atom+xml')
else:
    return Response(
        generator.generate_streaming(),  # Stream directly
        mimetype='application/atom+xml'
    )
```

### I3: XML Escaping for ATOM

**Question**: How should we handle XML generation and escaping for ATOM?

1. Use feedgen library
2. Write manual XML generation with custom escaping
3. Use xml.etree.ElementTree

**Answer**: **Option 3 - Use xml.etree.ElementTree** from the Python standard library.

**Rationale**: ElementTree is in the standard library (no new dependencies), handles escaping correctly, and is simpler than manual XML string building. While feedgen is powerful, it's overkill for our simple needs and adds an unnecessary dependency. ElementTree provides the right balance of safety and simplicity.

**Implementation Guidance**:

```python
# In feeds/atom.py
import xml.etree.ElementTree as ET
from xml.dom import minidom


class AtomFeedGenerator:
    def generate_streaming(self):
        # Build tree (note: this builds the whole document and yields it
        # as one chunk - a simplification, not true chunked streaming)
        feed = ET.Element('feed', xmlns='http://www.w3.org/2005/Atom')

        # Add metadata
        ET.SubElement(feed, 'title').text = self.config.FEED_TITLE
        ET.SubElement(feed, 'id').text = self.config.SITE_URL + '/feed.atom'

        # Add entries
        for note in self.notes:
            entry = ET.SubElement(feed, 'entry')
            ET.SubElement(entry, 'title').text = note.title or note.slug
            ET.SubElement(entry, 'id').text = f"{self.config.SITE_URL}/notes/{note.slug}"

            # Content with proper escaping
            content = ET.SubElement(entry, 'content')
            content.set('type', 'html' if note.html else 'text')
            content.text = note.html or note.content  # ElementTree handles escaping

        # Convert to string
        rough_string = ET.tostring(feed, encoding='unicode')

        # Pretty print for readability (optional)
        if self.config.DEBUG:
            dom = minidom.parseString(rough_string)
            yield dom.toprettyxml(indent="  ")
        else:
            yield rough_string
```

This ensures proper escaping without manual string manipulation.

---
## Nice-to-Have Clarifications (Can defer if needed)
### NH1: Performance Benchmark Automation

**Answer**: Create benchmark suite with `@pytest.mark.benchmark`, run manually or optionally in CI. Don't block merges.

**Rationale**: Benchmarks are valuable but shouldn't block development. Optional execution prevents CI slowdown.

**Implementation Guidance**:

```python
# Run benchmarks: pytest -m benchmark
@pytest.mark.benchmark
def test_atom_generation_performance():
    notes = Note.get_published(limit=100)
    generator = AtomFeedGenerator(notes)

    start = time.time()
    feed = generator.generate()
    duration = time.time() - start

    assert duration < 0.5  # Should complete in 500ms
```

---
### NH2: Feed Format Feature Parity
**Answer**: Leverage format strengths. Don't limit to lowest common denominator.

**Rationale**: Each format exists because it offers different capabilities. Users choose formats based on their needs.

**Implementation Guidance**:
- **RSS**: Basic fields only (title, description, link, pubDate)
- **ATOM**: Include author objects, updated dates, categories
- **JSON**: Include custom extensions, attachments, author details

Document differences in user documentation.

---
### NH3: Content Negotiation Quality Factor Scoring
**Answer**: Keep the simple algorithm as specified. Log decisions in debug mode for troubleshooting.

**Rationale**: The simple algorithm handles 99% of real-world cases. Complex edge cases can be addressed if they actually occur.

**Implementation Guidance**: Use the algorithm exactly as specified in the spec. Add debug logging:

```python
if app.debug:
    app.logger.debug(f"Content negotiation: Accept={accept_header}, Chosen={format}")
```

---
### NH4: Cache Statistics Persistence
**Answer**: Keep stats in-memory only for v1.1.2. Document that stats reset on restart.

**Rationale**: Persistence adds complexity. In-memory stats are sufficient for operational monitoring. Can add persistence in v1.2 if users need historical analysis.

**Implementation Guidance**: Add to documentation: "Note: Statistics are stored in memory and reset when the application restarts. For persistent metrics, consider using external monitoring tools."

---
### NH5: Feed Reader User Agent Detection Patterns
**Answer**: Start with regex patterns as specified. Log unknown user agents for future pattern updates.

**Rationale**: Regex is simple and sufficient. A library adds a dependency for marginal benefit.

**Implementation Guidance**:

```python
def normalize_user_agent(self, ua_string):
    # Try patterns
    for pattern, name in self.patterns:
        if re.search(pattern, ua_string, re.I):
            return name

    # Log unknown for analysis
    if app.debug:
        app.logger.info(f"Unknown user agent: {ua_string}")

    return "unknown"
```
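
A starter `self.patterns` table might look like the sketch below; the reader names are common fetchers given as illustrative examples, and the exact regexes should be tuned against the unknown-agent log:

```python
# Illustrative starter patterns (matched case-insensitively via re.I above).
patterns = [
    (r'feedly', 'Feedly'),
    (r'newsblur', 'NewsBlur'),
    (r'miniflux', 'Miniflux'),
    (r'feedbin', 'Feedbin'),
    (r'tiny\s*tiny\s*rss', 'Tiny Tiny RSS'),
]
```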
---
### NH6: OPML Multiple Feed Organization
**Answer**: Flat list for v1.1.2. No grouping needed for just 3 feeds.

**Rationale**: YAGNI (You Aren't Gonna Need It). Three feeds don't need categorization.

**Implementation Guidance**: Generate simple flat outline as shown in the spec.

---
### NH7: Streaming Chunk Size Optimization
**Answer**: Don't enforce byte-level chunking. Let generators yield semantic units (complete entries).

**Rationale**: Semantic chunking (whole entries) is simpler and more correct than arbitrary byte boundaries that might split XML/JSON incorrectly.

**Implementation Guidance**:

```python
def generate_streaming(self):
    # Yield complete semantic units
    yield self._generate_header()

    for note in self.notes:
        yield self._generate_entry(note)  # Complete entry

    yield self._generate_footer()
```

---
### NH8: Error Handling for Feed Generation Failures
**Answer**: Validate before streaming. If an error occurs mid-stream, log and truncate (the client gets a partial feed).

**Rationale**: Once streaming starts, we're committed. Pre-validation catches most errors. Mid-stream errors are rare and indicate serious issues (database failure).

**Implementation Guidance**:

```python
def generate_feed_streaming(format, notes):
    # Validate before starting stream
    if not notes:
        abort(404, "No content available")

    try:
        generator = get_generator(format, notes)
        return Response(
            generator.generate_streaming(),
            mimetype=get_mimetype(format)
        )
    except Exception as e:
        # Can't change status after streaming starts
        app.logger.error(f"Feed generation failed: {e}")
        # Stream will be truncated - client gets partial feed
        raise
```

---
### NH9: Metrics Dashboard Auto-Refresh
**Answer**: No auto-refresh for v1.1.2. Manual refresh is sufficient for admin monitoring.

**Rationale**: Auto-refresh adds JavaScript complexity for minimal benefit in an admin interface.

**Implementation Guidance**: Static dashboard. Users press F5 to refresh. Simple.

---
### NH10: Configuration Validation for Feed Settings
**Answer**: Add validation to `validate_config()` with the checks you proposed.

**Rationale**: Fail-fast configuration validation prevents runtime surprises and improves developer experience.

**Implementation Guidance**:

```python
def validate_feed_config():
    # At least one format enabled
    enabled = [
        config.FEED_RSS_ENABLED,
        config.FEED_ATOM_ENABLED,
        config.FEED_JSON_ENABLED
    ]
    if not any(enabled):
        raise ValueError("At least one feed format must be enabled")

    # Positive integers
    if config.FEED_CACHE_SIZE <= 0:
        raise ValueError("FEED_CACHE_SIZE must be positive")

    if config.FEED_CACHE_TTL <= 0:
        raise ValueError("FEED_CACHE_TTL must be positive")

    # Warnings for unusual values
    if config.FEED_CACHE_TTL < 60:
        logger.warning("FEED_CACHE_TTL < 60s may cause excessive regeneration")

    if config.FEED_CACHE_TTL > 3600:
        logger.warning("FEED_CACHE_TTL > 1h may serve stale content")
```

### N1: Feed Discovery Link Tags

**Question**: Should we automatically add feed discovery `<link>` tags to HTML pages?

**Answer**: **Yes, add discovery links to all HTML responses** that use the main layout template.

**Rationale**: Feed discovery is a web standard that improves user experience. Browsers and feed readers use these tags to detect available feeds. The overhead is minimal (a few bytes of HTML).

**Implementation Guidance**:

```html
<!-- In base template head section -->
{% if config.FEED_RSS_ENABLED %}
<link rel="alternate" type="application/rss+xml" title="RSS Feed" href="/feed.rss">
{% endif %}
{% if config.FEED_ATOM_ENABLED %}
<link rel="alternate" type="application/atom+xml" title="Atom Feed" href="/feed.atom">
{% endif %}
{% if config.FEED_JSON_ENABLED %}
<link rel="alternate" type="application/json" title="JSON Feed" href="/feed.json">
{% endif %}
```

### N2: Feed Icons/Badges

**Question**: Should we add visual feed subscription buttons/icons to the site?

**Answer**: **No visual feed buttons for v1.1.2**. Focus on the API functionality.

**Rationale**: Visual design is not part of this technical release. The discovery link tags provide the functionality for feed readers. Visual subscription buttons can be added in a future UI-focused release.

**Implementation Guidance**: Skip any visual feed indicators. The discovery links in N1 are sufficient for feed reader detection.

### N3: Feed Pagination Support

**Question**: Should feeds support pagination for sites with many notes?

**Answer**: **No pagination for v1.1.2**. Use the simple limit parameter only.

**Rationale**: The spec already includes a configurable limit (default 50 items). This is sufficient for v1. RFC 5005 (Feed Paging and Archiving) can be considered for v1.2 if users need access to older entries via feeds.

**Implementation Guidance**:
- Stick with the simple `limit` parameter in the current design
- Document the limit in the feed itself using appropriate elements:
  - RSS: Add comment `<!-- Limited to 50 most recent entries -->`
  - ATOM: Could add `<link rel="self">` with `?limit=50`
  - JSON: Add to `_starpunk` extension: `"limit": 50`

---
## Summary
### Key Decisions Made

1. **Integration Strategy**: Minimal invasive changes - wrap at existing boundaries (connection pool, WSGI middleware)
2. **Simplicity First**: No manual cache invalidation, no complex SQL parsing, no auto-refresh
3. **Dual Approaches**: Both content negotiation AND explicit endpoints for maximum compatibility
4. **Streaming + Caching**: Both methods implemented for flexibility
5. **Standards Compliance**: Follow specs exactly, skip complex features like XHTML
6. **Fail-Fast**: Validate configuration at startup
7. **Production Focus**: Skip validation in production, benchmarks optional

### Implementation Order

**Phase 1**: Start with CQ1 (database monitoring) and CQ2 (metrics collector initialization) as they form the foundation.

**Phase 2**: Implement feed generation with both CQ3 (endpoints) and CQ6 (streaming) patterns.

**Phase 3**: Add caching with CQ4 (checksum strategy) and monitoring with CQ5 (memory monitor).
### Philosophy Applied
Every decision follows StarPunk principles:
- **Simplicity**: Choose simple solutions (regex over SQL parser, in-memory over persistent)
- **Explicit**: Clear behavior (both negotiation and explicit endpoints)
- **Tested**: Validation in tests, not production
- **Standards**: Follow specs exactly (content negotiation, feed formats)
- **No Premature Optimization**: Single threshold, simple caching, basic patterns
### Ready to Implement
With these answers, you have clear direction for all implementation decisions. Start with Phase 1 (Metrics Instrumentation) using the integration patterns specified. The "use simple approach" theme throughout means you can avoid overengineering and focus on delivering working features.

Remember: When in doubt during implementation, choose the simpler approach. You can always add complexity later based on real-world usage.

---

**Document Version**: 1.1.0

**Last Updated**: 2025-11-26

**Status**: All questions answered - Ready for Phase 2 implementation