v1.1.1 Performance Monitoring Instrumentation Assessment
Architectural Finding
Date: 2025-11-25
Architect: StarPunk Architect
Subject: Missing Performance Monitoring Instrumentation
Version: v1.1.1-rc.2
Executive Summary
VERDICT: IMPLEMENTATION BUG - Critical instrumentation was not implemented
The performance monitoring infrastructure exists but lacks the actual instrumentation code to collect metrics. This represents an incomplete implementation of the v1.1.1 design specifications.
Evidence
1. Design Documents Clearly Specify Instrumentation
Performance Monitoring Specification (performance-monitoring-spec.md)
Lines 141-276 explicitly detail three types of instrumentation:
- Database Query Monitoring (lines 143-195)
- HTTP Request Monitoring (lines 197-232)
- Memory Monitoring (lines 234-276)
Example from specification:
# Line 165: "Execute query (via monkey-patching)"
def monitored_execute(sql, params=None):
    result = original_execute(sql, params)
    duration = time.perf_counter() - start_time
    metric = PerformanceMetric(...)
    metrics_buffer.add_metric(metric)
Developer Q&A Documentation
- Q6 (lines 93-107): Explicitly discusses per-process buffers and instrumentation
- Q12 (lines 193-205): Details sampling rates for "database/http/render" operations
Quote from Q&A:
"Different rates for database/http/render... Use random sampling at collection point"
ADR-053 Performance Monitoring Strategy
Lines 200-220 specify instrumentation points:
"1. Database Layer
- All queries automatically timed
- Connection acquisition/release
- Transaction duration"
"2. HTTP Layer
- Middleware wraps all requests
- Per-endpoint timing"
2. Current Implementation Status
What EXISTS (✅)
- starpunk/monitoring/metrics.py - MetricsBuffer class and record_metric() function - Fully implemented
- /admin/metrics endpoint - Working
- Dashboard UI - Rendering correctly
What's MISSING (❌)
- ZERO calls to record_metric() in the entire codebase
- No HTTP request timing middleware
- No database query instrumentation
- No memory monitoring thread
- No automatic metric collection
3. Grep Analysis Results
# Search for record_metric calls (excluding definition)
$ grep -r "record_metric" --include="*.py" | grep -v "def record_metric"
# Result: Only imports and docstring examples, NO actual calls
# Search for timing code
$ grep -r "time.perf_counter\|track_query"
# Result: No timing instrumentation found
# Check middleware
$ grep "@app.after_request"
# Result: No after_request handler for timing
4. Phase 2 Implementation Report Claims
The Phase 2 report (lines 22-23) states:
"Performance Monitoring Infrastructure - Status: ✅ COMPLETED"
But line 89 reveals the truth:
"API: record_metric('database', 'SELECT notes', 45.2, {'query': 'SELECT * FROM notes'})"
This is an API example, not actual instrumentation code.
Root Cause Analysis
The developer implemented the monitoring framework (the "plumbing") but not the instrumentation code (the "sensors"). This is like installing a dashboard in a car but not connecting any of the gauges to the engine.
Why This Happened
- Misinterpretation: Developer may have interpreted "monitoring infrastructure" as just the data structures and endpoints
- Documentation Gap: The Phase 2 report focuses on the API but doesn't show actual integration
- Testing Gap: No tests verify that metrics are actually being collected
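To illustrate the third point, a minimal regression test could assert that a request leaves at least one metric in the buffer. This is purely a sketch: the client and metrics_buffer pytest fixtures and the get_metrics()/metric_type names are assumptions about the project's API, not confirmed by the codebase:
def test_http_request_records_metric(client, metrics_buffer):
    # Hypothetical test - it would fail today because nothing ever calls record_metric()
    client.get('/')
    recorded = metrics_buffer.get_metrics()  # accessor name is an assumption
    assert any(m.metric_type == 'http' for m in recorded)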
Impact Assessment
User Impact
- Dashboard shows all zeros (confusing UX)
- No performance visibility as designed
- Feature appears broken
Technical Impact
- Core functionality works (no crashes)
- Performance overhead is actually ZERO (ironically meeting the <1% target)
- Easy to fix - framework is ready
Architectural Recommendation
Recommendation: Fix in v1.1.2 (not blocking v1.1.1)
Rationale
- Not a Breaking Bug: System functions correctly, just lacks metrics
- Documentation Exists: Can document as "known limitation"
- Clean Fix Path: v1.1.2 can add instrumentation without structural changes
- Version Strategy: v1.1.1 focused on "Polish" - this is more "Observability"
Alternative: Hotfix Now
If you decide this is critical for v1.1.1:
- Create v1.1.1-rc.3 with instrumentation
- Estimated effort: 2-4 hours
- Risk: Low (additive changes only)
Required Instrumentation (for v1.1.2)
1. HTTP Request Timing
# In starpunk/__init__.py
import time

from flask import g, request

from starpunk.monitoring.metrics import record_metric

@app.before_request
def start_timer():
    if app.config.get('METRICS_ENABLED'):
        g.start_time = time.perf_counter()

@app.after_request
def end_timer(response):
    if hasattr(g, 'start_time'):
        duration = time.perf_counter() - g.start_time
        record_metric('http', request.endpoint, duration * 1000)  # duration in ms
    return response
2. Database Query Monitoring
Wrap get_connection() or instrument execute() calls
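As a sketch of the first option, whatever get_connection() returns could be wrapped so that every execute() call is timed. The MonitoredConnection class and its method names are illustrative, and the record_metric() call follows the usage shown elsewhere in this document:
import time

from starpunk.monitoring.metrics import record_metric

class MonitoredConnection:
    """Illustrative wrapper around a DB-API connection that times every execute()."""

    def __init__(self, conn):
        self._conn = conn

    def execute(self, sql, params=()):
        start = time.perf_counter()
        result = self._conn.execute(sql, params)
        duration_ms = (time.perf_counter() - start) * 1000
        record_metric('database', sql.split(None, 1)[0].upper(), duration_ms)
        return result

    def __getattr__(self, name):
        # Delegate commit(), close(), cursor(), etc. to the wrapped connection
        return getattr(self._conn, name)
get_connection() would then return MonitoredConnection(real_conn) whenever METRICS_ENABLED is set.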
3. Memory Monitoring Thread
Start background thread in app factory
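A hedged sketch of that thread, assuming psutil as the memory source, a 30-second interval, and a 'memory' metric type (all assumptions; the memory-monitoring section of the spec, lines 234-276, should govern the real values):
import threading
import time

import psutil  # assumption: the stdlib resource module could be used instead

from starpunk.monitoring.metrics import record_metric

def start_memory_monitor(app, interval_seconds=30):
    """Start a daemon thread that periodically records process RSS (illustrative)."""
    if not app.config.get('METRICS_ENABLED'):
        return None
    process = psutil.Process()

    def sample():
        while True:
            rss_mb = process.memory_info().rss / (1024 * 1024)
            record_metric('memory', 'rss_mb', rss_mb)
            time.sleep(interval_seconds)

    thread = threading.Thread(target=sample, daemon=True, name='memory-monitor')
    thread.start()
    return thread
The app factory would call start_memory_monitor(app) after configuration is loaded.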
Conclusion
This is a clear implementation gap between design and execution. The v1.1.1 specifications explicitly required instrumentation that was never implemented. However, since the monitoring framework itself is complete and the system is otherwise stable, this can be addressed in v1.1.2 without blocking the current release.
The developer delivered the "monitoring system" but not the "monitoring integration" - a subtle but critical distinction, and one the architecture documents made explicit.
Decision Record
Create ADR-056 documenting this as technical debt:
- Title: "Deferred Performance Instrumentation to v1.1.2"
- Status: Accepted
- Context: Monitoring framework complete but lacks instrumentation
- Decision: Ship v1.1.1 with framework, add instrumentation in v1.1.2
- Consequences: Dashboard shows zeros until v1.1.2