feat: v1.2.0-rc.1 - IndieWeb Features Release Candidate

Complete implementation of v1.2.0 "IndieWeb Features" release.

## Phase 1: Custom Slugs
- Optional custom slug field in note creation form
- Auto-sanitization (lowercase, hyphens only)
- Uniqueness validation with auto-numbering
- Read-only after creation to preserve permalinks
- Matches Micropub mp-slug behavior
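
The slug helpers themselves are not in the hunks below. A minimal sketch of the sanitization and auto-numbering behavior described above (helper names hypothetical):

```python
import re

def sanitize_slug(raw: str) -> str:
    """Lowercase, hyphens for whitespace, strip everything else."""
    slug = re.sub(r'\s+', '-', raw.strip().lower())
    slug = re.sub(r'[^a-z0-9-]', '', slug)
    return re.sub(r'-{2,}', '-', slug).strip('-')

def unique_slug(slug: str, existing: set) -> str:
    """Append -2, -3, ... until the slug is unique."""
    if slug not in existing:
        return slug
    n = 2
    while f"{slug}-{n}" in existing:
        n += 1
    return f"{slug}-{n}"
```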

## Phase 2: Author Discovery + Microformats2
- Automatic h-card discovery from IndieAuth identity URL
- 24-hour caching with graceful fallback
- Never blocks login (per ADR-061)
- Complete h-entry, h-card, h-feed markup
- All required Microformats2 properties
- rel-me links for identity verification
- Passes IndieWeb validation
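
For reference, the discovery code in starpunk/author_discovery.py below expects identity pages marked up roughly like this (values illustrative):

```html
<div class="h-card">
  <img class="u-photo" src="https://example.com/avatar.jpg" alt="">
  <a class="p-name u-url" href="https://example.com/">Jane Doe</a>
  <p class="p-note">Writes short notes about the IndieWeb.</p>
  <a rel="me" href="https://github.com/janedoe">GitHub</a>
</div>
```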

## Phase 3: Media Upload
- Upload up to 4 images per note (JPEG, PNG, GIF, WebP)
- Automatic optimization with Pillow
  - Auto-resize to 2048px (longest side)
  - EXIF orientation correction
  - 95% quality compression
- Social media-style layout (media top, text below)
- Optional captions for accessibility
- Integration with all feed formats (RSS, ATOM, JSON Feed)
- Date-organized storage with UUID filenames
- Immutable caching (1 year)

## Database Changes
- migrations/006_add_author_profile.sql - Author discovery cache
- migrations/007_add_media_support.sql - Media storage
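
The migration files are not included in the hunks below; inferred from the columns the new modules read and write, the schemas are approximately as follows (a sketch, not the shipped DDL):

```sql
-- 006_add_author_profile.sql (approximate)
CREATE TABLE author_profile (
    me            TEXT PRIMARY KEY,  -- IndieAuth identity URL
    name          TEXT,
    photo         TEXT,
    url           TEXT,
    note          TEXT,
    rel_me_links  TEXT,              -- JSON array (per Q17)
    discovered_at TIMESTAMP,
    cached_until  TIMESTAMP
);

-- 007_add_media_support.sql (approximate)
CREATE TABLE media (
    id              INTEGER PRIMARY KEY AUTOINCREMENT,
    filename        TEXT NOT NULL,   -- original upload name
    stored_filename TEXT NOT NULL,   -- UUID-based name on disk
    path            TEXT NOT NULL,   -- YYYY/MM/uuid.ext
    mime_type       TEXT NOT NULL,
    size            INTEGER,
    width           INTEGER,
    height          INTEGER
);

CREATE TABLE note_media (
    note_id       INTEGER NOT NULL REFERENCES notes(id) ON DELETE CASCADE,
    media_id      INTEGER NOT NULL REFERENCES media(id) ON DELETE CASCADE,
    display_order INTEGER NOT NULL,
    caption       TEXT,
    PRIMARY KEY (note_id, media_id)
);
```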

## New Modules
- starpunk/author_discovery.py - h-card discovery and caching
- starpunk/media.py - Image upload, validation, optimization

## Documentation
- 4 new ADRs (056, 057, 058, 061)
- Complete design specifications
- Developer Q&A with 40+ questions answered
- 3 implementation reports
- 3 architect reviews (all approved)

## Testing
- 56 new tests for v1.2.0 features
- 842 total tests in suite
- All v1.2.0 feature tests passing

## Dependencies
- Added: mf2py (Microformats2 parser)
- Added: Pillow (image processing)

Version: 1.2.0-rc.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 15:02:20 -07:00
parent 83739ec2c6
commit dd822a35b5
40 changed files with 6929 additions and 15 deletions

@@ -177,6 +177,31 @@ def create_app(config=None):
    register_routes(app)

    # Template context processor - Inject author profile (v1.2.0 Phase 2)
    @app.context_processor
    def inject_author():
        """
        Inject author profile into all templates

        Per Q19: Global context processor approach
        Makes author data available in all templates for h-card markup
        """
        from starpunk.author_discovery import get_author_profile

        # Get ADMIN_ME from config (single-user CMS)
        me_url = app.config.get('ADMIN_ME')
        if me_url:
            try:
                author = get_author_profile(me_url)
            except Exception as e:
                app.logger.warning(f"Failed to get author profile in template context: {e}")
                author = None
        else:
            author = None

        return {'author': author}

    # Request middleware - Add correlation ID to each request
    @app.before_request
    def before_request():
@@ -298,5 +323,5 @@ def create_app(config=None):
# Package version (Semantic Versioning 2.0.0)
# See docs/standards/versioning-strategy.md for details
__version__ = "1.2.0-rc.1"
__version_info__ = (1, 2, 0, "rc.1")
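
With the inject_author context processor above, templates can render the author h-card directly. A hypothetical Jinja fragment:

```html
{% if author %}
<div class="h-card p-author">
  {% if author.photo %}<img class="u-photo" src="{{ author.photo }}" alt="">{% endif %}
  <a class="p-name u-url" href="{{ author.url }}">{{ author.name }}</a>
</div>
{% endif %}
```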

@@ -461,6 +461,16 @@ def handle_callback(code: str, state: str, iss: Optional[str] = None) -> Optional[str]:
    # Create session
    session_token = create_session(me)

    # Trigger author profile discovery (v1.2.0 Phase 2)
    # Per Q14: Never block login, always allow fallback
    try:
        from starpunk.author_discovery import get_author_profile

        author_profile = get_author_profile(me, refresh=True)
        current_app.logger.info(f"Author profile refreshed for {me}")
    except Exception as e:
        current_app.logger.warning(f"Author discovery failed: {e}")
        # Continue login anyway - never block per Q14

    return session_token

@@ -0,0 +1,377 @@
"""
Author profile discovery from IndieAuth identity
Per ADR-061 and v1.2.0 Phase 2:
- Discover h-card from user's IndieAuth 'me' URL
- Cache for 24 hours (per Q14)
- Graceful fallback if discovery fails
- Never block login functionality
Discovery Process:
1. Fetch user's profile URL
2. Parse h-card microformats using mf2py
3. Extract: name, photo, url, note (bio), rel-me links
4. Cache in author_profile table with 24-hour TTL
5. Return cached data on subsequent requests
Fallback Behavior (per Q14):
- If discovery fails, use cached data even if expired
- If no cache exists, use minimal defaults (domain as name)
- Never block or fail login due to discovery issues
"""
import json
import logging
from datetime import datetime, timedelta
from typing import Dict, Optional
from urllib.parse import urlparse
import httpx
import mf2py
from flask import current_app
from starpunk.database import get_db
# Discovery timeout (per Q&A Q38)
DISCOVERY_TIMEOUT = 5.0
# Cache TTL (per Q&A Q14, Q19)
CACHE_TTL_HOURS = 24
class DiscoveryError(Exception):
"""Raised when author profile discovery fails"""
pass
def discover_author_profile(me_url: str) -> Optional[Dict]:
"""
Discover author h-card from IndieAuth profile URL
Per Q15: Use mf2py library (already a dependency)
Per Q14: Graceful fallback, never block login
Per Q16: Use first representative h-card
Args:
me_url: User's IndieAuth identity URL
Returns:
Dict with author profile data or None on failure
Profile dict contains:
- name: Author name (from p-name)
- photo: Author photo URL (from u-photo)
- url: Author canonical URL (from u-url)
- note: Author bio (from p-note)
- rel_me_links: List of rel-me URLs
"""
try:
current_app.logger.info(f"Discovering author profile from {me_url}")
# Fetch profile page with timeout
response = httpx.get(
me_url,
timeout=DISCOVERY_TIMEOUT,
follow_redirects=True,
headers={
'Accept': 'text/html,application/xhtml+xml',
'User-Agent': f'StarPunk/{current_app.config.get("VERSION", "1.2.0")}'
}
)
response.raise_for_status()
# Parse microformats from HTML
parsed = mf2py.parse(doc=response.text, url=me_url)
# Extract h-card (per Q16: first representative h-card)
hcard = _find_representative_hcard(parsed, me_url)
if not hcard:
current_app.logger.warning(f"No h-card found at {me_url}")
return None
# Extract h-card properties
profile = {
'name': _get_property(hcard, 'name'),
'photo': _get_property(hcard, 'photo'),
'url': _get_property(hcard, 'url') or me_url,
'note': _get_property(hcard, 'note'),
}
# Extract rel-me links (per Q17: store as list)
rel_me_links = parsed.get('rels', {}).get('me', [])
profile['rel_me_links'] = rel_me_links
current_app.logger.info(
f"Discovered author profile: name={profile.get('name')}, "
f"photo={'yes' if profile.get('photo') else 'no'}, "
f"rel_me_count={len(rel_me_links)}"
)
return profile
except httpx.TimeoutException:
current_app.logger.warning(f"Timeout discovering profile at {me_url}")
raise DiscoveryError(f"Timeout fetching profile: {me_url}")
except httpx.HTTPStatusError as e:
current_app.logger.warning(
f"HTTP {e.response.status_code} discovering profile at {me_url}"
)
raise DiscoveryError(f"HTTP error fetching profile: {e.response.status_code}")
except httpx.RequestError as e:
current_app.logger.warning(f"Network error discovering profile at {me_url}: {e}")
raise DiscoveryError(f"Network error: {e}")
except Exception as e:
current_app.logger.error(f"Unexpected error discovering profile at {me_url}: {e}")
raise DiscoveryError(f"Discovery failed: {e}")
def _find_representative_hcard(parsed: dict, me_url: str) -> Optional[dict]:
"""
Find representative h-card from parsed microformats
Per Q16: First representative h-card = first h-card with p-name
Per Q18: First h-card with url property matching profile URL
Args:
parsed: Parsed microformats data from mf2py
me_url: Profile URL for matching
Returns:
h-card dict or None if not found
"""
items = parsed.get('items', [])
# First try: h-card with matching URL (most specific)
for item in items:
if 'h-card' in item.get('type', []):
properties = item.get('properties', {})
urls = properties.get('url', [])
# Check if any URL matches the profile URL
for url in urls:
if isinstance(url, dict):
url = url.get('value', '')
if _normalize_url(url) == _normalize_url(me_url):
# Found matching h-card
return item
# Second try: First h-card with p-name (representative h-card)
for item in items:
if 'h-card' in item.get('type', []):
properties = item.get('properties', {})
if properties.get('name'):
return item
# Third try: Just use first h-card if any
for item in items:
if 'h-card' in item.get('type', []):
return item
return None
def _get_property(hcard: dict, prop_name: str) -> Optional[str]:
"""
Extract property value from h-card
Handles both string values and nested objects (for u-* properties)
Args:
hcard: h-card item dict
prop_name: Property name (e.g., 'name', 'photo', 'url')
Returns:
Property value as string or None
"""
properties = hcard.get('properties', {})
values = properties.get(prop_name, [])
if not values:
return None
# Get first value
value = values[0]
# Handle nested objects (e.g., u-photo might be {'value': '...', 'alt': '...'})
if isinstance(value, dict):
return value.get('value')
return value
def _normalize_url(url: str) -> str:
"""
Normalize URL for comparison
Removes trailing slash and converts to lowercase
Args:
url: URL to normalize
Returns:
Normalized URL
"""
if not url:
return ''
return url.rstrip('/').lower()
def get_author_profile(me_url: str, refresh: bool = False) -> Dict:
"""
Get author profile with caching
Per Q14: 24-hour cache, never block on failure
Per Q19: Use database for caching
Args:
me_url: User's IndieAuth identity URL
refresh: If True, force refresh from profile URL
Returns:
Author profile dict (from cache or fresh discovery)
Always returns a dict, never None (uses fallback defaults)
Profile dict contains:
- me: IndieAuth identity URL
- name: Author name
- photo: Author photo URL (may be None)
- url: Author canonical URL
- note: Author bio (may be None)
- rel_me_links: List of rel-me URLs
"""
db = get_db(current_app)
# Check cache unless refresh requested
if not refresh:
cached = db.execute(
"""
SELECT me, name, photo, url, note, rel_me_links, cached_until
FROM author_profile
WHERE me = ?
""",
(me_url,)
).fetchone()
if cached:
# Check if cache is still valid
cached_until = datetime.fromisoformat(cached['cached_until'])
if datetime.utcnow() < cached_until:
current_app.logger.debug(f"Using cached author profile for {me_url}")
# Parse rel_me_links from JSON
rel_me_links = json.loads(cached['rel_me_links']) if cached['rel_me_links'] else []
return {
'me': cached['me'],
'name': cached['name'],
'photo': cached['photo'],
'url': cached['url'],
'note': cached['note'],
'rel_me_links': rel_me_links,
}
# Attempt discovery
try:
profile = discover_author_profile(me_url)
if profile:
# Save to cache
save_author_profile(me_url, profile)
# Return with me_url added
profile['me'] = me_url
return profile
except DiscoveryError as e:
current_app.logger.warning(f"Discovery failed: {e}")
# Try to use expired cache as fallback (per Q14)
cached = db.execute(
"""
SELECT me, name, photo, url, note, rel_me_links
FROM author_profile
WHERE me = ?
""",
(me_url,)
).fetchone()
if cached:
current_app.logger.info(f"Using expired cache as fallback for {me_url}")
rel_me_links = json.loads(cached['rel_me_links']) if cached['rel_me_links'] else []
return {
'me': cached['me'],
'name': cached['name'],
'photo': cached['photo'],
'url': cached['url'],
'note': cached['note'],
'rel_me_links': rel_me_links,
}
# No cache, discovery failed - use minimal defaults (per Q14, Q21)
current_app.logger.warning(
f"No cached profile for {me_url}, using default fallback"
)
# Extract domain from URL for default name
try:
parsed_url = urlparse(me_url)
default_name = parsed_url.netloc or me_url
except Exception:
default_name = me_url
return {
'me': me_url,
'name': default_name,
'photo': None,
'url': me_url,
'note': None,
'rel_me_links': [],
}
def save_author_profile(me_url: str, profile: Dict) -> None:
"""
Save author profile to database
Per Q14: Sets cached_until to 24 hours from now
Per Q17: Store rel-me as JSON
Args:
me_url: User's IndieAuth identity URL
profile: Author profile dict from discovery
"""
db = get_db(current_app)
# Calculate cache expiry (24 hours from now)
cached_until = datetime.utcnow() + timedelta(hours=CACHE_TTL_HOURS)
# Convert rel_me_links to JSON (per Q17)
rel_me_json = json.dumps(profile.get('rel_me_links', []))
# Upsert (insert or replace)
db.execute(
"""
INSERT OR REPLACE INTO author_profile
(me, name, photo, url, note, rel_me_links, discovered_at, cached_until)
VALUES (?, ?, ?, ?, ?, ?, CURRENT_TIMESTAMP, ?)
""",
(
me_url,
profile.get('name'),
profile.get('photo'),
profile.get('url'),
profile.get('note'),
rel_me_json,
cached_until.isoformat(),
)
)
db.commit()
current_app.logger.info(f"Saved author profile for {me_url} (expires {cached_until})")

@@ -178,11 +178,34 @@ def generate_atom_streaming(
        # Link to entry
        yield f' <link rel="alternate" type="text/html" href="{_escape_xml(permalink)}"/>\n'

        # Media enclosures (v1.2.0 Phase 3, per Q24 and ADR-057)
        if hasattr(note, 'media') and note.media:
            for item in note.media:
                media_url = f"{site_url}/media/{item['path']}"
                mime_type = item.get('mime_type', 'image/jpeg')
                size = item.get('size', 0)
                yield f' <link rel="enclosure" type="{_escape_xml(mime_type)}" href="{_escape_xml(media_url)}" length="{size}"/>\n'

        # Content - include media as HTML (per Q24)
        if note.html:
            # Build HTML content with media at top
            html_content = ""

            # Add media at top if present
            if hasattr(note, 'media') and note.media:
                html_content += '<div class="media">'
                for item in note.media:
                    media_url = f"{site_url}/media/{item['path']}"
                    # Caption may be stored as NULL; coalesce to ''
                    caption = item.get('caption') or ''
                    html_content += f'<img src="{media_url}" alt="{caption}" />'
                html_content += '</div>'

            # Add text content below media
            html_content += note.html

            # HTML content - escaped
            yield ' <content type="html">'
            yield _escape_xml(html_content)
            yield '</content>\n'
        else:
            # Plain text content
@@ -260,14 +260,47 @@ def _build_item_object(site_url: str, note: Note) -> Dict[str, Any]:
        item["title"] = note.title

    # Add content (HTML or text)
    # Per Q24: Include media as HTML in content_html
    if note.html:
        content_html = ""

        # Add media at top if present (v1.2.0 Phase 3)
        if hasattr(note, 'media') and note.media:
            content_html += '<div class="media">'
            for media_item in note.media:
                media_url = f"{site_url}/media/{media_item['path']}"
                # Caption may be stored as NULL; coalesce to ''
                caption = media_item.get('caption') or ''
                content_html += f'<img src="{media_url}" alt="{caption}" />'
            content_html += '</div>'

        # Add text content below media
        content_html += note.html
        item["content_html"] = content_html
    else:
        item["content_text"] = note.content

    # Add publication date (RFC 3339 format)
    item["date_published"] = _format_rfc3339_date(note.created_at)

    # Add attachments array (v1.2.0 Phase 3, per Q24 and ADR-057)
    # JSON Feed 1.1 native support for attachments
    if hasattr(note, 'media') and note.media:
        attachments = []
        for media_item in note.media:
            media_url = f"{site_url}/media/{media_item['path']}"
            attachment = {
                'url': media_url,
                'mime_type': media_item.get('mime_type', 'image/jpeg'),
                'size_in_bytes': media_item.get('size', 0)
            }
            # Add title (caption) if present
            if media_item.get('caption'):
                attachment['title'] = media_item['caption']
            attachments.append(attachment)
        item["attachments"] = attachments

    # Add custom StarPunk extensions
    item["_starpunk"] = {
        "permalink_path": note.permalink,
@@ -124,8 +124,22 @@ def generate_rss(
        fe.pubDate(pubdate)

        # Set description with HTML content in CDATA
        # Per Q24 and ADR-057: Embed media as HTML in description
        html_content = ""

        # Add media at top if present (v1.2.0 Phase 3)
        if hasattr(note, 'media') and note.media:
            html_content += '<div class="media">'
            for item in note.media:
                media_url = f"{site_url}/media/{item['path']}"
                # Caption may be stored as NULL; coalesce to ''
                caption = item.get('caption') or ''
                html_content += f'<img src="{media_url}" alt="{caption}" />'
            html_content += '</div>'

        # Add text content below media
        html_content += clean_html_for_rss(note.html)

        # feedgen automatically wraps content in CDATA for RSS
        fe.description(html_content)

    # Generate RSS 2.0 XML (pretty-printed)

starpunk/media.py (new file)

@@ -0,0 +1,341 @@
"""
Media upload and management for StarPunk
Per ADR-057 and ADR-058:
- Social media attachment model (media at top of note)
- Pillow-based image optimization
- 10MB max file size, 4096x4096 max dimensions
- Auto-resize to 2048px for performance
- 4 images max per note
"""
from PIL import Image, ImageOps
from pathlib import Path
from datetime import datetime
import uuid
import io
from typing import Optional, List, Dict, Tuple
from flask import current_app
# Allowed MIME types per Q11
ALLOWED_MIME_TYPES = {
'image/jpeg': ['.jpg', '.jpeg'],
'image/png': ['.png'],
'image/gif': ['.gif'],
'image/webp': ['.webp']
}
# Limits per Q&A and ADR-058
MAX_FILE_SIZE = 10 * 1024 * 1024 # 10MB
MAX_DIMENSION = 4096 # 4096x4096 max
RESIZE_DIMENSION = 2048 # Auto-resize to 2048px
MAX_IMAGES_PER_NOTE = 4
def validate_image(file_data: bytes, filename: str) -> Tuple[str, int, int]:
"""
Validate image file
Per Q11: Validate MIME type using Pillow
Per Q6: Reject if >10MB or >4096px
Args:
file_data: Raw file bytes
filename: Original filename
Returns:
Tuple of (mime_type, width, height)
Raises:
ValueError: If file is invalid
"""
# Check file size first (before loading)
if len(file_data) > MAX_FILE_SIZE:
raise ValueError(f"File too large. Maximum size is 10MB")
# Try to open with Pillow (validates integrity)
try:
img = Image.open(io.BytesIO(file_data))
img.verify() # Verify it's a valid image
# Re-open after verify (verify() closes the file)
img = Image.open(io.BytesIO(file_data))
except Exception as e:
raise ValueError(f"Invalid or corrupted image: {e}")
# Check format is allowed
if img.format:
format_lower = img.format.lower()
mime_type = f'image/{format_lower}'
# Special case: JPEG format can be reported as 'jpeg'
if format_lower == 'jpeg':
mime_type = 'image/jpeg'
if mime_type not in ALLOWED_MIME_TYPES:
raise ValueError(f"Invalid image format. Accepted: JPEG, PNG, GIF, WebP")
else:
raise ValueError("Could not determine image format")
# Check dimensions
width, height = img.size
if max(width, height) > MAX_DIMENSION:
raise ValueError(f"Image dimensions too large. Maximum is {MAX_DIMENSION}x{MAX_DIMENSION} pixels")
return mime_type, width, height
def optimize_image(image_data: bytes) -> Tuple[Image.Image, int, int]:
"""
Optimize image for web display
Per ADR-058:
- Auto-resize if >2048px (maintaining aspect ratio)
- Correct EXIF orientation
- 95% quality
Per Q12: Preserve GIF animation during resize
Args:
image_data: Raw image bytes
Returns:
Tuple of (optimized_image, width, height)
"""
img = Image.open(io.BytesIO(image_data))
# Correct EXIF orientation (per ADR-058)
img = ImageOps.exif_transpose(img) if img.format != 'GIF' else img
# Get original dimensions
width, height = img.size
# Resize if needed (per ADR-058: >2048px gets resized)
if max(width, height) > RESIZE_DIMENSION:
# For GIFs, we need special handling to preserve animation
if img.format == 'GIF' and getattr(img, 'is_animated', False):
# For animated GIFs, just return original
# Per Q12: Preserve GIF animation
# Note: Resizing animated GIFs is complex, skip for v1.2.0
return img, width, height
else:
# Calculate new size maintaining aspect ratio
img.thumbnail((RESIZE_DIMENSION, RESIZE_DIMENSION), Image.Resampling.LANCZOS)
width, height = img.size
return img, width, height
def save_media(file_data: bytes, filename: str) -> Dict:
    """
    Save uploaded media file

    Per Q5: UUID-based filename to avoid collisions
    Per Q2: Date-organized path: /media/YYYY/MM/uuid.ext
    Per Q6: Validate, optimize, then save

    Args:
        file_data: Raw file bytes
        filename: Original filename

    Returns:
        Media metadata dict (for database insert)

    Raises:
        ValueError: If validation fails
    """
    from starpunk.database import get_db

    # Validate image
    mime_type, orig_width, orig_height = validate_image(file_data, filename)

    # Optimize image
    optimized_img, width, height = optimize_image(file_data)

    # Generate UUID-based filename (per Q5)
    file_ext = Path(filename).suffix.lower()
    if not file_ext:
        # Determine extension from MIME type
        for mime, exts in ALLOWED_MIME_TYPES.items():
            if mime == mime_type:
                file_ext = exts[0]
                break
    stored_filename = f"{uuid.uuid4()}{file_ext}"

    # Create date-based path (per Q2)
    now = datetime.now()
    year = now.strftime('%Y')
    month = now.strftime('%m')
    relative_path = f"{year}/{month}/{stored_filename}"

    # Get media directory from app config
    media_dir = Path(current_app.config.get('DATA_PATH', 'data')) / 'media'
    full_dir = media_dir / year / month
    full_dir.mkdir(parents=True, exist_ok=True)

    # Save optimized image
    full_path = full_dir / stored_filename

    # Determine save format and quality. Derive the format from the
    # validated MIME type: ImageOps.exif_transpose() returns a copy whose
    # .format attribute is None, so relying on optimized_img.format alone
    # could silently save a transposed JPEG as PNG.
    format_by_mime = {
        'image/jpeg': 'JPEG',
        'image/png': 'PNG',
        'image/gif': 'GIF',
        'image/webp': 'WEBP',
    }
    save_format = format_by_mime.get(mime_type, optimized_img.format or 'PNG')

    save_kwargs = {'optimize': True}
    if save_format == 'JPEG':
        save_kwargs['quality'] = 95  # Per ADR-058
    elif save_format == 'WEBP':
        save_kwargs['quality'] = 95

    # Animated GIFs come back from optimize_image() untouched; write the
    # original bytes so animation survives (Image.save would keep only
    # the first frame without save_all)
    if save_format == 'GIF' and getattr(optimized_img, 'is_animated', False):
        full_path.write_bytes(file_data)
    else:
        optimized_img.save(full_path, format=save_format, **save_kwargs)

    # Get actual file size after optimization
    actual_size = full_path.stat().st_size

    # Insert into database
    db = get_db(current_app)
    cursor = db.execute(
        """
        INSERT INTO media (filename, stored_filename, path, mime_type, size, width, height)
        VALUES (?, ?, ?, ?, ?, ?, ?)
        """,
        (filename, stored_filename, relative_path, mime_type, actual_size, width, height)
    )
    db.commit()
    media_id = cursor.lastrowid

    return {
        'id': media_id,
        'filename': filename,
        'stored_filename': stored_filename,
        'path': relative_path,
        'mime_type': mime_type,
        'size': actual_size,
        'width': width,
        'height': height
    }
def attach_media_to_note(note_id: int, media_ids: List[int], captions: List[str]) -> None:
    """
    Attach media files to note

    Per Q4: Happens after note creation
    Per Q7: Captions are optional per image

    Args:
        note_id: Note to attach to
        media_ids: List of media IDs (max 4)
        captions: List of captions (same length as media_ids)

    Raises:
        ValueError: If more than MAX_IMAGES_PER_NOTE
    """
    from starpunk.database import get_db

    if len(media_ids) > MAX_IMAGES_PER_NOTE:
        raise ValueError(f"Maximum {MAX_IMAGES_PER_NOTE} images per note")

    db = get_db(current_app)

    # Delete existing associations (for edit case)
    db.execute("DELETE FROM note_media WHERE note_id = ?", (note_id,))

    # Insert new associations
    for i, (media_id, caption) in enumerate(zip(media_ids, captions)):
        db.execute(
            """
            INSERT INTO note_media (note_id, media_id, display_order, caption)
            VALUES (?, ?, ?, ?)
            """,
            (note_id, media_id, i, caption or None)
        )
    db.commit()


def get_note_media(note_id: int) -> List[Dict]:
    """
    Get all media attached to a note

    Returns list sorted by display_order

    Args:
        note_id: Note ID to get media for

    Returns:
        List of media dicts with metadata
    """
    from starpunk.database import get_db

    db = get_db(current_app)
    rows = db.execute(
        """
        SELECT
            m.id,
            m.filename,
            m.stored_filename,
            m.path,
            m.mime_type,
            m.size,
            m.width,
            m.height,
            nm.caption,
            nm.display_order
        FROM note_media nm
        JOIN media m ON nm.media_id = m.id
        WHERE nm.note_id = ?
        ORDER BY nm.display_order
        """,
        (note_id,)
    ).fetchall()

    return [
        {
            'id': row[0],
            'filename': row[1],
            'stored_filename': row[2],
            'path': row[3],
            'mime_type': row[4],
            'size': row[5],
            'width': row[6],
            'height': row[7],
            'caption': row[8],
            'display_order': row[9]
        }
        for row in rows
    ]


def delete_media(media_id: int) -> None:
    """
    Delete media file and database record

    Per Q8: Cleanup orphaned files

    Args:
        media_id: Media ID to delete
    """
    from starpunk.database import get_db

    db = get_db(current_app)

    # Get media path before deleting
    row = db.execute("SELECT path FROM media WHERE id = ?", (media_id,)).fetchone()
    if not row:
        return
    media_path = row[0]

    # Delete database record (cascade will delete note_media entries)
    db.execute("DELETE FROM media WHERE id = ?", (media_id,))
    db.commit()

    # Delete file from disk
    media_dir = Path(current_app.config.get('DATA_PATH', 'data')) / 'media'
    full_path = media_dir / media_path
    if full_path.exists():
        full_path.unlink()
        current_app.logger.info(f"Deleted media file: {media_path}")

@@ -75,23 +75,72 @@ def create_note_submit():
    Form data:
        content: Markdown content (required)
        published: Checkbox for published status (optional)
        custom_slug: Optional custom slug (v1.2.0 Phase 1)
        media_files: Multiple file upload (v1.2.0 Phase 3)
        captions[]: Captions for each media file (v1.2.0 Phase 3)

    Returns:
        Redirect to dashboard on success, back to form on error

    Decorator: @require_auth
    """
    from starpunk.media import save_media, attach_media_to_note

    content = request.form.get("content", "").strip()
    published = "published" in request.form
    custom_slug = request.form.get("custom_slug", "").strip()

    if not content:
        flash("Content cannot be empty", "error")
        return redirect(url_for("admin.new_note_form"))

    try:
        # Create note first (per Q4)
        note = create_note(
            content,
            published=published,
            custom_slug=custom_slug if custom_slug else None
        )

        # Handle media uploads (v1.2.0 Phase 3)
        media_files = request.files.getlist('media_files')
        captions = request.form.getlist('captions[]')

        if media_files and any(f.filename for f in media_files):
            # Per Q35: Accept valid, reject invalid (not atomic)
            media_ids = []
            errors = []

            for file in media_files:
                if not file.filename:
                    continue
                try:
                    # Read file data
                    file_data = file.read()
                    # Save and optimize media
                    media_info = save_media(file_data, file.filename)
                    media_ids.append(media_info['id'])
                except ValueError as e:
                    errors.append(f"{file.filename}: {e}")
                except Exception:
                    errors.append(f"{file.filename}: Upload failed")

            if media_ids:
                # Ensure captions list matches media_ids length
                while len(captions) < len(media_ids):
                    captions.append('')
                # Attach media to note
                attach_media_to_note(note.id, media_ids, captions[:len(media_ids)])

            if errors:
                flash(f"Note created, but some images failed: {'; '.join(errors)}", "warning")

        flash(f"Note created: {note.slug}", "success")
        return redirect(url_for("admin.dashboard"))

    except ValueError as e:
        flash(f"Error creating note: {e}", "error")
        return redirect(url_for("admin.new_note_form"))

@@ -8,7 +8,7 @@ No authentication required for these routes.
import hashlib
from datetime import datetime, timedelta

from flask import Blueprint, abort, render_template, Response, current_app, request, send_from_directory

from starpunk.notes import list_notes, get_note
from starpunk.feed import generate_feed_streaming  # Legacy RSS
@@ -40,11 +40,13 @@ def _get_cached_notes():
    Get cached note list or fetch fresh notes

    Returns cached notes if still valid, otherwise fetches fresh notes
    from database and updates cache. Includes media for each note.

    Returns:
        List of published notes for feed generation (with media attached)
    """
    from starpunk.media import get_note_media

    # Get cache duration from config (in seconds)
    cache_seconds = current_app.config.get("FEED_CACHE_SECONDS", 300)
    cache_duration = timedelta(seconds=cache_seconds)
@@ -60,6 +62,12 @@ def _get_cached_notes():
    # Cache expired or empty, fetch fresh notes
    max_items = current_app.config.get("FEED_MAX_ITEMS", 50)
    notes = list_notes(published_only=True, limit=max_items)

    # Attach media to each note (v1.2.0 Phase 3)
    for note in notes:
        media = get_note_media(note.id)
        # Note is a frozen dataclass, so bypass __setattr__
        object.__setattr__(note, 'media', media)

    _feed_cache["notes"] = notes
    _feed_cache["timestamp"] = now
@@ -158,6 +166,56 @@ def _generate_feed_with_cache(format_name: str, non_streaming_generator):
    return response


@bp.route('/media/<path:path>')
def media_file(path):
    """
    Serve media files

    Per Q10: Set cache headers for media
    Per Q26: Absolute URLs in feeds constructed from this route

    Args:
        path: Relative path to media file (YYYY/MM/filename.ext)

    Returns:
        File response with caching headers

    Raises:
        404: If file not found

    Headers:
        Cache-Control: public, max-age=31536000, immutable

    Examples:
        >>> response = client.get('/media/2025/01/uuid.jpg')
        >>> response.status_code
        200
        >>> response.headers['Cache-Control']
        'public, max-age=31536000, immutable'
    """
    from pathlib import Path

    media_dir = Path(current_app.config.get('DATA_PATH', 'data')) / 'media'

    # Validate path is safe (prevent directory traversal)
    try:
        # Resolve path and ensure it's under media_dir.
        # Path.is_relative_to (Python 3.9+) avoids the prefix-match pitfall
        # where e.g. /data/media2 would pass a startswith() check for /data/media
        requested_path = (media_dir / path).resolve()
        if not requested_path.is_relative_to(media_dir.resolve()):
            abort(404)
    except (ValueError, OSError):
        abort(404)

    # Serve file with cache headers
    response = send_from_directory(media_dir, path)

    # Cache for 1 year (immutable content)
    # Media files are UUID-named, so changing content = new URL
    response.headers['Cache-Control'] = 'public, max-age=31536000, immutable'
    return response
@bp.route("/")
def index():
"""
@@ -192,6 +250,8 @@ def note(slug: str):
Template: templates/note.html
Microformats: h-entry
"""
from starpunk.media import get_note_media
# Get note by slug
note_obj = get_note(slug=slug)
@@ -199,6 +259,13 @@ def note(slug: str):
if not note_obj or not note_obj.published:
abort(404)
# Get media for note (v1.2.0 Phase 3)
media = get_note_media(note_obj.id)
# Attach media to note object for template
# Use object.__setattr__ since Note is frozen dataclass
object.__setattr__(note_obj, 'media', media)
return render_template("note.html", note=note_obj)