# Phase 5a Deployment Configuration - Technical Clarifications

Date: 2024-11-20 (Updated: 2025-11-20 for Podman support)

## Overview

This document provides detailed technical clarifications for the Phase 5a deployment configuration implementation questions raised by the Developer. Each answer includes specific implementation guidance and examples.

**Update 2025-11-20**: Added Podman-specific guidance and rootless container considerations. All examples now show both Podman and Docker where applicable.

## Question 1: Package Module Name & Docker Paths

**Question**: Should the Docker runtime use `/app/gondulf/` or `/app/src/gondulf/`? What should PYTHONPATH be set to?

**Answer**: Use `/app/src/gondulf/` to maintain consistency with the development structure.

**Rationale**: The project structure already uses `src/gondulf/` in development. Maintaining this structure in Docker reduces configuration differences between environments.

**Implementation**:
```dockerfile
WORKDIR /app
COPY pyproject.toml uv.lock ./
COPY src/ ./src/
ENV PYTHONPATH=/app/src:$PYTHONPATH
```

**Guidance**: The application will be run as `python -m gondulf.main` from the `/app` directory.

---

## Question 2: Test Execution During Build

**Question**: What uv sync options should be used for test dependencies vs production dependencies?

**Answer**: Use `--frozen` for reproducible builds and control dev dependencies explicitly.

**Implementation**:
```dockerfile
# Build stage (with tests)
RUN uv sync --frozen --no-cache

# Run tests (all dependencies available)
RUN uv run pytest tests/

# Production stage (no dev dependencies)
RUN uv sync --frozen --no-cache --no-dev
```

**Rationale**:
- `--frozen` ensures uv.lock is respected without modifications
- `--no-cache` reduces image size
- `--no-dev` in production excludes test dependencies

---

## Question 3: SQLite Database Path Consistency

**Question**: With WORKDIR `/app`, volume at `/data`, and DATABASE_URL `sqlite:///./data/gondulf.db`, where does the database actually live?

**Answer**: As written (three slashes, relative path), the database would resolve relative to the working directory, landing at `/app/data/gondulf.db` and missing the `/data` volume entirely. Use an absolute path so the database lives at `/data/gondulf.db` in the container.

**Correction**: The DATABASE_URL should be: `sqlite:////data/gondulf.db` (four slashes for absolute path)

**Implementation**:
```yaml
# docker-compose.yml
environment:
  DATABASE_URL: sqlite:////data/gondulf.db
volumes:
  - ./data:/data
```

**File Structure**:
```
Container:
  /app/            # WORKDIR, application code
  /data/           # Volume mount point
    gondulf.db     # Database file

Host:
  ./data/          # Host directory
    gondulf.db     # Persisted database
```

**Rationale**: Using an absolute path with four slashes makes the database location explicit and independent of the working directory.

---

## Question 4: uv Sync Options

**Question**: What's the correct uv invocation for the build stage vs the production stage?

**Answer**:

**Build Stage**:
```dockerfile
RUN uv sync --frozen --no-cache
```

**Production Stage**:
```dockerfile
RUN uv sync --frozen --no-cache --no-dev
```

**Rationale**: Both stages use `--frozen` for reproducibility. Only production excludes dev dependencies with `--no-dev`.
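To sanity-check the decisions from Questions 1-4, the image can be built and smoke-tested with either engine. This is a minimal sketch, not part of the specification: the `gondulf:latest` tag is an assumption, the Dockerfile path follows the file structure listed later in this document, and whether the plain `python` interpreter sees the uv-managed dependencies depends on how the environment is wired into the image.

```bash
# Build the image (substitute docker for podman if preferred); the tag is an assumption
podman build -t gondulf:latest -f deployment/docker/Dockerfile .

# Smoke test: bypass the entrypoint and confirm the package resolves via PYTHONPATH=/app/src
podman run --rm --entrypoint python gondulf:latest -c "import gondulf; print(gondulf.__file__)"
```

---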
## Question 5: nginx Configuration File Structure

**Question**: Should the developer create a full `nginx/nginx.conf` or just `conf.d/gondulf.conf`?

**Answer**: Create only `nginx/conf.d/gondulf.conf`. Use the nginx base image's default nginx.conf.

**Implementation**:
```
deployment/
  nginx/
    conf.d/
      gondulf.conf    # Only this file
```

**docker-compose.yml**:
```yaml
nginx:
  image: nginx:alpine
  volumes:
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
```

**Rationale**: The nginx:alpine image provides a suitable default nginx.conf that includes `/etc/nginx/conf.d/*.conf`. We only need to provide our server block configuration.

---

## Question 6: Backup Script Database Path Extraction

**Question**: Is the sed regex `sed 's|^sqlite:///||'` correct for both 3-slash and 4-slash sqlite URLs?

**Answer**: No. Use a more robust extraction method that handles both formats.

**Implementation**:
```bash
# Extract database path from DATABASE_URL
extract_db_path() {
    local url="$1"
    # Handle both sqlite:///relative and sqlite:////absolute
    if [[ "$url" =~ ^sqlite:////(.+)$ ]]; then
        echo "/${BASH_REMATCH[1]}"                   # Absolute path
    elif [[ "$url" =~ ^sqlite:///(.+)$ ]]; then
        echo "${WORKDIR:-/app}/${BASH_REMATCH[1]}"   # Relative to the app working directory (defaults to /app)
    else
        echo "Error: Invalid DATABASE_URL format" >&2
        exit 1
    fi
}

DB_PATH=$(extract_db_path "$DATABASE_URL")
```

**Rationale**: Since we're using absolute paths (4 slashes), the function handles both cases but expects the 4-slash format in production.

---

## Question 7: .env.example File

**Question**: Update the existing file or create a new one? What format for placeholder values?

**Answer**: Create a new `.env.example` file with clear placeholder patterns.

**Format**:
```bash
# Required: Your domain for IndieAuth
DOMAIN=your-domain.example.com

# Required: Strong random secret (generate with: openssl rand -hex 32)
SECRET_KEY=your-secret-key-here-minimum-32-characters

# Required: Database location (absolute path in container)
DATABASE_URL=sqlite:////data/gondulf.db

# Optional: Admin email for Let's Encrypt
LETSENCRYPT_EMAIL=admin@example.com

# Optional: Server bind address
BIND_ADDRESS=0.0.0.0:8000
```

**Rationale**: Use descriptive placeholders that indicate the expected format. Include generation commands where helpful.

---

## Question 8: Health Check Import Path

**Question**: Use Python urllib (no deps), curl, or wget for health checks?

**Answer**: Use wget, installed via apt in the Debian slim base image.

**Implementation**:
```dockerfile
# In Dockerfile (Debian-based image)
RUN apt-get update && \
    apt-get install -y --no-install-recommends wget && \
    rm -rf /var/lib/apt/lists/*

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:8000/health || exit 1
```

**Podman and Docker Compatibility**:
- Health check syntax is identical for both engines
- Both support the HEALTHCHECK instruction in a Containerfile/Dockerfile
- Podman also supports the `podman healthcheck` command

**Rationale**:
- wget is lightweight and available in the Debian repositories
- Simpler than a Python script
- Works identically with both Podman and Docker
- The `--spider` flag makes a HEAD request without downloading the body
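Once the container is running, the health check can be exercised from the host. A minimal verification sketch, assuming the container is named `gondulf` (the name used by the backup script in Question 10):

```bash
# Trigger the container's defined health check on demand (Podman)
podman healthcheck run gondulf && echo "healthy"

# Read the health status recorded by the engine (Docker)
docker inspect --format '{{.State.Health.Status}}' gondulf

# Or hit the endpoint directly from inside the container
podman exec gondulf wget --no-verbose --tries=1 --spider http://localhost:8000/health
```

---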
## Question 9: Directory Creation and Ownership

**Question**: Will chown in the Dockerfile work with volume mounts? Is an entrypoint script needed?

**Answer**: Use an entrypoint script to handle runtime directory permissions. This is especially important for Podman rootless mode.

**Implementation**:

Create `deployment/docker/entrypoint.sh`:
```bash
#!/bin/sh
set -e

# Ensure data directory exists with correct permissions
if [ ! -d "/data" ]; then
    mkdir -p /data
fi

# Set ownership if running as specific user
# Note: In Podman rootless mode, UID 1000 in container maps to host user's subuid
if [ "$(id -u)" = "1000" ]; then
    # Only try to chown if we have permission
    chown -R 1000:1000 /data 2>/dev/null || true
fi

# Create database if it doesn't exist
if [ ! -f "/data/gondulf.db" ]; then
    echo "Initializing database..."
    python -m gondulf.cli db init
fi

# Execute the main command
exec "$@"
```

**Dockerfile/Containerfile**:
```dockerfile
COPY deployment/docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

USER 1000:1000

ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "-m", "gondulf.main"]
```

**Rootless Podman Considerations**:
- In rootless mode, container UID 1000 maps to a range in `/etc/subuid` on the host
- Named volumes work transparently with UID mapping
- Bind mounts may require `:Z` or `:z` SELinux labels on SELinux-enabled systems
- The entrypoint script runs as the mapped UID, not as root

**Docker vs Podman Behavior**:
- **Docker**: Container UID 1000 is literally UID 1000 on the host (if using bind mounts)
- **Podman (rootless)**: Container UID 1000 maps to the host user's subuid range (e.g., 100000-165535)
- **Podman (rootful)**: Behaves like Docker (UID 1000 = UID 1000)

**Recommendation**: Use named volumes (not bind mounts) to avoid permission issues in rootless mode.

**Rationale**: Volume mounts happen at runtime, after the image has already been built, so a chown in the Dockerfile cannot fix permissions on a mounted volume. An entrypoint script handles runtime initialization properly and works with both Docker and Podman.

---

## Question 10: Backup Script Execution Context

**Question**: Should backup scripts be mounted from the host or copied into the image? Where should they live on the host?

**Answer**: Keep backup scripts on the host and execute them via `podman exec` or `docker exec`. Scripts should auto-detect the container engine.

**Host Location**:
```
deployment/
  scripts/
    backup.sh     # Executable from host
    restore.sh    # Executable from host
```

**Execution Method with Engine Detection**:
```bash
#!/bin/bash
# backup.sh - runs on host, executes commands in container
# Note: requires the sqlite3 CLI to be present inside the container

BACKUP_DIR="./backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
CONTAINER_NAME="gondulf"

# Auto-detect container engine
if command -v podman &> /dev/null; then
    ENGINE="podman"
elif command -v docker &> /dev/null; then
    ENGINE="docker"
else
    echo "ERROR: Neither podman nor docker found" >&2
    exit 1
fi

echo "Using container engine: $ENGINE"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Execute backup inside container
$ENGINE exec "$CONTAINER_NAME" sqlite3 /data/gondulf.db ".backup /tmp/backup.db"
$ENGINE cp "$CONTAINER_NAME:/tmp/backup.db" "$BACKUP_DIR/gondulf_${TIMESTAMP}.db"
$ENGINE exec "$CONTAINER_NAME" rm /tmp/backup.db

echo "Backup saved to $BACKUP_DIR/gondulf_${TIMESTAMP}.db"
```

**Rootless Podman Considerations**:
- `podman exec` works identically in rootless and rootful modes
- Backup files created on the host have the host user's ownership (not a mapped UID)
- No special permission handling is needed for backups written to the host filesystem

**Rationale**:
- Scripts remain versioned with the code
- No need to rebuild the image for script changes
- Simpler permission management
- Can be run via cron on the host
- Works transparently with both Podman and Docker
- Engine detection allows a single script for both environments
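The restore procedure is not spelled out above. A companion sketch for `deployment/scripts/restore.sh`, assuming the same container name and engine detection as backup.sh, that the sqlite3 CLI is available inside the container, and that the application is stopped or idle during the restore:

```bash
#!/bin/bash
# restore.sh - sketch only: restores a host-side backup into the container's database
# Assumes the sqlite3 CLI is available inside the container (as backup.sh also requires)
set -euo pipefail

if [ $# -ne 1 ]; then
    echo "Usage: $0 <backup-file>" >&2
    exit 1
fi

BACKUP_FILE="$1"          # e.g. ./backups/gondulf_<timestamp>.db
CONTAINER_NAME="gondulf"

# Auto-detect container engine, mirroring backup.sh
if command -v podman &> /dev/null; then
    ENGINE="podman"
elif command -v docker &> /dev/null; then
    ENGINE="docker"
else
    echo "ERROR: Neither podman nor docker found" >&2
    exit 1
fi

# Copy the backup into the container and restore it over the live database.
# Stop or quiesce the application first to avoid writes during the restore.
$ENGINE cp "$BACKUP_FILE" "$CONTAINER_NAME:/tmp/restore.db"
$ENGINE exec "$CONTAINER_NAME" sqlite3 /data/gondulf.db ".restore /tmp/restore.db"
$ENGINE exec "$CONTAINER_NAME" rm /tmp/restore.db

echo "Restored $BACKUP_FILE into $CONTAINER_NAME:/data/gondulf.db"
```

Usage mirrors backup.sh: run it from the repository root, e.g. `./deployment/scripts/restore.sh ./backups/gondulf_<timestamp>.db`.

---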
## Summary of Key Decisions

1. **Python Path**: Use `/app/src/gondulf/` structure with `PYTHONPATH=/app/src`
2. **Database Path**: Use absolute path `sqlite:////data/gondulf.db`
3. **nginx Config**: Only provide `conf.d/gondulf.conf`, not full nginx.conf
4. **Health Checks**: Use wget for simplicity (works with both Podman and Docker)
5. **Permissions**: Handle via entrypoint script at runtime (critical for rootless Podman)
6. **Backup Scripts**: Execute from host with auto-detected container engine (podman or docker)
7. **Container Engine**: Support both Podman (primary) and Docker (alternative)
8. **Volume Strategy**: Prefer named volumes over bind mounts for rootless compatibility
9. **systemd Integration**: Provide multiple methods (podman generate, compose, direct)

## Updated File Structure

```
deployment/
  docker/
    Dockerfile
    entrypoint.sh
  nginx/
    conf.d/
      gondulf.conf
  scripts/
    backup.sh
    restore.sh
  docker-compose.yml
  .env.example
```

## Additional Clarification: Podman-Specific Considerations

**Date Added**: 2025-11-20

### Rootless vs Rootful Podman

**Rootless Mode** (recommended):
- Container runs as regular user (no root privileges)
- Port binding below 1024 requires sysctl configuration or port mapping above 1024
- Volume mounts use subuid/subgid mapping
- Uses slirp4netns for networking (slight performance overhead vs rootful)
- systemd user services (not system services)

**Rootful Mode** (alternative):
- Container runs with root privileges (like Docker)
- Full port range available
- Volume mounts behave like Docker
- systemd system services
- Less secure than rootless

**Recommendation**: Use rootless mode for production deployments.

### SELinux Volume Labels

On SELinux-enabled systems (RHEL, Fedora, CentOS), volume mounts may require labels:

**Private Label** (`:Z`) - recommended:
```yaml
volumes:
  - ./data:/data:Z
```
- Volume is private to this container
- SELinux context is set uniquely
- Other containers cannot access this volume

**Shared Label** (`:z`):
```yaml
volumes:
  - ./data:/data:z
```
- Volume can be shared among containers
- SELinux context is shared
- Use when multiple containers need access

**When to Use**:
- On SELinux systems: Use `:Z` for private volumes (recommended)
- On non-SELinux systems: Labels are ignored (safe to include)
- With named volumes: Labels not needed (Podman handles it)

### Port Binding in Rootless Mode

**Issue**: Rootless containers cannot bind to ports below 1024.

**Solution 1: Use unprivileged port and reverse proxy**:
```yaml
ports:
  - "8000:8000"    # Container port 8000, host port 8000
```
Then use nginx/Apache to proxy from port 443 to 8000.

**Solution 2: Configure sysctl for low ports**:
```bash
# Allow binding to port 80 and above
sudo sysctl net.ipv4.ip_unprivileged_port_start=80

# Make persistent:
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-podman-port.conf
```

**Solution 3: Use rootful Podman** (not recommended):
```bash
sudo podman run -p 443:8000 ...
```

**Recommendation**: Use Solution 1 (unprivileged port + reverse proxy) for best security.

### Networking Differences

**Podman Rootless**:
- Uses slirp4netns (user-mode networking)
- Slight performance overhead vs host networking
- Cannot use `--network=host` (requires root)
- Container-to-container communication works via network name

**Podman Rootful**:
- Uses CNI plugins (like Docker)
- Full network performance
- Can use `--network=host`

**Docker**:
- Uses docker0 bridge
- Daemon-managed networking

**Impact on Gondulf**: Minimal. The application listens on 0.0.0.0:8000 inside the container, which works identically in all modes.
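A quick way to confirm the recommended setup (Solution 1) on a rootless host is to check the unprivileged port floor and then probe both the published port and the proxied HTTPS endpoint. A minimal sketch; the `/health` path comes from Question 8 and the domain placeholder from `.env.example`:

```bash
# Lowest port an unprivileged user may bind (rootless Podman)
sysctl net.ipv4.ip_unprivileged_port_start

# The app answering on the unprivileged published port
curl -fsS http://localhost:8000/health && echo "app OK"

# The reverse proxy terminating TLS on 443 and forwarding to 8000
curl -fsS https://your-domain.example.com/health && echo "proxy OK"
```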
### podman-compose vs docker-compose

**Compatibility**:
- Most docker-compose features work in podman-compose
- Some advanced features may differ (profiles, depends_on conditions)
- Compose file v3.8 is well-supported

**Differences**:
- `podman-compose` is community-maintained (not an official Podman project)
- `docker-compose` is the official Docker tool
- Syntax is identical (compose file format)

**Recommendation**: Test compose files with both tools during development.

### Volume Management Commands

**Podman**:
```bash
# List volumes
podman volume ls

# Inspect volume
podman volume inspect gondulf_data

# Prune unused volumes
podman volume prune

# Remove specific volume
podman volume rm gondulf_data
```

**Docker**:
```bash
# List volumes
docker volume ls

# Inspect volume
docker volume inspect gondulf_data

# Prune unused volumes
docker volume prune

# Remove specific volume
docker volume rm gondulf_data
```

Commands are identical (podman is Docker-compatible).

### systemd Integration Specifics

**Rootless Podman**:
- User service: `~/.config/systemd/user/`
- Use `systemctl --user` commands
- Enable lingering: `loginctl enable-linger $USER`
- Service survives logout

**Rootful Podman**:
- System service: `/etc/systemd/system/`
- Use `systemctl` (no --user)
- Standard systemd behavior

**Docker**:
- System service: `/etc/systemd/system/`
- Requires docker.service dependency
- Type=oneshot with RemainAfterExit for compose

### Troubleshooting Rootless Issues

**Issue**: Permission denied on volume mounts

**Solution**:
```bash
# Check subuid/subgid configuration
grep $USER /etc/subuid
grep $USER /etc/subgid
# Should show: username:100000:65536 (or similar)

# If missing, add entries:
sudo usermod --add-subuids 100000-165535 $USER
sudo usermod --add-subgids 100000-165535 $USER

# Restart user services
systemctl --user daemon-reload
```

**Issue**: Port already in use

**Solution**:
```bash
# Check what's using the port
ss -tlnp | grep 8000

# Use different host port
podman run -p 8001:8000 ...
```

**Issue**: SELinux denials

**Solution**:
```bash
# Check for denials
sudo ausearch -m AVC -ts recent

# Add :Z label to volume mounts
# Or temporarily disable SELinux (not recommended for production)
```

## Next Steps

The Developer should:

1. Implement the Dockerfile with the specified paths and commands (OCI-compliant)
2. Create the entrypoint script for runtime initialization (handles rootless permissions)
3. Write the nginx configuration in `conf.d/gondulf.conf`
4. Create backup scripts with engine auto-detection (podman/docker)
5. Generate the .env.example with the specified format
6. Test with both Podman (rootless) and Docker
7. Verify SELinux compatibility if applicable
8. Create systemd unit examples for both engines

All technical decisions have been made. The implementation can proceed with these specifications.
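As a starting point for Next Steps item 8, a minimal sketch of the rootless Podman variant, assuming a running container named `gondulf`:

```bash
# Generate a user-level unit from the running container (rootless Podman)
podman generate systemd --new --files --name gondulf

# Install and enable it as a user service
mkdir -p ~/.config/systemd/user
mv container-gondulf.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-gondulf.service

# Keep the service running after logout
loginctl enable-linger "$USER"
```

The Docker equivalent is a hand-written system unit under `/etc/systemd/system/` with a docker.service dependency, as described in the systemd Integration Specifics section above.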