Compare commits

...

156 Commits

Author SHA1 Message Date
f088247ac0 feat: add dockcheck auto-update labels to remaining services
Add mag37.dockcheck.update labels to enable automated container update monitoring for:
- Gotify iOS assistant service
- Karakeep (Hoarder) bookmark manager and all components (Chrome, Meilisearch)
- MMDL task management service
- Postiz social media scheduler and all components (PostgreSQL, Redis)

This completes the rollout of dockcheck labels across all Docker services for consistent update monitoring.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-08 17:40:30 -06:00
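The opt-in label pattern described above can be sketched as a compose fragment — the service and image names here are illustrative, not the repo's actual templates:

```yaml
# Hypothetical compose fragment showing the dockcheck opt-in label.
services:
  postiz:
    image: ghcr.io/gitroomhq/postiz-app:latest   # example image reference
    labels:
      - mag37.dockcheck.update=true   # opt this container into automated updates
```

With dockcheck running in label-only mode, containers without this label are left untouched.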
e1b6d3132a feat: update service versions and add backup configurations
- Update Authentik to 2025.6.4
- Update Dawarich and Karakeep to latest versions
- Add Paperless-NGX backup with S3 storage
- Improve GoToSocial backup configuration with better naming and retention
- Add dockcheck update labels for automated container monitoring

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-08 17:26:16 -06:00
f71ded1a01 feat: add Grocy kitchen ERP service
- Add grocy subdomain to domains.yml
- Create Docker Compose template using LinuxServer image
- Add Ansible task for service deployment
- Configure Caddy reverse proxy with Authentik auth and API bypass
- Add DNS record for grocy subdomain
- Integrate with productivity services category

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-28 08:47:28 -06:00
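The "Authentik auth with API bypass" pattern mentioned above roughly corresponds to a Caddyfile block like the following — container names, ports, and the outpost path are assumptions for illustration:

```caddyfile
# Sketch only: forward_auth everything except Grocy's /api/* routes,
# which use Grocy's own API-key authentication instead.
grocy.thesatelliteoflove.com {
	@not_api not path /api/*
	forward_auth @not_api authentik-server:9000 {
		uri /outpost.goauthentik.io/auth/caddy
	}
	reverse_proxy grocy:80
}
```

The matcher on `forward_auth` is what implements the bypass: API calls skip the Authentik check entirely.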
a2ae9e5ff6 feat: add Kanboard project management service
- Add kanboard subdomain to domains.yml
- Create Docker Compose template with SQLite backend and plugin store enabled
- Add Ansible task for service deployment
- Configure Caddy reverse proxy routing
- Integrate with productivity services category

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-28 07:21:45 -06:00
fb6651f1dc docs: complete documentation audit and updates
- Update service count from 24 to 27 across all documentation
- Add missing services: ByteStash, Obsidian LiveSync, Gotify
- Update service categories in README.md, CLAUDE.md, docker/README.md
- Remove deprecated secrets.enc references from command examples
- Update todo.md with complete service listings
- Ensure all documentation accurately reflects current infrastructure

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-23 15:27:42 -06:00
58a6be8da0 docs: update documentation to reflect Pingvin → Palmr migration
- Replace all Pingvin references with Palmr in documentation
- Update README.md, CLAUDE.md, roles/docker/README.md, and todo.md
- Maintain accurate service count (24 services)
- Update service memories and productivity category listings

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-23 15:10:24 -06:00
17c3077cf0 feat: add Palmr file sharing service to replace Pingvin
- Add Palmr Docker Compose template with encryption enabled
- Create Palmr deployment tasks for productivity category
- Add files.thesatelliteoflove.com routing in Caddyfile
- Restore files subdomain for Palmr service
- Add Palmr to Glance dashboard with file icon
- Generate and store encryption key in vault
- Configure HTTPS, Authentik integration, and dockcheck updates

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-23 12:59:11 -06:00
75fabb3523 feat: deprecate Pingvin file sharing service
- Remove Pingvin Docker Compose template and deployment tasks
- Remove files.thesatelliteoflove.com routing from Caddyfile
- Remove files subdomain from domain variables
- Stop and remove Pingvin containers from remote server
- Clean up /opt/stacks/pingvin directory

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-23 12:36:58 -06:00
336e197176 feat: add dockcheck update labels and fix Gotify service names
- Add mag37.dockcheck.update labels to audiobookshelf, caddy, gitea services
- Fix Gotify container names in Caddyfile routing
- Add explicit container names for gotify and igotify-assistant services
- Update Authentik to version 2025.6.3
- Fix environment variable format in gotify-compose template

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-23 11:26:17 -06:00
f0c4cb51b8 fix: resolve Glance template parsing conflict with Go template syntax
- Wrap Go template syntax in Jinja2 raw blocks to prevent parsing conflicts
- Fix custom-api air quality widget template rendering
- Update Glance dashboard layout with search widget and head-widgets

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-03 09:31:58 -06:00
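The conflict fixed above arises because Glance's custom-api widgets use Go template syntax (`{{ .JSON... }}`), which Jinja2 tries to evaluate when Ansible renders the config template. A minimal sketch of the fix, with a hypothetical widget:

```jinja
{# glance.yml.j2 — illustrative fragment; the widget and field names are examples #}
- type: custom-api
  title: Air Quality
  url: https://api.example.com/air-quality
  template: |
    {% raw %}
    <p>AQI: {{ .JSON.Int "aqi" }}</p>
    {% endraw %}
```

Everything inside `{% raw %}...{% endraw %}` passes through Jinja2 untouched, so Glance receives the Go template intact.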
c95ca45a67 feat: add Obsidian LiveSync CouchDB service for note synchronization
- Add Obsidian LiveSync Docker service with CouchDB backend
- Configure service for Tailscale-only access on port 5984
- Add vault credentials for database authentication
- Create productivity category task and handler
- Enable Glance dashboard integration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-02 23:46:05 -06:00
a287e50048 feat: add ByteStash service for code snippet management
- Add ByteStash Docker service configuration and deployment
- Configure subdomain routing through Caddy
- Add DNS record for ByteStash subdomain
- Update development service category to include ByteStash

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-02 13:53:23 -06:00
01d959d12c feat: enable automatic updates for ghost-1 container
Added a dockcheck label to enable automatic container updates for the
ghost-1 photo blog service, ensuring it stays current with the latest
security patches and features.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-24 11:13:51 -06:00
a4fc5f7608 fix: exclude dawarich database container from dockcheck updates
Added dawarich_db to the exclusion list to prevent automatic updates
of the database container, ensuring data integrity and preventing
potential downtime during automated container updates.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-24 11:12:12 -06:00
e3f4eb4e95 fix: update manyfold template to use proper vault variables and standardize configuration
- Fixed manyfold deployment error by updating template to use vault_manyfold.secret_key instead of undefined manyfold_key
- Standardized template to use centralized variables for domains, network, and hairpin configuration
- Added proper OIDC configuration using vault_manyfold.oidc structure

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-23 18:06:06 -06:00
a8350459ae feat: enable automatic container updates with dockcheck labels
- Configure dockcheck for automatic updates instead of check-only mode
- Add dockcheck update labels to Calibre and Changedetection services
- Enable OnlyLabel and AutoMode for targeted container management

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-22 14:51:37 -06:00
eac67e269c fix: add Gotify hairpin to AppriseAPI for notification delivery
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-20 16:42:12 -06:00
85cfca08f5 fix: improve dockcheck cron job logging and reliability
- Added comprehensive logging to /var/log/dockcheck/dockcheck.log
- Created wrapper script to avoid cron variable escaping issues
- Added timestamp logging for each execution with exit codes
- Created proper log directory with correct permissions
- Removed unnecessary -n flag (config file handles DontUpdate=true)
- Added cron handlers for service management

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-20 10:30:51 -06:00
2cc05a19e6 fix: add Gotify hairpin to changedetection services
- Add extra_hosts entry for changedetection service to reach Gotify
- Add extra_hosts entry for sockpuppetbrowser service to reach Gotify
- Resolves internal routing issues for Gotify notifications

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-19 17:18:30 -06:00
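The `extra_hosts` hairpin fix works by pinning the public hostname to Caddy's static address on the internal Docker network (172.20.0.5, per the MMDL hairpin fix elsewhere in this log), since containers cannot reach hostnames that resolve back to the host's public IP. A sketch, with illustrative service names:

```yaml
# Illustrative compose fragment; 172.20.0.5 is assumed to be Caddy's
# static IP on the shared Docker network.
services:
  changedetection:
    extra_hosts:
      - "gotify.thesatelliteoflove.com:172.20.0.5"
  sockpuppetbrowser:
    extra_hosts:
      - "gotify.thesatelliteoflove.com:172.20.0.5"
```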
d54d04bcc9 feat: add dockcheck cron job for container update notifications
- Install dockcheck.sh script in user's .local/bin directory
- Create notification templates directory with notify_v2.sh and notify_gotify.sh
- Configure Gotify notifications for container update alerts
- Add minimal config with DontUpdate=true (notification only)
- Exclude authentik-postgresql-1 and dawarich_redis from checks
- Schedule daily cron job at 8:00 AM as phil user
- Add dockcheck Gotify token to vault secrets

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-19 16:54:32 -06:00
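The notification-only setup above implies a dockcheck config roughly like this — option names are taken from the commit messages in this log and may differ between dockcheck versions:

```bash
# dockcheck.config — sketch of the notify-only configuration described above.
DontUpdate=true   # report available updates via Gotify; never pull/restart
Exclude="authentik-postgresql-1,dawarich_redis"   # skip stateful database containers
```

Later commits flip this to `AutoMode`/`OnlyLabel` so that labeled containers are updated automatically while everything else stays notify-only.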
5f76f69d8b fix: complete Dawarich architecture with Redis and Sidekiq services
- Add Redis service for caching and background job processing
- Add Sidekiq worker service for background tasks
- Update to tagged version 0.28.1 for stability
- Fix Redis URL format to resolve parsing errors
- Remove incorrect volume mounts and SQLite paths
- Add proper service dependencies and health checks
- Use vault variable for SECRET_KEY_BASE security

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-19 16:04:46 -06:00
ef5309363c Update Dawarich to latest (0.28.1) and Glance to latest (v0.8.4) 2025-06-19 15:09:35 -06:00
ff89683038 feat: add Gotify notification server with iGotify iOS support
Add comprehensive push notification infrastructure with:
- Gotify server for push notifications with admin password configuration
- iGotify Assistant service for iOS notification relay via Apple Push Notifications
- Dual subdomain setup (gotify.* and gotify-assistant.*)
- Proper service dependencies and container communication via hairpinning
- Caddy reverse proxy configuration for both services
- DNS A records for both subdomains
- Added to monitoring services category
- Tested with successful notification delivery

Services accessible at:
- https://gotify.thesatelliteoflove.com (main server)
- https://gotify-assistant.thesatelliteoflove.com (iOS assistant)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-19 12:46:51 -06:00
a338186a77 feat: remove Conduit Matrix service
Remove all traces of the Conduit Matrix homeserver service including:
- Delete conduit-compose.yml.j2 template and conduit.yml task file
- Remove conduit from development services category
- Remove conduit Caddy reverse proxy configuration
- Remove conduit subdomain from domains.yml
- Remove conduit DNS A record from Route53
- Delete Matrix well-known files (client/server)
- Update all documentation from 25 to 24 services

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-19 11:39:15 -06:00
8710ffc70d feat: update documentation and infrastructure configuration
- Update service count from 22+ to 25 across documentation
- Add vault.yml to gitignore for security
- Add notifications configuration for AppriseAPI integration
- Add jq package to common role dependencies
- Add hairpin networking fix for AppriseAPI chat subdomain access
- Remove diun service references from monitoring category
- Update project completion status in todo.md

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-08 21:10:30 -06:00
a98fae0b92 feat: update container versions for Baikal, Karakeep, and Postiz
- Update Baikal to v0.10.1 (PostgreSQL support, PHP 8.4 compatibility)
- Update Karakeep to v0.25.0 (Safari extension, PDF screenshots, bulk tag deletion)
- Update Postiz to v1.48.4 (AI image generation, drag-drop uploads, enhanced platform support)

All services tested and running successfully with no errors.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-08 21:04:24 -06:00
d05bac8651 fix: add NEXT_API_DEBUG_MODE environment variable to MMDL
Resolves calendar creation issue where clicking save would fail with
'Cannot read properties of undefined (Reading 'toUpperCase')' error.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-08 20:37:08 -06:00
c500790ea3 feat: update Glance to v0.8.3
- Updated image version from latest to v0.8.3
- Deployed and verified successful upgrade
- New features available: theme picker, authentication, to-do widget

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-07 12:49:44 -06:00
2e4c096bbe feat: complete variable management implementation and update documentation
- Update remaining Docker Compose templates with centralized variables
- Fix service tag isolation to deploy individual services only
- Update all README files with variable management architecture
- Document variable hierarchy in DEPLOYMENT_LEARNINGS.md
- Add comprehensive variable usage patterns to CLAUDE.md
- Standardize domain references using {{ subdomains.* }} pattern
- Replace hardcoded network names with {{ docker.network_name }}
- Update hairpinning configuration to use variables

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 15:45:52 -06:00
12582b352c feat: implement comprehensive variable management system
- Create standardized group_vars directory structure
- Add domains.yml with centralized subdomain mappings
- Add infrastructure.yml with network, SMTP, and path config
- Reorganize vault.yml secrets by service with consistent naming
- Update 15+ Docker compose templates to use new variable structure
- Simplify playbook commands by removing --extra-vars requirement
- Replace hardcoded domains/IPs with template variables
- Standardize secret references across all services

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 15:14:47 -06:00
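The centralized variable structure described above can be sketched like this — the exact keys are inferred from the `{{ subdomains.* }}` pattern referenced in these commits, not copied from the repo:

```yaml
# group_vars/all/domains.yml — illustrative shape only
base_domain: thesatelliteoflove.com
subdomains:
  bookmarks: "bookmarks.{{ base_domain }}"
  tasks: "tasks.{{ base_domain }}"
```

Compose templates then reference `{{ subdomains.tasks }}` instead of hardcoded hostnames, so a domain change touches one file rather than fifteen templates.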
8d686c2aa5 feat: update GoToSocial to 0.19.1 and add Wazero cache
- Update image from latest to 0.19.1 (latest release from Codeberg)
- Add GTS_WAZERO_COMPILATION_CACHE for improved performance
- Use full docker.io registry path as per reference configuration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 14:46:28 -06:00
249eb52ceb feat: update Dawarich to 0.27.3 and align with production configuration
- Update image from latest to 0.27.3
- Remove Redis and Sidekiq services (now uses SQLite queues)
- Add storage volume and database paths for SQLite queues
- Align with production compose file reference
- Document reference configuration in CLAUDE.md

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 14:40:13 -06:00
ef4f49fafb feat: update Authentik to version 2025.6.1
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 14:22:34 -06:00
06a7889024 feat: migrate Hoarder to Karakeep bookmark manager
Complete migration from discontinued Hoarder to actively maintained Karakeep:

## Service Updates
- Update Docker image: ghcr.io/hoarder-app/hoarder → ghcr.io/karakeep-app/karakeep
- Update environment variables: HOARDER_VERSION → KARAKEEP_VERSION
- Upgrade Meilisearch: v1.6 → v1.13.3 for better search performance
- Update Glance labels and service references to Karakeep

## Data Preservation
- Maintain same domain: bookmarks.thesatelliteoflove.com
- Preserve volume structure: data and meilisearch volumes unchanged
- Keep directory structure: /opt/stacks/hoarder/ for continuity
- Maintain container naming for Caddyfile compatibility

## Meilisearch Migration
- Resolved database version incompatibility (v1.6.2 → v1.13.3)
- Backed up old database and created fresh v1.13.3 compatible database
- Manual reindex required via Admin Settings > Background Jobs

## Documentation Updates
- Update all service references from Hoarder to Karakeep
- Add both 'hoarder' and 'karakeep' tags for deployment flexibility
- Maintain backwards compatibility for existing automation

## Benefits
- Access to latest Karakeep features and security updates
- Continued development support (Hoarder discontinued)
- Improved search performance with Meilisearch v1.13.3
- Zero data loss during migration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 14:15:36 -06:00
68f0276ac0 feat: complete infrastructure cleanup and optimization
This comprehensive update improves maintainability and removes unused services:

## Major Changes
- Remove 5 unused services (beaver, grist, stirlingpdf, tasksmd, redlib)
- Convert remaining static compose files to Jinja2 templates
- Clean up Caddyfile removing orphaned proxy configurations
- Align DNS records with active services

## Service Cleanup
- Remove habits.thesatelliteoflove.com DNS record (beaver service)
- Add missing DNS records for active services:
  - post.thesatelliteoflove.com (Postiz)
  - files.thesatelliteoflove.com (Pingvin Share)
  - bookmarks.thesatelliteoflove.com (Hoarder)

## Template Standardization
- Convert caddy-compose.yml to template
- Convert dockge-compose.yml to template
- Convert hoarder-compose.yml to template
- All services now use consistent template-driven approach

## Documentation Updates
- Update CLAUDE.md with new service organization
- Update README.md files with category-based deployment examples
- Update todo.md with completed work summary
- Service count updated to 22+ active services

Infrastructure is now fully organized, cleaned up, and ready for future enhancements.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 12:16:44 -06:00
d4bec94b99 refactor: reorganize docker role into logical service groups
Break down the monolithic main.yml (176 lines) into organized service categories:

- infrastructure/ (caddy, authentik, dockge) - Core platform components
- development/ (gitea, codeserver, conduit) - Development tools
- media/ (audiobookshelf, calibre, ghost, pinchflat, etc.) - Content services
- productivity/ (paperless, baikal, syncthing, mmdl, etc.) - Personal organization
- monitoring/ (glance, changedetection, appriseapi) - System monitoring
- communication/ (gotosocial, postiz) - Social/messaging services

Benefits:
- Improved maintainability with logical grouping
- Better dependency management between service categories
- Enhanced tag-based deployment (can deploy by category)
- Cleaner organization for 25+ services

All individual service tags remain functional for backwards compatibility.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 11:50:26 -06:00
8ca2122cb3 add: comprehensive infrastructure improvement roadmap
Document prioritized improvements for Ansible infrastructure including:
- Docker role reorganization into logical service groups
- Variable management standardization
- Security hardening and backup strategies
- CI/CD automation opportunities
- Network segmentation and monitoring enhancements

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 11:46:07 -06:00
ccab665d26 fix: resolve MMDL hairpinning issue with CalDAV communication
- Add cal.thesatelliteoflove.com:172.20.0.5 to MMDL extra_hosts for internal communication
- Update DEPLOYMENT_LEARNINGS.md with comprehensive hairpinning documentation
- Update CLAUDE.md with hairpinning guidance and correct deployment commands
- Document standard pattern for Docker container internal domain resolution

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 11:24:05 -06:00
1c9ab0f5e6 add DEPLOYMENT_LEARNINGS.md to gitignore
- Keep deployment knowledge base local only
- Prevent committing sensitive troubleshooting information
- Maintain institutional knowledge without exposing internal details

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 10:56:31 -06:00
7fdb52e91b add comprehensive documentation for all Ansible roles
- Add main README with infrastructure overview and usage instructions
- Document bootstrap role for server initialization and security hardening
- Document common role for shared server configuration
- Document cron role for scheduled tasks and automation
- Document docker role with detailed service descriptions and deployment patterns
- Include MMDL service documentation with setup requirements
- Add troubleshooting guides and security considerations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 10:51:39 -06:00
a2c3b53640 configure Caddy reverse proxy for MMDL task service
- Add tasks.thesatelliteoflove.com reverse proxy to MMDL container
- Route task management service through Caddy with automatic HTTPS

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 10:50:35 -06:00
e1f09fc119 add tasks subdomain DNS record for MMDL service
- Add tasks.thesatelliteoflove.com A record pointing to server IP
- Enable MMDL task management service accessibility

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 10:50:15 -06:00
1280bba7ff add MMDL task management service deployment
- Add MMDL (Manage My Damn Life) task and calendar management service
- Configure NextAuth with Authentik OIDC integration
- Use MySQL 8.0 with proper authentication plugin
- Include Glance dashboard integration
- Add to main docker deployment pipeline

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-06 10:49:49 -06:00
798d35be16 add Redlib Reddit frontend service with security hardening
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-04 16:08:50 -06:00
4fb991ac52 increase Manyfold max file upload size to 5GB
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-04 16:08:32 -06:00
4d1732ff16 add nerder.land homepage configuration to Caddy
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-04 16:08:14 -06:00
2a7bd0dc74 update authentik to 2025.4 and gotosocial to latest
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-04 16:07:55 -06:00
c94c3641b0 add vault_pass to gitignore for security
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-04 16:07:38 -06:00
e7cac9e19c fix Route53 @ record parsing in DNS playbook
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-04 16:05:47 -06:00
3fbd0c5053 more glance updates for apps 2025-03-14 15:59:38 -06:00
579fb581c6 added glance labels to a bunch of apps 2025-03-14 14:52:01 -07:00
37f47a4cf3 add manyfold to stack 2025-03-14 14:23:56 -06:00
e3cef5ec47 update GTS version 2025-02-28 12:29:37 -07:00
47cf24b637 update dawarich version and add labels 2025-02-28 12:29:24 -07:00
fe596a2387 update authentik version 2025-02-28 12:28:58 -07:00
3908ffa9e6 set variable to allow non-https connections 2025-02-28 12:28:45 -07:00
e8c9d42b77 add pinchflat to stack 2025-02-28 12:28:17 -07:00
1271fdc2ce add tag for dockge 2025-02-28 12:27:46 -07:00
12a664415d add apprise api to stack 2025-02-28 12:27:29 -07:00
58ddde7dfc glance related changes 2025-02-28 12:26:18 -07:00
d2d0accd2c Add conduit to stack 2025-02-28 12:24:23 -07:00
d43f70b68f Glance updates 2025-02-09 21:21:07 -07:00
951531df0c Updated for 0.7.0 breaking change 2025-02-09 15:51:10 -07:00
6620fe6a86 add change detection 2025-02-09 15:44:05 -07:00
27fce8c757 fix dawarich 2025-02-09 15:43:38 -07:00
8b5e82b7e1 fix runner config 2025-02-09 15:43:19 -07:00
cbc343884b Update RSS feeds 2025-02-09 15:42:54 -07:00
27564a35e4 add beaver to stack 2025-01-04 11:38:36 -07:00
0f33ce5013 remove unused apps from stack 2025-01-04 09:17:13 -07:00
a97c37d8d4 add dawarich to stack 2025-01-04 09:16:27 -07:00
b0d0f41116 switch audiobookshelf to use latest tag 2025-01-04 09:10:43 -07:00
4b4182ce36 switch pingvin to use latest tag 2025-01-04 09:10:19 -07:00
c8f22f83e2 add ghost to stack for phlog 2025-01-04 09:09:47 -07:00
31c11437d0 update caddy to use static IP 2025-01-04 09:08:18 -07:00
1a9dba9cec add phlog to dns 2025-01-04 09:07:51 -07:00
c986c82aa3 add syncthing to stack 2024-12-26 18:28:37 -07:00
bcb082aa98 paperless config fix 2024-12-26 18:28:26 -07:00
1b8faa158d pingvin version bump 2024-12-26 18:28:11 -07:00
97eeeaab34 authentik version bump 2024-12-26 18:27:55 -07:00
d2030c4b8d added baikal to stack 2024-12-10 07:05:55 -07:00
cd15f6a9c4 bump audiobookshelf version 2024-12-09 18:47:46 -07:00
9388bb5037 add oidc auth to paperless 2024-12-09 18:33:59 -07:00
883e907a2f Put calibreweb behind proxy auth 2024-12-09 18:33:42 -07:00
41de8de8c1 add repos to glance 2024-12-09 17:22:11 -07:00
f66b6a6032 pingvin version bump 2024-12-03 18:08:02 -07:00
9544300044 audiobookshelf version bump 2024-12-03 18:07:50 -07:00
3dfe87555b tasksmd version bump 2024-12-03 18:07:38 -07:00
d6505d896c added codeserver to stack 2024-12-03 17:56:24 -07:00
f93a21dc3a add smtp info to heyforms 2024-11-25 10:36:46 -07:00
c95bdc098d bump authentik version 2024-11-25 10:36:12 -07:00
fb1ae448aa Bump audiobookshelf version 2024-11-25 10:35:58 -07:00
32694e8feb add action runner for gitea 2024-11-25 10:35:26 -07:00
fdc82ac9f5 bump pingvin version 2024-11-17 16:00:25 -07:00
9e5a65be19 bump authentik to 2024.10.2 2024-11-17 15:01:24 -07:00
b4e1a79596 added new cron role with job to update warhammer rss feed nightly 2024-11-12 12:48:04 -07:00
ff3f69662a updated repo list in glance config 2024-11-12 11:41:13 -07:00
ba5bc3b1cd add heyform to stack 2024-11-11 11:58:14 -07:00
4724dcbede add dns record for repair cafe site 2024-11-11 11:57:54 -07:00
d7402d46a5 add static site hosting for repair cafe 2024-11-11 11:57:32 -07:00
54409656a2 update static site hosting for sub directories 2024-11-11 11:57:04 -07:00
e64bef6ac8 Updated dns playbook to support multiple domains and added forms record 2024-11-11 11:04:52 -07:00
7ec81f80c3 update gts backups to use S3 storage 2024-11-11 09:16:49 -07:00
84d4f44a70 added paperlessngx to stack 2024-11-07 09:29:59 -07:00
221b00f1c4 bump gts version 2024-11-07 08:56:40 -07:00
af1c0347af bump authentik version 2024-11-07 08:56:32 -07:00
09c2142ac9 updated tracked repos 2024-11-06 15:19:17 -07:00
6e32b4fb5a bumped audiobookshelf version 2024-11-06 15:18:23 -07:00
63ceab2cd8 updated rss feeds 2024-11-06 15:18:08 -07:00
b47fc8657d Add calibre and calibre-web to stack 2024-11-01 19:02:45 -06:00
c7b5d52d7d Bump authentik version 2024-11-01 19:02:24 -06:00
1fe3f7bdd5 added audiobookshelf to stack 2024-10-28 15:17:52 -06:00
69395a324b added some more dns names to management 2024-10-28 15:17:14 -06:00
2dff2a5b82 added hiro report to rss feeds 2024-10-26 12:07:44 -06:00
b67378e3d1 added pinry to stack 2024-10-26 12:07:26 -06:00
c9d3fa0397 added playbook to manage aws route53 domains 2024-10-26 12:07:01 -06:00
db4a97cc35 updated self hosting blog feed 2024-10-25 13:53:49 -06:00
de93ef69c0 updated tasksmd and glance to use proxy auth instead of looking for internal IP's 2024-10-25 13:53:05 -06:00
8dae2bb825 added handler to restart caddy on caddyfile change 2024-10-25 13:52:34 -06:00
6ef0d5f1d5 add BITO to tracked tickers 2024-10-24 10:23:17 -06:00
8e54340c9e Add handler so the glance container gets restarted every time the glance config file is changed 2024-10-24 10:22:59 -06:00
a1ab0ae715 bump tasksmd version to 2.5.3 2024-10-24 10:22:12 -06:00
eeb037d081 add tasksmd to glance release tracker 2024-10-23 10:46:18 -06:00
482f227319 Updated stock tickers 2024-10-23 10:13:16 -06:00
54dc844b95 bumped pingvin to 1.2.3 2024-10-23 08:29:32 -06:00
0b1c003699 Added DJT to glance stock tracker 2024-10-22 11:30:42 -06:00
2136dbf7d4 added postiz to stack and associated caddy and glance config 2024-10-22 11:24:41 -06:00
ba5b3f36dc bumped gts to 1.17.1 2024-10-22 09:13:46 -06:00
2f2d808d75 Bumped Pingvin to 1.2.2 2024-10-18 08:44:46 -06:00
41464d14e7 bumped pingvin version 2024-10-17 08:53:15 -06:00
e49c7e5022 Added self hosting page to glance 2024-10-17 08:52:58 -06:00
2e44e98f5c Bump gitea version 2024-10-17 08:52:03 -06:00
33c87bbcf1 Bumped GTS version to 0.17.0 2024-10-16 15:04:11 -06:00
fb440f8fcd Bumped gts version to 0.17.0-rc5 2024-10-14 12:46:17 -06:00
387e87a865 added oauth account linking 2024-10-14 12:43:08 -06:00
7430ab20d2 Enabled OIDC for hoarder 2024-10-10 17:03:45 -06:00
8c392e0211 Bump GTS to 0.17.0-rc3 2024-10-10 16:13:38 -06:00
722e28af89 Swapped tasks endpoint to internal only via tailscale 2024-10-10 16:00:57 -06:00
2a2120c976 Remove Grist from stack, unused 2024-10-09 12:57:48 -06:00
e1a85e8c6f update GTS with SMTP config 2024-10-09 12:57:30 -06:00
327a47169f fixed a mistake in the tasks file for gitea 2024-10-09 12:49:29 -06:00
24dd7d3e67 moved gitea compose to template and added smtp config 2024-10-09 12:29:07 -06:00
5e2abd7713 Added email config to authentik 2024-10-09 12:07:05 -06:00
3a73f85aa1 Added pingvin to stack 2024-10-09 11:40:11 -06:00
3d9d4f6ab7 updated caddy container version info to facilitate easier upgrades 2024-10-08 14:49:19 -06:00
f1ae30c975 Bumped authentik version 2024-10-08 14:48:57 -06:00
9e8da746af Bumped GTS version and fixed proxy location 2024-10-08 13:54:11 -06:00
fa95212be4 updated glance config 2024-10-08 13:30:55 -06:00
772c7addd6 Bumped GTS to 0.17.0-rc1 2024-09-30 16:15:53 -06:00
4fc8f310be added backups to gotosocial 2024-09-30 16:08:21 -06:00
116f415193 updated glance page to show relevant twitch channels 2024-09-23 16:39:16 -06:00
50138230b4 Added stirling pdf to stack 2024-09-23 12:19:14 -06:00
c11b90f04e update to glance config to make weather relevant 2024-09-23 10:28:38 -06:00
dce59dad9c Added glance to stack 2024-09-22 08:16:55 -06:00
6d1ebc61d6 fixed formatting and added tags 2024-09-20 16:46:16 -06:00
3de01f5464 added atuin dotfile config and bat stuff 2024-08-23 13:34:48 -06:00
30b867686d add duf to packages 2024-08-19 15:31:33 -06:00
96 changed files with 3472 additions and 215 deletions

.gitignore
@@ -1,2 +1,5 @@
 .python-version
 secrets.enc
+vault_pass
+DEPLOYMENT_LEARNINGS.md
+group_vars/all/vault.yml

CLAUDE.md (new file)
@@ -0,0 +1,164 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Overview
This is a personal infrastructure Ansible playbook that automates deployment and management of 27 self-hosted Docker services across two domains (`thesatelliteoflove.com` and `nerder.land`). The setup uses Tailscale VPN for secure networking and Caddy for reverse proxy with automated HTTPS.
**Important**: Always review `DEPLOYMENT_LEARNINGS.md` when working on this repository for lessons learned and troubleshooting guidance.
## Common Commands
### Initial Setup
```bash
# Install Ansible dependencies
ansible-galaxy install -r requirements.yml
# Bootstrap new server (creates user, installs Tailscale, security hardening)
ansible-playbook bootstrap.yml -i hosts.yml
# Deploy all Docker services
ansible-playbook site.yml -i hosts.yml
# Update DNS records in AWS Route53
ansible-playbook dns.yml -i hosts.yml
```
### Service Management
```bash
# Deploy specific services using tags (now properly isolated)
ansible-playbook site.yml -i hosts.yml --tags caddy --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags authentik --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags mmdl --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags docker --vault-password-file vault_pass # all docker services
# Deploy services by category (new organized structure)
ansible-playbook site.yml -i hosts.yml --tags infrastructure --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags media,productivity --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags development,monitoring --vault-password-file vault_pass
# Deploy only infrastructure components
ansible-playbook site.yml -i hosts.yml --tags common,cron --vault-password-file vault_pass
```
## Architecture
### Host Configuration
- **Bootstrap Host** (`netcup`): 152.53.36.98 - Initial server setup target
- **Docker Host** (`docker-01`): 100.70.169.99 - Main service deployment via Tailscale
### Role Structure
- **bootstrap**: Initial server hardening, user creation, Tailscale VPN setup
- **common**: Basic system configuration, UFW firewall management
- **docker**: Comprehensive service deployment (27 containerized applications, organized by category)
- **cron**: Scheduled task management (currently Warhammer RSS feed generation)
### Docker Role Organization (Reorganized into Logical Categories)
The docker role is now organized into logical service groups under `roles/docker/tasks/`:
- **infrastructure/**: Core platform components
- Caddy (reverse proxy), Authentik (SSO), Dockge (container management)
- **development/**: Development and collaboration tools
- Gitea, Code Server, ByteStash
- **media/**: Content creation and consumption
- Audiobookshelf, Calibre, Ghost blog, Pinchflat, Pinry, Karakeep (formerly Hoarder), Manyfold
- **productivity/**: Personal organization and document management
- Paperless-NGX, MMDL, Baikal (CalDAV/CardDAV), Syncthing, Heyform, Dawarich, Palmr, Obsidian LiveSync
- **communication/**: Social media and external communication
- GoToSocial (Fediverse), Postiz (social media management)
- **monitoring/**: System monitoring and alerts
- Changedetection, Glance dashboard, AppriseAPI, Gotify
### Variable Management
**Critical**: This infrastructure uses a centralized variable hierarchy in `group_vars/all/`:
- **domains.yml**: Domain and subdomain mappings (use `{{ subdomains.service }}`)
- **infrastructure.yml**: Network configuration, Docker settings (`{{ docker.network_name }}`, `{{ docker.hairpin_ip }}`)
- **vault.yml**: Encrypted secrets with `vault_` prefix
- **services.yml**: Service-specific configuration and feature flags
**Important**: All templates use variables instead of hardcoded values. Never hardcode domains, IPs, or secrets.
### Data Structure
- All service data stored in `/opt/stacks/[service-name]/` on docker host
- Docker Compose files generated from Jinja2 templates in `roles/docker/templates/`
- Environment files templated for services requiring configuration
- All configurations use centralized variables for consistency
## Key Implementation Details
### Template-Driven Configuration
The docker role uses Jinja2 templates exclusively for all services. When modifying services:
- Update templates in `roles/docker/templates/[service]-compose.yml.j2`
- Environment files use `.env.j2` templates where needed
- Task files organized by category in `roles/docker/tasks/[category]/[service].yml`
- All services now use templated configurations (no static compose files)
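For illustration, a minimal compose template following this pattern might look like the sketch below (the `exampleservice` name and image are hypothetical; the variables come from the `group_vars/all/` hierarchy described above):

```yaml
# Hypothetical roles/docker/templates/exampleservice-compose.yml.j2
services:
  exampleservice:
    image: example/exampleservice:latest
    restart: unless-stopped
    env_file: .env
    volumes:
      - /opt/stacks/exampleservice/data:/data
    labels:
      glance.url: "https://{{ subdomains.exampleservice }}/"
networks:
  default:
    external: true
    name: "{{ docker.network_name }}"
```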
### DNS Management
The `dns.yml` playbook manages AWS Route53 records for both domains. All subdomains point to the netcup server (152.53.36.98), with Caddy handling internal routing to the docker host via Tailscale.
### Security Architecture
- Tailscale provides secure networking between management and service hosts
- Services are network-isolated using Docker
- Caddy handles SSL termination with automatic Let's Encrypt certificates
- UFW firewall managed through Docker integration script
### Service Dependencies
Many services depend on Authentik for SSO. When deploying new services, consider:
- Whether SSO integration is needed
- Caddy routing configuration for subdomain access
- Network connectivity requirements within Docker stack
- Hairpinning fixes for internal service-to-service communication
### Hairpinning Resolution
Services inside Docker containers cannot reach external domains that resolve to the same server. Fix by adding `extra_hosts` mappings:
```yaml
extra_hosts:
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
- "{{ subdomains.cal }}:{{ docker.hairpin_ip }}"
```
Common domains requiring hairpinning fixes:
- `{{ subdomains.auth }}` (Authentik SSO)
- `{{ subdomains.cal }}` (Baikal CalDAV)
- Any service domain the container needs to communicate with
**Note**: Use variables instead of hardcoded values for maintainability.
### Service-Specific Reference Configurations
- **Dawarich**: Based on production compose file at https://github.com/Freika/dawarich/blob/master/docker/docker-compose.production.yml
## Service Memories
- palmr is the service that responds on files.thesatelliteoflove.com
- karakeep (formerly called hoarder) is deployed with both 'hoarder' and 'karakeep' tags for backward compatibility
- whenever I ask what containers need updates, run dockcheck and return a list of containers needing updates
- when I ask for the status of container updates, run dockcheck on the docker host https://github.com/mag37/dockcheck?ref=selfh.st
- this is your reference for glance configuration https://github.com/glanceapp/glance/blob/main/docs/configuration.md#configuring-glance
## Variable Management Implementation Notes
**Major Infrastructure Update**: Variable management system was implemented to replace all hardcoded values with centralized variables.
### Key Changes Made:
- Created comprehensive `group_vars/all/` structure
- Updated all Docker Compose templates to use variables
- Fixed service tag isolation (individual service tags now deploy only that service)
- Standardized domain and network configuration
- Organized secrets by service with consistent `vault_` prefix
### Service Tag Fix:
**Critical**: Service tags are now properly isolated. `--tags mmdl` deploys only MMDL (5 tasks), not the entire productivity category.
### Template Pattern:
All templates now follow this pattern:
```yaml
# Use variables, not hardcoded values
glance.url: "https://{{ subdomains.service }}/"
networks:
default:
external: true
name: "{{ docker.network_name }}"
extra_hosts:
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
```

162 README.md Normal file

@@ -0,0 +1,162 @@
# Personal Infrastructure Ansible Playbook
This Ansible playbook automates the setup and management of a personal self-hosted infrastructure running Docker containers for various services.
## Overview
The playbook manages two main environments:
- **Bootstrap server** (`netcup`): Initial server setup with Tailscale VPN
- **Docker server** (`docker-01`): Main application server running containerized services
## Services Deployed
The Docker role deploys and manages 27 self-hosted services organized into logical categories:
### Infrastructure
- **Caddy** (Reverse proxy with automatic HTTPS)
- **Authentik** (SSO/Identity Provider)
- **Dockge** (Container management)
### Development
- **Gitea** (Git repository hosting)
- **Code Server** (VS Code in browser)
- **ByteStash** (Code snippet management)
### Media
- **Audiobookshelf** (Audiobook server)
- **Calibre** (E-book management)
- **Ghost** (Blog platform)
- **Pinchflat** (Media downloader)
- **Pinry** (Pinterest-like board)
- **Karakeep** (Bookmark manager, formerly Hoarder)
- **Manyfold** (3D model organizer)
### Productivity
- **Paperless-NGX** (Document management)
- **MMDL** (Task management)
- **Baikal** (CalDAV/CardDAV server)
- **Syncthing** (File synchronization)
- **HeyForm** (Form builder)
- **Dawarich** (Location tracking)
- **Palmr** (File sharing)
- **Obsidian LiveSync** (Note synchronization)
### Communication
- **GoToSocial** (Fediverse/Mastodon)
- **Postiz** (Social media management)
### Monitoring
- **Changedetection** (Website change monitoring)
- **Glance** (Dashboard)
- **AppriseAPI** (Notification service)
- **Gotify** (Push notifications)
## Structure
```
├── site.yml # Main playbook
├── bootstrap.yml # Server bootstrap playbook
├── dns.yml # AWS Route53 DNS management
├── hosts.yml # Inventory file
├── requirements.yml # External role dependencies
└── roles/
├── bootstrap/ # Initial server setup
├── common/ # Common server configuration
├── cron/ # Scheduled tasks
└── docker/ # Docker services deployment
```
## Roles Documentation
Each role has detailed documentation in its respective directory:
### [Bootstrap Role](roles/bootstrap/README.md)
Performs initial server setup and hardening:
- Creates user accounts with SSH key authentication
- Configures passwordless sudo and security hardening
- Installs essential packages and configures UFW firewall
- Sets up Tailscale VPN for secure network access
### [Common Role](roles/common/README.md)
Provides shared configuration for all servers:
- Installs common packages (aptitude)
- Enables UFW firewall with default deny policy
- Ensures consistent base configuration across infrastructure
### [Cron Role](roles/cron/README.md)
Manages scheduled tasks and automation:
- **Warhammer RSS Feed Updater**: Daily job that generates and updates RSS feeds
- Integrates with Docker services for content generation
- Supports easy addition of new scheduled tasks
### [Docker Role](roles/docker/README.md)
The most comprehensive role, deploying 27 containerized services organized into logical categories:
- **Infrastructure**: Caddy reverse proxy, Authentik SSO, Dockge management
- **Development**: Gitea, Code Server, ByteStash snippet management
- **Media**: Audiobookshelf, Calibre, Ghost blog, Pinchflat, and more
- **Productivity**: Paperless-NGX, MMDL task management, Baikal calendar
- **Communication**: GoToSocial, Postiz social media management
- **Monitoring**: Glance dashboard, Changedetection, AppriseAPI notifications
- **Template-Driven**: All services use Jinja2 templates for consistent configuration
- **Category-Based Deployment**: Deploy services by category using Ansible tags
## Usage
### Prerequisites
1. Install Ansible and required collections:
```bash
ansible-galaxy install -r requirements.yml
```
2. Configure your inventory in `hosts.yml` with your server details
### Bootstrap a New Server
```bash
ansible-playbook bootstrap.yml -i hosts.yml
```
This will:
- Create a user account
- Install and configure Tailscale VPN
- Set up basic security
### Deploy Docker Services
```bash
ansible-playbook site.yml -i hosts.yml
```
Deploy specific services using tags:
```bash
# Deploy by service category
ansible-playbook site.yml -i hosts.yml --tags infrastructure
ansible-playbook site.yml -i hosts.yml --tags media,productivity
# Deploy individual services
ansible-playbook site.yml -i hosts.yml --tags caddy
ansible-playbook site.yml -i hosts.yml --tags authentik
ansible-playbook site.yml -i hosts.yml --tags mmdl
```
### Manage DNS Records
```bash
ansible-playbook dns.yml -i hosts.yml
```
Updates AWS Route53 DNS records for configured domains (`thesatelliteoflove.com` and `nerder.land`).
## Configuration
- Service configurations are templated in `roles/docker/templates/`
- Environment variables and secrets should be managed through Ansible Vault
- Docker Compose files are generated from Jinja2 templates
## Security Notes
- Uses Tailscale for secure network access
- Caddy provides automatic HTTPS with Let's Encrypt
- Services are containerized for isolation
- UFW firewall rules are managed via Docker integration

bootstrap.yml

@@ -3,11 +3,8 @@
become: true
vars:
created_username: phil
vars_prompt:
- name: tailscale_key
prompt: Enter the tailscale key
roles:
- bootstrap
- role: artis3n.tailscale
vars:
tailscale_authkey: "{{ tailscale_key }}"
tailscale_authkey: "{{ vault_infrastructure.tailscale_key }}"

78 dns.yml Normal file

@@ -0,0 +1,78 @@
---
# dns.yml
- name: Add A Records for thesatelliteoflove.com and nerder.land
hosts: localhost
gather_facts: false
vars:
# Domains to manage DNS records for
domains:
- name: thesatelliteoflove.com
dns_records:
- name: "pin"
ip: "152.53.36.98"
- name: "home"
ip: "152.53.36.98"
- name: "git"
ip: "152.53.36.98"
- name: "social"
ip: "152.53.36.98"
- name: "auth"
ip: "152.53.36.98"
- name: "audio"
ip: "152.53.36.98"
- name: "books"
ip: "152.53.36.98"
- name: "paper"
ip: "152.53.36.98"
- name: "code"
ip: "152.53.36.98"
- name: "snippets"
ip: "152.53.36.98"
- name: cal
ip: "152.53.36.98"
- name: phlog
ip: "152.53.36.98"
- name: loclog
ip: "152.53.36.98"
- name: watcher
ip: "152.53.36.98"
- name: models
ip: "152.53.36.98"
- name: tasks
ip: "152.53.36.98"
- name: post
ip: "152.53.36.98"
- name: files
ip: "152.53.36.98"
- name: bookmarks
ip: "152.53.36.98"
- name: gotify
ip: "152.53.36.98"
- name: gotify-assistant
ip: "152.53.36.98"
- name: pdg
ip: "152.53.36.98"
- name: kanboard
ip: "152.53.36.98"
- name: grocy
ip: "152.53.36.98"
- name: nerder.land
dns_records:
- name: "forms"
ip: "152.53.36.98"
- name: "repair"
ip: "152.53.36.98"
tasks:
- name: Add A records for subdomains of each domain
amazon.aws.route53:
state: present
zone: "{{ item.0.name }}"
record: "{{ item.0.name if item.1.name == '@' else item.1.name + '.' + item.0.name }}"
type: A
ttl: 300
value: "{{ item.1.ip }}"
loop: "{{ query('subelements', domains, 'dns_records') }}"
loop_control:
loop_var: item
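The `subelements` lookup pairs each domain with each of its `dns_records`, and the ternary in `record:` handles apex records; for illustration (not part of dns.yml):

```yaml
# How the loop expands:
#   item.0 = a domain entry, item.1 = one of its dns_records
#   name: "pin" -> record: "pin.thesatelliteoflove.com"
#   name: "@"   -> record: "thesatelliteoflove.com"  (apex record)
```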

group_vars/all/domains.yml Normal file

@@ -0,0 +1,43 @@
# Domain Configuration
primary_domain: "thesatelliteoflove.com"
secondary_domain: "nerder.land"
# Subdomain mappings
subdomains:
auth: "auth.{{ primary_domain }}"
git: "git.{{ primary_domain }}"
cal: "cal.{{ primary_domain }}"
docs: "docs.{{ primary_domain }}"
phlog: "phlog.{{ primary_domain }}" # Ghost blog
bookmarks: "bookmarks.{{ primary_domain }}" # Hoarder/Karakeep
heyform: "forms.{{ secondary_domain }}" # Heyform on nerder.land
media: "media.{{ primary_domain }}"
audio: "audio.{{ primary_domain }}" # Audiobookshelf
books: "books.{{ primary_domain }}" # Calibre
models: "models.{{ primary_domain }}" # Manyfold
pinchflat: "pinchflat.{{ primary_domain }}"
pin: "pin.{{ primary_domain }}" # Pinry
paper: "paper.{{ primary_domain }}" # Paperless-NGX
tasks: "tasks.{{ primary_domain }}" # MMDL
syncthing: "syncthing.{{ primary_domain }}"
loclog: "loclog.{{ primary_domain }}" # Dawarich
files: "files.{{ primary_domain }}" # Palmr file sharing
social: "social.{{ primary_domain }}" # GoToSocial
post: "post.{{ primary_domain }}" # Postiz
home: "home.{{ primary_domain }}" # Glance
watcher: "watcher.{{ primary_domain }}" # Changedetection
appriseapi: "appriseapi.{{ primary_domain }}"
dockge: "dockge.{{ primary_domain }}"
code: "code.{{ primary_domain }}" # Code Server
bytestash: "snippets.{{ primary_domain }}" # ByteStash code snippets
gotify: "gotify.{{ primary_domain }}" # Gotify notifications
gotify_assistant: "gotify-assistant.{{ primary_domain }}" # iGotify iOS assistant
kanboard: "kanboard.{{ primary_domain }}" # Kanboard project management
grocy: "grocy.{{ primary_domain }}" # Grocy kitchen ERP
# Email domains for notifications
email_domains:
updates: "updates.{{ primary_domain }}"
auth_email: "auth@updates.{{ primary_domain }}"
git_email: "git@updates.{{ primary_domain }}"
cal_email: "cal@updates.{{ primary_domain }}"

group_vars/all/infrastructure.yml Normal file

@@ -0,0 +1,26 @@
# Infrastructure Configuration
# Docker configuration
docker:
network_name: "lava"
stacks_path: "/opt/stacks"
hairpin_ip: "172.20.0.5"
# SMTP configuration
smtp:
host: "smtp.resend.com"
username: "resend"
from_domain: "{{ email_domains.updates }}"
# Network configuration
network:
netcup_ip: "152.53.36.98"
docker_host_ip: "100.70.169.99"
# Paths
paths:
stacks: "{{ docker.stacks_path }}"
# Notification services
notifications:
appriseapi_endpoint: "http://apprise:8000/notify/apprise"

group_vars/all/services.yml Normal file

@@ -0,0 +1,25 @@
# Docker Services Configuration
# Service categories for organization
service_categories:
infrastructure: ["caddy", "authentik", "dockge"]
development: ["gitea", "codeserver"]
media: ["audiobookshelf", "calibre", "ghost", "pinchflat", "pinry", "hoarder", "manyfold"]
productivity: ["paperlessngx", "baikal", "syncthing", "mmdl", "heyform", "dawarich", "pingvin"]
communication: ["gotosocial", "postiz"]
monitoring: ["glance", "changedetection", "appriseapi", "gotify"]
# Common service configuration
services:
common:
restart_policy: "unless-stopped"
network: "{{ docker.network_name }}"
# Service-specific configurations
dawarich:
db_name: "dawarich"
db_user: "dawarich"
mmdl:
db_name: "mmdl"
db_user: "mmdl"

41 roles/bootstrap/README.md Normal file

@@ -0,0 +1,41 @@
# Bootstrap Role
## Purpose
Performs initial server setup and hardening for new Ubuntu/Debian servers.
## What It Does
### User Management
- Creates a new user account with sudo privileges (specified by `created_username` variable)
- Configures passwordless sudo for the sudo group
- Sets up SSH key authentication using your local `~/.ssh/id_ed25519.pub` key
- Disables root password authentication
### System Packages
- Installs `aptitude` for better package management
- Installs essential packages:
- `curl` - HTTP client
- `vim` - Text editor
- `git` - Version control
- `ufw` - Uncomplicated Firewall
### Security Configuration
- Configures UFW firewall to:
- Allow SSH connections
- Enable firewall with default deny policy
- Hardens SSH configuration
## Variables Required
- `created_username`: The username to create (typically set in bootstrap.yml)
- `tailscale_key`: Tailscale authentication key (prompted during playbook run)
## Dependencies
- Requires the `artis3n.tailscale` role for VPN setup
- Requires your SSH public key at `~/.ssh/id_ed25519.pub`
## Usage
```bash
ansible-playbook bootstrap.yml -i hosts.yml
```
This role is designed to be run once on a fresh server before deploying other services.

23 roles/common/README.md Normal file

@@ -0,0 +1,23 @@
# Common Role
## Purpose
Provides shared configuration and security setup that applies to all servers in the infrastructure.
## What It Does
### System Packages
- Installs `aptitude` for better package management and dependency resolution
- Updates package cache to ensure latest package information
### Security Configuration
- Enables UFW (Uncomplicated Firewall) with default deny policy
- Provides baseline firewall protection for all managed servers
## Usage
This role is automatically applied to all servers in the infrastructure as part of the main site.yml playbook. It ensures consistent base configuration across all managed systems.
## Dependencies
None - this is a foundational role that other roles can depend on.
## Notes
This role is designed to be lightweight and provide only the most essential common functionality. Server-specific configurations should be handled by dedicated roles like `docker` or `bootstrap`.

roles/common/tasks/main.yml

@@ -1,6 +1,8 @@
- name: Install aptitude
- name: Install common packages
apt:
name: aptitude
name:
- aptitude
- jq
state: latest
update_cache: true

37 roles/cron/README.md Normal file

@@ -0,0 +1,37 @@
# Cron Role
## Purpose
Manages scheduled tasks and automated maintenance jobs for the infrastructure.
## What It Does
### Warhammer RSS Feed Updater
- Copies `update_warhammer_feed.sh` script to `/usr/local/bin/` with executable permissions
- Creates a daily cron job that runs at 09:10 AM
- The script performs these actions:
1. Creates a temporary directory `/tmp/warhammer_feed`
2. Runs a custom Docker container (`git.thesatelliteoflove.com/phil/rss-warhammer`) to generate RSS feed
3. Copies the generated `warhammer_rss_feed.xml` to `/opt/stacks/caddy/site/tsol/feeds/`
4. Restarts the Glance dashboard stack to reflect the updated feed
## Files Managed
- `/usr/local/bin/update_warhammer_feed.sh` - RSS feed update script
- Cron job: "Update Warhammer RSS Feed" (daily at 09:10)
## Dependencies
- Requires Docker to be installed and running
- Depends on the following Docker stacks being deployed:
- Custom RSS generator container at `git.thesatelliteoflove.com/phil/rss-warhammer`
- Caddy web server stack at `/opt/stacks/caddy/`
- Glance dashboard stack at `/opt/stacks/glance/`
## Usage
This role is automatically applied as part of the main site.yml playbook with the `cron` tag.
```bash
# Deploy only cron jobs
ansible-playbook site.yml -i hosts.yml --tags cron
```
## Customization
To add additional cron jobs, create new tasks in the main.yml file following the same pattern as the Warhammer feed updater.

roles/cron/files/update_warhammer_feed.sh Normal file

@@ -0,0 +1,15 @@
#!/bin/bash
# Create and navigate to a temporary directory
TMP_DIR="/tmp/warhammer_feed"
mkdir -p "$TMP_DIR"
cd "$TMP_DIR" || exit 1
# Run the Docker command to generate the RSS feed
docker run --rm -v "$TMP_DIR":/app/output git.thesatelliteoflove.com/phil/rss-warhammer
# Copy the generated file to the desired location
cp "$TMP_DIR/warhammer_rss_feed.xml" /opt/stacks/caddy/site/tsol/feeds
# Restart the Docker stack
docker compose -f /opt/stacks/glance/compose.yml restart

roles/cron/handlers/main.yml Normal file

@@ -0,0 +1,6 @@
---
# Handler to restart systemd-journald service
- name: restart rsyslog
systemd:
name: systemd-journald
state: restarted

115 roles/cron/tasks/main.yml Normal file

@@ -0,0 +1,115 @@
---
# Enable cron logging in systemd-journald (already enabled by default)
# We'll rely on journalctl for cron execution logs
# Ensure the script is copied to the target machine
- name: Copy the warhammer feed update script
copy:
src: update_warhammer_feed.sh
dest: /usr/local/bin/update_warhammer_feed.sh
mode: '0755'
owner: root
group: root
# Create the cron job to run the script at 09:10 every day
- name: Create cron job for warhammer feed update
cron:
name: "Update Warhammer RSS Feed"
minute: "10"
hour: "9"
user: root
job: "/usr/local/bin/update_warhammer_feed.sh"
# Create .local/bin directory for phil user
- name: Ensure .local/bin directory exists for phil
file:
path: /home/phil/.local/bin
state: directory
mode: '0755'
owner: phil
group: phil
# Install dockcheck script in phil's .local/bin
- name: Download dockcheck.sh script
get_url:
url: https://raw.githubusercontent.com/mag37/dockcheck/main/dockcheck.sh
dest: /home/phil/.local/bin/dockcheck.sh
mode: '0755'
owner: phil
group: phil
# Create .config directory for phil user
- name: Ensure .config directory exists for phil
file:
path: /home/phil/.config
state: directory
mode: '0755'
owner: phil
group: phil
# Create notify_templates directory alongside dockcheck.sh
- name: Ensure notify_templates directory exists in .local/bin
file:
path: /home/phil/.local/bin/notify_templates
state: directory
mode: '0755'
owner: phil
group: phil
# Download notify_v2.sh script for dockcheck notifications
- name: Download notify_v2.sh script
get_url:
url: https://raw.githubusercontent.com/mag37/dockcheck/main/notify_templates/notify_v2.sh
dest: /home/phil/.local/bin/notify_templates/notify_v2.sh
mode: '0755'
owner: phil
group: phil
# Download notify_gotify.sh script for dockcheck notifications
- name: Download notify_gotify.sh script
get_url:
url: https://raw.githubusercontent.com/mag37/dockcheck/main/notify_templates/notify_gotify.sh
dest: /home/phil/.local/bin/notify_templates/notify_gotify.sh
mode: '0755'
owner: phil
group: phil
# Template dockcheck configuration file
- name: Template dockcheck configuration
template:
src: dockcheck.config.j2
dest: /home/phil/.config/dockcheck.config
mode: '0644'
owner: phil
group: phil
# Create log directory for dockcheck
- name: Create dockcheck log directory
file:
path: /var/log/dockcheck
state: directory
mode: '0755'
owner: phil
group: phil
# Create dockcheck wrapper script to avoid cron escaping issues
- name: Create dockcheck wrapper script
copy:
dest: /home/phil/.local/bin/run_dockcheck.sh
mode: '0755'
owner: phil
group: phil
content: |
#!/bin/bash
cd /home/phil
/home/phil/.local/bin/dockcheck.sh >> /var/log/dockcheck/dockcheck.log 2>&1
echo "$(date "+%Y-%m-%d %H:%M:%S") - Dockcheck completed with exit code $?" >> /var/log/dockcheck/dockcheck.log
# Create cron job for dockcheck as phil user with logging
- name: Create cron job for dockcheck container updates
cron:
name: "Check Docker container updates"
minute: "0"
hour: "8"
user: phil
job: "/home/phil/.local/bin/run_dockcheck.sh"

roles/cron/templates/dockcheck.config.j2 Normal file

@@ -0,0 +1,18 @@
# Dockcheck Configuration
# Uncommenting DontUpdate=true would make this check-only; as written,
# AutoMode=true with OnlyLabel=true auto-updates only labeled containers
# DontUpdate=true
OnlyLabel=true
AutoMode=true
# Enable notifications
Notify=true
# Exclude containers from checking
Exclude="authentik-postgresql-1,dawarich_redis,dawarich_db"
# Notification channels
NOTIFY_CHANNELS="gotify"
# Gotify notification configuration
GOTIFY_DOMAIN="https://{{ subdomains.gotify }}"
GOTIFY_TOKEN="{{ vault_dockcheck.gotify_token }}"
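With `OnlyLabel=true`, dockcheck only considers containers that opt in via a label — per the commit history, `mag37.dockcheck.update`. A hedged compose sketch (service name and image are hypothetical):

```yaml
services:
  exampleservice:
    image: example/exampleservice:latest
    labels:
      mag37.dockcheck.update: true
```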

228 roles/docker/README.md Normal file

@@ -0,0 +1,228 @@
# Docker Role
## Purpose
Deploys and manages a comprehensive self-hosted infrastructure with 27 containerized services organized into logical categories, transforming a server into a personal cloud platform with authentication, media management, productivity tools, and development services.
## Architecture Overview
### Network Configuration
- **External Network**: All services connect to shared Docker network (configurable)
- **Reverse Proxy**: Caddy handles all ingress traffic with automatic HTTPS
- **Service Discovery**: Container-to-container communication using service names
- **Firewall Integration**: UFW-Docker script properly configures firewall rules
### Security Features
- **Centralized SSO**: Authentik provides OIDC authentication for most services
- **Network Isolation**: Services restricted to appropriate network segments
- **Container Hardening**: Non-root users, capability dropping, security options
- **Secret Management**: Ansible vault for sensitive configuration
- **Variable Management**: Centralized variable hierarchy using group_vars structure
## Services Deployed (Organized by Category)
### Infrastructure (`infrastructure/`)
- **Caddy** - Reverse proxy with automatic HTTPS (static IP: 172.20.0.5)
- **Authentik** - Enterprise authentication server (OIDC/SAML SSO)
- **Dockge** - Docker compose stack management UI
### Development (`development/`)
- **Gitea** - Self-hosted Git with CI/CD runners
- **Code Server** - VS Code in the browser
- **ByteStash** - Code snippet management and organization
### Media (`media/`)
- **Audiobookshelf** - Audiobook and podcast server
- **Calibre** - E-book management and conversion
- **Ghost** - Modern blogging platform
- **Pinchflat** - YouTube video archiving
- **Pinry** - Pinterest-like image board
- **Karakeep** - Bookmark management with AI tagging
- **Manyfold** - 3D model file organization
### Productivity (`productivity/`)
- **Paperless-ngx** - Document management with OCR
- **MMDL** - Task and calendar management with CalDAV integration
- **Baikal** - CalDAV/CardDAV server
- **Syncthing** - Decentralized file sync
- **Heyform** - Form builder and surveys
- **Dawarich** - Location tracking
- **Palmr** - File sharing service
- **Obsidian LiveSync** - CouchDB backend for note synchronization
### Communication (`communication/`)
- **GoToSocial** - Lightweight ActivityPub server
- **Postiz** - Social media management
### Monitoring (`monitoring/`)
- **Glance** - Customizable dashboard with monitoring
- **Change Detection** - Website monitoring
- **Apprise API** - Unified notifications
- **Gotify** - Self-hosted push notification service
## Deployment Patterns
### Standardized Service Deployment
Each service follows a consistent pattern:
1. Creates `/opt/stacks/[service-name]` directory structure
2. Generates Docker Compose file from Jinja2 template
3. Deploys using `community.docker.docker_compose_v2`
4. Configures environment variables from vault secrets
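A sketch of a task file following the four steps above (the `exampleservice` name is hypothetical; module names match those the role uses):

```yaml
# Hypothetical roles/docker/tasks/[category]/exampleservice.yml
- name: Create exampleservice stack directory
  ansible.builtin.file:
    path: /opt/stacks/exampleservice
    state: directory

- name: Template exampleservice compose file
  ansible.builtin.template:
    src: exampleservice-compose.yml.j2
    dest: /opt/stacks/exampleservice/compose.yml

- name: Template exampleservice environment file
  ansible.builtin.template:
    src: exampleservice-env.j2
    dest: /opt/stacks/exampleservice/.env
    mode: "0600"

- name: Deploy exampleservice stack
  community.docker.docker_compose_v2:
    project_src: /opt/stacks/exampleservice
    state: present
```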
### Template System
- **Compose Templates**: `.j2` files in `templates/` for dynamic configuration
- **Environment Templates**: Separate `.env.j2` files for services requiring environment variables
- **Variable Substitution**: Uses centralized variable hierarchy from group_vars structure
- **Domain Management**: Centralized domain and subdomain configuration
- **Network Configuration**: Standardized Docker network and IP address management
## Shell Environment Setup
The role also configures the shell environment:
- **Zsh Installation**: Sets zsh as default shell
- **Atuin**: Command history sync and search
- **Bat**: Enhanced `cat` command with syntax highlighting
## File Organization
```
roles/docker/
├── tasks/
│ ├── main.yml # Orchestrates all deployments
│ ├── shell.yml # Shell environment setup
│ ├── infrastructure/
│ │ ├── main.yml # Infrastructure category orchestrator
│ │ ├── caddy.yml # Reverse proxy
│ │ └── authentik.yml # Authentication
│ ├── development/
│ │ ├── main.yml # Development category orchestrator
│ │ ├── gitea.yml # Git hosting
│ │ └── codeserver.yml # VS Code server
│ ├── media/ # Media services (7 services)
│   ├── productivity/        # Productivity services (8 services)
│ ├── communication/ # Communication services (2 services)
│   └── monitoring/          # Monitoring services (4 services)
├── templates/
│ ├── [service]-compose.yml.j2 # Docker Compose templates (all templated)
│ ├── [service]-env.j2 # Environment variable templates
│ └── [service]-*.j2 # Service-specific templates
├── files/
│ ├── Caddyfile # Caddy configuration
│ ├── ufw-docker.sh # Firewall integration script
│ ├── client # Matrix well-known client file
│ └── server # Matrix well-known server file
└── handlers/
└── main.yml # Service restart handlers
```
## Usage
### Deploy All Services
```bash
ansible-playbook site.yml -i hosts.yml --tags docker
```
### Deploy by Service Category
```bash
# Deploy entire service categories
ansible-playbook site.yml -i hosts.yml --tags infrastructure
ansible-playbook site.yml -i hosts.yml --tags development
ansible-playbook site.yml -i hosts.yml --tags media
ansible-playbook site.yml -i hosts.yml --tags productivity
ansible-playbook site.yml -i hosts.yml --tags communication
ansible-playbook site.yml -i hosts.yml --tags monitoring
# Deploy multiple categories
ansible-playbook site.yml -i hosts.yml --tags infrastructure,monitoring
```
### Deploy Individual Services
```bash
# Deploy specific services
ansible-playbook site.yml -i hosts.yml --tags authentik
ansible-playbook site.yml -i hosts.yml --tags gitea,codeserver
ansible-playbook site.yml -i hosts.yml --tags mmdl
```
## Service-Specific Notes
### MMDL (Task Management)
- **URL**: https://tasks.thesatelliteoflove.com
- **Initial Setup**: Visit `/install` endpoint first to run database migrations
- **Authentication**: Integrates with Authentik OIDC provider
- **Database**: Uses MySQL 8.0 with automatic schema migration
- **Features**: CalDAV integration, multiple account support, task management
## Dependencies
### System Requirements
- Docker CE installed and running
- UFW firewall configured
- DNS records pointing to the server
- Valid SSL certificates (handled automatically by Caddy)
### External Services
- **DNS**: Requires subdomains configured for each service
- **Email**: Gitea uses Resend for notifications
- **Storage**: All services persist data to `/opt/stacks/[service]/`
## Configuration
### Variable Structure
The role uses a centralized variable hierarchy in `group_vars/all/`:
- **domains.yml**: Domain and subdomain mappings for all services
- **infrastructure.yml**: Network configuration, Docker settings, and system parameters
- **vault.yml**: Encrypted secrets including API keys, passwords, and OAuth credentials
- **services.yml**: Service-specific configuration and feature flags
### Required Variables (in vault.yml)
- Authentication credentials for various services (vault_*)
- API keys for external integrations
- OAuth client secrets for SSO integration
- Database passwords and connection strings
- SMTP credentials for notifications
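Based on the variable names that appear in the templates (`vault_authentik`, `vault_smtp`, `vault_bytestash`, ...), the encrypted file has roughly the following shape; the exact keys below are illustrative, not a complete inventory:

```yaml
# group_vars/all/vault.yml -- edit with `ansible-vault edit`, never commit plaintext
vault_authentik:
  postgres_password: "changeme"
  secret_key: "changeme"
vault_smtp:
  password: "changeme"
vault_bytestash:
  jwt_secret: "changeme"
  oidc_client_id: "changeme"
  oidc_client_secret: "changeme"
```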
### Network Configuration
Services are expected to be reachable via subdomains of the configured domains:
- `auth.thesatelliteoflove.com` - Authentik
- `git.thesatelliteoflove.com` - Gitea
- `books.thesatelliteoflove.com` - Calibre
- `tasks.thesatelliteoflove.com` - MMDL
- (and many more...)
## Monitoring & Management
### Glance Dashboard Integration
All services include Glance labels for dashboard monitoring:
- Service health status
- Container resource usage
- Parent-child relationships for multi-container services
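The labels follow a simple parent/child pattern visible throughout the compose templates; a minimal sketch for a two-container stack (service names and values here are illustrative):

```yaml
services:
  app:
    labels:
      glance.id: myservice            # lets other containers reference this entry
      glance.name: My Service
      glance.icon: si:docker
      glance.url: https://myservice.example.com/
      glance.description: Example service
  db:
    labels:
      glance.parent: myservice        # groups the DB under the app's dashboard entry
      glance.name: DB
```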
### Operational Features
- Automatic container restart policies
- Health checks for database services
- Centralized logging and monitoring
- Backup-ready data structure in `/opt/stacks/`
## Security Considerations
### Network Security
- UFW-Docker integration for proper firewall rules
- Services isolated to appropriate network segments
- Restricted access for sensitive tools (Stirling PDF)
### Authentication
- Centralized SSO through Authentik for most services
- OAuth integration where supported
- Secure secret management through Ansible vault
### Container Security
- Non-root container execution (UID/GID 1000:1000)
- Security options: `no-new-privileges: true`
- Capability dropping and minimal permissions
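Put together, those hardening options look like this in a compose file (a generic sketch, not a template from this repo):

```yaml
services:
  app:
    image: example/app:latest
    user: "1000:1000"                 # non-root execution
    security_opt:
      - no-new-privileges:true        # block privilege escalation via setuid binaries
    cap_drop:
      - ALL                           # drop all capabilities, re-add only what's needed
    restart: unless-stopped
```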
## Troubleshooting
### Common Issues
- **Database Connection**: Ensure MySQL containers use an authentication plugin the client supports (e.g. `mysql_native_password` when the client cannot handle `caching_sha2_password`)
- **OAuth Discovery**: Check that issuer URLs do not have trailing slashes
- **Migration Failures**: Visit the service's `/install` endpoint to run its database setup
- **Network Issues**: Verify containers are on the same Docker network
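For the OAuth discovery issue, the usual failure mode is that a client appends `/.well-known/openid-configuration` verbatim to the issuer, so a trailing slash yields a double slash in the discovery URL. A quick hypothetical check (the URL is an example, not this deployment's):

```bash
issuer="https://auth.example.com/application/o/myapp"
# OIDC clients append the discovery path directly to the issuer string
discovery="${issuer}/.well-known/openid-configuration"
case "$discovery" in
  *//.well-known*) echo "bad: trailing slash on issuer" ;;
  *)               echo "ok: $discovery" ;;
esac
```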

View File

@@ -2,16 +2,93 @@ auth.thesatelliteoflove.com {
reverse_proxy authentik-server-1:9000
}
paper.thesatelliteoflove.com {
reverse_proxy paperlessngx-webserver-1:8000
}
pin.thesatelliteoflove.com {
reverse_proxy pinry-pinry-1:80
}
cal.thesatelliteoflove.com {
redir /.well-known/caldav /dav.php 302
redir /.well-known/carddav /dav.php 302
reverse_proxy baikal-baikal-1:80
}
books.thesatelliteoflove.com {
reverse_proxy authentik-server-1:9000
}
audio.thesatelliteoflove.com {
reverse_proxy audiobookshelf-audiobookshelf-1:80
}
post.thesatelliteoflove.com {
reverse_proxy postiz:5000
}
loclog.thesatelliteoflove.com {
reverse_proxy dawarich_app:3000
}
watcher.thesatelliteoflove.com {
reverse_proxy changedetection:5000
}
tasks.thesatelliteoflove.com {
reverse_proxy mmdl:3000
}
kanboard.thesatelliteoflove.com {
reverse_proxy kanboard:80
}
grocy.thesatelliteoflove.com {
# API endpoints bypass forward auth for mobile apps
handle /api/* {
reverse_proxy grocy:80
}
# Web interface requires Authentik authentication
forward_auth authentik-server-1:9000 {
uri /outpost.goauthentik.io/auth/caddy
copy_headers {
X-authentik-username
X-authentik-groups
X-authentik-email
X-authentik-name
X-authentik-uid
}
}
reverse_proxy grocy:80
}
phlog.thesatelliteoflove.com {
reverse_proxy ghost-1-ghost-1:2368
}
code.thesatelliteoflove.com {
reverse_proxy authentik-server-1:9000
}
snippets.thesatelliteoflove.com {
reverse_proxy bytestash:5000
}
files.thesatelliteoflove.com {
reverse_proxy palmr-palmr-1:5487
}
git.thesatelliteoflove.com {
reverse_proxy gitea:3000
}
thesatelliteoflove.com {
root * /srv
reverse_proxy /micropub/* micropub_server-micropub-1:5000
reverse_proxy /micropub micropub_server-micropub-1:5000
root * /srv/tsol
file_server
}
@@ -23,6 +100,45 @@ social.thesatelliteoflove.com {
reverse_proxy gotosocial:8080
}
grist.thesatelliteoflove.com {
reverse_proxy grist-grist-1:8484
}
models.thesatelliteoflove.com {
reverse_proxy manyfold-app-1:3214
}
home.thesatelliteoflove.com {
reverse_proxy authentik-server-1:9000
}
gotify.thesatelliteoflove.com {
reverse_proxy gotify:80
}
gotify-assistant.thesatelliteoflove.com {
reverse_proxy igotify-assistant:8080
}
pdg.thesatelliteoflove.com {
root * /srv/pdg
try_files {path} {path}.html {path}/ =404
file_server
encode gzip
handle_errors {
rewrite * /{err.status_code}.html
file_server
}
}
repair.nerder.land {
root * /srv/repair
file_server
}
nerder.land {
root * /srv/nerderland
file_server
}
forms.nerder.land {
reverse_proxy heyform-heyform-1:8000
}

View File

@@ -1,22 +0,0 @@
services:
caddy:
image: caddy:2.8.4
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- ./site:/srv
- caddy_data:/data
- caddy_config:/config
volumes:
caddy_data:
caddy_config:
networks:
default:
external: true
name: lava

View File

@@ -1,26 +0,0 @@
version: "3"
services:
server:
image: gitea/gitea:1.22.1
container_name: gitea
environment:
- USER_UID=1000
- USER_GID=1000
restart: unless-stopped
volumes:
- gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- 222:22
extra_hosts:
- 'auth.thesatelliteoflove.com:172.20.0.2'
volumes:
gitea:
driver: local
networks:
default:
external: true
name: lava

View File

@@ -1,24 +0,0 @@
version: "3"
services:
tasks.md:
image: baldissaramatheus/tasks.md
container_name: tasksmd
environment:
- PUID=1000
- PGID=1000
volumes:
- tasksmd-data:/tasks
- tasksmd-config:/config
restart: unless-stopped
volumes:
tasksmd-data:
driver: local
tasksmd-config:
driver: local
networks:
default:
external: true
name: lava

View File

@@ -0,0 +1,21 @@
# roles/docker/handlers/main.yml
- name: restart glance
community.docker.docker_compose_v2:
project_src: /opt/stacks/glance
files:
- compose.yml
state: restarted
- name: restart caddy
community.docker.docker_compose_v2:
project_src: /opt/stacks/caddy
files:
- compose.yml
state: restarted
- name: restart obsidian-livesync
community.docker.docker_compose_v2:
project_src: /opt/stacks/obsidian-livesync
files:
- docker-compose.yml
state: restarted

View File

@@ -4,6 +4,7 @@
state: directory
loop:
- /opt/stacks/gotosocial
- /opt/stacks/gotosocial/backup
- name: Template out the compose file
ansible.builtin.template:

View File

@@ -0,0 +1,10 @@
---
# Communication services - Social media, messaging, and external communication
- name: Install gotosocial
import_tasks: gotosocial.yml
tags: gotosocial
- name: Install postiz
import_tasks: postiz.yml
tags: postiz

View File

@@ -0,0 +1,19 @@
- name: make postiz directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/postiz
- name: Template out the compose file
ansible.builtin.template:
src: postiz-compose.yml.j2
dest: /opt/stacks/postiz/compose.yml
owner: root
mode: 644
- name: deploy postiz stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/postiz
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make bytestash directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/bytestash
- name: Template out the compose file
ansible.builtin.template:
src: bytestash-compose.yml.j2
dest: /opt/stacks/bytestash/compose.yml
owner: root
mode: 644
- name: deploy bytestash stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/bytestash
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make codeserver directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/codeserver
- name: Template out the compose file
ansible.builtin.template:
src: codeserver-compose.yml.j2
dest: /opt/stacks/codeserver/compose.yml
owner: root
mode: 644
- name: deploy codeserver stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/codeserver
files:
- compose.yml

View File

@@ -4,10 +4,11 @@
state: directory
loop:
- /opt/stacks/gitea
- /opt/stacks/gitea/data
- name: copy gitea compose file
ansible.builtin.copy:
src: gitea-compose.yml
- name: Template out the compose file
ansible.builtin.template:
src: gitea-compose.yml.j2
dest: /opt/stacks/gitea/compose.yml
owner: root
mode: 644

View File

@@ -0,0 +1,15 @@
---
# Development services - Code, collaboration, and development tools
- name: Install gitea
import_tasks: gitea.yml
tags: gitea
- name: Install codeserver
import_tasks: codeserver.yml
tags: codeserver
- name: Install bytestash
import_tasks: bytestash.yml
tags: bytestash

View File

@@ -11,10 +11,11 @@
dest: /opt/stacks/caddy/Caddyfile
owner: root
mode: 644
notify: restart caddy
- name: copy caddy compose file
ansible.builtin.copy:
src: caddy-compose.yml
- name: template caddy compose file
ansible.builtin.template:
src: caddy-compose.yml.j2
dest: /opt/stacks/caddy/compose.yml
owner: root
mode: 644
@@ -22,6 +23,6 @@
- name: deploy caddy stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/caddy
recreate: always
recreate: never
files:
- compose.yml

View File

@@ -0,0 +1,17 @@
---
# Infrastructure services - Core platform components
- name: Install caddy
import_tasks: caddy.yml
tags: caddy
- name: Install authentik
import_tasks: authentik.yml
tags: authentik
- name: Deploy dockge stack
community.docker.docker_compose_v2:
project_src: /opt/dockge
files:
- dockge.yml
tags: dockge

View File

@@ -8,6 +8,7 @@
- python3-pip
- virtualenv
- python3-setuptools
- duf
state: latest
update_cache: true
@@ -48,36 +49,34 @@
- /opt/stacks
- /opt/dockge
- name: copy dockge compose file
ansible.builtin.copy:
src: dockge-compose.yml
- name: template dockge compose file
ansible.builtin.template:
src: dockge-compose.yml.j2
dest: /opt/dockge/dockge.yml
owner: root
mode: 644
- name: deploy dockge stack
community.docker.docker_compose_v2:
project_src: /opt/dockge
files:
- dockge.yml
# Deploy services by category for better organization and dependency management
- name: Deploy infrastructure services
import_tasks: infrastructure/main.yml
tags: infrastructure
- name: Install caddy
import_tasks: caddy.yml
- name: Deploy development services
import_tasks: development/main.yml
tags: development
- name: Install gitea
import_tasks: gitea.yml
- name: Deploy media services
import_tasks: media/main.yml
tags: media
- name: Install hoarder
import_tasks: hoarder.yml
- name: Deploy productivity services
import_tasks: productivity/main.yml
tags: productivity
- name: Install authentik
import_tasks: authentik.yml
- name: Deploy monitoring services
import_tasks: monitoring/main.yml
tags: monitoring
- name: Install gotosocial
import_tasks: gotosocial.yml
- name: Install grist
import_tasks: grist.yml
- name: Install tasksmd
import_tasks: tasksmd.yml
- name: Deploy communication services
import_tasks: communication/main.yml
tags: communication

View File

@@ -0,0 +1,19 @@
- name: make audiobookshelf directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/audiobookshelf
- name: Template out the compose file
ansible.builtin.template:
src: audiobookshelf-compose.yml.j2
dest: /opt/stacks/audiobookshelf/compose.yml
owner: root
mode: 644
- name: deploy audiobookshelf stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/audiobookshelf
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make calibre directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/calibre
- name: Template out the compose file
ansible.builtin.template:
src: calibre-compose.yml.j2
dest: /opt/stacks/calibre/compose.yml
owner: root
mode: 644
- name: deploy calibre stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/calibre
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make ghost-1 directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/ghost-1
- name: Template out the compose file
ansible.builtin.template:
src: ghost-1-compose.yml.j2
dest: /opt/stacks/ghost-1/compose.yml
owner: root
mode: 644
- name: deploy ghost-1 stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/ghost-1
files:
- compose.yml

View File

@@ -5,9 +5,9 @@
loop:
- /opt/stacks/hoarder
- name: copy hoarder compose file
ansible.builtin.copy:
src: hoarder-compose.yml
- name: template hoarder compose file
ansible.builtin.template:
src: hoarder-compose.yml.j2
dest: /opt/stacks/hoarder/compose.yml
owner: root
mode: 644

View File

@@ -0,0 +1,32 @@
---
# Media services - Content creation, management, and consumption
- name: Install audiobookshelf
import_tasks: audiobookshelf.yml
tags: audiobookshelf
- name: Install calibre
import_tasks: calibre.yml
tags: calibre
- name: Install ghost-1
import_tasks: ghost-1.yml
tags: ghost-1
- name: Install pinchflat
import_tasks: pinchflat.yml
tags: pinchflat
- name: Install pinry
import_tasks: pinry.yml
tags: pinry
- name: Install karakeep
import_tasks: hoarder.yml
tags:
- hoarder
- karakeep
- name: Install manyfold
import_tasks: manyfold.yml
tags: manyfold

View File

@@ -0,0 +1,29 @@
- name: make manyfold directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/manyfold
- name: make manyfold data directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
owner: 1000
group: 1000
loop:
- /opt/stacks/manyfold/config
- /opt/stacks/manyfold/models
- name: Template out the compose file
ansible.builtin.template:
src: manyfold-compose.yml.j2
dest: /opt/stacks/manyfold/compose.yml
owner: root
mode: 644
- name: deploy manyfold stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/manyfold
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make pinchflat directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/pinchflat
- name: Template out the compose file
ansible.builtin.template:
src: pinchflat-compose.yml.j2
dest: /opt/stacks/pinchflat/compose.yml
owner: root
mode: 644
- name: deploy pinchflat stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/pinchflat
files:
- compose.yml

View File

@@ -1,19 +1,19 @@
- name: make grist directories
- name: make pinry directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/grist
- /opt/stacks/pinry
- name: Template out the compose file
ansible.builtin.template:
src: grist-compose.yml.j2
dest: /opt/stacks/grist/compose.yml
src: pinry-compose.yml.j2
dest: /opt/stacks/pinry/compose.yml
owner: root
mode: 644
- name: deploy grist stack
- name: deploy pinry stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/grist
project_src: /opt/stacks/pinry
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make appriseapi directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/appriseapi
- name: Template out the compose file
ansible.builtin.template:
src: appriseapi-compose.yml.j2
dest: /opt/stacks/appriseapi/compose.yml
owner: root
mode: 644
- name: deploy appriseapi stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/appriseapi
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make changedetection directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/changedetection
- name: Template out the compose file
ansible.builtin.template:
src: changedetection-compose.yml.j2
dest: /opt/stacks/changedetection/compose.yml
owner: root
mode: 644
- name: deploy changedetection stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/changedetection
files:
- compose.yml

View File

@@ -0,0 +1,28 @@
- name: make glance directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/stacks/glance
- /opt/stacks/glance/config
- name: Template out the compose file
ansible.builtin.template:
src: glance-compose.yml.j2
dest: /opt/stacks/glance/compose.yml
owner: root
mode: '0644'
- name: Template out the config file
ansible.builtin.template:
src: glance.yml.j2
dest: /opt/stacks/glance/config/glance.yml
owner: root
mode: '0644'
notify: restart glance
- name: deploy glances stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/glance
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: Create gotify directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/stacks/gotify
- name: Template out the gotify compose file
ansible.builtin.template:
src: gotify-compose.yml.j2
dest: /opt/stacks/gotify/compose.yml
owner: root
mode: 644
- name: Deploy gotify stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/gotify
files:
- compose.yml

View File

@@ -0,0 +1,18 @@
---
# Monitoring services - System monitoring, alerts, and dashboards
- name: Install glance
import_tasks: glance.yml
tags: glance
- name: Install changedetection
import_tasks: changedetection.yml
tags: changedetection
- name: Install appriseapi
import_tasks: appriseapi.yml
tags: appriseapi
- name: Install gotify
import_tasks: gotify.yml
tags: gotify

View File

@@ -0,0 +1,19 @@
- name: make baikal directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/baikal
- name: Template out the compose file
ansible.builtin.template:
src: baikal-compose.yml.j2
dest: /opt/stacks/baikal/compose.yml
owner: root
mode: 644
- name: deploy baikal stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/baikal
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make dawarich directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/dawarich
- name: Template out the compose file
ansible.builtin.template:
src: dawarich-compose.yml.j2
dest: /opt/stacks/dawarich/compose.yml
owner: root
mode: 644
- name: deploy dawarich stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/dawarich
files:
- compose.yml

View File

@@ -0,0 +1,18 @@
---
- name: Create grocy directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/stacks/grocy
- name: Template grocy compose file
ansible.builtin.template:
src: grocy-compose.yml.j2
dest: /opt/stacks/grocy/compose.yml
- name: Deploy grocy stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/grocy
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make heyform directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/heyform
- name: Template out the compose file
ansible.builtin.template:
src: heyform-compose.yml.j2
dest: /opt/stacks/heyform/compose.yml
owner: root
mode: 644
- name: deploy heyform stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/heyform
files:
- compose.yml

View File

@@ -0,0 +1,18 @@
---
- name: Create kanboard directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/stacks/kanboard
- name: Template kanboard compose file
ansible.builtin.template:
src: kanboard-compose.yml.j2
dest: /opt/stacks/kanboard/compose.yml
- name: Deploy kanboard stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/kanboard
files:
- compose.yml

View File

@@ -0,0 +1,42 @@
---
# Productivity services - Task management, document handling, and personal organization
- name: Install paperlessngx
import_tasks: paperlessngx.yml
tags: paperlessngx
- name: Install baikal
import_tasks: baikal.yml
tags: baikal
- name: Install syncthing
import_tasks: syncthing.yml
tags: syncthing
- name: Install mmdl
import_tasks: mmdl.yml
tags: mmdl
- name: Install heyform
import_tasks: heyform.yml
tags: heyform
- name: Install dawarich
import_tasks: dawarich.yml
tags: dawarich
- name: Install palmr
import_tasks: palmr.yml
tags: palmr
- name: Install obsidian-livesync
import_tasks: obsidian-livesync.yml
tags: obsidian-livesync
- name: Install kanboard
import_tasks: kanboard.yml
tags: kanboard
- name: Install grocy
import_tasks: grocy.yml
tags: grocy

View File

@@ -0,0 +1,25 @@
---
- name: Create mmdl directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/stacks/mmdl
- /opt/stacks/mmdl/data
- /opt/stacks/mmdl/mysql
- name: Template mmdl environment file
ansible.builtin.template:
src: mmdl-env.j2
dest: /opt/stacks/mmdl/.env.local
- name: Template mmdl compose file
ansible.builtin.template:
src: mmdl-compose.yml.j2
dest: /opt/stacks/mmdl/compose.yml
- name: Deploy mmdl stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/mmdl
files:
- compose.yml

View File

@@ -0,0 +1,20 @@
---
- name: make obsidian-livesync directories
ansible.builtin.file:
path: "{{ paths.stacks }}/obsidian-livesync"
state: directory
mode: '0755'
- name: Template out the compose file
ansible.builtin.template:
src: obsidian-livesync-compose.yml.j2
dest: "{{ paths.stacks }}/obsidian-livesync/docker-compose.yml"
mode: '0644'
notify: restart obsidian-livesync
- name: deploy obsidian-livesync stack
community.docker.docker_compose_v2:
project_src: "{{ paths.stacks }}/obsidian-livesync"
state: present
tags:
- obsidian-livesync

View File

@@ -0,0 +1,19 @@
- name: make palmr directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/stacks/palmr
- name: Template out the compose file
ansible.builtin.template:
src: palmr-compose.yml.j2
dest: /opt/stacks/palmr/compose.yml
owner: root
mode: 644
- name: deploy palmr stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/palmr
files:
- compose.yml

View File

@@ -0,0 +1,26 @@
- name: make paperlessngx directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/paperlessngx
- name: Template out the compose file
ansible.builtin.template:
src: paperlessngx-compose.yml.j2
dest: /opt/stacks/paperlessngx/compose.yml
owner: root
mode: 644
- name: Template out the .env file
ansible.builtin.template:
src: paperlessngx.env.j2
dest: /opt/stacks/paperlessngx/docker-compose.env
owner: root
mode: 644
- name: deploy paperlessngx stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/paperlessngx
files:
- compose.yml

View File

@@ -0,0 +1,19 @@
- name: make syncthing directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/syncthing
- name: Template out the compose file
ansible.builtin.template:
src: syncthing-compose.yml.j2
dest: /opt/stacks/syncthing/compose.yml
owner: root
mode: 644
- name: deploy syncthing stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/syncthing
files:
- compose.yml

View File

@@ -25,3 +25,22 @@
become: true
become_user: "{{ ansible_user }}"
- name: setup dotfile sync
ansible.builtin.blockinfile:
path: "/home/{{ ansible_user }}/.config/atuin/config.toml"
block: |
[dotfiles]
enabled = true
marker: ""
- name: Install bat
apt:
name: bat
state: latest
update_cache: true
- name: Create symlink for batcat
ansible.builtin.file:
src: /usr/bin/batcat
dest: "/home/{{ ansible_user }}/.local/bin/bat"
state: link

View File

@@ -1,19 +0,0 @@
- name: make tasksmd directories
ansible.builtin.file:
path: "{{ item}}"
state: directory
loop:
- /opt/stacks/tasksmd
- name: copy tasksmd compose file
ansible.builtin.copy:
src: tasksmd-compose.yml
dest: /opt/stacks/tasksmd/compose.yml
owner: root
mode: 644
- name: deploy tasksmd stack
community.docker.docker_compose_v2:
project_src: /opt/stacks/tasksmd
files:
- compose.yml

View File

@@ -0,0 +1,30 @@
services:
apprise:
container_name: apprise
ports:
- {{ network.docker_host_ip }}:8000:8000
environment:
- APPRISE_STATEFUL_MODE=simple
- APPRISE_WORKER_COUNT=1
volumes:
- config:/config
- plugin:/plugin
- attach:/attach
image: caronc/apprise:latest
extra_hosts:
- "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
labels:
glance.name: Apprise
glance.icon: si:imessage
glance.url: https://{{ subdomains.appriseapi }}/
glance.description: Apprise api server
glance.id: apprise
mag37.dockcheck.update: true
volumes:
config:
attach:
plugin:
networks:
default:
external: true
name: {{ docker.network_name }}

View File

@@ -0,0 +1,32 @@
services:
audiobookshelf:
image: ghcr.io/advplyr/audiobookshelf:latest
volumes:
- audiobooks:/audiobooks
- podcasts:/podcasts
- config:/config
- metadata:/metadata
environment:
- TZ=America/Denver
- DISABLE_SSRF_REQUEST_FILTER=1
extra_hosts:
- '{{ subdomains.auth }}:172.20.0.5'
labels:
glance.name: Audiobookshelf
glance.icon: si:audiobookshelf
glance.url: https://{{ subdomains.audio }}/
glance.description: Audio book server
mag37.dockcheck.update: true
volumes:
audiobooks:
driver: local
podcasts:
driver: local
config:
driver: local
metadata:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

View File

@@ -18,6 +18,9 @@ services:
POSTGRES_DB: ${PG_DB:-authentik}
env_file:
- .env
labels:
glance.parent: authentik
glance.name: DB
trout:
image: docker.io/library/redis:alpine
command: --save 60 1 --loglevel warning
@@ -30,8 +33,11 @@ services:
timeout: 3s
volumes:
- trout:/data
labels:
glance.parent: authentik
glance.name: Redis
server:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.6.3}
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2025.6.4}
restart: unless-stopped
command: server
environment:
@@ -51,8 +57,14 @@ services:
depends_on:
- postgresql
- trout
labels:
glance.name: Authentik
glance.icon: si:authentik
glance.url: https://auth.thesatelliteoflove.com/
glance.description: Authentication server
glance.id: authentik
worker:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.6.3}
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2025.6.4}
restart: unless-stopped
command: worker
environment:
@@ -78,6 +90,9 @@ services:
depends_on:
- postgresql
- trout
labels:
glance.parent: authentik
glance.name: Worker
volumes:
database:

View File

@@ -1,2 +1,15 @@
PG_PASS={{ authentik_pg_pass }}
AUTHENTIK_SECRET_KEY={{ authentik_secret_key }}
PG_PASS={{ vault_authentik.postgres_password }}
AUTHENTIK_SECRET_KEY={{ vault_authentik.secret_key }}
# SMTP Host Emails are sent to
AUTHENTIK_EMAIL__HOST={{ smtp.host }}
AUTHENTIK_EMAIL__PORT=25
# Optionally authenticate (don't add quotation marks to your password)
AUTHENTIK_EMAIL__USERNAME={{ smtp.username }}
AUTHENTIK_EMAIL__PASSWORD={{ vault_smtp.password }}
# Use StartTLS
AUTHENTIK_EMAIL__USE_TLS=true
# Use SSL
AUTHENTIK_EMAIL__USE_SSL=false
AUTHENTIK_EMAIL__TIMEOUT=10
# Email address authentik will send from, should have a correct @domain
AUTHENTIK_EMAIL__FROM=auth@{{ email_domains.updates }}

View File

@@ -0,0 +1,21 @@
services:
baikal:
image: ckulka/baikal:0.10.1-nginx
restart: unless-stopped
volumes:
- config:/var/www/baikal/config
- data:/var/www/baikal/Specific
labels:
glance.name: Baikal
glance.icon: si:protoncalendar
glance.url: https://{{ subdomains.cal }}/
glance.description: CalDav server
mag37.dockcheck.update: true
volumes:
config:
data:
networks:
default:
external: true
name: {{ docker.network_name }}

View File

@@ -0,0 +1,37 @@
services:
bytestash:
image: ghcr.io/jordan-dalby/bytestash:latest
container_name: bytestash
restart: unless-stopped
volumes:
- bytestash_data:/data/snippets
environment:
JWT_SECRET: "{{ vault_bytestash.jwt_secret }}"
TOKEN_EXPIRY: "24h"
ALLOW_NEW_ACCOUNTS: "true"
DEBUG: "false"
DISABLE_ACCOUNTS: "false"
DISABLE_INTERNAL_ACCOUNTS: "false"
OIDC_ENABLED: "true"
OIDC_DISPLAY_NAME: "Login with Authentik"
OIDC_ISSUER_URL: "https://{{ subdomains.auth }}/application/o/bytestash/"
OIDC_CLIENT_ID: "{{ vault_bytestash.oidc_client_id }}"
OIDC_CLIENT_SECRET: "{{ vault_bytestash.oidc_client_secret }}"
OIDC_SCOPES: "openid profile email"
extra_hosts:
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
labels:
glance.name: ByteStash
glance.icon: si:code
glance.url: https://{{ subdomains.bytestash }}/
glance.description: Code snippet manager
glance.id: bytestash
volumes:
bytestash_data:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

View File

@@ -0,0 +1,32 @@
services:
caddy:
image: caddy:2-alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
- "8448:8448"
- "8448:8448/udp"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- ./site:/srv
- caddy_data:/data
- caddy_config:/config
labels:
glance.name: Caddy
glance.icon: si:caddy
glance.url: https://{{ primary_domain }}/
glance.description: Reverse proxy
mag37.dockcheck.update: true
networks:
default:
ipv4_address: {{ docker.hairpin_ip }}
volumes:
caddy_data:
caddy_config:
networks:
default:
external: true
name: {{ docker.network_name }}

View File

@@ -0,0 +1,30 @@
---
services:
calibre-web:
image: lscr.io/linuxserver/calibre-web:latest
container_name: calibre-web
environment:
- PUID=1000
- PGID=1000
- TZ=Etc/UTC
- DOCKER_MODS=linuxserver/mods:universal-calibre #optional
- OAUTHLIB_RELAX_TOKEN_SCOPE=1 #optional
volumes:
- config:/config
- books:/books
restart: unless-stopped
labels:
glance.name: Calibre
glance.icon: si:calibreweb
glance.url: https://{{ subdomains.books }}/
glance.description: Book server
mag37.dockcheck.update: true
volumes:
config:
driver: local
books:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

View File

@@ -0,0 +1,137 @@
version: '3.2'
services:
changedetection:
image: ghcr.io/dgtlmoon/changedetection.io
container_name: changedetection
hostname: changedetection
labels:
glance.name: Changedetection
glance.icon: si:watchtower
glance.url: https://{{ subdomains.watcher }}/
glance.description: Changedetection
glance.id: changedetection
mag37.dockcheck.update: true
volumes:
- changedetection-data:/datastore
# Configurable proxy list support, see https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#proxy-list-support
# - ./proxies.json:/datastore/proxies.json
environment:
# Default listening port, can also be changed with the -p option
# - PORT=5000
#
# Log levels are in descending order. (TRACE is the most detailed one)
# Log output levels: TRACE, DEBUG(default), INFO, SUCCESS, WARNING, ERROR, CRITICAL
# - LOGGER_LEVEL=TRACE
#
# Alternative WebDriver/selenium URL, do not use "'s or 's!
# - WEBDRIVER_URL=http://browser-chrome:4444/wd/hub
#
# WebDriver proxy settings webdriver_proxyType, webdriver_ftpProxy, webdriver_noProxy,
# webdriver_proxyAutoconfigUrl, webdriver_autodetect,
# webdriver_socksProxy, webdriver_socksUsername, webdriver_socksVersion, webdriver_socksPassword
#
# https://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.proxy
#
# Alternative target "Chrome" Playwright URL, do not use "'s or 's!
# "Playwright" is a driver/librarythat allows changedetection to talk to a Chrome or similar browser.
- PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000
#
# Playwright proxy settings playwright_proxy_server, playwright_proxy_bypass, playwright_proxy_username, playwright_proxy_password
#
# https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch-option-proxy
#
# Plain requests - proxy support example.
# - HTTP_PROXY=socks5h://10.10.1.10:1080
# - HTTPS_PROXY=socks5h://10.10.1.10:1080
#
# An exclude list (useful for notification URLs above) can be specified by with
# - NO_PROXY="localhost,192.168.0.0/24"
#
# Base URL of your changedetection.io install (Added to the notification alert)
- BASE_URL=https://{{ subdomains.watcher }}
# Respect proxy_pass type settings, `proxy_set_header Host "localhost";` and `proxy_set_header X-Forwarded-Prefix /app;`
# More here https://github.com/dgtlmoon/changedetection.io/wiki/Running-changedetection.io-behind-a-reverse-proxy-sub-directory
# - USE_X_SETTINGS=1
#
# Hides the `Referer` header so that monitored websites can't see the changedetection.io hostname.
# - HIDE_REFERER=true
#
# Default number of parallel/concurrent fetchers
# - FETCH_WORKERS=10
#
# Absolute minimum seconds to recheck, overrides any watch minimum, change to 0 to disable
# - MINIMUM_SECONDS_RECHECK_TIME=3
#
# If you want to watch local files file:///path/to/file.txt (careful! security implications!)
# - ALLOW_FILE_URI=False
#
# For complete privacy if you don't want to use the 'check version' / telemetry service
# - DISABLE_VERSION_CHECK=true
#
# A valid timezone name to run as (for scheduling watch checking) see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
- TZ=America/Denver
# Comment out ports: when using behind a reverse proxy , enable networks: etc.
# ports:
# - 5000:5000
restart: unless-stopped
extra_hosts:
- "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
# Used for fetching pages via WebDriver+Chrome where you need JavaScript support.
# Now working on arm64 (needs testing on rPi - tested on Oracle ARM instance)
# replace image with seleniarm/standalone-chromium:4.0.0-20211213
# If WEBDRIVER or PLAYWRIGHT are enabled, the changedetection container depends on that service
# and must wait for it before starting (substitute "browser-chrome" with "playwright-chrome" if the latter is used)
depends_on:
sockpuppetbrowser:
condition: service_started
# Sockpuppetbrowser is essentially Chrome wrapped in an API that allows fast fetching of web pages.
# RECOMMENDED FOR FETCHING PAGES WITH CHROME
sockpuppetbrowser:
hostname: sockpuppetbrowser
labels:
glance.parent: changedetection
glance.name: Browser
mag37.dockcheck.update: true
image: dgtlmoon/sockpuppetbrowser:latest
cap_add:
- SYS_ADMIN
## SYS_ADMIN might be too much, but it can be needed on your platform https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md#running-puppeteer-on-gitlabci
restart: unless-stopped
environment:
- SCREEN_WIDTH=1920
- SCREEN_HEIGHT=1024
- SCREEN_DEPTH=16
- MAX_CONCURRENT_CHROME_PROCESSES=10
extra_hosts:
- "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
# Used for fetching pages via Selenium/WebDriver+Chrome where you need JavaScript support.
# Note: works but is deprecated; it does not fetch full-page screenshots (so the Visual Selector doesn't work)
# and does not report status codes (200, 404, 403) and other issues
# browser-chrome:
# hostname: browser-chrome
# image: selenium/standalone-chrome:4
# environment:
# - VNC_NO_PASSWORD=1
# - SCREEN_WIDTH=1920
# - SCREEN_HEIGHT=1080
# - SCREEN_DEPTH=24
# volumes:
# # Workaround to avoid the browser crashing inside a docker container
# # See https://github.com/SeleniumHQ/docker-selenium#quick-start
# - /dev/shm:/dev/shm
# restart: unless-stopped
volumes:
changedetection-data:
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,23 @@
services:
codeserver:
stdin_open: true
tty: true
labels:
glance.name: Code Server
glance.icon: si:vscodium
glance.url: https://{{ subdomains.code }}/
glance.description: Code Server
mag37.dockcheck.update: true
container_name: codeserver
volumes:
- home:/home
environment:
- DOCKER_USER=$USER
image: codercom/code-server:latest
volumes:
home:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,158 @@
services:
dawarich_redis:
image: redis:7.4-alpine
container_name: dawarich_redis
labels:
glance.parent: dawarich
glance.name: Redis
volumes:
- dawarich_redis_data:/data
restart: always
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
retries: 5
start_period: 30s
timeout: 10s
dawarich_db:
image: postgis/postgis:17-3.5-alpine
shm_size: 1G
labels:
glance.parent: dawarich
glance.name: DB
container_name: dawarich_db
volumes:
- dawarich_db_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: {{ vault_dawarich.postgres_password }}
POSTGRES_DB: dawarich_production
restart: always
healthcheck:
test: [ "CMD", "pg_isready", "-U", "postgres" ]
interval: 10s
retries: 5
start_period: 30s
timeout: 10s
dawarich_app:
image: freikin/dawarich:latest
container_name: dawarich_app
labels:
glance.name: Dawarich
glance.icon: si:openstreetmap
glance.url: https://{{ subdomains.loclog }}/
glance.description: Dawarich
glance.id: dawarich
volumes:
- dawarich_public:/var/app/public
- dawarich_watched:/var/app/tmp/imports/watched
- dawarich_storage:/var/app/storage
stdin_open: true
tty: true
entrypoint: web-entrypoint.sh
command: ['bin/rails', 'server', '-p', '3000', '-b', '::']
restart: on-failure
environment:
RAILS_ENV: production
DATABASE_HOST: dawarich_db
DATABASE_PORT: 5432
DATABASE_USERNAME: postgres
DATABASE_PASSWORD: {{ vault_dawarich.postgres_password }}
DATABASE_NAME: dawarich_production
REDIS_URL: redis://dawarich_redis:6379
MIN_MINUTES_SPENT_IN_CITY: 60
APPLICATION_HOSTS: {{ subdomains.loclog }},localhost,::1,127.0.0.1
TIME_ZONE: America/Denver
APPLICATION_PROTOCOL: http
DISTANCE_UNIT: mi
PROMETHEUS_EXPORTER_ENABLED: false
PROMETHEUS_EXPORTER_HOST: 0.0.0.0
PROMETHEUS_EXPORTER_PORT: 9394
SECRET_KEY_BASE: {{ vault_dawarich.secret_key_base }}
RAILS_LOG_TO_STDOUT: "true"
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "5"
healthcheck:
test: [ "CMD-SHELL", "wget -qO - http://127.0.0.1:3000/api/v1/health | grep -q '\"status\"\\s*:\\s*\"ok\"'" ]
interval: 10s
retries: 30
start_period: 30s
timeout: 10s
depends_on:
dawarich_db:
condition: service_healthy
restart: true
dawarich_redis:
condition: service_healthy
restart: true
deploy:
resources:
limits:
cpus: '0.50'
memory: '2G'
dawarich_sidekiq:
image: freikin/dawarich:latest
container_name: dawarich_sidekiq
labels:
glance.parent: dawarich
glance.name: Sidekiq
volumes:
- dawarich_public:/var/app/public
- dawarich_watched:/var/app/tmp/imports/watched
- dawarich_storage:/var/app/storage
stdin_open: true
tty: true
entrypoint: sidekiq-entrypoint.sh
command: ['sidekiq']
restart: on-failure
environment:
RAILS_ENV: production
DATABASE_HOST: dawarich_db
DATABASE_PORT: 5432
DATABASE_USERNAME: postgres
DATABASE_PASSWORD: {{ vault_dawarich.postgres_password }}
DATABASE_NAME: dawarich_production
REDIS_URL: redis://dawarich_redis:6379
MIN_MINUTES_SPENT_IN_CITY: 60
APPLICATION_HOSTS: {{ subdomains.loclog }},localhost,::1,127.0.0.1
TIME_ZONE: America/Denver
APPLICATION_PROTOCOL: http
DISTANCE_UNIT: mi
PROMETHEUS_EXPORTER_ENABLED: false
SECRET_KEY_BASE: {{ vault_dawarich.secret_key_base }}
RAILS_LOG_TO_STDOUT: "true"
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "5"
healthcheck:
test: ["CMD-SHELL", "ps aux | grep '[s]idekiq' || exit 1"]
interval: 10s
retries: 30
start_period: 30s
timeout: 10s
depends_on:
dawarich_app:
condition: service_healthy
restart: true
dawarich_db:
condition: service_healthy
restart: true
dawarich_redis:
condition: service_healthy
restart: true
volumes:
dawarich_db_data:
dawarich_redis_data:
dawarich_public:
dawarich_watched:
dawarich_storage:
networks:
default:
external: true
name: {{ docker.network_name }}
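The dawarich_app healthcheck above pipes the `/api/v1/health` response through `grep` with a whitespace-tolerant pattern. A minimal Python sketch of the same match (the sample JSON bodies are assumptions, not captured output):

```python
import re

# Whitespace-tolerant pattern, mirroring the grep expression in the
# dawarich_app healthcheck: matches "status":"ok" with optional spaces
# around the colon.
HEALTH_OK = re.compile(r'"status"\s*:\s*"ok"')

def is_healthy(body: str) -> bool:
    """Return True if a health endpoint body reports an ok status."""
    return HEALTH_OK.search(body) is not None
```

This accepts both compact and pretty-printed JSON, which is why the healthcheck uses `\s*` instead of matching a fixed string.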

@@ -2,6 +2,11 @@ services:
dockge:
image: louislam/dockge:1
restart: unless-stopped
labels:
glance.name: Dockge
glance.icon: si:docker
glance.url: http://netcup.porgy-porgy.ts.net:5001
glance.description: Docker management
ports:
# Host Port : Container Port
- 5001:5001

@@ -0,0 +1,27 @@
services:
ghost:
image: ghost:5-alpine
restart: unless-stopped
environment:
- database__client=sqlite3
- database__connection__filename=/var/lib/ghost/content/data/ghost.db
- database__useNullAsDefault=true
- url=https://{{ subdomains.phlog }}
volumes:
- ghost:/var/lib/ghost/content
extra_hosts:
- '{{ subdomains.phlog }}:172.20.0.5'
labels:
glance.name: Ghost
glance.icon: si:ghost
glance.url: https://{{ subdomains.phlog }}/
glance.description: Photo Blog
mag37.dockcheck.update: true
volumes:
ghost:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,62 @@
version: "3"
services:
server:
image: gitea/gitea:1
container_name: gitea
environment:
- USER_UID=1000
- USER_GID=1000
- GITEA__mailer__ENABLED=true
- GITEA__mailer__FROM=git@{{ email_domains.updates }}
- GITEA__mailer__PROTOCOL=smtps
- GITEA__mailer__SMTP_ADDR={{ smtp.host }}
- GITEA__mailer__SMTP_PORT=465
- GITEA__mailer__USER={{ smtp.username }}
- GITEA__mailer__PASSWD={{ vault_smtp.password }}
restart: unless-stopped
labels:
glance.name: Gitea
glance.icon: si:gitea
glance.url: https://{{ subdomains.git }}/
glance.description: Code repo
glance.id: gitea
mag37.dockcheck.update: true
volumes:
- gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- 222:22
extra_hosts:
- '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
- '{{ subdomains.git }}:{{ docker.hairpin_ip }}'
runner:
image: gitea/act_runner:nightly
restart: unless-stopped
depends_on:
- server
environment:
- CONFIG_FILE=/config.yaml
- GITEA_INSTANCE_URL=http://gitea:3000
- GITEA_RUNNER_REGISTRATION_TOKEN={{ vault_infrastructure.gitea_runner_key }}
- GITEA_RUNNER_NAME=runner_1
- GITEA_RUNNER_LABELS=docker
extra_hosts:
- '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
- '{{ subdomains.git }}:{{ docker.hairpin_ip }}'
labels:
glance.parent: gitea
glance.name: Worker
mag37.dockcheck.update: true
volumes:
- ./runner-config.yaml:/config.yaml
- ./data:/data
- /var/run/docker.sock:/var/run/docker.sock
- {{ paths.stacks }}/caddy/site:/sites
volumes:
gitea:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,23 @@
services:
glance:
image: glanceapp/glance:latest
volumes:
- ./config:/app/config
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock
restart: unless-stopped
extra_hosts:
- '{{ primary_domain }}:172.20.0.5'
- '{{ subdomains.watcher }}:172.20.0.5'
labels:
glance.name: Glance
glance.icon: si:homepage
glance.url: https://{{ subdomains.home }}/
glance.description: Homepage app
glance.id: glance
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,154 @@
pages:
- name: Home
head-widgets:
- type: markets
hide-header: true
markets:
- symbol: SPY
name: S&P 500
- symbol: VTSAX
name: Vanguard Total Stock Market
- symbol: BAI
name: Blackrock AI
- symbol: NLR
name: VanEck Uranium+Nuclear Energy
- symbol: BITO
name: Bitcoin ETF
columns:
- size: full
widgets:
- type: search
search-engine: kagi
new-tab: true
- size: small
widgets:
- type: weather
location: Nederland, Colorado, United States
units: imperial
- type: custom-api
title: Air Quality
cache: 10m
url: https://api.waqi.info/feed/geo:39.9676367;-105.4037992/?token={{ vault_glance.air_quality_key }}
template: |
{% raw %}{{ $aqi := printf "%03s" (.JSON.String "data.aqi") }}
{{ $aqiraw := .JSON.String "data.aqi" }}
{{ $updated := .JSON.String "data.time.iso" }}
{{ $humidity := .JSON.String "data.iaqi.h.v" }}
{{ $ozone := .JSON.String "data.iaqi.o3.v" }}
{{ $pm25 := .JSON.String "data.iaqi.pm25.v" }}
{{ $pressure := .JSON.String "data.iaqi.p.v" }}
<div class="flex justify-between">
<div class="size-h5">
{{ if le $aqi "050" }}
<div class="color-positive">Good air quality</div>
{{ else if le $aqi "100" }}
<div class="color-primary">Moderate air quality</div>
{{ else }}
<div class="color-negative">Bad air quality</div>
{{ end }}
</div>
</div>
<div class="color-highlight size-h2">AQI: {{ $aqiraw }}</div>
<div style="border-bottom: 1px solid; margin-block: 10px;"></div>
<div class="margin-block-2">
<div style="display: grid; grid-template-columns: 1fr 1fr; gap: 10px;">
<div>
<div class="size-h3 color-highlight">{{ $humidity }}%</div>
<div class="size-h6">HUMIDITY</div>
</div>
<div>
<div class="size-h3 color-highlight">{{ $ozone }} μg/m³</div>
<div class="size-h6">OZONE</div>
</div>
<div>
<div class="size-h3 color-highlight">{{ $pm25 }} μg/m³</div>
<div class="size-h6">PM2.5</div>
</div>
<div>
<div class="size-h3 color-highlight">{{ $pressure }} hPa</div>
<div class="size-h6">PRESSURE</div>
</div>
</div>
<div class="size-h6" style="margin-top: 10px;">Last Updated at {{ slice $updated 11 16 }}</div>
</div>{% endraw %}
- name: Mini Painting
columns:
- size: small
widgets:
- type: twitch-channels
channels:
- warhammer
- marcofrisoninjm
- miniac
- next_level_painting
- monument_hobbies
- visit_the_chronicler
- size: full
widgets:
- type: rss
limit: 10
collapse-after: 3
cache: 3h
feeds:
- url: https://thesatelliteoflove.com/feeds/warhammer_rss_feed.xml
title: Warhammer Community
- url: https://yenneferofexeter.wordpress.com/feed/
title: Yennefer of Exeter
- name: Self Hosting
columns:
- size: small
widgets:
- type: releases
show-source-icon: true
repositories:
- advplyr/audiobookshelf
- go-gitea/gitea
- louislam/dockge
- glanceapp/glance
- hoarder-app/hoarder
- goauthentik/authentik
- superseriousbusiness/gotosocial
- stonith404/pingvin-share
- caddyserver/caddy
- gitroomhq/postiz-app
- sabre-io/Baikal
- janeczku/calibre-web
- heyform/heyform
- paperless-ngx/paperless-ngx
- linuxserver/docker-calibre-web
- coder/code-server
- dgtlmoon/changedetection.io
- Freika/dawarich
- manyfold3d/manyfold
- caronc/apprise-api
- kieraneglin/pinchflat
- pinry/pinry
- syncthing/syncthing
- size: full
widgets:
- type: rss
limit: 10
collapse-after: 3
cache: 3h
feeds:
- url: https://selfh.st/rss/
- url: https://www.jeffgeerling.com/blog.xml
- url: https://samwho.dev/rss.xml
- url: https://awesomekling.github.io/feed.xml
- url: https://ishadeed.com/feed.xml
title: Ahmad Shadeed
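The Air Quality widget above pads the AQI with `printf "%03s"` before comparing it with `le`, because the template compares strings lexicographically; unpadded, "75" would sort after "100". A small Python sketch of the same trick (illustrative only, not part of the widget):

```python
def pad3(aqi: str) -> str:
    # Left-pad to width 3 with zeros, mirroring the template's printf "%03s".
    return aqi.zfill(3)

# Unpadded string comparison is lexicographic and gives the wrong answer:
wrong = "75" <= "100"          # False: '7' sorts after '1'
# Zero-padding restores numeric ordering for values up to 999:
right = pad3("75") <= pad3("100")  # "075" <= "100" is True
```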

@@ -0,0 +1,47 @@
services:
gotify:
image: gotify/server:latest
container_name: gotify
restart: unless-stopped
volumes:
- gotify_data:/app/data
environment:
- GOTIFY_DEFAULTUSER_PASS={{ vault_gotify.admin_password }}
- TZ=America/Denver
labels:
glance.name: Gotify
glance.icon: si:gotify
glance.url: "https://{{ subdomains.gotify }}/"
glance.description: Push notification server
extra_hosts:
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
- "{{ subdomains.gotify_assistant }}:{{ docker.hairpin_ip }}"
igotify-assistant:
image: ghcr.io/androidseb25/igotify-notification-assist:latest
restart: unless-stopped
container_name: igotify-assistant
volumes:
- igotify_data:/app/data
environment:
TZ: America/Denver
depends_on:
- gotify
labels:
glance.name: iGotify Assistant
glance.icon: si:apple
glance.url: "https://{{ subdomains.gotify_assistant }}/"
glance.description: iOS notification assistant
mag37.dockcheck.update: true
extra_hosts:
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
- "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
volumes:
gotify_data:
igotify_data:
networks:
default:
external: true
name: "{{ docker.network_name }}"

@@ -1,30 +1,62 @@
version: "3.3"
services:
gotosocial:
image: superseriousbusiness/gotosocial:0.16.0
image: docker.io/superseriousbusiness/gotosocial:0.19.1
container_name: gotosocial
user: 1000:1000
extra_hosts:
- 'auth.thesatelliteoflove.com:172.20.0.2'
- '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
environment:
GTS_HOST: social.thesatelliteoflove.com
GTS_HOST: {{ subdomains.social }}
GTS_DB_TYPE: sqlite
GTS_DB_ADDRESS: /gotosocial/storage/sqlite.db
GTS_WAZERO_COMPILATION_CACHE: /gotosocial/.cache
GTS_LETSENCRYPT_ENABLED: "false"
GTS_LETSENCRYPT_EMAIL_ADDRESS: ""
GTS_TRUSTED_PROXIES: "172.20.0.2"
GTS_ACCOUNT_DOMAIN: thesatelliteoflove.com
GTS_TRUSTED_PROXIES: "{{ docker.hairpin_ip }}"
GTS_ACCOUNT_DOMAIN: {{ primary_domain }}
GTS_OIDC_ENABLED: "true"
GTS_OIDC_IDP_NAME: "Authentik"
GTS_OIDC_ISSUER: https://auth.thesatelliteoflove.com/application/o/gotosocial/
GTS_OIDC_CLIENT_ID: {{ gts_oidc_client_id }}
GTS_OIDC_CLIENT_SECRET: {{ gts_oidc_client_secret }}
GTS_OIDC_ISSUER: https://{{ subdomains.auth }}/application/o/gotosocial/
GTS_OIDC_CLIENT_ID: {{ vault_gotosocial.oidc.client_id }}
GTS_OIDC_CLIENT_SECRET: {{ vault_gotosocial.oidc.client_secret }}
GTS_OIDC_LINK_EXISTING: "true"
GTS_HTTP_CLIENT: "20s"
GTS_SMTP_HOST: "{{ smtp.host }}"
GTS_SMTP_PORT: "587"
GTS_SMTP_USERNAME: "{{ smtp.username }}"
GTS_SMTP_PASSWORD: {{ vault_smtp.password }}
GTS_SMTP_FROM: "social@{{ email_domains.updates }}"
TZ: UTC
volumes:
- gotosocial:/gotosocial/storage
restart: "always"
labels:
docker-volume-backup.stop-during-backup: true
glance.name: GoToSocial
glance.icon: si:mastodon
glance.url: https://{{ subdomains.social }}/
glance.description: Fediverse server
glance.id: gotosocial
backup:
image: offen/docker-volume-backup:v2
restart: always
labels:
glance.parent: gotosocial
glance.name: Backup
mag37.dockcheck.update: true
environment:
BACKUP_FILENAME: gts-backup-%Y-%m-%dT%H-%M-%S.tar.gz
BACKUP_CRON_EXPRESSION: "0 9 * * *"
BACKUP_PRUNING_PREFIX: gts-
BACKUP_RETENTION_DAYS: 7
AWS_S3_BUCKET_NAME: tsolbackups
AWS_ENDPOINT: s3.us-west-004.backblazeb2.com
AWS_ACCESS_KEY_ID: {{ vault_backup.access_key_id }}
AWS_SECRET_ACCESS_KEY: {{ vault_backup.secret_access_key }}
volumes:
- gotosocial:/backup/gts-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
gotosocial:
@@ -33,4 +65,4 @@ volumes:
networks:
default:
external: true
name: lava
name: {{ docker.network_name }}
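The backup service's `BACKUP_FILENAME` uses strftime-style tokens, and `BACKUP_PRUNING_PREFIX` must match the fixed part of that name so pruning only touches this service's archives. A sketch of how the pattern expands, assuming strftime semantics (the container does this expansion itself; this only illustrates the tokens):

```python
from datetime import datetime

PATTERN = "gts-backup-%Y-%m-%dT%H-%M-%S.tar.gz"

def backup_name(now: datetime) -> str:
    # Expand the date/time tokens into the archive filename.
    return now.strftime(PATTERN)

name = backup_name(datetime(2025, 9, 8, 9, 0, 0))
# -> "gts-backup-2025-09-08T09-00-00.tar.gz", which starts with the
# pruning prefix "gts-" configured above.
```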

@@ -1,23 +0,0 @@
version: "3.3"
services:
grist:
volumes:
- grist:/persist
extra_hosts:
- 'auth.thesatelliteoflove.com:172.20.0.2'
environment:
- GRIST_SESSION_SECRET={{ grist_session_secret }}
- APP_HOME_URL=https://grist.thesatelliteoflove.com
- GRIST_OIDC_IDP_ISSUER=https://auth.thesatelliteoflove.com/application/o/grist/.well-known/openid-configuration
- GRIST_OIDC_IDP_CLIENT_ID={{ grist_oidc_client_id }}
- GRIST_OIDC_IDP_CLIENT_SECRET={{ grist_oidc_client_secret }}
image: gristlabs/grist
volumes:
grist:
driver: local
networks:
default:
external: true
name: lava

@@ -0,0 +1,30 @@
services:
grocy:
image: lscr.io/linuxserver/grocy:latest
container_name: grocy
restart: unless-stopped
environment:
- PUID=1000
- PGID=1000
- TZ=America/Denver
volumes:
- ./config:/config
extra_hosts:
- "host.docker.internal:host-gateway"
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
labels:
glance.name: Grocy
glance.icon: si:grocyapp
glance.url: https://{{ subdomains.grocy }}/
glance.description: Kitchen ERP and inventory management
glance.id: grocy
mag37.dockcheck.update: true
volumes:
grocy_config:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,63 @@
services:
heyform:
image: heyform/community-edition:latest
restart: always
volumes:
# Persist uploaded images
- assets:/app/static/upload
depends_on:
- mongo
- keydb
labels:
glance.name: Heyform
glance.icon: si:googleforms
glance.url: https://{{ subdomains.heyform }}/
glance.description: Forms server
glance.id: heyform
environment:
- APP_HOMEPAGE_URL=https://{{ subdomains.heyform }}
- SESSION_KEY={{ vault_heyform.session_key }}
- FORM_ENCRYPTION_KEY={{ vault_heyform.encryption_key }}
- MONGO_URI=mongodb://mongo:27017/heyform
- REDIS_HOST=keydb
- REDIS_PORT=6379
- SMTP_FROM=nerderland@{{ email_domains.updates }}
- SMTP_HOST={{ smtp.host }}
- SMTP_PORT=465
- SMTP_USER={{ smtp.username }}
- SMTP_PASSWORD={{ vault_smtp.password }}
- SMTP_SECURE=true
mongo:
image: percona/percona-server-mongodb:4.4
restart: always
labels:
glance.parent: heyform
glance.name: MongoDB
volumes:
# Persist MongoDB data
- database:/data/db
keydb:
image: eqalpha/keydb:latest
restart: always
command: keydb-server --appendonly yes
labels:
glance.parent: heyform
glance.name: KeyDB
volumes:
# Persist KeyDB data
- keydb:/data
volumes:
assets:
driver: local
database:
driver: local
keydb:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -1,7 +1,7 @@
version: "3.8"
services:
web:
image: ghcr.io/hoarder-app/hoarder-web:${HOARDER_VERSION:-release}
image: ghcr.io/karakeep-app/karakeep:latest
restart: unless-stopped
volumes:
- data:/data
@@ -9,18 +9,27 @@ services:
- 3000:3000
env_file:
- .env
extra_hosts:
- '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
- '{{ subdomains.bookmarks }}:{{ docker.hairpin_ip }}'
environment:
REDIS_HOST: redis
MEILI_ADDR: http://meilisearch:7700
DATA_DIR: /data
redis:
image: redis:7.2-alpine
restart: unless-stopped
volumes:
- redis:/data
BROWSER_WEB_URL: http://chrome:9222
labels:
glance.name: Karakeep
glance.icon: si:wikibooks
glance.url: https://{{ subdomains.bookmarks }}/
glance.description: Bookmark manager
glance.id: karakeep
mag37.dockcheck.update: true
chrome:
image: gcr.io/zenika-hub/alpine-chrome:123
restart: unless-stopped
labels:
glance.name: Chrome
glance.parent: karakeep
mag37.dockcheck.update: true
command:
- --no-sandbox
- --disable-gpu
@@ -29,36 +38,22 @@ services:
- --remote-debugging-port=9222
- --hide-scrollbars
meilisearch:
image: getmeili/meilisearch:v1.6
image: getmeili/meilisearch:v1.13.3
restart: unless-stopped
labels:
glance.name: Meilisearch
glance.parent: karakeep
mag37.dockcheck.update: true
env_file:
- .env
environment:
MEILI_NO_ANALYTICS: "true"
volumes:
- meilisearch:/meili_data
workers:
image: ghcr.io/hoarder-app/hoarder-workers:${HOARDER_VERSION:-release}
restart: unless-stopped
volumes:
- data:/data
env_file:
- .env
environment:
REDIS_HOST: redis
MEILI_ADDR: http://meilisearch:7700
BROWSER_WEB_URL: http://chrome:9222
DATA_DIR: /data
depends_on:
web:
condition: service_started
volumes:
redis:
meilisearch:
data:
networks:
default:
external: true
name: lava
name: {{ docker.network_name }}

@@ -1,5 +1,10 @@
HOARDER_VERSION=release
NEXTAUTH_SECRET={{ hoarder_nextauth_secret }}
MEILI_MASTER_KEY={{ hoarder_meili_master_key }}
NEXTAUTH_URL=https://bookmarks.thesatelliteoflove.com
OPENAI_API_KEY={{ openai_api_key }}
KARAKEEP_VERSION=release
NEXTAUTH_SECRET={{ vault_hoarder.nextauth_secret }}
MEILI_MASTER_KEY={{ vault_hoarder.meili_master_key }}
NEXTAUTH_URL=https://{{ subdomains.bookmarks }}
OPENAI_API_KEY={{ vault_infrastructure.openai_api_key }}
OAUTH_CLIENT_SECRET={{ vault_hoarder.oidc.client_secret }}
OAUTH_CLIENT_ID=GTi0QBRH5TiTqZfxfAkYSQVVFouGdlOFMc2sjivN
OAUTH_PROVIDER_NAME=Authentik
OAUTH_WELLKNOWN_URL=https://{{ subdomains.auth }}/application/o/hoarder/.well-known/openid-configuration
OAUTH_ALLOW_DANGEROUS_EMAIL_ACCOUNT_LINKING=true

@@ -0,0 +1,32 @@
services:
kanboard:
image: kanboard/kanboard:latest
container_name: kanboard
restart: unless-stopped
environment:
- PLUGIN_INSTALLER=true
- DB_DRIVER=sqlite
volumes:
- kanboard_data:/var/www/app/data
- kanboard_plugins:/var/www/app/plugins
extra_hosts:
- "host.docker.internal:host-gateway"
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
labels:
glance.name: Kanboard
glance.icon: si:kanboard
glance.url: https://{{ subdomains.kanboard }}/
glance.description: Project management and Kanban boards
glance.id: kanboard
mag37.dockcheck.update: true
volumes:
kanboard_data:
driver: local
kanboard_plugins:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,48 @@
services:
app:
image: ghcr.io/manyfold3d/manyfold-solo:latest
volumes:
# Volume where the database file is created. Don't change the part after
# the colon; it needs to be at /config.
- ./config:/config
# Filesystem volume for your model library (add more if you want multiple
# libraries), in the form <local_path>:<container_path>.
# The local path can be a folder that already contains models, in which case Manyfold
# will scan and import them, or it can be empty.
# The container path can be anything; you will need to enter it in the "new library" form.
- ./models:/models
environment:
SECRET_KEY_BASE: {{ vault_manyfold.secret_key }}
MULTIUSER: enabled
OIDC_CLIENT_ID: {{ vault_manyfold.oidc.client_id }}
OIDC_CLIENT_SECRET: {{ vault_manyfold.oidc.client_secret }}
OIDC_ISSUER: https://{{ subdomains.auth }}/application/o/manyfold/
OIDC_NAME: Authentik
PUBLIC_HOSTNAME: {{ subdomains.models }}
MAX_FILE_UPLOAD_SIZE: 5368709120
PUID: 1000
PGID: 1000
extra_hosts:
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
labels:
glance.name: Manyfold
glance.icon: si:open3d
glance.url: https://{{ subdomains.models }}/
glance.description: STL Storage
mag37.dockcheck.update: true
restart: unless-stopped
# Optional, but recommended for better security
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
cap_add:
- CHOWN
- DAC_OVERRIDE
- SETUID
- SETGID
networks:
default:
external: true
name: "{{ docker.network_name }}"

@@ -0,0 +1,48 @@
services:
mmdl:
image: intriin/mmdl:latest
container_name: mmdl
restart: unless-stopped
depends_on:
- mmdl_db
env_file:
- .env.local
extra_hosts:
- "host.docker.internal:host-gateway"
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
- "{{ subdomains.cal }}:{{ docker.hairpin_ip }}"
labels:
glance.name: MMDL
glance.icon: si:task
glance.url: https://{{ subdomains.tasks }}/
glance.description: Task and calendar management
glance.id: mmdl
mag37.dockcheck.update: true
mmdl_db:
image: mysql:8.0
container_name: mmdl_db
restart: unless-stopped
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_DATABASE: mmdl
MYSQL_USER: mmdl
MYSQL_PASSWORD: "{{ vault_mmdl.mysql_password }}"
MYSQL_ROOT_PASSWORD: "{{ vault_mmdl.mysql_root_password }}"
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
MYSQL_ROOT_HOST: "%"
volumes:
- mmdl_db:/var/lib/mysql
labels:
glance.parent: mmdl
glance.name: DB
mag37.dockcheck.update: true
volumes:
mmdl_db:
driver: local
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,42 @@
# Database Configuration
DB_HOST=mmdl_db
DB_USER=mmdl
DB_PASS={{ vault_mmdl.mysql_password }}
DB_PORT=3306
DB_DIALECT=mysql
DB_CHARSET=utf8mb4
DB_NAME=mmdl
# Encryption
AES_PASSWORD={{ vault_mmdl.aes_password }}
# SMTP Settings
SMTP_HOST={{ smtp.host }}
SMTP_USERNAME={{ smtp.username }}
SMTP_PASSWORD={{ vault_smtp.password }}
SMTP_FROM=tasks@{{ email_domains.updates }}
SMTP_PORT=587
SMTP_SECURE=true
# Authentication
USE_NEXT_AUTH=true
NEXTAUTH_URL=https://{{ subdomains.tasks }}
NEXTAUTH_SECRET={{ vault_mmdl.nextauth_secret }}
# Authentik OIDC Configuration
AUTHENTIK_ISSUER=https://{{ subdomains.auth }}/application/o/mmdl
AUTHENTIK_CLIENT_ID={{ vault_mmdl.oidc.client_id }}
AUTHENTIK_CLIENT_SECRET={{ vault_mmdl.oidc.client_secret }}
# User and Session Management
ALLOW_USER_REGISTRATION=false
MAX_CONCURRENT_LOGINS=3
OTP_VALIDITY_PERIOD=300
SESSION_VALIDITY_PERIOD=30
# Application Settings
API_URL=https://{{ subdomains.tasks }}
DEBUG_MODE=false
TEST_MODE=false
NEXT_API_DEBUG_MODE=false
SUBTASK_RECURSION_DEPTH=5

@@ -0,0 +1,30 @@
services:
obsidian-livesync:
image: oleduc/docker-obsidian-livesync-couchdb:latest
container_name: obsidian-livesync
restart: unless-stopped
labels:
glance.name: Obsidian LiveSync
glance.icon: si:obsidian
glance.url: http://{{ network.docker_host_ip }}:5984
glance.description: Obsidian note synchronization
glance.id: obsidian-livesync
environment:
- SERVER_DOMAIN={{ network.docker_host_ip }}
- COUCHDB_USER={{ vault_obsidian.username }}
- COUCHDB_PASSWORD={{ vault_obsidian.password }}
- COUCHDB_DATABASE=obsidian
ports:
- "{{ network.docker_host_ip }}:5984:5984"
volumes:
- couchdb_data:/opt/couchdb/data
networks:
- default
volumes:
couchdb_data:
networks:
default:
external: true
name: "{{ docker.network_name }}"

@@ -0,0 +1,30 @@
services:
palmr:
image: kyantech/palmr:latest
restart: unless-stopped
environment:
DISABLE_FILESYSTEM_ENCRYPTION: "false"
ENCRYPTION_KEY: "{{ vault_palmr.encryption_key }}"
PALMR_UID: "1000"
PALMR_GID: "1000"
SECURE_SITE: "true"
DEFAULT_LANGUAGE: "en-US"
TRUST_PROXY: "true"
extra_hosts:
- "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
labels:
glance.name: Palmr
glance.icon: si:files
glance.url: "https://{{ subdomains.files }}/"
glance.description: File sharing and storage
glance.id: palmr
mag37.dockcheck.update: true
volumes:
- palmr_data:/app/server
volumes:
palmr_data:
driver: local
networks:
default:
external: true
name: "{{ docker.network_name }}"

@@ -0,0 +1,88 @@
services:
broker:
image: docker.io/library/redis:7
restart: unless-stopped
labels:
glance.parent: paperlessngx
glance.name: Redis
volumes:
- redisdata:/data
webserver:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
restart: unless-stopped
labels:
glance.name: Paperless NGX
glance.icon: si:paperlessngx
glance.url: https://{{ subdomains.paper }}/
glance.description: Document server
glance.id: paperlessngx
depends_on:
- broker
- gotenberg
- tika
volumes:
- data:/usr/src/paperless/data
- media:/usr/src/paperless/media
- ./export:/usr/src/paperless/export
- ./consume:/usr/src/paperless/consume
env_file: docker-compose.env
extra_hosts:
- '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998
PAPERLESS_OCR_USER_ARGS: '{"invalidate_digital_signatures": true}'
gotenberg:
image: docker.io/gotenberg/gotenberg:8.7
labels:
glance.parent: paperlessngx
glance.name: Gotenberg
restart: unless-stopped
# The gotenberg chromium route is used to convert .eml files. We do not
# want to allow external content like tracking pixels or even javascript.
command:
- "gotenberg"
- "--chromium-disable-javascript=true"
- "--chromium-allow-list=file:///tmp/.*"
tika:
image: docker.io/apache/tika:latest
labels:
glance.parent: paperlessngx
glance.name: Tika
restart: unless-stopped
backup:
image: offen/docker-volume-backup:v2
restart: always
labels:
glance.parent: paperlessngx
glance.name: Backup
mag37.dockcheck.update: true
environment:
BACKUP_FILENAME: pngx-backup-%Y-%m-%dT%H-%M-%S.tar.gz
BACKUP_CRON_EXPRESSION: "10 9 * * *"
BACKUP_PRUNING_PREFIX: pngx-
BACKUP_RETENTION_DAYS: 7
AWS_S3_BUCKET_NAME: tsolbackups
AWS_ENDPOINT: s3.us-west-004.backblazeb2.com
AWS_ACCESS_KEY_ID: {{ vault_backup.access_key_id }}
AWS_SECRET_ACCESS_KEY: {{ vault_backup.secret_access_key }}
volumes:
- media:/backup/pngx-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
media:
redisdata:
networks:
default:
external: true
name: {{ docker.network_name }}

@@ -0,0 +1,46 @@
# The UID and GID of the user used to run paperless in the container. Set this
# to your UID and GID on the host so that you have write access to the
# consumption directory.
#USERMAP_UID=1000
#USERMAP_GID=1000
# Additional languages to install for text recognition, separated by a
# whitespace. Note that this is
# different from PAPERLESS_OCR_LANGUAGE (default=eng), which defines the
# language used for OCR.
# The container installs English, German, Italian, Spanish and French by
# default.
# See https://packages.debian.org/search?keywords=tesseract-ocr-&searchon=names&suite=buster
# for available languages.
#PAPERLESS_OCR_LANGUAGES=tur ces
###############################################################################
# Paperless-specific settings #
###############################################################################
# All settings defined in the paperless.conf.example can be used here. The
# Docker setup does not use the configuration file.
# A few commonly adjusted settings are provided below.
# This is required if you will be exposing Paperless-ngx on a public domain
# (if doing so please consider security measures such as reverse proxy)
PAPERLESS_URL=https://{{ subdomains.paper }}
# Adjust this key if you plan to make paperless available publicly. It should
# be a very long sequence of random characters. You don't need to remember it.
PAPERLESS_SECRET_KEY={{ vault_paperlessngx.secret_key }}
# Use this variable to set a timezone for the Paperless Docker containers. If not specified, defaults to UTC.
PAPERLESS_TIME_ZONE=America/Denver
# The default language to use for OCR. Set this to the language most of your
# documents are written in.
#PAPERLESS_OCR_LANGUAGE=eng
# Set if accessing paperless via a domain subpath e.g. https://domain.com/PATHPREFIX and using a reverse-proxy like traefik or nginx
#PAPERLESS_FORCE_SCRIPT_NAME=/PATHPREFIX
#PAPERLESS_STATIC_URL=/PATHPREFIX/static/ # trailing slash required
# authentik
PAPERLESS_APPS=allauth.socialaccount.providers.openid_connect
PAPERLESS_SOCIALACCOUNT_PROVIDERS={"openid_connect": {"APPS": [{"provider_id": "authentik","name": "Authentik SSO","client_id": "{{ vault_paperlessngx.oidc.client_id }}","secret": "{{ vault_paperlessngx.oidc.client_secret }}","settings": { "server_url": "https://{{ subdomains.auth }}/application/o/paperlessngx/.well-known/openid-configuration"}}]}}
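The `PAPERLESS_SOCIALACCOUNT_PROVIDERS` value must be a single line of valid JSON once the template variables are rendered. A quick way to sanity-check the structure before deploying (placeholder credentials and a placeholder auth host, not the real vault values):

```python
import json

# Rendered value with hypothetical placeholders substituted for the
# vault-templated client_id/secret and auth subdomain.
raw = ('{"openid_connect": {"APPS": [{"provider_id": "authentik",'
       '"name": "Authentik SSO","client_id": "example-id",'
       '"secret": "example-secret","settings": {"server_url": '
       '"https://auth.example.com/application/o/paperlessngx/'
       '.well-known/openid-configuration"}}]}}')

providers = json.loads(raw)  # raises ValueError if the JSON is malformed
app = providers["openid_connect"]["APPS"][0]
```

Running this against the rendered env file catches quoting mistakes before Paperless fails to parse the provider config at startup.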

@@ -0,0 +1,22 @@
services:
pinchflat:
environment:
- TZ=America/Denver
ports:
- 100.70.169.99:8945:8945
volumes:
- ./config:/config
- data:/downloads
image: ghcr.io/kieraneglin/pinchflat:latest
labels:
glance.name: Pinchflat
glance.icon: si:youtube
glance.url: http://netcup.porgy-porgy.ts.net:8945
glance.description: Youtube interface
glance.id: pinchflat
volumes:
data:
networks:
default:
external: true
name: lava

@@ -0,0 +1,21 @@
services:
  pinry:
    volumes:
      - pinry:/data
    labels:
      glance.name: Pinry
      glance.icon: si:pinterest
      glance.url: https://{{ subdomains.pin }}
      glance.description: Pinterest clone
      glance.id: pinterest
    environment:
      - SECRET_KEY=no2254XiwYFWDnt2UW6wraSbVPRdHx8wVIeBh3jeYcI=
      - ALLOW_NEW_REGISTRATIONS=False
    image: getpinry/pinry
volumes:
  pinry:
    driver: local
networks:
  default:
    external: true
    name: {{ docker.network_name }}


@@ -0,0 +1,96 @@
services:
  postiz:
    image: ghcr.io/gitroomhq/postiz-app:latest
    container_name: postiz
    restart: always
    environment:
      # You must change these. Replace `postiz.your-server.com` with your DNS name - what your web browser sees.
      MAIN_URL: "https://{{ subdomains.post }}"
      FRONTEND_URL: "https://{{ subdomains.post }}"
      NEXT_PUBLIC_BACKEND_URL: "https://{{ subdomains.post }}/api"
      JWT_SECRET: "TShr6Fdcwf67wIhuUvg0gOsJbdcQmgMiJl5kUh6JCfY="
      # These defaults are probably fine, but if you change your user/password, update it in the
      # postiz-postgres or postiz-redis services below.
      DATABASE_URL: "postgresql://postiz-user:postiz-password@postiz-postgres:5432/postiz-db-local"
      REDIS_URL: "redis://postiz-redis:6379"
      BACKEND_INTERNAL_URL: "http://localhost:3000"
      IS_GENERAL: "true" # Required for self-hosting.
      # The container images are pre-configured to use /uploads for file storage.
      # You probably should not change this unless you have a really good reason!
      STORAGE_PROVIDER: "local"
      UPLOAD_DIRECTORY: "/uploads"
      NEXT_PUBLIC_UPLOAD_DIRECTORY: "/uploads"
      # Social keys
      LINKEDIN_CLIENT_ID: "86q7ksc8q5pai3"
      LINKEDIN_CLIENT_SECRET: "{{ vault_postiz.linkedin_secret }}"
    volumes:
      - postiz-config:/config/
      - postiz-uploads:/uploads/
    depends_on:
      postiz-postgres:
        condition: service_healthy
      postiz-redis:
        condition: service_healthy
    labels:
      glance.name: Postiz
      glance.url: https://{{ subdomains.post }}/
      glance.description: Social media scheduler
      glance.id: postiz
      mag37.dockcheck.update: "true"
  postiz-postgres:
    image: postgres:14.5
    container_name: postiz-postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postiz-password
      POSTGRES_USER: postiz-user
      POSTGRES_DB: postiz-db-local
    volumes:
      - postgres-volume:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready -U postiz-user -d postiz-db-local
      interval: 10s
      timeout: 3s
      retries: 3
    labels:
      glance.parent: postiz
      glance.name: DB
      mag37.dockcheck.update: "true"
  postiz-redis:
    image: redis:7.2
    container_name: postiz-redis
    restart: always
    healthcheck:
      test: redis-cli ping
      interval: 10s
      timeout: 3s
      retries: 3
    volumes:
      - postiz-redis-data:/data
    labels:
      glance.parent: postiz
      glance.name: Redis
      mag37.dockcheck.update: "true"
volumes:
  postgres-volume:
    external: false
  postiz-redis-data:
    external: false
  postiz-config:
    external: false
  postiz-uploads:
    external: false
networks:
  default:
    external: true
    name: {{ docker.network_name }}


@@ -0,0 +1,31 @@
services:
  syncthing:
    image: syncthing/syncthing
    container_name: syncthing
    hostname: my-syncthing
    labels:
      glance.name: Syncthing
      glance.icon: si:syncthing
      glance.url: https://netcup.porgy-porgy.ts.net:8384
      glance.description: Syncthing core
      glance.id: syncthing
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - home:/var/syncthing
    ports:
      - 100.70.169.99:8384:8384 # Web UI
      - 100.70.169.99:22000:22000/tcp # TCP file transfers
      - 100.70.169.99:22000:22000/udp # QUIC file transfers
      - 100.70.169.99:21027:21027/udp # Receive local discovery broadcasts
    restart: unless-stopped
    healthcheck:
      test: curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
      interval: 1m
      timeout: 10s
      retries: 3
volumes:
  home:


@@ -1,5 +1,6 @@
 - hosts: docker
   become: true
   roles:
-    - common
-    - docker
+    - { role: common, tags: ["common"] }
+    - { role: docker, tags: ["docker"] }
+    - { role: cron, tags: ["cron"] }

todo.md Normal file

@@ -0,0 +1,141 @@
# Infrastructure Improvements TODO
## High Priority (Quick Wins)
### 1. Split the massive docker role ✅ COMPLETED
- **Current Issue**: `roles/docker/tasks/main.yml` has 20+ services in one file (176 lines)
- **Solution**: Break into logical service groups:
```
roles/docker/tasks/
├── main.yml (orchestrator)
├── infrastructure/ (caddy, authentik, dockge)
├── development/ (gitea, codeserver, bytestash)
├── media/ (audiobookshelf, calibre, ghost, pinchflat, pinry, hoarder, manyfold)
├── productivity/ (paperless, baikal, syncthing, mmdl, heyform, dawarich, palmr, obsidian-livesync)
├── communication/ (gotosocial, postiz)
└── monitoring/ (glance, changedetection, appriseapi, gotify)
```
- **COMPLETED**: All services organized into logical categories with category-level tags
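The orchestrator `main.yml` in a layout like this could be a thin list of tagged includes; the sketch below is illustrative (the actual task file names and tag scheme in the repo may differ):

```
# roles/docker/tasks/main.yml -- hypothetical orchestrator sketch
- name: Deploy infrastructure services
  ansible.builtin.include_tasks:
    file: infrastructure/main.yml
    apply:
      tags: [infrastructure]
  tags: [infrastructure]

- name: Deploy media services
  ansible.builtin.include_tasks:
    file: media/main.yml
    apply:
      tags: [media]
  tags: [media]
```

With `apply`, the category tag propagates to every task inside the included file, so `ansible-playbook site.yml --tags media` deploys just that group.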
### 2. Standardize variable management ✅ COMPLETED
- **Current Issue**: Secrets in single encrypted file, no clear variable hierarchy
- **Solution**: Create proper variable structure:
```
group_vars/
├── all/
│ ├── domains.yml (domain and subdomain mappings)
│ ├── infrastructure.yml (network config, Docker settings)
│ ├── vault.yml (encrypted secrets with vault_ prefix)
│ └── services.yml (service configuration flags)
```
- **COMPLETED**: Implemented comprehensive variable hierarchy, updated all templates to use centralized variables, fixed service tag isolation
### 3. Template consolidation ✅ PARTIALLY COMPLETED
- **Current Issue**: Many compose templates repeat patterns, some services used static files
- **Solution**: Create reusable template includes with standard service template structure
- **COMPLETED**: Converted all static compose files (caddy, dockge, hoarder) to Jinja2 templates
- **REMAINING**: Create reusable template patterns for common configurations
## Security & Reliability
### 4. Add health checks
- **Issue**: Most services lack proper healthcheck configurations in compose templates
- **Solution**: Implement comprehensive health monitoring with standardized healthcheck patterns
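A standardized pattern for HTTP services might look like the sketch below (port, path, and timings are placeholders, and the image must ship `curl` for this to work):

```
# Hypothetical per-service healthcheck block for a compose template
healthcheck:
  test: ["CMD", "curl", "-fsS", "http://127.0.0.1:8080/health"]
  interval: 30s
  timeout: 5s
  retries: 3
  start_period: 15s
```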
### 5. Implement backup strategy
- **Issue**: No automated backups for 25 services and their data
- **Solution**: Add backup role with:
- Database dumps for PostgreSQL services
- Volume backups for file-based services
- Rotation policies
- Restoration testing
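A dump-and-rotate step for the PostgreSQL services could be sketched as below. The Postiz names are taken from the compose file above, but the backup path, retention window, and the `docker exec` step (commented out and simulated so the sketch runs standalone) are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: nightly dump with simple retention pruning (cron-driven).
set -eu

BACKUP_DIR="${BACKUP_DIR:-/tmp/backup-demo}"   # illustrative path
RETENTION_DAYS=14                              # illustrative retention window
mkdir -p "$BACKUP_DIR"

# In production this would be something like:
#   docker exec postiz-postgres pg_dump -U postiz-user postiz-db-local \
#     | gzip > "$BACKUP_DIR/postiz-$(date +%F).sql.gz"
# Simulated here so the sketch runs without Docker:
touch "$BACKUP_DIR/postiz-$(date +%F).sql.gz"

# Rotation: delete dumps older than the retention window.
find "$BACKUP_DIR" -name 'postiz-*.sql.gz' -mtime +"$RETENTION_DAYS" -delete

ls "$BACKUP_DIR"
```

Restoration testing would then periodically `gunzip` a dump into a scratch container and run a row-count query against it.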
### 6. Network segmentation
- **Issue**: All services share one Docker network
- **Solution**: Separate into:
- `frontend` (Public-facing services)
- `backend` (Internal services only)
- `database` (Database access only)
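In compose terms the split could be sketched like this, with `internal: true` cutting outbound access for the non-public tiers (service names here are generic placeholders):

```
networks:
  frontend:
  backend:
    internal: true   # no direct internet access
  database:
    internal: true

services:
  app:
    networks: [frontend, backend, database]
  db:
    networks: [database]   # reachable only by services on the database network
```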
### 7. Security hardening
- Remove unnecessary `user: root` from services
- Add security contexts to all containers
- Implement least-privilege access patterns
- Add fail2ban for authentication services
## Automation Opportunities
### 8. CI/CD with Gitea Actions
- Leverage self-hosted Gitea for:
- Ansible syntax validation
- Service configuration testing
- Automated deployment triggers
- Rollback capabilities
### 9. Configuration drift detection
- Add validation tasks to catch manual changes
- Implement configuration validation with proper assertions
### 10. Service dependency management
- **Issue**: Some services depend on Authentik SSO, but there is no startup ordering
- **Solution**: Implement dependency checking and startup ordering
### 11. Ansible best practices
- Replace deprecated `apt_key` with proper patterns
- Use `ansible.builtin` FQCN consistently
- Add `check_mode` support
- Implement proper idempotency checks
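For the `apt_key` replacement, one common signed-by pattern is sketched below; the Docker repo URL follows Docker's documented layout, but the keyring path and the hardcoded `bookworm` release are illustrative:

```
# Sketch: keyring + signed-by instead of deprecated apt_key, with FQCN modules
- name: Download Docker GPG key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/debian/gpg
    dest: /etc/apt/keyrings/docker.asc
    mode: "0644"

- name: Add Docker repository (signed-by, no apt_key)
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable"
    state: present
```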
### 12. Documentation automation
- Auto-generate service inventory
- Create service documentation templates
- Implement automated documentation updates
## Implementation Roadmap
### Week 1: Foundation
- [x] Document improvements in todo.md
- [x] Reorganize docker role structure
- [x] Convert static compose files to templates
- [x] Remove unused services (beaver, grist, stirlingpdf, tasksmd, redlib)
- [x] Clean up templates and files directories
- [x] Implement variable hierarchy
- [ ] Create reusable template patterns
### Week 2: Security & Monitoring
- [ ] Add health checks
- [ ] Implement backup strategy
- [ ] Security hardening
### Week 3: Automation
- [ ] CI/CD pipeline setup
- [ ] Configuration validation
- [ ] Documentation automation
### Week 4: Advanced Features
- [ ] Network segmentation
- [ ] Dependency management
- [ ] Monitoring dashboard
## Completed Work Summary
### ✅ Major Accomplishments
- **Docker Role Reorganization**: Split monolithic 176-line main.yml into 6 logical service categories
- **Template Standardization**: Converted all static compose files to Jinja2 templates
- **Service Cleanup**: Removed 5 unused/broken services (beaver, grist, stirlingpdf, tasksmd, redlib)
- **Category-Based Deployment**: Can now deploy services by category using tags (infrastructure, media, etc.)
- **Variable Management**: Implemented comprehensive centralized variable hierarchy with proper secret organization
- **Service Tag Isolation**: Fixed service tags to deploy individual services only (not entire categories)
- **Documentation Updates**: Updated all README files and CLAUDE.md to reflect new architecture
### 📊 Current Stats
- **25 active services** organized into 6 categories
- **100% templated** compose files (no static files)
- **6 service directories** for logical organization
- **Clean file structure** with only essential static files
## Notes
- Current architecture is solid and much better organized for long-term maintainability
- Focus on high-impact, low-effort improvements first
- Leverage existing infrastructure (Gitea, Authentik) for automation
- Template-driven approach enables future dynamic configuration