Compare commits: 798d35be16...main (46 commits)

| SHA1 |
|---|
| 99e36d9492 |
| bbb9f50eff |
| 78fd63dcb5 |
| f088247ac0 |
| e1b6d3132a |
| f71ded1a01 |
| a2ae9e5ff6 |
| fb6651f1dc |
| 58a6be8da0 |
| 17c3077cf0 |
| 75fabb3523 |
| 336e197176 |
| f0c4cb51b8 |
| c95ca45a67 |
| a287e50048 |
| 01d959d12c |
| a4fc5f7608 |
| e3f4eb4e95 |
| a8350459ae |
| eac67e269c |
| 85cfca08f5 |
| 2cc05a19e6 |
| d54d04bcc9 |
| 5f76f69d8b |
| ef5309363c |
| ff89683038 |
| a338186a77 |
| 8710ffc70d |
| a98fae0b92 |
| d05bac8651 |
| c500790ea3 |
| 2e4c096bbe |
| 12582b352c |
| 8d686c2aa5 |
| 249eb52ceb |
| ef4f49fafb |
| 06a7889024 |
| 68f0276ac0 |
| d4bec94b99 |
| 8ca2122cb3 |
| ccab665d26 |
| 1c9ab0f5e6 |
| 7fdb52e91b |
| a2c3b53640 |
| e1f09fc119 |
| 1280bba7ff |
**.gitignore** (vendored)

```diff
@@ -1,3 +1,5 @@
 .python-version
 secrets.enc
 vault_pass
+DEPLOYMENT_LEARNINGS.md
+group_vars/all/vault.yml
```
**CLAUDE.local.md** (new file, 7 lines)

- the password for secrets.enc is in vault_pass
- do not use the ansible-vault edit command
- NEVER, EVER, EVER, USE, OPEN, OR TOUCH SECRETS.ENC
- Whenever I talk about cron jobs I am referring to cron jobs on the remote servers managed by ansible, never the local machine
- never use secrets.enc
- all secrets go in vault.yml, never secrets.enc, never some random file you want to create, only ever vault.yml. you decrypt vault.yml with vault_pass
- Never use ansible-vault edit. always decrypt, make the changes, then encrypt
**CLAUDE.md** (new file, 164 lines)

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Overview

This is a personal infrastructure Ansible playbook that automates deployment and management of 27 self-hosted Docker services across two domains (`thesatelliteoflove.com` and `nerder.land`). The setup uses Tailscale VPN for secure networking and Caddy for reverse proxy with automated HTTPS.

**Important**: Always review `DEPLOYMENT_LEARNINGS.md` when working on this repository for lessons learned and troubleshooting guidance.

## Common Commands

### Initial Setup
```bash
# Install Ansible dependencies
ansible-galaxy install -r requirements.yml

# Bootstrap new server (creates user, installs Tailscale, security hardening)
ansible-playbook bootstrap.yml -i hosts.yml

# Deploy all Docker services
ansible-playbook site.yml -i hosts.yml

# Update DNS records in AWS Route53
ansible-playbook dns.yml -i hosts.yml
```

### Service Management
```bash
# Deploy specific services using tags (now properly isolated)
ansible-playbook site.yml -i hosts.yml --tags caddy --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags authentik --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags mmdl --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags docker --vault-password-file vault_pass  # all docker services

# Deploy services by category (new organized structure)
ansible-playbook site.yml -i hosts.yml --tags infrastructure --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags media,productivity --vault-password-file vault_pass
ansible-playbook site.yml -i hosts.yml --tags development,monitoring --vault-password-file vault_pass

# Deploy only infrastructure components
ansible-playbook site.yml -i hosts.yml --tags common,cron --vault-password-file vault_pass
```

## Architecture

### Host Configuration
- **Bootstrap Host** (`netcup`): 152.53.36.98 - Initial server setup target
- **Docker Host** (`docker-01`): 100.70.169.99 - Main service deployment via Tailscale

### Role Structure
- **bootstrap**: Initial server hardening, user creation, Tailscale VPN setup
- **common**: Basic system configuration, UFW firewall management
- **docker**: Comprehensive service deployment (24 containerized applications, organized by category)
- **cron**: Scheduled task management (currently Warhammer RSS feed generation)

### Docker Role Organization (Reorganized into Logical Categories)
The docker role is now organized into logical service groups under `roles/docker/tasks/`:

- **infrastructure/**: Core platform components
  - Caddy (reverse proxy), Authentik (SSO), Dockge (container management)
- **development/**: Development and collaboration tools
  - Gitea, Code Server, ByteStash
- **media/**: Content creation and consumption
  - Audiobookshelf, Calibre, Ghost blog, Pinchflat, Pinry, Karakeep (formerly Hoarder), Manyfold
- **productivity/**: Personal organization and document management
  - Paperless-NGX, MMDL, Baikal (CalDAV/CardDAV), Syncthing, Heyform, Dawarich, Palmr, Obsidian LiveSync
- **communication/**: Social media and external communication
  - GoToSocial (Fediverse), Postiz (social media management)
- **monitoring/**: System monitoring and alerts
  - Changedetection, Glance dashboard, AppriseAPI, Gotify

### Variable Management
**Critical**: This infrastructure uses a centralized variable hierarchy in `group_vars/all/`:

- **domains.yml**: Domain and subdomain mappings (use `{{ subdomains.service }}`)
- **infrastructure.yml**: Network configuration, Docker settings (`{{ docker.network_name }}`, `{{ docker.hairpin_ip }}`)
- **vault.yml**: Encrypted secrets with `vault_` prefix
- **services.yml**: Service-specific configuration and feature flags

**Important**: All templates use variables instead of hardcoded values. Never hardcode domains, IPs, or secrets.

### Data Structure
- All service data stored in `/opt/stacks/[service-name]/` on docker host
- Docker Compose files generated from Jinja2 templates in `roles/docker/templates/`
- Environment files templated for services requiring configuration
- All configurations use centralized variables for consistency

## Key Implementation Details

### Template-Driven Configuration
The docker role uses Jinja2 templates exclusively for all services. When modifying services:
- Update templates in `roles/docker/templates/[service]-compose.yml.j2`
- Environment files use `.env.j2` templates where needed
- Task files organized by category in `roles/docker/tasks/[category]/[service].yml`
- All services now use templated configurations (no static compose files)
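Following that layout, a per-service deployment task typically templates the compose file into the service's stack directory and brings the stack up. A minimal sketch, assuming the file name, the use of `caddy` as the example service, and the `community.docker.docker_compose_v2` module — none of which are confirmed by this diff:

```yaml
# roles/docker/tasks/infrastructure/caddy.yml (hypothetical example)
- name: Create caddy stack directory
  ansible.builtin.file:
    path: "{{ docker.stacks_path }}/caddy"
    state: directory
    mode: "0755"

- name: Template caddy compose file
  ansible.builtin.template:
    src: caddy-compose.yml.j2
    dest: "{{ docker.stacks_path }}/caddy/compose.yaml"
    mode: "0644"

- name: Deploy caddy stack
  community.docker.docker_compose_v2:
    project_src: "{{ docker.stacks_path }}/caddy"
    state: present
```

Only `{{ docker.stacks_path }}` and the template naming convention come from the repository's own documentation; the task names and module choice are illustrative.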

### DNS Management
The `dns.yml` playbook manages AWS Route53 records for both domains. All subdomains point to the netcup server (152.53.36.98), with Caddy handling internal routing to the docker host via Tailscale.
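An upsert of A records like this is usually done with the `amazon.aws.route53` module looping over the record list. A sketch under stated assumptions — the task shape and loop are hypothetical, not copied from `dns.yml`; only the domain and IP come from this document:

```yaml
- name: Upsert A records for a domain (hypothetical task)
  amazon.aws.route53:
    state: present
    zone: "thesatelliteoflove.com"
    record: "{{ item.name }}.thesatelliteoflove.com"
    type: A
    value: "{{ item.ip }}"
    ttl: 300
    overwrite: true
  loop:
    - { name: "auth", ip: "152.53.36.98" }
    - { name: "git", ip: "152.53.36.98" }
```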

### Security Architecture
- Tailscale provides secure networking between management and service hosts
- Services are network-isolated using Docker
- Caddy handles SSL termination with automatic Let's Encrypt certificates
- UFW firewall managed through Docker integration script

### Service Dependencies
Many services depend on Authentik for SSO. When deploying new services, consider:
- Whether SSO integration is needed
- Caddy routing configuration for subdomain access
- Network connectivity requirements within Docker stack
- Hairpinning fixes for internal service-to-service communication

### Hairpinning Resolution
Services inside Docker containers cannot reach external domains that resolve to the same server. Fix by adding `extra_hosts` mappings:

```yaml
extra_hosts:
  - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
  - "{{ subdomains.cal }}:{{ docker.hairpin_ip }}"
```

Common domains requiring hairpinning fixes:
- `{{ subdomains.auth }}` (Authentik SSO)
- `{{ subdomains.cal }}` (Baikal CalDAV)
- Any service domain the container needs to communicate with

**Note**: Use variables instead of hardcoded values for maintainability.

### Service-Specific Reference Configurations
- **Dawarich**: Based on production compose file at https://github.com/Freika/dawarich/blob/master/docker/docker-compose.production.yml

## Service Memories
- palmr is the service that responds on files.thesatelliteoflove.com
- karakeep (formerly called hoarder) is deployed with both 'hoarder' and 'karakeep' tags for backward compatibility
- whenever i ask you what containers need updates, run dockcheck and return a list of containers needing updates
- when i ask for the status container updates i want you to run dockcheck on the docker host https://github.com/mag37/dockcheck?ref=selfh.st
- this is your reference for glance configuration https://github.com/glanceapp/glance/blob/main/docs/configuration.md#configuring-glance

## Variable Management Implementation Notes
**Major Infrastructure Update**: Variable management system was implemented to replace all hardcoded values with centralized variables.

### Key Changes Made:
- Created comprehensive `group_vars/all/` structure
- Updated all Docker Compose templates to use variables
- Fixed service tag isolation (individual service tags now deploy only that service)
- Standardized domain and network configuration
- Organized secrets by service with consistent `vault_` prefix

### Service Tag Fix:
**Critical**: Service tags are now properly isolated. `--tags mmdl` deploys only MMDL (5 tasks), not the entire productivity category.
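Tag isolation of this kind is commonly achieved by tagging each `include_tasks` with only that service's tags and propagating them to the included file via `apply`. A minimal sketch — the file path and tag names are assumptions based on the structure described above, not taken from the repository:

```yaml
# roles/docker/tasks/main.yml (hypothetical excerpt)
- name: Deploy MMDL
  ansible.builtin.include_tasks:
    file: productivity/mmdl.yml
    apply:
      tags: [docker, productivity, mmdl]
  tags: [docker, productivity, mmdl]
```

With this shape, `--tags mmdl` matches only this include (plus the applied tags inside it), so no other productivity services run.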

### Template Pattern:
All templates now follow this pattern:
```yaml
# Use variables, not hardcoded values
glance.url: "https://{{ subdomains.service }}/"

networks:
  default:
    external: true
    name: "{{ docker.network_name }}"

extra_hosts:
  - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
```
**README.md** (new file, 162 lines)

# Personal Infrastructure Ansible Playbook

This Ansible playbook automates the setup and management of a personal self-hosted infrastructure running Docker containers for various services.

## Overview

The playbook manages two main environments:
- **Bootstrap server** (`netcup`): Initial server setup with Tailscale VPN
- **Docker server** (`docker-01`): Main application server running containerized services

## Services Deployed

The Docker role deploys and manages 27 self-hosted services organized into logical categories:

### Infrastructure
- **Caddy** (Reverse proxy with automatic HTTPS)
- **Authentik** (SSO/Identity Provider)
- **Dockge** (Container management)

### Development
- **Gitea** (Git repository hosting)
- **Code Server** (VS Code in browser)
- **ByteStash** (Code snippet management)

### Media
- **Audiobookshelf** (Audiobook server)
- **Calibre** (E-book management)
- **Ghost** (Blog platform)
- **Pinchflat** (Media downloader)
- **Pinry** (Pinterest-like board)
- **Hoarder** (Bookmark manager)
- **Manyfold** (3D model organizer)

### Productivity
- **Paperless-NGX** (Document management)
- **MMDL** (Task management)
- **Baikal** (CalDAV/CardDAV server)
- **Syncthing** (File synchronization)
- **HeyForm** (Form builder)
- **Dawarich** (Location tracking)
- **Palmr** (File sharing)
- **Obsidian LiveSync** (Note synchronization)

### Communication
- **GoToSocial** (Fediverse/Mastodon)
- **Postiz** (Social media management)

### Monitoring
- **Changedetection** (Website change monitoring)
- **Glance** (Dashboard)
- **AppriseAPI** (Notification service)
- **Gotify** (Push notifications)

## Structure

```
├── site.yml              # Main playbook
├── bootstrap.yml         # Server bootstrap playbook
├── dns.yml               # AWS Route53 DNS management
├── hosts.yml             # Inventory file
├── requirements.yml      # External role dependencies
└── roles/
    ├── bootstrap/        # Initial server setup
    ├── common/           # Common server configuration
    ├── cron/             # Scheduled tasks
    └── docker/           # Docker services deployment
```

## Roles Documentation

Each role has detailed documentation in its respective directory:

### [Bootstrap Role](roles/bootstrap/README.md)
Performs initial server setup and hardening:
- Creates user accounts with SSH key authentication
- Configures passwordless sudo and security hardening
- Installs essential packages and configures UFW firewall
- Sets up Tailscale VPN for secure network access

### [Common Role](roles/common/README.md)
Provides shared configuration for all servers:
- Installs common packages (aptitude)
- Enables UFW firewall with default deny policy
- Ensures consistent base configuration across infrastructure

### [Cron Role](roles/cron/README.md)
Manages scheduled tasks and automation:
- **Warhammer RSS Feed Updater**: Daily job that generates and updates RSS feeds
- Integrates with Docker services for content generation
- Supports easy addition of new scheduled tasks

### [Docker Role](roles/docker/README.md)
The most comprehensive role, deploying 25 containerized services organized into logical categories:
- **Infrastructure**: Caddy reverse proxy, Authentik SSO, Dockge management
- **Development**: Gitea, Code Server, Matrix communication
- **Media**: Audiobookshelf, Calibre, Ghost blog, Pinchflat, and more
- **Productivity**: Paperless-NGX, MMDL task management, Baikal calendar
- **Communication**: GoToSocial, Postiz social media management
- **Monitoring**: Glance dashboard, Changedetection, AppriseAPI notifications
- **Template-Driven**: All services use Jinja2 templates for consistent configuration
- **Category-Based Deployment**: Deploy services by category using Ansible tags

## Usage

### Prerequisites

1. Install Ansible and required collections:
```bash
ansible-galaxy install -r requirements.yml
```

2. Configure your inventory in `hosts.yml` with your server details
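An inventory matching the two hosts described in this repository might look like the following sketch. The group names, connection variables, and the `phil` user are assumptions for illustration; only the hostnames and IPs appear in this document:

```yaml
# hosts.yml (hypothetical example)
all:
  children:
    bootstrap:
      hosts:
        netcup:
          ansible_host: 152.53.36.98
    docker:
      hosts:
        docker-01:
          ansible_host: 100.70.169.99   # reachable over Tailscale
          ansible_user: phil
```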

### Bootstrap a New Server

```bash
ansible-playbook bootstrap.yml -i hosts.yml
```

This will:
- Create a user account
- Install and configure Tailscale VPN
- Set up basic security

### Deploy Docker Services

```bash
ansible-playbook site.yml -i hosts.yml
```

Deploy specific services using tags:
```bash
# Deploy by service category
ansible-playbook site.yml -i hosts.yml --tags infrastructure
ansible-playbook site.yml -i hosts.yml --tags media,productivity

# Deploy individual services
ansible-playbook site.yml -i hosts.yml --tags caddy
ansible-playbook site.yml -i hosts.yml --tags authentik
ansible-playbook site.yml -i hosts.yml --tags mmdl
```

### Manage DNS Records

```bash
ansible-playbook dns.yml -i hosts.yml
```

Updates AWS Route53 DNS records for configured domains (`thesatelliteoflove.com` and `nerder.land`).

## Configuration

- Service configurations are templated in `roles/docker/templates/`
- Environment variables and secrets should be managed through Ansible Vault
- Docker Compose files are generated from Jinja2 templates

## Security Notes

- Uses Tailscale for secure network access
- Caddy provides automatic HTTPS with Let's Encrypt
- Services are containerized for isolation
- UFW firewall rules are managed via Docker integration
```diff
@@ -3,11 +3,8 @@
   become: true
   vars:
     created_username: phil
-  vars_prompt:
-    - name: tailscale_key
-      prompt: Enter the tailscale key
   roles:
     - bootstrap
     - role: artis3n.tailscale
       vars:
-        tailscale_authkey: "{{ tailscale_key }}"
+        tailscale_authkey: "{{ vault_infrastructure.tailscale_key }}"
```
**dns.yml**

```diff
@@ -27,20 +27,36 @@
         ip: "152.53.36.98"
       - name: "code"
         ip: "152.53.36.98"
+      - name: "snippets"
+        ip: "152.53.36.98"
       - name: cal
         ip: "152.53.36.98"
       - name: phlog
         ip: "152.53.36.98"
       - name: loclog
         ip: "152.53.36.98"
-      - name: habits
-        ip: "152.53.36.98"
       - name: watcher
         ip: "152.53.36.98"
-      - name: chat
-        ip: "152.53.36.98"
       - name: models
         ip: "152.53.36.98"
+      - name: tasks
+        ip: "152.53.36.98"
+      - name: post
+        ip: "152.53.36.98"
+      - name: files
+        ip: "152.53.36.98"
+      - name: bookmarks
+        ip: "152.53.36.98"
+      - name: gotify
+        ip: "152.53.36.98"
+      - name: gotify-assistant
+        ip: "152.53.36.98"
+      - name: pdg
+        ip: "152.53.36.98"
+      - name: kanboard
+        ip: "152.53.36.98"
+      - name: grocy
+        ip: "152.53.36.98"
   - name: nerder.land
     dns_records:
       - name: "forms"
```
**group_vars/all/domains.yml** (new file, 43 lines)

```yaml
# Domain Configuration
primary_domain: "thesatelliteoflove.com"
secondary_domain: "nerder.land"

# Subdomain mappings
subdomains:
  auth: "auth.{{ primary_domain }}"
  git: "git.{{ primary_domain }}"
  cal: "cal.{{ primary_domain }}"
  docs: "docs.{{ primary_domain }}"
  phlog: "phlog.{{ primary_domain }}"          # Ghost blog
  bookmarks: "bookmarks.{{ primary_domain }}"  # Hoarder/Karakeep
  heyform: "forms.{{ secondary_domain }}"      # Heyform on nerder.land
  media: "media.{{ primary_domain }}"
  audio: "audio.{{ primary_domain }}"          # Audiobookshelf
  books: "books.{{ primary_domain }}"          # Calibre
  models: "models.{{ primary_domain }}"        # Manyfold
  pinchflat: "pinchflat.{{ primary_domain }}"
  pin: "pin.{{ primary_domain }}"              # Pinry
  paper: "paper.{{ primary_domain }}"          # Paperless-NGX
  tasks: "tasks.{{ primary_domain }}"          # MMDL
  syncthing: "syncthing.{{ primary_domain }}"
  loclog: "loclog.{{ primary_domain }}"        # Dawarich
  files: "files.{{ primary_domain }}"          # Palmr file sharing
  social: "social.{{ primary_domain }}"        # GoToSocial
  post: "post.{{ primary_domain }}"            # Postiz
  home: "home.{{ primary_domain }}"            # Glance
  watcher: "watcher.{{ primary_domain }}"      # Changedetection
  appriseapi: "appriseapi.{{ primary_domain }}"
  dockge: "dockge.{{ primary_domain }}"
  code: "code.{{ primary_domain }}"            # Code Server
  bytestash: "snippets.{{ primary_domain }}"   # ByteStash code snippets
  gotify: "gotify.{{ primary_domain }}"        # Gotify notifications
  gotify_assistant: "gotify-assistant.{{ primary_domain }}"  # iGotify iOS assistant
  kanboard: "kanboard.{{ primary_domain }}"    # Kanboard project management
  grocy: "grocy.{{ primary_domain }}"          # Grocy kitchen ERP

# Email domains for notifications
email_domains:
  updates: "updates.{{ primary_domain }}"
  auth_email: "auth@updates.{{ primary_domain }}"
  git_email: "git@updates.{{ primary_domain }}"
  cal_email: "cal@updates.{{ primary_domain }}"
```
**group_vars/all/infrastructure.yml** (new file, 26 lines)

```yaml
# Infrastructure Configuration

# Docker configuration
docker:
  network_name: "lava"
  stacks_path: "/opt/stacks"
  hairpin_ip: "172.20.0.5"

# SMTP configuration
smtp:
  host: "smtp.resend.com"
  username: "resend"
  from_domain: "{{ email_domains.updates }}"

# Network configuration
network:
  netcup_ip: "152.53.36.98"
  docker_host_ip: "100.70.169.99"

# Paths
paths:
  stacks: "{{ docker.stacks_path }}"

# Notification services
notifications:
  appriseapi_endpoint: "http://apprise:8000/notify/apprise"
```
**group_vars/docker/services.yml** (new file, 25 lines)

```yaml
# Docker Services Configuration

# Service categories for organization
service_categories:
  infrastructure: ["caddy", "authentik", "dockge"]
  development: ["gitea", "codeserver"]
  media: ["audiobookshelf", "calibre", "ghost", "pinchflat", "pinry", "hoarder", "manyfold"]
  productivity: ["paperlessngx", "baikal", "syncthing", "mmdl", "heyform", "dawarich", "pingvin"]
  communication: ["gotosocial", "postiz"]
  monitoring: ["glance", "changedetection", "appriseapi", "gotify"]

# Common service configuration
services:
  common:
    restart_policy: "unless-stopped"
    network: "{{ docker.network_name }}"

# Service-specific configurations
dawarich:
  db_name: "dawarich"
  db_user: "dawarich"

mmdl:
  db_name: "mmdl"
  db_user: "mmdl"
```
**roles/bootstrap/README.md** (new file, 41 lines)

# Bootstrap Role

## Purpose
Performs initial server setup and hardening for new Ubuntu/Debian servers.

## What It Does

### User Management
- Creates a new user account with sudo privileges (specified by `created_username` variable)
- Configures passwordless sudo for the sudo group
- Sets up SSH key authentication using your local `~/.ssh/id_ed25519.pub` key
- Disables root password authentication

### System Packages
- Installs `aptitude` for better package management
- Installs essential packages:
  - `curl` - HTTP client
  - `vim` - Text editor
  - `git` - Version control
  - `ufw` - Uncomplicated Firewall

### Security Configuration
- Configures UFW firewall to:
  - Allow SSH connections
  - Enable firewall with default deny policy
- Hardens SSH configuration

## Variables Required
- `created_username`: The username to create (typically set in bootstrap.yml)
- `tailscale_key`: Tailscale authentication key (prompted during playbook run)

## Dependencies
- Requires the `artis3n.tailscale` role for VPN setup
- Requires your SSH public key at `~/.ssh/id_ed25519.pub`

## Usage
```bash
ansible-playbook bootstrap.yml -i hosts.yml
```

This role is designed to be run once on a fresh server before deploying other services.
**roles/common/README.md** (new file, 23 lines)

# Common Role

## Purpose
Provides shared configuration and security setup that applies to all servers in the infrastructure.

## What It Does

### System Packages
- Installs `aptitude` for better package management and dependency resolution
- Updates package cache to ensure latest package information

### Security Configuration
- Enables UFW (Uncomplicated Firewall) with default deny policy
- Provides baseline firewall protection for all managed servers

## Usage
This role is automatically applied to all servers in the infrastructure as part of the main site.yml playbook. It ensures consistent base configuration across all managed systems.

## Dependencies
None - this is a foundational role that other roles can depend on.

## Notes
This role is designed to be lightweight and provide only the most essential common functionality. Server-specific configurations should be handled by dedicated roles like `docker` or `bootstrap`.
```diff
@@ -1,6 +1,8 @@
-- name: Install aptitude
+- name: Install common packages
   apt:
-    name: aptitude
+    name:
+      - aptitude
+      - jq
     state: latest
     update_cache: true
```
**roles/cron/README.md** (new file, 37 lines)

# Cron Role

## Purpose
Manages scheduled tasks and automated maintenance jobs for the infrastructure.

## What It Does

### Warhammer RSS Feed Updater
- Copies `update_warhammer_feed.sh` script to `/usr/local/bin/` with executable permissions
- Creates a daily cron job that runs at 09:10 AM
- The script performs these actions:
  1. Creates a temporary directory `/tmp/warhammer_feed`
  2. Runs a custom Docker container (`git.thesatelliteoflove.com/phil/rss-warhammer`) to generate RSS feed
  3. Copies the generated `warhammer_rss_feed.xml` to `/opt/stacks/caddy/site/tsol/feeds/`
  4. Restarts the Glance dashboard stack to reflect the updated feed

## Files Managed
- `/usr/local/bin/update_warhammer_feed.sh` - RSS feed update script
- Cron job: "Update Warhammer RSS Feed" (daily at 09:10)

## Dependencies
- Requires Docker to be installed and running
- Depends on the following Docker stacks being deployed:
  - Custom RSS generator container at `git.thesatelliteoflove.com/phil/rss-warhammer`
  - Caddy web server stack at `/opt/stacks/caddy/`
  - Glance dashboard stack at `/opt/stacks/glance/`

## Usage
This role is automatically applied as part of the main site.yml playbook with the `cron` tag.

```bash
# Deploy only cron jobs
ansible-playbook site.yml -i hosts.yml --tags cron
```

## Customization
To add additional cron jobs, create new tasks in the main.yml file following the same pattern as the Warhammer feed updater.
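A new job following that pattern pairs a script deployment with an `ansible.builtin.cron` entry. A minimal sketch — the job name, schedule, and script path below are hypothetical, not taken from the role:

```yaml
# Hypothetical additional task for roles/cron/tasks/main.yml
- name: Schedule nightly backup script
  ansible.builtin.cron:
    name: "Nightly backup"          # unique name keyed on for idempotence
    minute: "30"
    hour: "2"
    user: root
    job: "/usr/local/bin/backup.sh"
```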
**roles/cron/handlers/main.yml** (new file, 6 lines)

```yaml
---
# Handler to restart systemd-journald service
- name: restart rsyslog
  systemd:
    name: systemd-journald
    state: restarted
```
```diff
@@ -1,4 +1,7 @@
 ---
+# Enable cron logging in systemd-journald (already enabled by default)
+# We'll rely on journalctl for cron execution logs
+
 # Ensure the script is copied to the target machine
 - name: Copy the warhammer feed update script
   copy:
```
@@ -16,3 +19,97 @@
     hour: "9"
     user: root
     job: "/usr/local/bin/update_warhammer_feed.sh"
+
+# Create .local/bin directory for phil user
+- name: Ensure .local/bin directory exists for phil
+  file:
+    path: /home/phil/.local/bin
+    state: directory
+    mode: '0755'
+    owner: phil
+    group: phil
+
+# Install dockcheck script in phil's .local/bin
+- name: Download dockcheck.sh script
+  get_url:
+    url: https://raw.githubusercontent.com/mag37/dockcheck/main/dockcheck.sh
+    dest: /home/phil/.local/bin/dockcheck.sh
+    mode: '0755'
+    owner: phil
+    group: phil
+
+# Create .config directory for phil user
+- name: Ensure .config directory exists for phil
+  file:
+    path: /home/phil/.config
+    state: directory
+    mode: '0755'
+    owner: phil
+    group: phil
+
+# Create notify_templates directory alongside dockcheck.sh
+- name: Ensure notify_templates directory exists in .local/bin
+  file:
+    path: /home/phil/.local/bin/notify_templates
+    state: directory
+    mode: '0755'
+    owner: phil
+    group: phil
+
+# Download notify_v2.sh script for dockcheck notifications
+- name: Download notify_v2.sh script
+  get_url:
+    url: https://raw.githubusercontent.com/mag37/dockcheck/main/notify_templates/notify_v2.sh
+    dest: /home/phil/.local/bin/notify_templates/notify_v2.sh
+    mode: '0755'
+    owner: phil
+    group: phil
+
+# Download notify_gotify.sh script for dockcheck notifications
+- name: Download notify_gotify.sh script
+  get_url:
+    url: https://raw.githubusercontent.com/mag37/dockcheck/main/notify_templates/notify_gotify.sh
+    dest: /home/phil/.local/bin/notify_templates/notify_gotify.sh
+    mode: '0755'
+    owner: phil
+    group: phil
+
+# Template dockcheck configuration file
+- name: Template dockcheck configuration
+  template:
+    src: dockcheck.config.j2
+    dest: /home/phil/.config/dockcheck.config
+    mode: '0644'
+    owner: phil
+    group: phil
+
+# Create log directory for dockcheck
+- name: Create dockcheck log directory
+  file:
+    path: /var/log/dockcheck
+    state: directory
+    mode: '0755'
+    owner: phil
+    group: phil
+
+# Create dockcheck wrapper script to avoid cron escaping issues
+- name: Create dockcheck wrapper script
+  copy:
+    dest: /home/phil/.local/bin/run_dockcheck.sh
+    mode: '0755'
+    owner: phil
+    group: phil
+    content: |
+      #!/bin/bash
+      cd /home/phil
+      /home/phil/.local/bin/dockcheck.sh >> /var/log/dockcheck/dockcheck.log 2>&1
+      echo "$(date "+%Y-%m-%d %H:%M:%S") - Dockcheck completed with exit code $?" >> /var/log/dockcheck/dockcheck.log
+
+# Create cron job for dockcheck as phil user with logging
+- name: Create cron job for dockcheck container updates
+  cron:
+    name: "Check Docker container updates"
+    minute: "0"
+    hour: "8"
+    user: phil
+    job: "/home/phil/.local/bin/run_dockcheck.sh"
roles/cron/templates/dockcheck.config.j2 (new file, +18)
@@ -0,0 +1,18 @@
+# Dockcheck Configuration - Check only, no updates
+# Don't update, just check for updates
+# DontUpdate=true
+OnlyLabel=true
+AutoMode=true
+
+# Enable notifications
+Notify=true
+
+# Exclude containers from checking
+Exclude="authentik-postgresql-1,dawarich_redis,dawarich_db"
+
+# Notification channels
+NOTIFY_CHANNELS="gotify"
+
+# Gotify notification configuration
+GOTIFY_DOMAIN="https://{{ subdomains.gotify }}"
+GOTIFY_TOKEN="{{ vault_dockcheck.gotify_token }}"
roles/docker/README.md (new file, +228)

# Docker Role

## Purpose
Deploys and manages a comprehensive self-hosted infrastructure with 24 containerized services organized into logical categories, transforming a server into a personal cloud platform with authentication, media management, productivity tools, and development services.

## Architecture Overview

### Network Configuration
- **External Network**: All services connect to a shared Docker network (configurable)
- **Reverse Proxy**: Caddy handles all ingress traffic with automatic HTTPS
- **Service Discovery**: Container-to-container communication using service names
- **Firewall Integration**: UFW-Docker script properly configures firewall rules

### Security Features
- **Centralized SSO**: Authentik provides OIDC authentication for most services
- **Network Isolation**: Services restricted to appropriate network segments
- **Container Hardening**: Non-root users, capability dropping, security options
- **Secret Management**: Ansible vault for sensitive configuration
- **Variable Management**: Centralized variable hierarchy using group_vars structure

## Services Deployed (Organized by Category)

### Infrastructure (`infrastructure/`)
- **Caddy** - Reverse proxy with automatic HTTPS (static IP: 172.20.0.5)
- **Authentik** - Enterprise authentication server (OIDC/SAML SSO)
- **Dockge** - Docker compose stack management UI

### Development (`development/`)
- **Gitea** - Self-hosted Git with CI/CD runners
- **Code Server** - VS Code in the browser
- **ByteStash** - Code snippet management and organization

### Media (`media/`)
- **Audiobookshelf** - Audiobook and podcast server
- **Calibre** - E-book management and conversion
- **Ghost** - Modern blogging platform
- **Pinchflat** - YouTube video archiving
- **Pinry** - Pinterest-like image board
- **Karakeep** - Bookmark management with AI tagging
- **Manyfold** - 3D model file organization

### Productivity (`productivity/`)
- **Paperless-ngx** - Document management with OCR
- **MMDL** - Task and calendar management with CalDAV integration
- **Baikal** - CalDAV/CardDAV server
- **Syncthing** - Decentralized file sync
- **Heyform** - Form builder and surveys
- **Dawarich** - Location tracking
- **Palmr** - File sharing service
- **Obsidian LiveSync** - CouchDB backend for note synchronization

### Communication (`communication/`)
- **GoToSocial** - Lightweight ActivityPub server
- **Postiz** - Social media management

### Monitoring (`monitoring/`)
- **Glance** - Customizable dashboard with monitoring
- **Change Detection** - Website monitoring
- **Apprise API** - Unified notifications
- **Gotify** - Self-hosted push notification service

## Deployment Patterns

### Standardized Service Deployment
Each service follows a consistent pattern:
1. Creates the `/opt/stacks/[service-name]` directory structure
2. Generates a Docker Compose file from a Jinja2 template
3. Deploys using `community.docker.docker_compose_v2`
4. Configures environment variables from vault secrets
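The steps above can be sketched as an Ansible task file. This is a hedged illustration for a hypothetical `example` service (the real task files live under `tasks/<category>/` and may differ in detail):

```yaml
# Hypothetical per-service task file following the standard pattern.
- name: make example directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
  loop:
    - /opt/stacks/example

- name: Template out the compose file
  ansible.builtin.template:
    src: example-compose.yml.j2
    dest: /opt/stacks/example/compose.yml
    owner: root
    mode: '0644'

- name: deploy example stack
  community.docker.docker_compose_v2:
    project_src: /opt/stacks/example
    files:
      - compose.yml
```

Keeping every service on this same three-task shape is what makes per-service tags and category orchestrators cheap to add.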

### Template System
- **Compose Templates**: `.j2` files in `templates/` for dynamic configuration
- **Environment Templates**: Separate `.env.j2` files for services requiring environment variables
- **Variable Substitution**: Uses centralized variable hierarchy from group_vars structure
- **Domain Management**: Centralized domain and subdomain configuration
- **Network Configuration**: Standardized Docker network and IP address management

## Shell Environment Setup
The role also configures the shell environment:
- **Zsh Installation**: Sets zsh as default shell
- **Atuin**: Command history sync and search
- **Bat**: Enhanced `cat` command with syntax highlighting

## File Organization
```
roles/docker/
├── tasks/
│   ├── main.yml                 # Orchestrates all deployments
│   ├── shell.yml                # Shell environment setup
│   ├── infrastructure/
│   │   ├── main.yml             # Infrastructure category orchestrator
│   │   ├── caddy.yml            # Reverse proxy
│   │   └── authentik.yml        # Authentication
│   ├── development/
│   │   ├── main.yml             # Development category orchestrator
│   │   ├── gitea.yml            # Git hosting
│   │   └── codeserver.yml       # VS Code server
│   ├── media/                   # Media services (7 services)
│   ├── productivity/            # Productivity services (7 services)
│   ├── communication/           # Communication services (2 services)
│   └── monitoring/              # Monitoring services (3 services)
├── templates/
│   ├── [service]-compose.yml.j2 # Docker Compose templates (all templated)
│   ├── [service]-env.j2         # Environment variable templates
│   └── [service]-*.j2           # Service-specific templates
├── files/
│   ├── Caddyfile                # Caddy configuration
│   ├── ufw-docker.sh            # Firewall integration script
│   ├── client                   # Matrix well-known client file
│   └── server                   # Matrix well-known server file
└── handlers/
    └── main.yml                 # Service restart handlers
```

## Usage

### Deploy All Services
```bash
ansible-playbook site.yml -i hosts.yml --tags docker
```

### Deploy by Service Category
```bash
# Deploy entire service categories
ansible-playbook site.yml -i hosts.yml --tags infrastructure
ansible-playbook site.yml -i hosts.yml --tags development
ansible-playbook site.yml -i hosts.yml --tags media
ansible-playbook site.yml -i hosts.yml --tags productivity
ansible-playbook site.yml -i hosts.yml --tags communication
ansible-playbook site.yml -i hosts.yml --tags monitoring

# Deploy multiple categories
ansible-playbook site.yml -i hosts.yml --tags infrastructure,monitoring
```

### Deploy Individual Services
```bash
# Deploy specific services
ansible-playbook site.yml -i hosts.yml --tags authentik
ansible-playbook site.yml -i hosts.yml --tags gitea,codeserver
ansible-playbook site.yml -i hosts.yml --tags mmdl
```

## Service-Specific Notes

### MMDL (Task Management)
- **URL**: https://tasks.thesatelliteoflove.com
- **Initial Setup**: Visit the `/install` endpoint first to run database migrations
- **Authentication**: Integrates with Authentik OIDC provider
- **Database**: Uses MySQL 8.0 with automatic schema migration
- **Features**: CalDAV integration, multiple account support, task management

## Dependencies

### System Requirements
- Docker CE installed and running
- UFW firewall configured
- DNS records pointing to the server
- Valid SSL certificates (handled automatically by Caddy)

### External Services
- **DNS**: Requires subdomains configured for each service
- **Email**: Gitea uses Resend for notifications
- **Storage**: All services persist data to `/opt/stacks/[service]/`

## Configuration

### Variable Structure
The role uses a centralized variable hierarchy in `group_vars/all/`:

- **domains.yml**: Domain and subdomain mappings for all services
- **infrastructure.yml**: Network configuration, Docker settings, and system parameters
- **vault.yml**: Encrypted secrets including API keys, passwords, and OAuth credentials
- **services.yml**: Service-specific configuration and feature flags
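As an illustration of how the templates consume these variables (for example, `{{ subdomains.gotify }}` in the dockcheck config template), a `domains.yml` might look like the following. This is a hypothetical sketch; the key names beyond `subdomains.gotify` and the exact layout are assumptions, not the repo's actual file:

```yaml
# Hypothetical fragment of group_vars/all/domains.yml.
base_domain: thesatelliteoflove.com

subdomains:
  auth: "auth.{{ base_domain }}"
  git: "git.{{ base_domain }}"
  books: "books.{{ base_domain }}"
  tasks: "tasks.{{ base_domain }}"
  gotify: "gotify.{{ base_domain }}"
```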

### Required Variables (in vault.yml)
- Authentication credentials for various services (vault_*)
- API keys for external integrations
- OAuth client secrets for SSO integration
- Database passwords and connection strings
- SMTP credentials for notifications

### Network Configuration
Services expect to be accessible via subdomains of configured domains:
- `auth.thesatelliteoflove.com` - Authentik
- `git.thesatelliteoflove.com` - Gitea
- `books.thesatelliteoflove.com` - Calibre
- `tasks.thesatelliteoflove.com` - MMDL
- (and many more...)

## Monitoring & Management

### Glance Dashboard Integration
All services include Glance labels for dashboard monitoring:
- Service health status
- Container resource usage
- Parent-child relationships for multi-container services
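A compose fragment carrying such labels might look like this. The label keys follow Glance's Docker containers integration (`glance.name`, `glance.id`, `glance.parent`, etc.); the service names here are hypothetical, not taken from this repo:

```yaml
# Hypothetical compose fragment with Glance dashboard labels.
services:
  example:
    image: example/example:latest
    labels:
      glance.name: Example
      glance.id: example                  # referenced by children via glance.parent
      glance.url: https://example.thesatelliteoflove.com
  example-db:
    image: postgres:16
    labels:
      glance.parent: example              # groups the DB under the parent service
```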

### Operational Features
- Automatic container restart policies
- Health checks for database services
- Centralized logging and monitoring
- Backup-ready data structure in `/opt/stacks/`

## Security Considerations

### Network Security
- UFW-Docker integration for proper firewall rules
- Services isolated to appropriate network segments
- Restricted access for sensitive tools (Stirling PDF)

### Authentication
- Centralized SSO through Authentik for most services
- OAuth integration where supported
- Secure secret management through Ansible vault

### Container Security
- Non-root container execution (UID/GID 1000:1000)
- Security options: `no-new-privileges: true`
- Capability dropping and minimal permissions
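In compose terms (a generic sketch using standard Docker Compose keys, not a specific service from this repo), those hardening options look like:

```yaml
# Generic hardening fragment; keys are standard Docker Compose options.
services:
  example:
    image: example/example:latest
    user: "1000:1000"              # run as non-root UID/GID
    security_opt:
      - no-new-privileges:true     # block privilege escalation via setuid binaries
    cap_drop:
      - ALL                        # drop all capabilities, add back only what is needed
```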

## Troubleshooting

### Common Issues
- **Database Connection**: Ensure MySQL containers use proper authentication plugins
- **OAuth Discovery**: Check that issuer URLs don't have trailing slashes
- **Migration Failures**: Visit service `/install` endpoints for database setup
- **Network Issues**: Verify containers are on the same Docker network
@@ -37,23 +37,48 @@ watcher.thesatelliteoflove.com {
 }
 
 tasks.thesatelliteoflove.com {
-	reverse_proxy authentik-server-1:9000
+	reverse_proxy mmdl:3000
 }
 
+kanboard.thesatelliteoflove.com {
+	reverse_proxy kanboard:80
+}
+
+grocy.thesatelliteoflove.com {
+	# API endpoints bypass forward auth for mobile apps
+	handle /api/* {
+		reverse_proxy grocy:80
+	}
+
+	# Web interface requires Authentik authentication
+	forward_auth authentik-server-1:9000 {
+		uri /outpost.goauthentik.io/auth/caddy
+		copy_headers {
+			X-authentik-username
+			X-authentik-groups
+			X-authentik-email
+			X-authentik-name
+			X-authentik-uid
+		}
+	}
+	reverse_proxy grocy:80
+}
+
 phlog.thesatelliteoflove.com {
 	reverse_proxy ghost-1-ghost-1:2368
 }
 
-habits.thesatelliteoflove.com {
-	reverse_proxy beaverhabits:8080
-}
-
 code.thesatelliteoflove.com {
 	reverse_proxy authentik-server-1:9000
 }
 
+snippets.thesatelliteoflove.com {
+	reverse_proxy bytestash:5000
+}
+
 files.thesatelliteoflove.com {
-	reverse_proxy pingvin-pingvin-share-1:3000
+	reverse_proxy palmr-palmr-1:5487
 }
 
 git.thesatelliteoflove.com {

@@ -67,15 +92,6 @@ thesatelliteoflove.com {
 	file_server
 }
 
-chat.thesatelliteoflove.com, chat.thesatelliteoflove.com:8448 {
-	handle /.well-known/* {
-		root * /srv/matrix
-		file_server
-	}
-	reverse_proxy /_matrix/* conduit-homeserver-1:6167
-}
-
-
 bookmarks.thesatelliteoflove.com {
 	reverse_proxy hoarder-web-1:3000
 }

@@ -88,25 +104,28 @@ models.thesatelliteoflove.com {
 	reverse_proxy manyfold-app-1:3214
 }
 
-grist.thesatelliteoflove.com {
-	reverse_proxy grist-grist-1:8484
-}
-
 home.thesatelliteoflove.com {
 	reverse_proxy authentik-server-1:9000
 }
 
-pdftools.thesatelliteoflove.com:80 {
-	@allowed {
-		remote_ip 100.64.0.0/10
-	}
-
-	handle @allowed {
-		reverse_proxy stirling-stirlingpdf-1:8080
-	}
-
-	handle {
-		respond "Access denied" 403
-	}
-}
+gotify.thesatelliteoflove.com {
+	reverse_proxy gotify:80
+}
+
+gotify-assistant.thesatelliteoflove.com {
+	reverse_proxy igotify-assistant:8080
+}
+
+pdg.thesatelliteoflove.com {
+	root * /srv/pdg
+	try_files {path} {path}.html {path}/ =404
+	file_server
+	encode gzip
+
+	handle_errors {
+		rewrite * /{err.status_code}.html
+		file_server
+	}
+}
(deleted file)
@@ -1,5 +0,0 @@
-{
-    "m.homeserver": {
-        "base_url": "https://chat.thesatelliteoflove.com"
-    }
-}

(deleted file)
@@ -1,3 +0,0 @@
-{
-    "m.server": "chat.thesatelliteoflove.com:443"
-}
(deleted file)
@@ -1,21 +0,0 @@
-version: "3"
-services:
-  tasks.md:
-    image: baldissaramatheus/tasks.md:2.5.4
-    container_name: tasksmd
-    environment:
-      - PUID=1000
-      - PGID=1000
-    volumes:
-      - tasksmd-data:/tasks
-      - tasksmd-config:/config
-    restart: unless-stopped
-volumes:
-  tasksmd-data:
-    driver: local
-  tasksmd-config:
-    driver: local
-networks:
-  default:
-    external: true
-    name: lava
@@ -11,4 +11,11 @@
     project_src: /opt/stacks/caddy
     files:
       - compose.yml
+    state: restarted
+
+- name: restart obsidian-livesync
+  community.docker.docker_compose_v2:
+    project_src: /opt/stacks/obsidian-livesync
+    files:
+      - docker-compose.yml
     state: restarted
@@ -1,19 +0,0 @@
|
|||||||
- name: make beaver directories
|
|
||||||
ansible.builtin.file:
|
|
||||||
path: "{{ item}}"
|
|
||||||
state: directory
|
|
||||||
loop:
|
|
||||||
- /opt/stacks/beaver
|
|
||||||
|
|
||||||
- name: Template out the compose file
|
|
||||||
ansible.builtin.template:
|
|
||||||
src: beaver-compose.yml.j2
|
|
||||||
dest: /opt/stacks/beaver/compose.yml
|
|
||||||
owner: root
|
|
||||||
mode: 644
|
|
||||||
|
|
||||||
- name: deploy beaver stack
|
|
||||||
community.docker.docker_compose_v2:
|
|
||||||
project_src: /opt/stacks/beaver
|
|
||||||
files:
|
|
||||||
- compose.yml
|
|
||||||
roles/docker/tasks/communication/main.yml (new file, +10)
@@ -0,0 +1,10 @@
+---
+# Communication services - Social media, messaging, and external communication
+
+- name: Install gotosocial
+  import_tasks: gotosocial.yml
+  tags: gotosocial
+
+- name: Install postiz
+  import_tasks: postiz.yml
+  tags: postiz
@@ -1,29 +0,0 @@
|
|||||||
- name: make conduit directories
|
|
||||||
ansible.builtin.file:
|
|
||||||
path: "{{ item}}"
|
|
||||||
state: directory
|
|
||||||
loop:
|
|
||||||
- /opt/stacks/conduit
|
|
||||||
|
|
||||||
- name: copy well-known files
|
|
||||||
ansible.builtin.copy:
|
|
||||||
src: "{{item}}"
|
|
||||||
dest: /opt/stacks/caddy/site/matrix/
|
|
||||||
owner: root
|
|
||||||
mode: 644
|
|
||||||
loop:
|
|
||||||
- client
|
|
||||||
- server
|
|
||||||
|
|
||||||
- name: Template out the compose file
|
|
||||||
ansible.builtin.template:
|
|
||||||
src: conduit-compose.yml.j2
|
|
||||||
dest: /opt/stacks/conduit/compose.yml
|
|
||||||
owner: root
|
|
||||||
mode: 644
|
|
||||||
|
|
||||||
- name: deploy conduit stack
|
|
||||||
community.docker.docker_compose_v2:
|
|
||||||
project_src: /opt/stacks/conduit
|
|
||||||
files:
|
|
||||||
- compose.yml
|
|
||||||
@@ -1,19 +1,19 @@
-- name: make pingvin directories
+- name: make bytestash directories
   ansible.builtin.file:
     path: "{{ item }}"
     state: directory
   loop:
-    - /opt/stacks/pingvin
+    - /opt/stacks/bytestash
 
-- name: Template out the compose file
+- name: Template out the compose file
   ansible.builtin.template:
-    src: pingvin-compose.yml.j2
-    dest: /opt/stacks/pingvin/compose.yml
+    src: bytestash-compose.yml.j2
+    dest: /opt/stacks/bytestash/compose.yml
     owner: root
     mode: 644
 
-- name: deploy pingvin stack
+- name: deploy bytestash stack
   community.docker.docker_compose_v2:
-    project_src: /opt/stacks/pingvin
+    project_src: /opt/stacks/bytestash
     files:
       - compose.yml
roles/docker/tasks/development/main.yml (new file, +15)
@@ -0,0 +1,15 @@
+---
+# Development services - Code, collaboration, and development tools
+
+- name: Install gitea
+  import_tasks: gitea.yml
+  tags: gitea
+
+- name: Install codeserver
+  import_tasks: codeserver.yml
+  tags: codeserver
+
+- name: Install bytestash
+  import_tasks: bytestash.yml
+  tags: bytestash
@@ -1,19 +0,0 @@
|
|||||||
- name: make grist directories
|
|
||||||
ansible.builtin.file:
|
|
||||||
path: "{{ item}}"
|
|
||||||
state: directory
|
|
||||||
loop:
|
|
||||||
- /opt/stacks/grist
|
|
||||||
|
|
||||||
- name: Template out the compose file
|
|
||||||
ansible.builtin.template:
|
|
||||||
src: grist-compose.yml.j2
|
|
||||||
dest: /opt/stacks/grist/compose.yml
|
|
||||||
owner: root
|
|
||||||
mode: 644
|
|
||||||
|
|
||||||
- name: deploy grist stack
|
|
||||||
community.docker.docker_compose_v2:
|
|
||||||
project_src: /opt/stacks/grist
|
|
||||||
files:
|
|
||||||
- compose.yml
|
|
||||||
@@ -13,9 +13,9 @@
     mode: 644
   notify: restart caddy
 
-- name: copy caddy compose file
-  ansible.builtin.copy:
-    src: caddy-compose.yml
+- name: template caddy compose file
+  ansible.builtin.template:
+    src: caddy-compose.yml.j2
     dest: /opt/stacks/caddy/compose.yml
     owner: root
     mode: 644
roles/docker/tasks/infrastructure/main.yml (new file, +17)
@@ -0,0 +1,17 @@
+---
+# Infrastructure services - Core platform components
+
+- name: Install caddy
+  import_tasks: caddy.yml
+  tags: caddy
+
+- name: Install authentik
+  import_tasks: authentik.yml
+  tags: authentik
+
+- name: Deploy dockge stack
+  community.docker.docker_compose_v2:
+    project_src: /opt/dockge
+    files:
+      - dockge.yml
+  tags: dockge
@@ -49,124 +49,34 @@
   loop:
     - /opt/stacks
     - /opt/dockge
 
-- name: copy dockge compose file
-  ansible.builtin.copy:
-    src: dockge-compose.yml
+- name: template dockge compose file
+  ansible.builtin.template:
+    src: dockge-compose.yml.j2
     dest: /opt/dockge/dockge.yml
     owner: root
     mode: 644
 
-- name: deploy dockge stack
-  community.docker.docker_compose_v2:
-    project_src: /opt/dockge
-    files:
-      - dockge.yml
-  tags: dockge
-
-- name: Install caddy
-  import_tasks: caddy.yml
-  tags: caddy
-
-- name: Install gitea
-  import_tasks: gitea.yml
-  tags: gitea
-
-- name: Install hoarder
-  import_tasks: hoarder.yml
-  tags: hoarder
-
-- name: Install authentik
-  import_tasks: authentik.yml
-  tags: authentik
-
-- name: Install gotosocial
-  import_tasks: gotosocial.yml
-  tags: gotosocial
-
-#- name: Install grist
-#  import_tasks: grist.yml
-#  tags: grist
-
-#- name: Install tasksmd
-#  import_tasks: tasksmd.yml
-#  tags: tasksmd
-
-- name: Install glance
-  import_tasks: glance.yml
-  tags: glance
-
-#- name: Install stirlingpdf
-#  import_tasks: stirlingpdf.yml
-#  tags: stirlingpdf
-
-- name: Install pingvin
-  import_tasks: pingvin.yml
-  tags: pingvin
-
-- name: Install postiz
-  import_tasks: postiz.yml
-  tags: postiz
-
-- name: Install pinry
-  import_tasks: pinry.yml
-  tags: pinry
-
-- name: Install audiobookshelf
-  import_tasks: audiobookshelf.yml
-  tags: audiobookshelf
-
-- name: Install calibre
-  import_tasks: calibre.yml
-  tags: calibre
-
-- name: Install paperlessngx
-  import_tasks: paperlessngx.yml
-  tags: paperlessngx
-
-- name: Install heyform
-  import_tasks: heyform.yml
-  tags: heyform
-
-- name: Install codeserver
-  import_tasks: codeserver.yml
-  tags: codeserver
-
-- name: Install baikal
-  import_tasks: baikal.yml
-  tags: baikal
-
-- name: Install syncthing
-  import_tasks: syncthing.yml
-  tags: syncthing
-
-- name: Install ghost-1
-  import_tasks: ghost-1.yml
-  tags: ghost-1
-
-- name: Install dawarich
-  import_tasks: dawarich.yml
-  tags: dawarich
-
-#- name: Install beaver
-#  import_tasks: beaver.yml
-#  tags: beaver
-
-- name: Install changedetection
-  import_tasks: changedetection.yml
-  tags: changedetection
-
-- name: Install conduit
-  import_tasks: conduit.yml
-  tags: conduit
-
-- name: Install pinchflat
-  import_tasks: pinchflat.yml
-  tags: pinchflat
-
-- name: Install appriseapi
-  import_tasks: appriseapi.yml
-  tags: appriseapi
-
-- name: Install manyfold
-  import_tasks: manyfold.yml
-  tags: manyfold
+# Deploy services by category for better organization and dependency management
+- name: Deploy infrastructure services
+  import_tasks: infrastructure/main.yml
+  tags: infrastructure
+
+- name: Deploy development services
+  import_tasks: development/main.yml
+  tags: development
+
+- name: Deploy media services
+  import_tasks: media/main.yml
+  tags: media
+
+- name: Deploy productivity services
+  import_tasks: productivity/main.yml
+  tags: productivity
+
+- name: Deploy monitoring services
+  import_tasks: monitoring/main.yml
+  tags: monitoring
+
+- name: Deploy communication services
+  import_tasks: communication/main.yml
+  tags: communication
@@ -5,9 +5,9 @@
   loop:
     - /opt/stacks/hoarder
 
-- name: copy hoarder compose file
-  ansible.builtin.copy:
-    src: hoarder-compose.yml
+- name: template hoarder compose file
+  ansible.builtin.template:
+    src: hoarder-compose.yml.j2
     dest: /opt/stacks/hoarder/compose.yml
     owner: root
     mode: 644
roles/docker/tasks/media/main.yml (new file, +32)
@@ -0,0 +1,32 @@
+---
+# Media services - Content creation, management, and consumption
+
+- name: Install audiobookshelf
+  import_tasks: audiobookshelf.yml
+  tags: audiobookshelf
+
+- name: Install calibre
+  import_tasks: calibre.yml
+  tags: calibre
+
+- name: Install ghost-1
+  import_tasks: ghost-1.yml
+  tags: ghost-1
+
+- name: Install pinchflat
+  import_tasks: pinchflat.yml
+  tags: pinchflat
+
+- name: Install pinry
+  import_tasks: pinry.yml
+  tags: pinry
+
+- name: Install karakeep
+  import_tasks: hoarder.yml
+  tags:
+    - hoarder
+    - karakeep
+
+- name: Install manyfold
+  import_tasks: manyfold.yml
+  tags: manyfold
roles/docker/tasks/monitoring/cronmaster.yml (new file, +22)
@@ -0,0 +1,22 @@
+- name: make cronmaster directories
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  loop:
+    - /opt/stacks/cronmaster
+    - /opt/stacks/cronmaster/scripts
+    - /opt/stacks/cronmaster/data
+    - /opt/stacks/cronmaster/snippets
+
+- name: Template out the compose file
+  ansible.builtin.template:
+    src: cronmaster-compose.yml.j2
+    dest: /opt/stacks/cronmaster/compose.yml
+    owner: root
+    mode: '0644'
+
+- name: deploy cronmaster stack
+  community.docker.docker_compose_v2:
+    project_src: /opt/stacks/cronmaster
+    files:
+      - compose.yml
roles/docker/tasks/monitoring/gotify.yml (new file, +19)
@@ -0,0 +1,19 @@
+- name: Create gotify directories
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  loop:
+    - /opt/stacks/gotify
+
+- name: Template out the gotify compose file
+  ansible.builtin.template:
+    src: gotify-compose.yml.j2
+    dest: /opt/stacks/gotify/compose.yml
+    owner: root
+    mode: 644
+
+- name: Deploy gotify stack
+  community.docker.docker_compose_v2:
+    project_src: /opt/stacks/gotify
+    files:
+      - compose.yml
roles/docker/tasks/monitoring/main.yml (new file, 22 lines)
@@ -0,0 +1,22 @@
+---
+# Monitoring services - System monitoring, alerts, and dashboards
+
+- name: Install glance
+  import_tasks: glance.yml
+  tags: glance
+
+- name: Install changedetection
+  import_tasks: changedetection.yml
+  tags: changedetection
+
+- name: Install appriseapi
+  import_tasks: appriseapi.yml
+  tags: appriseapi
+
+- name: Install gotify
+  import_tasks: gotify.yml
+  tags: gotify
+
+- name: Install cronmaster
+  import_tasks: cronmaster.yml
+  tags: cronmaster
roles/docker/tasks/productivity/grocy.yml (new file, 18 lines)
@@ -0,0 +1,18 @@
+---
+- name: Create grocy directories
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  loop:
+    - /opt/stacks/grocy
+
+- name: Template grocy compose file
+  ansible.builtin.template:
+    src: grocy-compose.yml.j2
+    dest: /opt/stacks/grocy/compose.yml
+
+- name: Deploy grocy stack
+  community.docker.docker_compose_v2:
+    project_src: /opt/stacks/grocy
+    files:
+      - compose.yml
roles/docker/tasks/productivity/kanboard.yml (new file, 18 lines)
@@ -0,0 +1,18 @@
+---
+- name: Create kanboard directories
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  loop:
+    - /opt/stacks/kanboard
+
+- name: Template kanboard compose file
+  ansible.builtin.template:
+    src: kanboard-compose.yml.j2
+    dest: /opt/stacks/kanboard/compose.yml
+
+- name: Deploy kanboard stack
+  community.docker.docker_compose_v2:
+    project_src: /opt/stacks/kanboard
+    files:
+      - compose.yml
roles/docker/tasks/productivity/main.yml (new file, 42 lines)
@@ -0,0 +1,42 @@
+---
+# Productivity services - Task management, document handling, and personal organization
+
+- name: Install paperlessngx
+  import_tasks: paperlessngx.yml
+  tags: paperlessngx
+
+- name: Install baikal
+  import_tasks: baikal.yml
+  tags: baikal
+
+- name: Install syncthing
+  import_tasks: syncthing.yml
+  tags: syncthing
+
+- name: Install mmdl
+  import_tasks: mmdl.yml
+  tags: mmdl
+
+- name: Install heyform
+  import_tasks: heyform.yml
+  tags: heyform
+
+- name: Install dawarich
+  import_tasks: dawarich.yml
+  tags: dawarich
+
+- name: Install palmr
+  import_tasks: palmr.yml
+  tags: palmr
+
+- name: Install obsidian-livesync
+  import_tasks: obsidian-livesync.yml
+  tags: obsidian-livesync
+
+- name: Install kanboard
+  import_tasks: kanboard.yml
+  tags: kanboard
+
+- name: Install grocy
+  import_tasks: grocy.yml
+  tags: grocy
roles/docker/tasks/productivity/mmdl.yml (new file, 25 lines)
@@ -0,0 +1,25 @@
+---
+- name: Create mmdl directories
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  loop:
+    - /opt/stacks/mmdl
+    - /opt/stacks/mmdl/data
+    - /opt/stacks/mmdl/mysql
+
+- name: Template mmdl environment file
+  ansible.builtin.template:
+    src: mmdl-env.j2
+    dest: /opt/stacks/mmdl/.env.local
+
+- name: Template mmdl compose file
+  ansible.builtin.template:
+    src: mmdl-compose.yml.j2
+    dest: /opt/stacks/mmdl/compose.yml
+
+- name: Deploy mmdl stack
+  community.docker.docker_compose_v2:
+    project_src: /opt/stacks/mmdl
+    files:
+      - compose.yml
roles/docker/tasks/productivity/obsidian-livesync.yml (new file, 20 lines)
@@ -0,0 +1,20 @@
+---
+- name: make obsidian-livesync directories
+  ansible.builtin.file:
+    path: "{{ paths.stacks }}/obsidian-livesync"
+    state: directory
+    mode: '0755'
+
+- name: Template out the compose file
+  ansible.builtin.template:
+    src: obsidian-livesync-compose.yml.j2
+    dest: "{{ paths.stacks }}/obsidian-livesync/docker-compose.yml"
+    mode: '0644'
+  notify: restart obsidian-livesync
+
+- name: deploy obsidian-livesync stack
+  community.docker.docker_compose_v2:
+    project_src: "{{ paths.stacks }}/obsidian-livesync"
+    state: present
+  tags:
+    - obsidian-livesync
roles/docker/tasks/productivity/palmr.yml (new file, 19 lines)
@@ -0,0 +1,19 @@
+- name: make palmr directories
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  loop:
+    - /opt/stacks/palmr
+
+- name: Template out the compose file
+  ansible.builtin.template:
+    src: palmr-compose.yml.j2
+    dest: /opt/stacks/palmr/compose.yml
+    owner: root
+    mode: 644
+
+- name: deploy palmr stack
+  community.docker.docker_compose_v2:
+    project_src: /opt/stacks/palmr
+    files:
+      - compose.yml
@@ -1,19 +0,0 @@
-- name: make StirlingPDF directories
-  ansible.builtin.file:
-    path: "{{ item}}"
-    state: directory
-  loop:
-    - /opt/stacks/stirlingpdf
-
-- name: Template out the compose file
-  ansible.builtin.template:
-    src: striling-compose.yml.j2
-    dest: /opt/stacks/stirling/compose.yml
-    owner: root
-    mode: 644
-
-- name: deploy stirling stack
-  community.docker.docker_compose_v2:
-    project_src: /opt/stacks/stirling
-    files:
-      - compose.yml
@@ -1,19 +0,0 @@
-- name: make tasksmd directories
-  ansible.builtin.file:
-    path: "{{ item}}"
-    state: directory
-  loop:
-    - /opt/stacks/tasksmd
-
-- name: copy tasksmd compose file
-  ansible.builtin.copy:
-    src: tasksmd-compose.yml
-    dest: /opt/stacks/tasksmd/compose.yml
-    owner: root
-    mode: 644
-
-- name: deploy tasksmd stack
-  community.docker.docker_compose_v2:
-    project_src: /opt/stacks/tasksmd
-    files:
-      - compose.yml
@@ -2,7 +2,7 @@ services:
   apprise:
     container_name: apprise
     ports:
-      - 100.70.169.99:8000:8000
+      - {{ network.docker_host_ip }}:8000:8000
     environment:
       - APPRISE_STATEFUL_MODE=simple
      - APPRISE_WORKER_COUNT=1
@@ -11,13 +11,15 @@ services:
       - plugin:/plugin
       - attach:/attach
     image: caronc/apprise:latest
+    extra_hosts:
+      - "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
     labels:
       glance.name: Apprise
       glance.icon: si:imessage
-      glance.url: https://auth.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.appriseapi }}/
       glance.description: Apprise api server
       glance.id: apprise
+      mag37.dockcheck.update: true
 volumes:
   config:
   attach:
@@ -25,4 +27,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
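The substitutions above replace hard-coded values (`100.70.169.99`, the `glance.url` hostnames, the `lava` network name) with Jinja2 expressions that Ansible's `template` module resolves when it writes the compose file. A minimal stand-in for that substitution (toy code, not Jinja2 itself; the variable values here are placeholders mirroring the diff):

```python
import re

def render(template, variables):
    # Resolve {{ dotted.name }} placeholders against nested dicts, the way
    # Jinja2 attribute lookup walks mappings (toy version of the real thing).
    def lookup(match):
        obj = variables
        for part in match.group(1).split("."):
            obj = obj[part]
        return str(obj)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

variables = {"network": {"docker_host_ip": "100.70.169.99"},
             "docker": {"network_name": "lava"}}
print(render("- {{ network.docker_host_ip }}:8000:8000", variables))
# - 100.70.169.99:8000:8000
print(render("name: {{ docker.network_name }}", variables))
# name: lava
```

The rendered output is identical to the old hard-coded file; the gain is that the IP and network name now live in one place in group_vars.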
@@ -10,12 +10,13 @@ services:
       - TZ=America/Denver
       - DISABLE_SSRF_REQUEST_FILTER=1
     extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.auth }}:172.20.0.5'
     labels:
       glance.name: Audiobookshelf
       glance.icon: si:audiobookshelf
-      glance.url: https://audio.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.audio }}/
       glance.description: Audio book server
+      mag37.dockcheck.update: true
 volumes:
   audiobooks:
     driver: local
@@ -28,4 +29,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -37,7 +37,7 @@ services:
       glance.parent: authentik
       glance.name: Redis
   server:
-    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2025.4}
+    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2025.8.4}
     restart: unless-stopped
     command: server
     environment:
@@ -64,7 +64,7 @@ services:
       glance.description: Authentication server
       glance.id: authentik
   worker:
-    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2025.4}
+    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2025.8.4}
     restart: unless-stopped
     command: worker
     environment:
@@ -103,4 +103,4 @@ volumes:
 networks:
   default:
     external: true
     name: lava
@@ -1,15 +1,15 @@
-PG_PASS={{ authentik_pg_pass }}
+PG_PASS={{ vault_authentik.postgres_password }}
-AUTHENTIK_SECRET_KEY={{ authentik_secret_key }}
+AUTHENTIK_SECRET_KEY={{ vault_authentik.secret_key }}
 # SMTP Host Emails are sent to
-AUTHENTIK_EMAIL__HOST=smtp.resend.com
+AUTHENTIK_EMAIL__HOST={{ smtp.host }}
 AUTHENTIK_EMAIL__PORT=25
 # Optionally authenticate (don't add quotation marks to your password)
-AUTHENTIK_EMAIL__USERNAME=resend
+AUTHENTIK_EMAIL__USERNAME={{ smtp.username }}
-AUTHENTIK_EMAIL__PASSWORD={{ resend_key }}
+AUTHENTIK_EMAIL__PASSWORD={{ vault_smtp.password }}
 # Use StartTLS
 AUTHENTIK_EMAIL__USE_TLS=true
 # Use SSL
 AUTHENTIK_EMAIL__USE_SSL=false
 AUTHENTIK_EMAIL__TIMEOUT=10
 # Email address authentik will send from, should have a correct @domain
-AUTHENTIK_EMAIL__FROM=auth@updates.thesatelliteoflove.com
+AUTHENTIK_EMAIL__FROM=auth@{{ email_domains.updates }}
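The env template now pulls every secret from nested vault variables (`vault_authentik.postgres_password` and friends) instead of flat top-level ones, so all secrets resolve through the encrypted vault.yml. Sketched in plain Python with made-up placeholder values (the real values are vault-encrypted and never appear in the repo):

```python
# Placeholder values only - the real ones live encrypted in vault.yml.
vault_authentik = {"postgres_password": "pg-placeholder", "secret_key": "key-placeholder"}
vault_smtp = {"password": "smtp-placeholder"}
smtp = {"host": "smtp.example.com", "username": "mailer"}

# Each env line maps one template expression to one nested lookup.
env_lines = [
    f"PG_PASS={vault_authentik['postgres_password']}",
    f"AUTHENTIK_SECRET_KEY={vault_authentik['secret_key']}",
    f"AUTHENTIK_EMAIL__HOST={smtp['host']}",
    f"AUTHENTIK_EMAIL__USERNAME={smtp['username']}",
    f"AUTHENTIK_EMAIL__PASSWORD={vault_smtp['password']}",
]
print(env_lines[0])  # PG_PASS=pg-placeholder
```

Grouping the secrets per service (`vault_authentik`, `vault_smtp`) keeps the vault file navigable as the number of services grows.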
@@ -1,6 +1,6 @@
 services:
   baikal:
-    image: ckulka/baikal:nginx
+    image: ckulka/baikal:0.10.1-nginx
     restart: unless-stopped
     volumes:
       - config:/var/www/baikal/config
@@ -8,9 +8,9 @@ services:
     labels:
       glance.name: Baikal
       glance.icon: si:protoncalendar
-      glance.url: https://cal.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.cal }}/
       glance.description: CalDav server
+      mag37.dockcheck.update: true
 volumes:
   config:
   data:
@@ -18,4 +18,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -1,20 +0,0 @@
-services:
-  beaverhabits:
-    container_name: beaverhabits
-    user: 1000:1000
-    environment:
-      # See the note below to find all the environment variables
-      - HABITS_STORAGE=USER_DISK # DATABASE stores in a single SQLite database named habits.db. USER_DISK option saves in a local json file.
-      - MAX_USER_COUNT=1
-    volumes:
-      - ./data:/app/.user/ # Change directory to match your docker file scheme.
-    restart: unless-stopped
-    image: daya0576/beaverhabits:latest
-
-volumes:
-  data:
-
-networks:
-  default:
-    external: true
-    name: lava
roles/docker/templates/bytestash-compose.yml.j2 (new file, 37 lines)
@@ -0,0 +1,37 @@
+services:
+  bytestash:
+    image: ghcr.io/jordan-dalby/bytestash:latest
+    container_name: bytestash
+    restart: unless-stopped
+    volumes:
+      - bytestash_data:/data/snippets
+    environment:
+      JWT_SECRET: "{{ vault_bytestash.jwt_secret }}"
+      TOKEN_EXPIRY: "24h"
+      ALLOW_NEW_ACCOUNTS: "true"
+      DEBUG: "false"
+      DISABLE_ACCOUNTS: "false"
+      DISABLE_INTERNAL_ACCOUNTS: "false"
+      OIDC_ENABLED: "true"
+      OIDC_DISPLAY_NAME: "Login with Authentik"
+      OIDC_ISSUER_URL: "https://{{ subdomains.auth }}/application/o/bytestash/"
+      OIDC_CLIENT_ID: "{{ vault_bytestash.oidc_client_id }}"
+      OIDC_CLIENT_SECRET: "{{ vault_bytestash.oidc_client_secret }}"
+      OIDC_SCOPES: "openid profile email"
+    extra_hosts:
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
+    labels:
+      glance.name: ByteStash
+      glance.icon: si:code
+      glance.url: https://{{ subdomains.bytestash }}/
+      glance.description: Code snippet manager
+      glance.id: bytestash
+      mag37.dockcheck.update: true
+
+volumes:
+  bytestash_data:
+    driver: local
+
+networks:
+  default:
+    external: true
+    name: {{ docker.network_name }}
@@ -16,11 +16,12 @@ services:
     labels:
       glance.name: Caddy
       glance.icon: si:caddy
-      glance.url: https://thesatelliteoflove.com/
+      glance.url: https://{{ primary_domain }}/
       glance.description: Reverse proxy
+      mag37.dockcheck.update: true
     networks:
       default:
-        ipv4_address: 172.20.0.5
+        ipv4_address: {{ docker.hairpin_ip }}
 volumes:
   caddy_data:
   caddy_config:
@@ -28,4 +29,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -16,8 +16,9 @@ services:
     labels:
       glance.name: Calibre
       glance.icon: si:calibreweb
-      glance.url: https://books.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.books }}/
       glance.description: Book server
+      mag37.dockcheck.update: true
 volumes:
   config:
     driver: local
@@ -26,4 +27,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -4,14 +4,13 @@ services:
     image: ghcr.io/dgtlmoon/changedetection.io
     container_name: changedetection
     hostname: changedetection
-    extra_hosts:
-      - 'chat.thesatelliteoflove.com:172.20.0.5'
     labels:
       glance.name: Changedetection
       glance.icon: si:watchtower
-      glance.url: https://watcher.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.watcher }}/
       glance.description: Changedetection
       glance.id: changedetection
+      mag37.dockcheck.update: true
     volumes:
       - changedetection-data:/datastore
     # Configurable proxy list support, see https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#proxy-list-support
@@ -50,7 +49,7 @@ services:
       # - NO_PROXY="localhost,192.168.0.0/24"
       #
       # Base URL of your changedetection.io install (Added to the notification alert)
-      - BASE_URL=https://watcher.thesatelliteoflove.com
+      - BASE_URL=https://{{ subdomains.watcher }}
       # Respect proxy_pass type settings, `proxy_set_header Host "localhost";` and `proxy_set_header X-Forwarded-Prefix /app;`
       # More here https://github.com/dgtlmoon/changedetection.io/wiki/Running-changedetection.io-behind-a-reverse-proxy-sub-directory
       # - USE_X_SETTINGS=1
@@ -77,6 +76,8 @@ services:
     # ports:
     #   - 5000:5000
     restart: unless-stopped
+    extra_hosts:
+      - "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
 
   # Used for fetching pages via WebDriver+Chrome where you need Javascript support.
   # Now working on arm64 (needs testing on rPi - tested on Oracle ARM instance)
@@ -96,6 +97,7 @@ services:
     labels:
       glance.parent: changedetection
       glance.name: Browser
+      mag37.dockcheck.update: true
     image: dgtlmoon/sockpuppetbrowser:latest
     cap_add:
       - SYS_ADMIN
@@ -106,6 +108,8 @@ services:
       - SCREEN_HEIGHT=1024
       - SCREEN_DEPTH=16
       - MAX_CONCURRENT_CHROME_PROCESSES=10
+    extra_hosts:
+      - "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
 
   # Used for fetching pages via Playwright+Chrome where you need Javascript support.
   # Note: Works well but is deprecated, does not fetch full page screenshots (doesnt work with Visual Selector)
@@ -130,4 +134,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -5,8 +5,9 @@ services:
     labels:
       glance.name: Code Server
       glance.icon: si:vscodium
-      glance.url: https://code.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.code }}/
       glance.description: Code Server
+      mag37.dockcheck.update: true
     container_name: codeserver
     volumes:
       - home:/home
@@ -19,4 +20,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -1,46 +0,0 @@
-services:
-  homeserver:
-    image: matrixconduit/matrix-conduit:next
-    restart: unless-stopped
-    volumes:
-      - db:/var/lib/matrix-conduit/
-    labels:
-      glance.name: Conduit
-      glance.icon: si:matrix
-      glance.url: https://chat.thesatelliteoflove.com/
-      glance.description: Matrix server
-    environment:
-      CONDUIT_SERVER_NAME: chat.thesatelliteoflove.com # EDIT THIS
-      CONDUIT_DATABASE_PATH: /var/lib/matrix-conduit/
-      CONDUIT_DATABASE_BACKEND: rocksdb
-      CONDUIT_PORT: 6167
-      CONDUIT_MAX_REQUEST_SIZE: 20000000 # in bytes, ~20 MB
-      CONDUIT_ALLOW_REGISTRATION: 'true'
-      CONDUIT_ALLOW_FEDERATION: 'true'
-      CONDUIT_ALLOW_CHECK_FOR_UPDATES: 'true'
-      CONDUIT_TRUSTED_SERVERS: '["matrix.org"]'
-      #CONDUIT_MAX_CONCURRENT_REQUESTS: 100
-      CONDUIT_ADDRESS: 0.0.0.0
-      CONDUIT_CONFIG: '' # Ignore this
-  #
-  ### Uncomment if you want to use your own Element-Web App.
-  ### Note: You need to provide a config.json for Element and you also need a second
-  ### Domain or Subdomain for the communication between Element and Conduit
-  ### Config-Docs: https://github.com/vector-im/element-web/blob/develop/docs/config.md
-  # element-web:
-  #   image: vectorim/element-web:latest
-  #   restart: unless-stopped
-  #   ports:
-  #     - 8009:80
-  #   volumes:
-  #     - ./element_config.json:/app/config.json
-  #   depends_on:
-  #     - homeserver
-
-volumes:
-  db:
-
-networks:
-  default:
-    external: true
-    name: lava
roles/docker/templates/cronmaster-compose.yml.j2 (new file, 32 lines)
@@ -0,0 +1,32 @@
+services:
+  cronmaster:
+    image: ghcr.io/fccview/cronmaster:latest
+    container_name: cronmaster
+    restart: unless-stopped
+    user: "root"
+    privileged: true
+    pid: "host"
+    ports:
+      - "{{ network.docker_host_ip }}:40123:3000"
+    environment:
+      - DOCKER=true
+      - HOST_PROJECT_DIR=/opt/stacks/cronmaster/scripts
+      - HOST_CRONTAB_USER=root,phil
+      - AUTH_PASSWORD={{ vault_cronmaster.password }}
+    volumes:
+      - /var/run/docker.sock:/var/run/docker.sock
+      - /opt/stacks/cronmaster/scripts:/app/scripts
+      - /opt/stacks/cronmaster/data:/app/data
+      - /opt/stacks/cronmaster/snippets:/app/snippets
+    labels:
+      glance.url: "http://{{ network.docker_host_ip }}:40123/"
+      glance.title: CronMaster
+      glance.description: Cron job management interface
+      glance.group: Infrastructure
+      glance.parent: infrastructure
+      glance.name: CronMaster
+      mag37.dockcheck.update: true
+
+networks:
+  default:
+    external: true
+    name: "{{ docker.network_name }}"
@@ -2,19 +2,18 @@ services:
|
|||||||
dawarich_redis:
|
dawarich_redis:
|
||||||
image: redis:7.4-alpine
|
image: redis:7.4-alpine
|
||||||
container_name: dawarich_redis
|
container_name: dawarich_redis
|
||||||
command: redis-server
|
labels:
|
||||||
|
glance.parent: dawarich
|
||||||
|
glance.name: Redis
|
||||||
volumes:
|
volumes:
|
||||||
- dawarich_redis_data:/var/shared/redis
|
- dawarich_redis_data:/data
|
||||||
restart: always
|
restart: always
|
||||||
healthcheck:
|
healthcheck:
|
||||||
test: [ "CMD", "redis-cli", "--raw", "incr", "ping" ]
|
test: ["CMD", "redis-cli", "ping"]
|
||||||
interval: 10s
|
interval: 10s
|
||||||
retries: 5
|
retries: 5
|
||||||
start_period: 30s
|
start_period: 30s
|
||||||
timeout: 10s
|
timeout: 10s
|
||||||
labels:
|
|
||||||
glance.parent: dawarich
|
|
||||||
glance.name: Redis
|
|
||||||
dawarich_db:
|
dawarich_db:
|
||||||
image: postgis/postgis:17-3.5-alpine
|
image: postgis/postgis:17-3.5-alpine
|
||||||
shm_size: 1G
|
shm_size: 1G
|
||||||
@@ -26,7 +25,7 @@ services:
|
|||||||
- dawarich_db_data:/var/lib/postgresql/data
|
- dawarich_db_data:/var/lib/postgresql/data
|
||||||
environment:
|
environment:
|
||||||
POSTGRES_USER: postgres
|
POSTGRES_USER: postgres
|
||||||
POSTGRES_PASSWORD: {{ dawarich_db_password }}
|
POSTGRES_PASSWORD: {{ vault_dawarich.postgres_password }}
|
||||||
POSTGRES_DB: dawarich_production
|
POSTGRES_DB: dawarich_production
|
||||||
restart: always
|
restart: always
|
||||||
healthcheck:
|
healthcheck:
|
||||||
@@ -35,18 +34,20 @@ services:
|
|||||||
retries: 5
|
retries: 5
|
||||||
start_period: 30s
|
start_period: 30s
|
||||||
timeout: 10s
|
timeout: 10s
|
||||||
|
|
||||||
dawarich_app:
|
dawarich_app:
|
||||||
image: freikin/dawarich:latest
|
image: freikin/dawarich:latest
|
||||||
container_name: dawarich_app
|
container_name: dawarich_app
|
||||||
labels:
|
labels:
|
||||||
glance.name: Dawarich
|
glance.name: Dawarich
|
||||||
glance.icon: si:openstreetmap
|
glance.icon: si:openstreetmap
|
||||||
glance.url: https://loclog.thesatelliteoflove.com/
|
glance.url: https://{{ subdomains.loclog }}/
|
||||||
glance.description: Dawarich
|
glance.description: Dawarich
|
||||||
glance.id: dawarich
|
glance.id: dawarich
|
||||||
volumes:
|
volumes:
|
||||||
- dawarich_public:/var/app/public
|
- dawarich_public:/var/app/public
|
||||||
- dawarich_watched:/var/app/tmp/imports/watched
|
- dawarich_watched:/var/app/tmp/imports/watched
|
||||||
|
- dawarich_storage:/var/app/storage
|
||||||
stdin_open: true
|
stdin_open: true
|
||||||
tty: true
|
tty: true
|
||||||
entrypoint: web-entrypoint.sh
|
entrypoint: web-entrypoint.sh
|
||||||
@@ -54,21 +55,21 @@ services:
|
|||||||
restart: on-failure
|
restart: on-failure
|
||||||
environment:
|
environment:
|
||||||
RAILS_ENV: production
|
RAILS_ENV: production
|
||||||
REDIS_URL: redis://dawarich_redis:6379/0
|
|
||||||
DATABASE_HOST: dawarich_db
|
DATABASE_HOST: dawarich_db
|
||||||
DATABASE_PORT: 5432
|
DATABASE_PORT: 5432
|
||||||
DATABASE_USERNAME: postgres
|
DATABASE_USERNAME: postgres
|
||||||
DATABASE_PASSWORD: {{ dawarich_db_password }}
|
DATABASE_PASSWORD: {{ vault_dawarich.postgres_password }}
|
||||||
DATABASE_NAME: dawarich_production
|
DATABASE_NAME: dawarich_production
|
||||||
|
REDIS_URL: redis://dawarich_redis:6379
|
||||||
MIN_MINUTES_SPENT_IN_CITY: 60
|
MIN_MINUTES_SPENT_IN_CITY: 60
|
||||||
APPLICATION_HOSTS: loclog.thesatelliteoflove.com,localhost,::1,127.0.0.1
|
APPLICATION_HOSTS: {{ subdomains.loclog }},localhost,::1,127.0.0.1
|
||||||
TIME_ZONE: America/Denver
|
TIME_ZONE: America/Denver
|
||||||
APPLICATION_PROTOCOL: http
|
APPLICATION_PROTOCOL: http
|
||||||
DISTANCE_UNIT: mi
|
DISTANCE_UNIT: mi
|
||||||
PROMETHEUS_EXPORTER_ENABLED: false
|
PROMETHEUS_EXPORTER_ENABLED: false
|
||||||
PROMETHEUS_EXPORTER_HOST: 0.0.0.0
|
PROMETHEUS_EXPORTER_HOST: 0.0.0.0
|
||||||
PROMETHEUS_EXPORTER_PORT: 9394
|
PROMETHEUS_EXPORTER_PORT: 9394
|
||||||
SECRET_KEY_BASE: 1234567890
|
SECRET_KEY_BASE: {{ vault_dawarich.secret_key_base }}
|
||||||
RAILS_LOG_TO_STDOUT: "true"
|
RAILS_LOG_TO_STDOUT: "true"
|
||||||
logging:
|
logging:
|
||||||
driver: "json-file"
|
driver: "json-file"
|
||||||
@@ -91,8 +92,8 @@ services:
|
|||||||
deploy:
|
deploy:
|
||||||
resources:
|
resources:
|
||||||
limits:
|
limits:
|
||||||
cpus: '0.50' # Limit CPU usage to 50% of one core
|
cpus: '0.50'
|
||||||
memory: '2G' # Limit memory usage to 2GB
|
memory: '2G'
|
||||||
dawarich_sidekiq:
|
dawarich_sidekiq:
|
||||||
image: freikin/dawarich:latest
|
image: freikin/dawarich:latest
|
||||||
container_name: dawarich_sidekiq
|
container_name: dawarich_sidekiq
|
||||||
@@ -102,27 +103,27 @@ services:
|
|||||||
volumes:
|
volumes:
|
||||||
       - dawarich_public:/var/app/public
       - dawarich_watched:/var/app/tmp/imports/watched
+      - dawarich_storage:/var/app/storage
     stdin_open: true
     tty: true
     entrypoint: sidekiq-entrypoint.sh
-    command: ['bundle', 'exec', 'sidekiq']
+    command: ['sidekiq']
     restart: on-failure
     environment:
       RAILS_ENV: production
-      REDIS_URL: redis://dawarich_redis:6379/0
       DATABASE_HOST: dawarich_db
       DATABASE_PORT: 5432
       DATABASE_USERNAME: postgres
-      DATABASE_PASSWORD: {{ dawarich_db_password }}
+      DATABASE_PASSWORD: {{ vault_dawarich.postgres_password }}
       DATABASE_NAME: dawarich_production
-      APPLICATION_HOSTS: loclog.thesatelliteoflove.com,localhost,::1,127.0.0.1
-      BACKGROUND_PROCESSING_CONCURRENCY: 10
+      REDIS_URL: redis://dawarich_redis:6379
+      MIN_MINUTES_SPENT_IN_CITY: 60
+      APPLICATION_HOSTS: {{ subdomains.loclog }},localhost,::1,127.0.0.1
+      TIME_ZONE: America/Denver
       APPLICATION_PROTOCOL: http
       DISTANCE_UNIT: mi
       PROMETHEUS_EXPORTER_ENABLED: false
-      PROMETHEUS_EXPORTER_HOST: dawarich_app
-      PROMETHEUS_EXPORTER_PORT: 9394
-      SECRET_KEY_BASE: 1234567890
+      SECRET_KEY_BASE: {{ vault_dawarich.secret_key_base }}
       RAILS_LOG_TO_STDOUT: "true"
     logging:
       driver: "json-file"
@@ -130,33 +131,28 @@ services:
       max-size: "100m"
       max-file: "5"
     healthcheck:
-      test: [ "CMD-SHELL", "bundle exec sidekiqmon processes | grep $${HOSTNAME}" ]
+      test: ["CMD-SHELL", "ps aux | grep '[s]idekiq' || exit 1"]
       interval: 10s
       retries: 30
       start_period: 30s
       timeout: 10s
     depends_on:
+      dawarich_app:
+        condition: service_healthy
+        restart: true
       dawarich_db:
         condition: service_healthy
         restart: true
       dawarich_redis:
         condition: service_healthy
         restart: true
-      dawarich_app:
-        condition: service_healthy
-        restart: true
-    deploy:
-      resources:
-        limits:
-          cpus: '0.50' # Limit CPU usage to 50% of one core
-          memory: '2G' # Limit memory usage to 2GB

 volumes:
   dawarich_db_data:
   dawarich_redis_data:
   dawarich_public:
   dawarich_watched:
+  dawarich_storage:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
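Across these templates, flat variables like `{{ dawarich_db_password }}` and hardcoded hostnames are replaced with namespaced lookups such as `{{ vault_dawarich.postgres_password }}` and `{{ subdomains.loclog }}`. The group_vars shape this implies is roughly the following sketch; the key names are inferred from the templates and the real vault file is encrypted, so treat the layout as an assumption, with values taken from the old (`-`) sides of the diffs:

```yaml
# Assumed variable layout backing these templates (inferred, not from the repo)

# group_vars/all/vault.yml (encrypted with ansible-vault)
vault_dawarich:
  postgres_password: "..."
  secret_key_base: "..."
vault_smtp:
  password: "..."

# group_vars/all (plaintext)
primary_domain: thesatelliteoflove.com
subdomains:
  loclog: loclog.thesatelliteoflove.com
smtp:
  host: smtp.resend.com
  username: resend
docker:
  network_name: lava
  hairpin_ip: 172.20.0.5
```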
@@ -7,21 +7,21 @@ services:
       - database__client=sqlite3
       - database__connection__filename=/var/lib/ghost/content/data/ghost.db
       - database__useNullAsDefault=true
-      - url=https://phlog.thesatelliteoflove.com
+      - url=https://{{ subdomains.phlog }}
     volumes:
       - ghost:/var/lib/ghost/content
     extra_hosts:
-      - 'phlog.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.phlog }}:172.20.0.5'
     labels:
       glance.name: Ghost
       glance.icon: si:ghost
-      glance.url: https://phlog.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.phlog }}/
       glance.description: Photo Blog
+      mag37.dockcheck.update: true
 volumes:
   ghost:
     driver: local
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -7,19 +7,20 @@ services:
       - USER_UID=1000
       - USER_GID=1000
       - GITEA__mailer__ENABLED=true
-      - GITEA__mailer__FROM=git@updates.thesatelliteoflove.com
+      - GITEA__mailer__FROM=git@{{ email_domains.updates }}
       - GITEA__mailer__PROTOCOL=smtps
-      - GITEA__mailer__SMTP_ADDR=smtp.resend.com
+      - GITEA__mailer__SMTP_ADDR={{ smtp.host }}
       - GITEA__mailer__SMTP_PORT=465
-      - GITEA__mailer__USER=resend
+      - GITEA__mailer__USER={{ smtp.username }}
-      - GITEA__mailer__PASSWD={{ resend_key }}
+      - GITEA__mailer__PASSWD={{ vault_smtp.password }}
     restart: unless-stopped
     labels:
       glance.name: Gitea
       glance.icon: si:gitea
-      glance.url: https://git.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.git }}/
       glance.description: Code repo
       glance.id: gitea
+      mag37.dockcheck.update: true
     volumes:
       - gitea:/data
       - /etc/timezone:/etc/timezone:ro
@@ -27,8 +28,8 @@ services:
     ports:
       - 222:22
     extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
-      - 'git.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.git }}:{{ docker.hairpin_ip }}'
   runner:
     image: gitea/act_runner:nightly
     restart: unless-stopped
@@ -37,24 +38,25 @@ services:
     environment:
       - CONFIG_FILE=/config.yaml
       - GITEA_INSTANCE_URL=http://gitea:3000
-      - GITEA_RUNNER_REGISTRATION_TOKEN={{ gitea_runner_key }}
+      - GITEA_RUNNER_REGISTRATION_TOKEN={{ vault_infrastructure.gitea_runner_key }}
      - GITEA_RUNNER_NAME=runner_1
       - GITEA_RUNNER_LABELS=docker
     extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
-      - 'git.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.git }}:{{ docker.hairpin_ip }}'
     labels:
       glance.parent: gitea
       glance.name: Worker
+      mag37.dockcheck.update: true
     volumes:
       - ./runner-config.yaml:/config.yaml
       - ./data:/data
       - /var/run/docker.sock:/var/run/docker.sock
-      - /opt/stacks/caddy/site:/sites
+      - {{ paths.stacks }}/caddy/site:/sites
 volumes:
   gitea:
     driver: local
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -1,6 +1,6 @@
 services:
   glance:
-    image: glanceapp/glance
+    image: glanceapp/glance:latest
     volumes:
       - ./config:/app/config
       - /etc/timezone:/etc/timezone:ro
@@ -8,16 +8,16 @@ services:
       - /var/run/docker.sock:/var/run/docker.sock
     restart: unless-stopped
     extra_hosts:
-      - 'thesatelliteoflove.com:172.20.0.5'
+      - '{{ primary_domain }}:172.20.0.5'
-      - 'watcher.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.watcher }}:172.20.0.5'
     labels:
       glance.name: Glance
       glance.icon: si:homepage
-      glance.url: https://home.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.home }}/
       glance.description: Homepage app
       glance.id: glance
+      mag37.dockcheck.update: true
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -1,40 +1,27 @@
 pages:
   - name: Home
+    head-widgets:
+      - type: markets
+        hide-header: true
+        markets:
+          - symbol: SPY
+            name: S&P 500
+          - symbol: VTSAX
+            name: Vanguard Total Stock Market
+          - symbol: BAI
+            name: Blackrock AI
+          - symbol: NLR
+            name: VanEck Uranium+Nuclear Energy
+          - symbol: BITO
+            name: Bitcoin ETF
     columns:
-      - size: small
-        widgets:
-          - type: calendar
-          - type: server-stats
-            servers:
-              - type: local
-                name: Services
       - size: full
         widgets:
-          - type: group
-            widgets:
-              - type: hacker-news
-              - type: rss
-                limit: 10
-                collapse-after: 3
-                cache: 3h
-                feeds:
-                  - url: https://kill-the-newsletter.com/feeds/ij4twrnzhrwvyic13qcm.xml
-                  - url: https://mrmoneymustache.ck.page/68f9e9826c
-              - type: rss
-                title: Gear
-                limit: 10
-                collapse-after: 3
-                cache: 3h
-                feeds:
-                  - url: https://9to5toys.com/steals/feed
-                  - url: https://hiro.report/rss/
-          - type: change-detection
-            instance-url: https://watcher.thesatelliteoflove.com
-            token: ac69ae11570548549d6706eac6dbb6a9
-          - type: docker-containers
-            hide-by-default: false
+          - type: search
+            search-engine: kagi
+            new-tab: true
       - size: small
         widgets:
@@ -42,24 +29,62 @@ pages:
           location: Nederland, Colorado, United States
           units: imperial
-        - type: markets
-          markets:
-            - symbol: SPY
-              name: S&P 500
-            - symbol: VTSAX
-              name: Vanguard Total Stock Market
-            - symbol: BAI
-              name: Blackrock AI
-            - symbol: NLR
-              name: VanEck Uranium+Nuclear Energy
-            - symbol: BITO
-              name: Bitcoin ETF
-            - symbol: GOOGL
-              name: Google
-            - symbol: AMD
-              name: AMD
-            - symbol: DJT
-              name: Trump Media
+        - type: custom-api
+          title: Air Quality
+          cache: 10m
+          url: https://api.waqi.info/feed/geo:39.9676367;-105.4037992/?token={{ vault_glance.air_quality_key }}
+          template: |
+            {% raw %}{{ $aqi := printf "%03s" (.JSON.String "data.aqi") }}
+            {{ $aqiraw := .JSON.String "data.aqi" }}
+            {{ $updated := .JSON.String "data.time.iso" }}
+            {{ $humidity := .JSON.String "data.iaqi.h.v" }}
+            {{ $ozone := .JSON.String "data.iaqi.o3.v" }}
+            {{ $pm25 := .JSON.String "data.iaqi.pm25.v" }}
+            {{ $pressure := .JSON.String "data.iaqi.p.v" }}
+
+            <div class="flex justify-between">
+              <div class="size-h5">
+                {{ if le $aqi "050" }}
+                  <div class="color-positive">Good air quality</div>
+                {{ else if le $aqi "100" }}
+                  <div class="color-primary">Moderate air quality</div>
+                {{ else }}
+                  <div class="color-negative">Bad air quality</div>
+                {{ end }}
+              </div>
+            </div>
+
+            <div class="color-highlight size-h2">AQI: {{ $aqiraw }}</div>
+            <div style="border-bottom: 1px solid; margin-block: 10px;"></div>
+
+            <div class="margin-block-2">
+              <div style="display: grid; grid-template-columns: 1fr 1fr; gap: 10px;">
+                <div>
+                  <div class="size-h3 color-highlight">{{ $humidity }}%</div>
+                  <div class="size-h6">HUMIDITY</div>
+                </div>
+                <div>
+                  <div class="size-h3 color-highlight">{{ $ozone }} μg/m³</div>
+                  <div class="size-h6">OZONE</div>
+                </div>
+                <div>
+                  <div class="size-h3 color-highlight">{{ $pm25 }} μg/m³</div>
+                  <div class="size-h6">PM2.5</div>
+                </div>
+                <div>
+                  <div class="size-h3 color-highlight">{{ $pressure }} hPa</div>
+                  <div class="size-h6">PRESSURE</div>
+                </div>
+              </div>
+
+              <div class="size-h6" style="margin-top: 10px;">Last Updated at {{ slice $updated 11 16 }}</div>
+            </div>{% endraw %}
   - name: Mini Painting
     columns:
       - size: small
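The new custom-api widget zero-pads the AQI value (`printf "%03s"`) before the `le` comparisons, so Go-template lexicographic string comparison against "050" and "100" behaves numerically. A minimal Python sketch of the same trick, with thresholds and labels taken from the template (the `classify` helper itself is illustrative):

```python
def classify(aqi: int) -> str:
    # Pad to a fixed width so lexicographic comparison matches numeric order,
    # mirroring the `printf "%03s"` + `le` trick in the Glance template.
    padded = str(aqi).rjust(3, "0")
    if padded <= "050":
        return "Good air quality"
    elif padded <= "100":
        return "Moderate air quality"
    return "Bad air quality"

print(classify(42), classify(160))
# Good air quality Bad air quality
```

Without the padding, a raw string comparison would rank "9" above "100", which is why the template normalizes the width first.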
roles/docker/templates/gotify-compose.yml.j2 (new file, 48 lines)
@@ -0,0 +1,48 @@
+services:
+  gotify:
+    image: gotify/server:latest
+    container_name: gotify
+    restart: unless-stopped
+    volumes:
+      - gotify_data:/app/data
+    environment:
+      - GOTIFY_DEFAULTUSER_PASS={{ vault_gotify.admin_password }}
+      - TZ=America/Denver
+    labels:
+      glance.name: Gotify
+      glance.icon: si:gotify
+      glance.url: "https://{{ subdomains.gotify }}/"
+      glance.description: Push notification server
+      mag37.dockcheck.update: true
+    extra_hosts:
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
+      - "{{ subdomains.gotify_assistant }}:{{ docker.hairpin_ip }}"
+
+  igotify-assistant:
+    image: ghcr.io/androidseb25/igotify-notification-assist:latest
+    restart: unless-stopped
+    container_name: igotify-assistant
+    volumes:
+      - igotify_data:/app/data
+    environment:
+      TZ: America/Denver
+    depends_on:
+      - gotify
+    labels:
+      glance.name: iGotify Assistant
+      glance.icon: si:apple
+      glance.url: "https://{{ subdomains.gotify_assistant }}/"
+      glance.description: iOS notification assistant
+      mag37.dockcheck.update: true
+    extra_hosts:
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
+      - "{{ subdomains.gotify }}:{{ docker.hairpin_ip }}"
+
+volumes:
+  gotify_data:
+  igotify_data:
+
+networks:
+  default:
+    external: true
+    name: "{{ docker.network_name }}"
@@ -1,30 +1,31 @@
 services:
   gotosocial:
-    image: superseriousbusiness/gotosocial:latest
+    image: docker.io/superseriousbusiness/gotosocial:latest
     container_name: gotosocial
     user: 1000:1000
     extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
     environment:
-      GTS_HOST: social.thesatelliteoflove.com
+      GTS_HOST: {{ subdomains.social }}
       GTS_DB_TYPE: sqlite
       GTS_DB_ADDRESS: /gotosocial/storage/sqlite.db
+      GTS_WAZERO_COMPILATION_CACHE: /gotosocial/.cache
       GTS_LETSENCRYPT_ENABLED: "false"
       GTS_LETSENCRYPT_EMAIL_ADDRESS: ""
-      GTS_TRUSTED_PROXIES: "172.20.0.5"
+      GTS_TRUSTED_PROXIES: "{{ docker.hairpin_ip }}"
-      GTS_ACCOUNT_DOMAIN: thesatelliteoflove.com
+      GTS_ACCOUNT_DOMAIN: {{ primary_domain }}
       GTS_OIDC_ENABLED: "true"
       GTS_OIDC_IDP_NAME: "Authentik"
-      GTS_OIDC_ISSUER: https://auth.thesatelliteoflove.com/application/o/gotosocial/
+      GTS_OIDC_ISSUER: https://{{ subdomains.auth }}/application/o/gotosocial/
-      GTS_OIDC_CLIENT_ID: {{ gts_oidc_client_id }}
+      GTS_OIDC_CLIENT_ID: {{ vault_gotosocial.oidc.client_id }}
-      GTS_OIDC_CLIENT_SECRET: {{ gts_oidc_client_secret }}
+      GTS_OIDC_CLIENT_SECRET: {{ vault_gotosocial.oidc.client_secret }}
       GTS_OIDC_LINK_EXISTING: "true"
       GTS_HTTP_CLIENT: "20s"
-      GTS_SMTP_HOST: "smtp.resend.com"
+      GTS_SMTP_HOST: "{{ smtp.host }}"
       GTS_SMTP_PORT: "587"
-      GTS_SMTP_USERNAME: "resend"
+      GTS_SMTP_USERNAME: "{{ smtp.username }}"
-      GTS_SMTP_PASSWORD: {{ resend_key }}
+      GTS_SMTP_PASSWORD: {{ vault_smtp.password }}
-      GTS_SMTP_FROM: "social@updates.thesatelliteoflove.com"
+      GTS_SMTP_FROM: "social@{{ email_domains.updates }}"
       TZ: UTC
     volumes:
       - gotosocial:/gotosocial/storage
@@ -33,7 +34,7 @@ services:
       docker-volume-backup.stop-during-backup: true
       glance.name: GoToSocial
       glance.icon: si:mastodon
-      glance.url: https://social.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.social }}/
       glance.description: Fediverse server
       glance.id: gotosocial
@@ -43,23 +44,19 @@ services:
     labels:
       glance.parent: gotosocial
       glance.name: Backup
+      mag37.dockcheck.update: true
     environment:
-      BACKUP_FILENAME: backup-gts-%Y-%m-%dT%H-%M-%S.tar.gz
+      BACKUP_FILENAME: gts-backup-%Y-%m-%dT%H-%M-%S.tar.gz
-      BACKUP_LATEST_SYMLINK: backup-latest.tar.gz
       BACKUP_CRON_EXPRESSION: "0 9 * * *"
-      BACKUP_PRUNING_PREFIX: backup-
+      BACKUP_PRUNING_PREFIX: gts-
-      BACKUP_RETENTION_DAYS: 1
+      BACKUP_RETENTION_DAYS: 7
       AWS_S3_BUCKET_NAME: tsolbackups
       AWS_ENDPOINT: s3.us-west-004.backblazeb2.com
-      AWS_ACCESS_KEY_ID: {{ backup_key_id }}
+      AWS_ACCESS_KEY_ID: {{ vault_backup.access_key_id }}
-      AWS_SECRET_ACCESS_KEY: {{ backup_key }}
+      AWS_SECRET_ACCESS_KEY: {{ vault_backup.secret_access_key }}
-      BACKUP_SKIP_BACKENDS_FROM_PRUNE: s3
     volumes:
-      - gotosocial:/backup/my-app-backup:ro
+      - gotosocial:/backup/gts-app-backup:ro
       - /var/run/docker.sock:/var/run/docker.sock:ro
-      - ./backup:/archive
 volumes:
   gotosocial:
@@ -68,4 +65,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -1,23 +0,0 @@
-version: "3.3"
-services:
-  grist:
-    volumes:
-      - grist:/persist
-    extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.3'
-    environment:
-      - GRIST_SESSION_SECRET={{ grist_session_secret }}
-      - APP_HOME_URL=https://grist.thesatelliteoflove.com
-      - GRIST_OIDC_IDP_ISSUER=https://auth.thesatelliteoflove.com/application/o/grist/.well-known/openid-configuration
-      - GRIST_OIDC_IDP_CLIENT_ID={{ grist_oidc_client_id }}
-      - GRIST_OIDC_IDP_CLIENT_SECRET={{ grist_oidc_client_secret }}
-    image: gristlabs/grist
-
-volumes:
-  grist:
-    driver: local
-
-networks:
-  default:
-    external: true
-    name: lava
roles/docker/templates/grocy-compose.yml.j2 (new file, 30 lines)
@@ -0,0 +1,30 @@
+services:
+  grocy:
+    image: lscr.io/linuxserver/grocy:latest
+    container_name: grocy
+    restart: unless-stopped
+    environment:
+      - PUID=1000
+      - PGID=1000
+      - TZ=America/Denver
+    volumes:
+      - ./config:/config
+    extra_hosts:
+      - "host.docker.internal:host-gateway"
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
+    labels:
+      glance.name: Grocy
+      glance.icon: si:grocyapp
+      glance.url: https://{{ subdomains.grocy }}/
+      glance.description: Kitchen ERP and inventory management
+      glance.id: grocy
+      mag37.dockcheck.update: true
+
+volumes:
+  grocy_config:
+    driver: local
+
+networks:
+  default:
+    external: true
+    name: {{ docker.network_name }}
@@ -11,21 +11,21 @@ services:
     labels:
       glance.name: Heyform
       glance.icon: si:googleforms
-      glance.url: https://forms.nerder.land/
+      glance.url: https://{{ subdomains.heyform }}/
       glance.description: Forms server
       glance.id: heyform
     environment:
-      - APP_HOMEPAGE_URL=http://forms.nerder.land
+      - APP_HOMEPAGE_URL=http://{{ subdomains.heyform }}
-      - SESSION_KEY={{ heyform_session_key }}
+      - SESSION_KEY={{ vault_heyform.session_key }}
-      - FORM_ENCRYPTION_KEY={{ heyform_encryption_key }}
+      - FORM_ENCRYPTION_KEY={{ vault_heyform.encryption_key }}
       - MONGO_URI='mongodb://mongo:27017/heyform'
       - REDIS_HOST=keydb
       - REDIS_PORT=6379
-      - SMTP_FROM=nerderland@updates.thesatelliteoflove.com
+      - SMTP_FROM=nerderland@{{ email_domains.updates }}
-      - SMTP_HOST=smtp.resend.com
+      - SMTP_HOST={{ smtp.host }}
       - SMTP_PORT=465
-      - SMTP_USER=resend
+      - SMTP_USER={{ smtp.username }}
-      - SMTP_PASSWORD={{ resend_key }}
+      - SMTP_PASSWORD={{ vault_smtp.password }}
       - SMTP_SECURE=true

   mongo:
@@ -60,4 +60,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -1,7 +1,7 @@
 version: "3.8"
 services:
   web:
-    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
+    image: ghcr.io/karakeep-app/karakeep:latest
     restart: unless-stopped
     volumes:
       - data:/data
@@ -10,24 +10,26 @@ services:
     env_file:
       - .env
     extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
-      - bookmarks.thesatelliteoflove.com:172.20.0.5
+      - '{{ subdomains.bookmarks }}:{{ docker.hairpin_ip }}'
     environment:
       MEILI_ADDR: http://meilisearch:7700
       DATA_DIR: /data
       BROWSER_WEB_URL: http://chrome:9222
     labels:
-      glance.name: Hoarder
+      glance.name: Karakeep
       glance.icon: si:wikibooks
-      glance.url: https://bookmarks.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.bookmarks }}/
       glance.description: Bookmark manager
-      glance.id: hoarder
+      glance.id: karakeep
+      mag37.dockcheck.update: true
   chrome:
     image: gcr.io/zenika-hub/alpine-chrome:123
     restart: unless-stopped
     labels:
       glance.name: Chrome
-      glance.parent: hoarder
+      glance.parent: karakeep
+      mag37.dockcheck.update: true
     command:
       - --no-sandbox
       - --disable-gpu
@@ -36,11 +38,12 @@ services:
       - --remote-debugging-port=9222
       - --hide-scrollbars
   meilisearch:
-    image: getmeili/meilisearch:v1.6
+    image: getmeili/meilisearch:v1.13.3
     restart: unless-stopped
     labels:
       glance.name: Meilisearch
-      glance.parent: hoarder
+      glance.parent: karakeep
+      mag37.dockcheck.update: true
     env_file:
       - .env
     environment:
@@ -53,4 +56,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -1,10 +1,10 @@
-HOARDER_VERSION=release
+KARAKEEP_VERSION=release
-NEXTAUTH_SECRET={{ hoarder_nextauth_secret }}
+NEXTAUTH_SECRET={{ vault_hoarder.nextauth_secret }}
-MEILI_MASTER_KEY={{ hoarder_meili_master_key }}
+MEILI_MASTER_KEY={{ vault_hoarder.meili_master_key }}
-NEXTAUTH_URL=https://bookmarks.thesatelliteoflove.com
+NEXTAUTH_URL=https://{{ subdomains.bookmarks }}
-OPENAI_API_KEY={{ openai_api_key }}
+OPENAI_API_KEY={{ vault_infrastructure.openai_api_key }}
-OAUTH_CLIENT_SECRET={{ hoarder_oidc_client_secret }}
+OAUTH_CLIENT_SECRET={{ vault_hoarder.oidc.client_secret }}
 OAUTH_CLIENT_ID=GTi0QBRH5TiTqZfxfAkYSQVVFouGdlOFMc2sjivN
 OAUTH_PROVIDER_NAME=Authentik
-OAUTH_WELLKNOWN_URL=https://auth.thesatelliteoflove.com/application/o/hoarder/.well-known/openid-configuration
+OAUTH_WELLKNOWN_URL=https://{{ subdomains.auth }}/application/o/hoarder/.well-known/openid-configuration
 OAUTH_ALLOW_DANGEROUS_EMAIL_ACCOUNT_LINKING=true
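These env templates are rendered by Ansible's Jinja2 templating before deployment, resolving the namespaced `{{ vault_hoarder.nextauth_secret }}`-style lookups. As a rough stdlib-only illustration of the `{{ dotted.name }}` substitution involved (a toy stand-in, not Ansible's actual engine; variable values here are placeholders):

```python
import re

def render(template: str, variables: dict) -> str:
    """Toy stand-in for Jinja2 '{{ dotted.name }}' substitution."""
    def lookup(match: re.Match) -> str:
        value = variables
        for part in match.group(1).split("."):
            value = value[part]  # raises KeyError if a vault entry is missing
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

env_template = (
    "NEXTAUTH_URL=https://{{ subdomains.bookmarks }}\n"
    "OPENAI_API_KEY={{ vault_infrastructure.openai_api_key }}"
)
variables = {
    "subdomains": {"bookmarks": "bookmarks.example.com"},
    "vault_infrastructure": {"openai_api_key": "sk-placeholder"},
}
print(render(env_template, variables))
# NEXTAUTH_URL=https://bookmarks.example.com
# OPENAI_API_KEY=sk-placeholder
```

A missing key fails loudly at render time, which is the behavior you want when a secret has not yet been added to vault.yml.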
roles/docker/templates/kanboard-compose.yml.j2 (new file, 32 lines)
@@ -0,0 +1,32 @@
+services:
+  kanboard:
+    image: kanboard/kanboard:latest
+    container_name: kanboard
+    restart: unless-stopped
+    environment:
+      - PLUGIN_INSTALLER=true
+      - DB_DRIVER=sqlite
+    volumes:
+      - kanboard_data:/var/www/app/data
+      - kanboard_plugins:/var/www/app/plugins
+    extra_hosts:
+      - "host.docker.internal:host-gateway"
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
+    labels:
+      glance.name: Kanboard
+      glance.icon: si:kanboard
+      glance.url: https://{{ subdomains.kanboard }}/
+      glance.description: Project management and Kanban boards
+      glance.id: kanboard
+      mag37.dockcheck.update: true
+
+volumes:
+  kanboard_data:
+    driver: local
+  kanboard_plugins:
+    driver: local
+
+networks:
+  default:
+    external: true
+    name: {{ docker.network_name }}
@@ -12,23 +12,24 @@ services:
       # The container path can be anything; you will need to enter it in the "new library" form.
       - ./models:/models
     environment:
-      SECRET_KEY_BASE: {{manyfold_key}}
+      SECRET_KEY_BASE: {{ vault_manyfold.secret_key }}
       MULTIUSER: enabled
-      OIDC_CLIENT_ID: {{ manyfold_oidc_client_id }}
+      OIDC_CLIENT_ID: {{ vault_manyfold.oidc.client_id }}
-      OIDC_CLIENT_SECRET: {{ manyfold_oidc_client_secret }}
+      OIDC_CLIENT_SECRET: {{ vault_manyfold.oidc.client_secret }}
-      OIDC_ISSUER: https://auth.thesatelliteoflove.com/application/o/manyfold/
+      OIDC_ISSUER: https://{{ subdomains.auth }}/application/o/manyfold/
       OIDC_NAME: Authentik
-      PUBLIC_HOSTNAME: models.thesatelliteoflove.com
+      PUBLIC_HOSTNAME: {{ subdomains.models }}
       MAX_FILE_UPLOAD_SIZE: 5368709120
       PUID: 1000
       PGID: 1000
     extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
     labels:
       glance.name: Manyfold
       glance.icon: si:open3d
-      glance.url: https://models.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.models }}/
       glance.description: STL Storage
+      mag37.dockcheck.update: true
     restart: unless-stopped
     # Optional, but recommended for better security
     security_opt:
@@ -44,4 +45,4 @@ services:
 networks:
   default:
     external: true
-    name: lava
+    name: "{{ docker.network_name }}"
roles/docker/templates/mmdl-compose.yml.j2 (new file, 48 lines)
@@ -0,0 +1,48 @@
+services:
+  mmdl:
+    image: intriin/mmdl:latest
+    container_name: mmdl
+    restart: unless-stopped
+    depends_on:
+      - mmdl_db
+    env_file:
+      - .env.local
+    extra_hosts:
+      - "host.docker.internal:host-gateway"
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
+      - "{{ subdomains.cal }}:{{ docker.hairpin_ip }}"
+    labels:
+      glance.name: MMDL
+      glance.icon: si:task
+      glance.url: https://{{ subdomains.tasks }}/
+      glance.description: Task and calendar management
+      glance.id: mmdl
+      mag37.dockcheck.update: true
+
+  mmdl_db:
+    image: mysql:8.0
+    container_name: mmdl_db
+    restart: unless-stopped
+    command: --default-authentication-plugin=mysql_native_password
+    environment:
+      MYSQL_DATABASE: mmdl
+      MYSQL_USER: mmdl
+      MYSQL_PASSWORD: "{{ vault_mmdl.mysql_password }}"
+      MYSQL_ROOT_PASSWORD: "{{ vault_mmdl.mysql_root_password }}"
+      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
+      MYSQL_ROOT_HOST: "%"
+    volumes:
+      - mmdl_db:/var/lib/mysql
+    labels:
+      glance.parent: mmdl
+      glance.name: DB
+      mag37.dockcheck.update: true
+
+volumes:
+  mmdl_db:
+    driver: local
+
+networks:
+  default:
+    external: true
+    name: {{ docker.network_name }}
42
roles/docker/templates/mmdl-env.j2
Normal file
42
roles/docker/templates/mmdl-env.j2
Normal file
@@ -0,0 +1,42 @@
+# Database Configuration
+DB_HOST=mmdl_db
+DB_USER=mmdl
+DB_PASS={{ vault_mmdl.mysql_password }}
+DB_PORT=3306
+DB_DIALECT=mysql
+DB_CHARSET=utf8mb4
+DB_NAME=mmdl
+
+# Encryption
+AES_PASSWORD={{ vault_mmdl.aes_password }}
+
+# SMTP Settings
+SMTP_HOST={{ smtp.host }}
+SMTP_USERNAME={{ smtp.username }}
+SMTP_PASSWORD={{ vault_smtp.password }}
+SMTP_FROM=tasks@{{ email_domains.updates }}
+SMTP_PORT=587
+SMTP_SECURE=true
+
+# Authentication
+USE_NEXT_AUTH=true
+NEXTAUTH_URL=https://{{ subdomains.tasks }}
+NEXTAUTH_SECRET={{ vault_mmdl.nextauth_secret }}
+
+# Authentik OIDC Configuration
+AUTHENTIK_ISSUER=https://{{ subdomains.auth }}/application/o/mmdl
+AUTHENTIK_CLIENT_ID={{ vault_mmdl.oidc.client_id }}
+AUTHENTIK_CLIENT_SECRET={{ vault_mmdl.oidc.client_secret }}
+
+# User and Session Management
+ALLOW_USER_REGISTRATION=false
+MAX_CONCURRENT_LOGINS=3
+OTP_VALIDITY_PERIOD=300
+SESSION_VALIDITY_PERIOD=30
+
+# Application Settings
+API_URL=https://{{ subdomains.tasks }}
+DEBUG_MODE=false
+TEST_MODE=false
+NEXT_API_DEBUG_MODE=false
+SUBTASK_RECURSION_DEPTH=5
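The `{{ … }}` placeholders in mmdl-env.j2 are Jinja2 expressions that Ansible's `template` module resolves from group_vars and the vault before writing the file to the remote host. As a rough illustration of that rendering step — using a stdlib regex substitution as a stand-in for the real Jinja2 engine, a flat dotted-key dict instead of Jinja's attribute lookup, and made-up variable values — it behaves like:

```python
import re

# Hypothetical values; the real ones live in group_vars/all/vault.yml.
context = {
    "vault_mmdl.mysql_password": "s3cret",
    "smtp.host": "smtp.example.net",
}

template = (
    "DB_HOST=mmdl_db\n"
    "DB_PASS={{ vault_mmdl.mysql_password }}\n"
    "SMTP_HOST={{ smtp.host }}\n"
)

def render(text, variables):
    # Replace each {{ dotted.name }} placeholder with its value,
    # roughly as Jinja2 would.
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        text,
    )

print(render(template, context))
```

Any placeholder left unrendered (a stray `{{` in the output) usually means a variable name in the template does not match what the vault defines.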
roles/docker/templates/obsidian-livesync-compose.yml.j2 — new file (31 lines)
@@ -0,0 +1,31 @@
+services:
+  obsidian-livesync:
+    image: oleduc/docker-obsidian-livesync-couchdb:latest
+    container_name: obsidian-livesync
+    restart: unless-stopped
+    labels:
+      glance.name: Obsidian LiveSync
+      glance.icon: si:obsidian
+      glance.url: http://{{ network.docker_host_ip }}:5984
+      glance.description: Obsidian note synchronization
+      glance.id: obsidian-livesync
+      mag37.dockcheck.update: true
+    environment:
+      - SERVER_DOMAIN={{ network.docker_host_ip }}
+      - COUCHDB_USER={{ vault_obsidian.username }}
+      - COUCHDB_PASSWORD={{ vault_obsidian.password }}
+      - COUCHDB_DATABASE=obsidian
+    ports:
+      - "{{ network.docker_host_ip }}:5984:5984"
+    volumes:
+      - couchdb_data:/opt/couchdb/data
+    networks:
+      - default
+
+volumes:
+  couchdb_data:
+
+networks:
+  default:
+    external: true
+    name: "{{ docker.network_name }}"
roles/docker/templates/palmr-compose.yml.j2 — new file (30 lines)
@@ -0,0 +1,30 @@
+services:
+  palmr:
+    image: kyantech/palmr:latest
+    restart: unless-stopped
+    environment:
+      DISABLE_FILESYSTEM_ENCRYPTION: "false"
+      ENCRYPTION_KEY: "{{ vault_palmr.encryption_key }}"
+      PALMR_UID: "1000"
+      PALMR_GID: "1000"
+      SECURE_SITE: "true"
+      DEFAULT_LANGUAGE: "en-US"
+      TRUST_PROXY: "true"
+    extra_hosts:
+      - "{{ subdomains.auth }}:{{ docker.hairpin_ip }}"
+    labels:
+      glance.name: Palmr
+      glance.icon: si:files
+      glance.url: "https://{{ subdomains.files }}/"
+      glance.description: File sharing and storage
+      glance.id: palmr
+      mag37.dockcheck.update: true
+    volumes:
+      - palmr_data:/app/server
+volumes:
+  palmr_data:
+    driver: local
+networks:
+  default:
+    external: true
+    name: "{{ docker.network_name }}"
@@ -14,7 +14,7 @@ services:
     labels:
       glance.name: Paperless NGX
       glance.icon: si:paperlessngx
-      glance.url: https://papers.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.paper }}/
       glance.description: Document server
       glance.id: paperlessngx
     depends_on:
@@ -28,7 +28,7 @@ services:
       - ./consume:/usr/src/paperless/consume
     env_file: docker-compose.env
     extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
+      - '{{ subdomains.auth }}:{{ docker.hairpin_ip }}'
     environment:
       PAPERLESS_REDIS: redis://broker:6379
       PAPERLESS_TIKA_ENABLED: 1
@@ -57,6 +57,26 @@ services:
       glance.name: Tika
     restart: unless-stopped
+
+  backup:
+    image: offen/docker-volume-backup:v2
+    restart: always
+    labels:
+      glance.parent: paperlessngx
+      glance.name: Backup
+      mag37.dockcheck.update: true
+    environment:
+      BACKUP_FILENAME: pngx-backup-%Y-%m-%dT%H-%M-%S.tar.gz
+      BACKUP_CRON_EXPRESSION: "10 9 * * *"
+      BACKUP_PRUNING_PREFIX: pngx-
+      BACKUP_RETENTION_DAYS: 7
+      AWS_S3_BUCKET_NAME: tsolbackups
+      AWS_ENDPOINT: s3.us-west-004.backblazeb2.com
+      AWS_ACCESS_KEY_ID: {{ vault_backup.access_key_id }}
+      AWS_SECRET_ACCESS_KEY: {{ vault_backup.secret_access_key }}
+    volumes:
+      - media:/backup/pngx-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+
 volumes:
   data:
   media:
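Two of the backup container's settings are easy to misread: `BACKUP_CRON_EXPRESSION: "10 9 * * *"` uses the standard minute-first cron field order (so it fires daily at 09:10, in the container's timezone), and `BACKUP_FILENAME` contains strftime-style placeholders. A small sketch of how both expand, assuming the usual cron field order and strftime semantics rather than anything specific to offen/docker-volume-backup:

```python
from datetime import datetime

# Standard five-field cron order: minute hour day-of-month month day-of-week.
fields = dict(zip(
    ["minute", "hour", "day_of_month", "month", "day_of_week"],
    "10 9 * * *".split(),
))
print(fields)  # minute "10", hour "9" -> runs daily at 09:10

# The filename pattern expands with strftime-style placeholders at backup time.
name = datetime(2025, 1, 2, 9, 10, 0).strftime(
    "pngx-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
)
print(name)
```

The `pngx-` value of `BACKUP_PRUNING_PREFIX` matches the front of that filename, which is what lets the 7-day retention pruning find old archives.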
@@ -65,4 +85,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -24,11 +24,11 @@
 
 # This is required if you will be exposing Paperless-ngx on a public domain
 # (if doing so please consider security measures such as reverse proxy)
-PAPERLESS_URL=https://paper.thesatelliteoflove.com
+PAPERLESS_URL=https://{{ subdomains.paper }}
 
 # Adjust this key if you plan to make paperless available publicly. It should
 # be a very long sequence of random characters. You don't need to remember it.
-PAPERLESS_SECRET_KEY={{ paperlessngx_secret }}
+PAPERLESS_SECRET_KEY={{ vault_paperlessngx.secret_key }}
 
 # Use this variable to set a timezone for the Paperless Docker containers. If not specified, defaults to UTC.
 PAPERLESS_TIME_ZONE=America/Denver
@@ -43,4 +43,4 @@ PAPERLESS_TIME_ZONE=America/Denver
 
 # authentik
 PAPERLESS_APPS: "allauth.socialaccount.providers.openid_connect"
-PAPERLESS_SOCIALACCOUNT_PROVIDERS: '{"openid_connect": {"APPS": [{"provider_id": "authentik","name": "Authentik SSO","client_id": "{{ paperless_oauth_client_id }}","secret": "{{ paperless_oauth_client_secret }}","settings": { "server_url": "https://auth.thesatelliteoflove.com/application/o/paperlessngx/.well-known/openid-configuration"}}]}}'
+PAPERLESS_SOCIALACCOUNT_PROVIDERS: '{"openid_connect": {"APPS": [{"provider_id": "authentik","name": "Authentik SSO","client_id": "{{ vault_paperlessngx.oidc.client_id }}","secret": "{{ vault_paperlessngx.oidc.client_secret }}","settings": { "server_url": "https://{{ subdomains.auth }}/application/o/paperlessngx/.well-known/openid-configuration"}}]}}'
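`PAPERLESS_SOCIALACCOUNT_PROVIDERS` must render to a single line of valid JSON, and it is easy to break the quoting when editing the embedded Jinja expressions. One quick sanity check is to parse the rendered value — here with placeholder credentials and an example auth hostname standing in for the vault values:

```python
import json

# Placeholder rendering of the template value; real client_id/secret come
# from vault_paperlessngx.oidc, and the hostname from subdomains.auth.
rendered = (
    '{"openid_connect": {"APPS": [{"provider_id": "authentik",'
    '"name": "Authentik SSO","client_id": "example-id",'
    '"secret": "example-secret","settings": { "server_url": '
    '"https://auth.example.com/application/o/paperlessngx/'
    '.well-known/openid-configuration"}}]}}'
)

# json.loads raises ValueError if the template broke the JSON structure.
config = json.loads(rendered)
app = config["openid_connect"]["APPS"][0]
print(app["provider_id"], app["settings"]["server_url"])
```

Running this against the actual rendered env file (e.g. after `ansible-playbook --check --diff`) catches a malformed value before Paperless fails to start.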
@@ -1,24 +0,0 @@
-services:
-  pingvin-share:
-    image: stonith404/pingvin-share:latest
-    restart: unless-stopped
-    environment:
-      - TRUST_PROXY=true
-    extra_hosts:
-      - 'auth.thesatelliteoflove.com:172.20.0.5'
-    labels:
-      glance.name: Pingvin
-      glance.icon: si:files
-      glance.url: http://netcup.porgy-porgy.ts.net:8945
-      glance.description: File sharing service
-      glance.id: pingvin
-    volumes:
-      - data:/opt/app/backend/data
-      - images:/opt/app/frontend/public/img
-volumes:
-  images:
-  data:
-networks:
-  default:
-    external: true
-    name: lava
@@ -5,7 +5,7 @@ services:
     labels:
       glance.name: Pinry
       glance.icon: si:pinterest
-      glance.url: https://pin.thesatelliteoflove.com
+      glance.url: https://{{ subdomains.pin }}
       glance.description: Pinterest clone
       glance.id: pinterest
     environment:
@@ -18,4 +18,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
@@ -5,9 +5,9 @@ services:
     restart: always
     environment:
       # You must change these. Replace `postiz.your-server.com` with your DNS name - what your web browser sees.
-      MAIN_URL: "https://post.thesatelliteoflove.com"
-      FRONTEND_URL: "https://post.thesatelliteoflove.com"
-      NEXT_PUBLIC_BACKEND_URL: "https://post.thesatelliteoflove.com/api"
+      MAIN_URL: "https://{{ subdomains.post }}"
+      FRONTEND_URL: "https://{{ subdomains.post }}"
+      NEXT_PUBLIC_BACKEND_URL: "https://{{ subdomains.post }}/api"
       JWT_SECRET: "TShr6Fdcwf67wIhuUvg0gOsJbdcQmgMiJl5kUh6JCfY="
 
       # These defaults are probably fine, but if you change your user/password, update it in the
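Note that `JWT_SECRET` stays hardcoded in the template even though the other Postiz credentials were moved into the vault. If it is ever rotated into `vault.yml` like the rest, a replacement value can be generated with Python's stdlib `secrets` module (the variable name below is illustrative):

```python
import secrets

# 32 bytes of randomness, base64url-encoded; comparable in strength
# to the existing hardcoded key.
jwt_secret = secrets.token_urlsafe(32)
print(jwt_secret)
```

Following this repo's vault conventions, the new value would then go into `vault.yml` by decrypting with `vault_pass`, editing, and re-encrypting.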
@@ -24,7 +24,7 @@ services:
 
       # Social keys
       LINKEDIN_CLIENT_ID: "86q7ksc8q5pai3"
-      LINKEDIN_CLIENT_SECRET: {{ linkedin_secret }}
+      LINKEDIN_CLIENT_SECRET: {{ vault_postiz.linkedin_secret }}
     volumes:
       - postiz-config:/config/
       - postiz-uploads:/uploads/
@@ -35,9 +35,10 @@ services:
         condition: service_healthy
     labels:
       glance.name: Postiz
-      glance.url: https://post.thesatelliteoflove.com/
+      glance.url: https://{{ subdomains.post }}/
       glance.description: Social media scheduler
       glance.id: postiz
+      mag37.dockcheck.update: true
 
   postiz-postgres:
     image: postgres:14.5
@@ -57,6 +58,7 @@ services:
     labels:
       glance.parent: postiz
       glance.name: DB
+      mag37.dockcheck.update: true
   postiz-redis:
     image: redis:7.2
     container_name: postiz-redis
@@ -71,6 +73,7 @@ services:
     labels:
       glance.parent: postiz
       glance.name: Redis
+      mag37.dockcheck.update: true
 
 
 
@@ -90,4 +93,4 @@ volumes:
 networks:
   default:
     external: true
-    name: lava
+    name: {{ docker.network_name }}
Some files were not shown because too many files have changed in this diff.