ahembree / ansible-hms-docker
Ansible playbook for automated home media server setup
License: GNU General Public License v3.0
Changing the proxy hostname in "hms_docker_container_map" doesn't change it in the file "authentik_outpost.j2".
Example:

sonarr:
  enabled: yes
  proxy_host_rule: tv
  directory: yes
  traefik: yes
  authentik: yes
  authentik_provider_type: proxy
  expose_to_public: yes
In authentik-sonarr.outpost.yml:

log_level: info
docker_labels:
  traefik.enable: "true"
  traefik.http.services.authentik-proxy-REDACT-sonarr-service.loadbalancer.server.port: "9000"
  traefik.http.routers.authentik-proxy-REDACT-sonarr-router.rule: Host(`sonarr.REDACT`) && PathPrefix(`/outpost.goauthentik.io/`)
  traefik.http.middlewares.authentik-proxy-REDACT-sonarr-router.forwardauth.address: http://authentik-proxy-REDACT-sonarr:9000/outpost.goauthentik.io/>
  traefik.http.middlewares.authentik-proxy-REDACT-sonarr-router.forwardauth.trustForwardHeader: "true"
  traefik.http.middlewares.authentik-proxy-REDACT-sonarr-router.forwardauth.authResponseHeaders: X-authentik-username,X-authentik-groups,X-authentik-emai>
authentik_host: http://authentik-server:9000
authentik_host_browser: https://auth.REDACT
docker_network: REDACT_traefik_net
container_image: null
docker_map_ports: false
kubernetes_replicas: 1
kubernetes_namespace: default
object_naming_template: authentik-proxy-REDACT-sonarr
authentik_host_insecure: false
kubernetes_service_type: ClusterIP
kubernetes_image_pull_secrets: []
kubernetes_disabled_components: []
kubernetes_ingress_annotations: {}
kubernetes_ingress_secret_name: authentik-outpost-tls
And in the Traefik dashboard I can see the rule Host(`tv.REDACT`) && PathPrefix(`/outpost.goauthentik.io`) but no middleware.
However, if I change the provider type from Forward Auth to Proxy (under Providers), things work.
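For reference, a sketch of how the outpost label template could derive the host from the map instead of the service key — assuming the `hms_docker_container_map`, `hms_docker_domain`, and `project_name` variables this playbook uses elsewhere:

```yaml
# Hypothetical authentik_outpost.j2 fragment; it uses proxy_host_rule ("tv")
# instead of the hard-coded service name ("sonarr") in the router rule.
docker_labels:
  traefik.http.routers.authentik-proxy-{{ project_name }}-sonarr-router.rule: Host(`{{ hms_docker_container_map['sonarr']['proxy_host_rule'] }}.{{ hms_docker_domain }}`) && PathPrefix(`/outpost.goauthentik.io/`)
```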
Hi, when I run the command

ansible-playbook -i inventory --connection local hms-docker.yml

I get this:
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [docker : Ensure platform agnostic requirements.] *************************
ok: [localhost]
TASK [docker : Ensure previous RHEL Docker packages are absent.] ***************
skipping: [localhost]
TASK [docker : Ensure RHEL required packages.] *********************************
skipping: [localhost]
TASK [docker : Ensure yum-utils.] **********************************************
skipping: [localhost]
TASK [docker : Ensure previous Fedora Docker packages are absent.] *************
skipping: [localhost]
TASK [docker : Ensure Fedora dnf-plugins-core.] ********************************
skipping: [localhost]
TASK [docker : Ensure RHEL Docker repo.] ***************************************
skipping: [localhost]
TASK [docker : Ensure previous Debian Docker packages are absent.] *************
ok: [localhost]
TASK [docker : Ensure Debian Docker requirements.] *****************************
ok: [localhost]
TASK [docker : Ensure Debian Docker GPG Key.] **********************************
ok: [localhost]
TASK [docker : Ensure Debian Docker stable repository.] ************************
ok: [localhost]
TASK [docker : Ensure Docker packages.] ****************************************
ok: [localhost]
TASK [docker : Ensure Docker daemon config] ************************************
skipping: [localhost]
TASK [docker : Ensure docker-compose.] *****************************************
ok: [localhost]
TASK [docker : Ensure docker-compose symlink.] *********************************
changed: [localhost]
TASK [docker : Ensure pip Docker packages.] ************************************
changed: [localhost]
TASK [docker : Ensure Docker service.] *****************************************
ok: [localhost]
TASK [docker : Ensure docker users are in the docker group.] *******************
skipping: [localhost]
TASK [gpu : Ensure Packages] ***************************************************
skipping: [localhost] => (item=tar)
skipping: [localhost] => (item=bzip2)
skipping: [localhost] => (item=make)
skipping: [localhost] => (item=automake)
skipping: [localhost] => (item=gcc)
skipping: [localhost] => (item=gcc-c++)
skipping: [localhost] => (item=vim)
skipping: [localhost] => (item=pciutils)
skipping: [localhost] => (item=elfutils-libelf-devel)
skipping: [localhost] => (item=libglvnd-devel)
skipping: [localhost] => (item=kernel-devel)
TASK [gpu : Ensure libnvidia Docker repo] **************************************
skipping: [localhost]
TASK [gpu : Ensure libnvidia experimental Docker repo] *************************
skipping: [localhost]
TASK [gpu : Ensure Nvidia Container Runtime repo] ******************************
skipping: [localhost]
TASK [gpu : Ensure nvidia container runtime experimental repo] *****************
skipping: [localhost]
TASK [gpu : Ensure nvidia-container-runtime apt GPG key] ***********************
skipping: [localhost]
TASK [gpu : Ensure nvidia-container-runtime repo] ******************************
skipping: [localhost]
TASK [gpu : Ensure nvidia-docker apt GPG key] **********************************
skipping: [localhost]
TASK [gpu : Ensure nvidia-docker repo] *****************************************
skipping: [localhost]
TASK [gpu : Verify nvidia-container-runtime-hook is in $PATH] ******************
skipping: [localhost]
TASK [gpu : Exit if nvidia-container-runtime-hook is not in $PATH] *************
skipping: [localhost]
TASK [gpu : Ensure the nvidia-docker2 package] *********************************
skipping: [localhost]
TASK [gpu : Verify CUDA container works] ***************************************
skipping: [localhost]
TASK [gpu : Exit if CUDA container fails] **************************************
skipping: [localhost]
TASK [hmsdocker : Obtain public IP.] *******************************************
ok: [localhost]
TASK [hmsdocker : Ensure paths exists locally.] ********************************
changed: [localhost] => (item=/opt/hms-docker)
changed: [localhost] => (item=/opt/hms-docker/apps)
changed: [localhost] => (item=/media/mox/PLEX2/hms-docker/media_data)
changed: [localhost] => (item=/media/mox/PLEX2/hms-docker/media_data/_library)
skipping: [localhost] => (item=)
TASK [hmsdocker : NAS base configuration tasks.] *******************************
included: /home/mox/Bureau/ansible-hms-docker-1.2.0/roles/hmsdocker/tasks/nas_local.yml for localhost
TASK [hmsdocker : Using local path] ********************************************
ok: [localhost] => {
"msg": "Using local path for media files"
}
TASK [hmsdocker : Ensure additional NAS configuration] *************************
skipping: [localhost] => (item=None)
skipping: [localhost] => (item=None)
skipping: [localhost]
TASK [hmsdocker : Ensure library folders.] *************************************
changed: [localhost] => (item={'type': 'movies', 'folder_name': 'Movies'})
changed: [localhost] => (item={'type': 'tv_shows', 'folder_name': 'TV_Shows'})
TASK [hmsdocker : Ensure container config directories exist.] ******************
changed: [localhost] => (item={'key': 'traefik', 'value': {'enabled': True, 'proxy_host_rule': 'traefik', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True}})
changed: [localhost] => (item={'key': 'sonarr', 'value': {'enabled': True, 'proxy_host_rule': 'sonarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'radarr', 'value': {'enabled': True, 'proxy_host_rule': 'radarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'bazarr', 'value': {'enabled': True, 'proxy_host_rule': 'bazarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'transmission', 'value': {'enabled': True, 'proxy_host_rule': 'transmission', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'portainer', 'value': {'enabled': True, 'proxy_host_rule': 'portainer', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': False, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'overseerr', 'value': {'enabled': True, 'proxy_host_rule': 'overseerr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'prowlarr', 'value': {'enabled': True, 'proxy_host_rule': 'prowlarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'requestrr', 'value': {'enabled': True, 'proxy_host_rule': 'requestrr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'plex', 'value': {'enabled': True, 'proxy_host_rule': 'plex', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'tautulli', 'value': {'enabled': True, 'proxy_host_rule': 'tautulli', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
changed: [localhost] => (item={'key': 'nzbget', 'value': {'enabled': True, 'proxy_host_rule': 'nzbget', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True}})
skipping: [localhost] => (item={'key': 'authentik', 'value': {'enabled': False, 'proxy_host_rule': 'authentik', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
TASK [hmsdocker : Ensure Traefik config.] **************************************
changed: [localhost]
TASK [hmsdocker : Ensure Traefik certs directory] ******************************
changed: [localhost]
TASK [hmsdocker : Ensure authentik env] ****************************************
skipping: [localhost]
TASK [hmsdocker : Ensure Outposts directory] ***********************************
skipping: [localhost]
TASK [hmsdocker : Ensure authentik Outpost configs] ****************************
skipping: [localhost] => (item={'key': 'traefik', 'value': {'enabled': True, 'proxy_host_rule': 'traefik', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True}})
skipping: [localhost] => (item={'key': 'sonarr', 'value': {'enabled': True, 'proxy_host_rule': 'sonarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'radarr', 'value': {'enabled': True, 'proxy_host_rule': 'radarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'bazarr', 'value': {'enabled': True, 'proxy_host_rule': 'bazarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'transmission', 'value': {'enabled': True, 'proxy_host_rule': 'transmission', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'portainer', 'value': {'enabled': True, 'proxy_host_rule': 'portainer', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': False, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'overseerr', 'value': {'enabled': True, 'proxy_host_rule': 'overseerr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'prowlarr', 'value': {'enabled': True, 'proxy_host_rule': 'prowlarr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'requestrr', 'value': {'enabled': True, 'proxy_host_rule': 'requestrr', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'plex', 'value': {'enabled': True, 'proxy_host_rule': 'plex', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'tautulli', 'value': {'enabled': True, 'proxy_host_rule': 'tautulli', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
skipping: [localhost] => (item={'key': 'nzbget', 'value': {'enabled': True, 'proxy_host_rule': 'nzbget', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True}})
skipping: [localhost] => (item={'key': 'authentik', 'value': {'enabled': False, 'proxy_host_rule': 'authentik', 'directory': True, 'traefik': True, 'authentik': False, 'authentik_http_proxy': True, 'expose_to_public': False}})
TASK [hmsdocker : Ensure docker-compose.yml file.] *****************************
changed: [localhost]
TASK [hmsdocker : Ensure containers defined in compose file.] ******************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Configuration error - Service \"tautulli\" uses an undefined network \"authentik_net\""}
RUNNING HANDLER [hmsdocker : restart traefik] **********************************
PLAY RECAP *********************************************************************
localhost : ok=20 changed=8 unreachable=0 failed=1 skipped=26 rescued=0 ignored=0
And most importantly, this error, which I have no idea how to fix:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Configuration error - Service \"tautulli\" uses an undefined network \"authentik_net\""}
If you can help me, it would be really appreciated!
Have a nice day!
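For anyone hitting the same failure: Compose rejects a service that joins a network missing from the top-level `networks:` block, and since authentik was skipped here, its network was never emitted. As a stopgap (a sketch only — the cleaner fix is presumably disabling the authentik flags in the container map or regenerating the compose file), the missing network can be declared manually:

```yaml
# Appended at the top level of the generated docker-compose.yml so that
# services referencing authentik_net (e.g. tautulli) can start.
networks:
  authentik_net:
    driver: bridge
```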
Just trying to get set up on a fresh install of Ubuntu Server, I ran into this:
TASK [hmsdocker : Get public IP from Transmission VPN container.] *******************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["docker", "exec", "transmission", "curl", "-s", "icanhazip.com"], "delta": "0:00:00.012565", "end": "2024-01-06 21:36:53.293296", "msg": "non-zero return code", "rc": 1, "start": "2024-01-06 21:36:53.280731", "stderr": "Error response from daemon: Container 1b60906b9ec460354a3e7e7ee629f0273c353cac1cd087621df02f5ba75aa946 is restarting, wait until the container is running", "stderr_lines": ["Error response from daemon: Container 1b60906b9ec460354a3e7e7ee629f0273c353cac1cd087621df02f5ba75aa946 is restarting, wait until the container is running"], "stdout": "", "stdout_lines": []}
If I inspect the logs, this is what I see:
Found configs for PROTONVPN in /config/vpn-configs-contrib/openvpn/protonvpn, will replace current content in /etc/openvpn/protonvpn
No VPN configuration provided. Using default.
Modifying /etc/openvpn/protonvpn/default.ovpn for best behaviour in this container
Modification: Point auth-user-pass option to the username/password file
sed: can't read /etc/openvpn/protonvpn/default.ovpn: No such file or directory
Modification: Change ca certificate path
sed: can't read /etc/openvpn/protonvpn/default.ovpn: No such file or directory
Modification: Change ping options
sed: can't read /etc/openvpn/protonvpn/default.ovpn: No such file or directory
sed: can't read /etc/openvpn/protonvpn/default.ovpn: No such file or directory
sed: can't read /etc/openvpn/protonvpn/default.ovpn: No such file or directory
sed: can't read /etc/openvpn/protonvpn/default.ovpn: No such file or directory
Modification: Update/set resolv-retry to 15 seconds
Modification: Change tls-crypt keyfile path
Modification: Set output verbosity to 3
Modification: Remap SIGUSR1 signal to SIGTERM, avoid OpenVPN restart loop
Modification: Updating status for config failure detection
Setting OpenVPN credentials...
adding route to local network 192.168.1.0/24 via 172.19.0.1 dev eth0
2024-01-06 16:38:33 Cipher negotiation is disabled since neither P2MP client nor server mode is enabled
Options error: You must define TUN/TAP device (--dev)
Use --help for more information.
I wanted to troubleshoot this myself, but I don't know enough about OpenVPN and Transmission in a container to know what I am doing. My setup is an old tower running Ubuntu Server 22.04, and for the VPN I am using the credentials from my ProtonVPN OpenVPN configuration. I've tried this both by modifying the config to run Ansible remotely from my laptop, and by running Ansible directly on the server (which seems to be the project's intention). I am running the script as root, as the Makefile doesn't seem set up to run via become. I suspect there might be some OpenVPN config I can tweak here, but I am not sure.
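The log suggests the container fell back to a default.ovpn that the ProtonVPN config set does not ship. A sketch of the relevant environment variables, assuming the haugene/transmission-openvpn image this log format comes from; the config name is a placeholder — list /config/vpn-configs-contrib/openvpn/protonvpn inside the container to pick a real one:

```yaml
environment:
  - OPENVPN_PROVIDER=PROTONVPN
  # Placeholder: must match an existing .ovpn file name (without the extension)
  - OPENVPN_CONFIG=ch-01.protonvpn.com.udp
```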
I don't know if this is possible or not, but it should be!
How about adding the ability to route apps/services through a Cloudflare Tunnel using cloudflared (https://github.com/cloudflare/cloudflared)?
When enabling expose_to_public and authentik via container_map.yml for transmission, only transmission-proxy.[domain] gets "secured". Via transmission.[domain], Transmission itself is accessible without any authentication or whitelist.
What is the desired configuration to make Transmission accessible under transmission.[domain] via authentik, and nothing else?
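One common approach — a sketch only; the router and middleware names are assumptions modeled on this project's label scheme, with "myproject" standing in for the project name — is attaching authentik's forward-auth middleware to the direct transmission router as well:

```yaml
labels:
  # Route transmission.[domain] through the same authentik forward-auth
  # middleware that the proxy router uses.
  - traefik.http.routers.transmission-myproject.middlewares=authentik-proxy-myproject-transmission-router@docker
```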
I installed some updates, and tried to make some var changes (just adding a custom ovpn location), and now I'm unable to run make apply without it failing with this error:
TASK [hmsdocker : Ensure containers defined in compose file.] ***************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "errors": ["ERROR: for 722c8e35b6df_bazarr 'ContainerConfig'"], "module_stderr": "Recreating a6fddf7b912f_portainer ... \nRecreating 722c8e35b6df_bazarr ... \nRecreating 5f5ba7a164ed_prowlarr ... \nRecreating 401c53621034_radarr ... \nRecreating e6c5bbc6e2a0_requestrr ... \nRecreating 7f234c8a87c2_plex ... \nRecreating a417ced53cfb_transmission ... \nRecreating 2a7b66f2062a_cloudflare-tunnel ... \n\nERROR: for 722c8e35b6df_bazarr 'ContainerConfig'\nRecreating 519440a76d8b_overseerr ... \nRecreating 8a96b530e81e_sonarr ... \nRecreating d16768142c0d_tautulli ... \n", "module_stdout": "", "msg": "Error starting project 'ContainerConfig'"}
Every time I run it, the error is for a different container; it's not specific to a particular one. A quick Google search suggests some Docker changes may have caused this issue, but I really can't figure out what's wrong.
Hi there,
I'm almost brand new to Linux and deploying this project to my environment has been a valuable learning experience for me so far, so thank you for continuing to maintain it. If a total newbie like me can get 75% of the way to deployment, it speaks volumes as to the ease of use for your project.
With that said, I'm stuck on a weird issue with my domain reporting "Gateway Error" around 50% of the time and I'm not sure if this is down to my domain's DNS settings, Traefik configuration or even the Transmission setup.
I am using a *.live domain and utilising Cloudflare for DNS + Tunnels, with the tunnel gateway appearing to report a valid connection.
DNS Records
Tunnel Configuration - Set to HTTPS results in Gateway Error, HTTP results in too many redirects (re-write from Traefik to force HTTPS?)
I am also utilising Mullvad VPN for Transmission. I initially had issues with this, but after following advice they were resolved; the container is now reporting healthy, and Traefik is configured with my external IP in the label for good measure.
SSL Certificate also appears to be valid and as such I have disabled staging for it.
I am trying to access Plex and Overseerr externally, and 50% of the time plex.{mydomain}.com returns a Cloudflare "Bad Gateway" error, although I can eventually access Plex if I refresh the page multiple times before it goes back to not working again. I cannot access Overseerr externally at all, and oftentimes I receive a "Too many redirects" error, which leads me to believe that Traefik might be the issue here?
Even on the server itself, I cannot access any of the containers except Plex via the configured domain; I have to use the local IP address (127.0.0.1:PORT). I suspect this may be me misconfiguring the DNS, though?
Any ideas on some things I could check here?
Thanks very much,
Callum
So it finally went through. However, when trying to access plex.domain it states "Forbidden". overseerr.domain does resolve, but because plex.domain is forbidden there is no way of continuing setup.
Aren't the hosts/subdomains defined automatically from the config? I saw under my Cloudflare account that the overseerr record was created, then made a CNAME record for plex pointing to overseerr.domain. Ports are forwarded properly in the router, but even locally I cannot access the Plex server.
All other subdomains, e.g. uptime-kuma, radarr, sonarr, lead to a Forbidden page; nothing is resolving. Some guidance on what would be causing this would be helpful.
Originally posted by @ryu777mtg in #63 (comment)
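A plain "Forbidden" from Traefik is usually an ipwhitelist middleware rejecting the source IP: requests arriving via Cloudflare or a forwarded port reach Traefik with a non-private source address, outside the internal ranges. The labels below sketch the pattern this project uses ("myproject" and the router name are placeholders):

```yaml
labels:
  # Only these source ranges are allowed; anything else gets 403 Forbidden
  - "traefik.http.middlewares.internal-ipwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16"
  - "traefik.http.routers.plex-myproject.middlewares=internal-ipwhitelist"
```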
I think I found a small bug in docker-compose.yml.j2.
If I enable only authentik, portainer, and traefik, Tautulli's ports are still being added to docker-compose.yml.
Here is a snippet with only Traefik, Authentik, and Portainer enabled.
The problem is that if I enable sonarr or any other app, that app gets Tautulli's port, 8181.
version: '3'
services:
  # Portainer container, webgui for docker
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    command: -H unix:///var/run/docker.sock
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: 10m
    networks:
      - "traefik_net"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/REDACT/apps/portainer/config:/data
    labels:
      - traefik.enable=true
      - traefik.http.services.portainer-REDACT.loadbalancer.server.port=9000
      - traefik.http.routers.portainer-REDACT.rule=Host(`portainer.REDACT`)
      - traefik.http.routers.portainer-REDACT.middlewares=internal-ipwhitelist
  # Watchtower container, automatic updates
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: 10m
    networks:
      - "traefik_net"
    command: --cleanup --schedule "0 0 9 * * *"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  # Traefik container, loadbalancer/reverse proxy/ssl
  traefik:
    image: traefik:v2.6.1
    container_name: traefik
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: 10m
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    environment:
      - TZ=Europe/Stockholm
      - PUID=100
      - PGID=99
      - CF_DNS_API_TOKEN=REDACT
      - CF_ZONE_API_TOKEN=REDACT
    networks:
      - "traefik_net"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/REDACT/apps/traefik/config/traefik.yml:/etc/traefik/traefik.yml
      - /opt/REDACT/apps/traefik/config/certs/:/certs/
    labels:
      - traefik.enable=true
      - traefik.http.routers.traefik-REDACT.rule=Host(`traefik.REDACT`)
      - traefik.http.services.traefik-REDACT.loadbalancer.server.port=8080
      - "traefik.http.middlewares.internal-ipwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.56.0/24, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16"
      - "traefik.http.middlewares.external-ipwhitelist.ipwhitelist.sourcerange=0.0.0.0/0"
      - "traefik.http.routers.traefik-REDACT.middlewares=internal-ipwhitelist"
  # Authentik container, authentication/authorization
  authentik-postgresql:
    container_name: authentik-postgresql
    image: postgres:12-alpine
    restart: unless-stopped
    networks:
      - authentik_net
    logging:
      driver: json-file
      options:
        max-size: 10m
    healthcheck:
      test: ["CMD", "pg_isready"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - authentik_database:/var/lib/postgresql/data
    env_file:
      - .env
    environment:
      - POSTGRES_PASSWORD=${PG_PASS:?database password required}
      - POSTGRES_USER=${PG_USER:-REDACT}
      - POSTGRES_DB=${PG_DB:-REDACT}
  authentik-redis:
    container_name: authentik-redis
    image: redis:alpine
    restart: unless-stopped
    networks:
      - authentik_net
    logging:
      driver: json-file
      options:
        max-size: 10m
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
  authentik-server:
    container_name: authentik-server
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2022.5.2}
    restart: unless-stopped
    networks:
      - authentik_net
      - traefik_net
    logging:
      driver: json-file
      options:
        max-size: 10m
    command: server
    env_file:
      - .env
    environment:
      AUTHENTIK_REDIS__HOST: REDACT
      AUTHENTIK_POSTGRESQL__HOST: REDACT
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-REDACT}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-REDACT}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      # AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
      # WORKERS: 2
    volumes:
      - /opt/REDACT/apps/authentik/media:/media
      - /opt/REDACT/apps/authentik/custom-templates:/templates
      - authentik_geoip:/geoip
    labels:
      - traefik.enable=true
      - traefik.http.services.authentik-server-REDACT.loadbalancer.server.port=9000
      - traefik.http.routers.authentik-server-REDCAT.rule=Host(`authentik.REDACT`)
  authentik-worker:
    container_name: authentik-worker
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2022.5.2}
    restart: unless-stopped
    networks:
      - authentik_net
    command: worker
    env_file:
      - .env
    environment:
      AUTHENTIK_REDIS__HOST: REDACT
      AUTHENTIK_POSTGRESQL__HOST: REDACT
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-REDACT}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-REDACT}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      # AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
    # This is optional, and can be removed. If you remove this, the following will happen
    # - The permissions for the /media folders aren't fixed, so make sure they are 1000:1000
    # - The docker socket can't be accessed anymore
    user: root
    volumes:
      - /opt/REDACT/apps/authentik/media:/media
      - /opt/REDACT/apps/authentik/certs:/certs
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/REDACT/apps/authentik/custom-templates:/templates
      - authentik_geoip:/geoip
  authentik-geoipupdate:
    container_name: authentik-geoipupdate
    image: "maxmindinc/geoipupdate:latest"
    networks:
      - authentik_net
    volumes:
      - "authentik_geoip:/usr/share/GeoIP"
    env_file:
      - .env
    environment:
      GEOIPUPDATE_ACCOUNT_ID: "REDACT"
      GEOIPUPDATE_LICENSE_KEY: "REDACT"
      GEOIPUPDATE_EDITION_IDS: "GeoLite2-City"
      GEOIPUPDATE_FREQUENCY: "8"
    ports:
      - 8181:8181
    volumes:
      - /opt/REDACT/apps/tautulli/config:/config
      # Plex logs location
      - /opt/REDACT/apps/plex/config/Library/Application Support/Plex Media Server/Logs:/plex_logs:ro
  # Cloudflare DDNS container
  cloudflare-ddns:
    image: oznu/cloudflare-ddns:latest
    container_name: cloudflare-ddns
    restart: 'unless-stopped'
    logging:
      driver: json-file
      options:
        max-size: 10m
    environment:
      - API_KEY=REDACT
      - ZONE=REDACT
      - DELETE_ON_STOP=false
      - SUBDOMAIN=overseerr
      - PROXIED=true
networks:
  "download_net":
    driver: bridge
    attachable: false
  "media_net":
    driver: bridge
    attachable: false
  "traefik_net":
    driver: bridge
    attachable: true
  "authentik_net":
    driver: bridge
    attachable: false
volumes:
  authentik_database:
    driver: local
  authentik_geoip:
    driver: local
As you can see, I don't have Tautulli enabled, but its ports are still added to the compose file. Below is the relevant part of the docker-compose.yml.j2 file; I don't know if the problem is here.
{% if container_enabled_tautulli %}
  # Tautulli container, analytics
  tautulli:
    image: tautulli/tautulli:latest
    container_name: tautulli
    restart: {{ container_restart_policy }}
    logging:
      driver: json-file
      options:
        max-size: 10m
    networks:
      - "media_net"
      - "traefik_net"
      - "authentik_net"
    environment:
      - PUID={{ container_uid }}
      - PGID={{ container_gid }}
      - TZ={{ container_timezone }}
{% if traefik_enabled_tautulli %}
    labels:
      - traefik.enable=true
      - traefik.http.services.tautulli-{{ project_name }}.loadbalancer.server.port=8181
      - traefik.http.routers.tautulli-{{ project_name }}.rule=Host(`{{ hms_docker_container_map['tautulli']['proxy_host_rule'] }}.{{ hms_docker_domain }}`)
{% if not expose_public_enabled_tautulli %}
      - "traefik.http.routers.tautulli-{{ project_name }}.middlewares=internal-ipwhitelist"
{% endif %}
{% if authentik_enabled_tautulli %}
      - traefik.http.routers.tautulli-{{ project_name }}.middlewares=authentik-proxy-{{ project_name }}-tautulli-router@docker
{% endif %}
{% endif %}
{% endif %}
{% if expose_ports_enabled_tautulli %}
    ports:
      - 8181:8181
{% endif %}
    volumes:
      - {{ hms_docker_apps_path }}/tautulli/config:/config
      # Plex logs location
      - {{ hms_docker_apps_path }}/plex/config/Library/Application Support/Plex Media Server/Logs:/plex_logs:ro
{% endif %}
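Reading the snippet, the {% endif %} right after the labels section appears to close {% if container_enabled_tautulli %} early, leaving the ports/volumes section outside the guard, so it attaches to whichever service was rendered last. A sketch of the intended nesting:

```yaml
{% if container_enabled_tautulli %}
  tautulli:
    # ... image, networks, environment, labels as in the snippet above ...
{% if expose_ports_enabled_tautulli %}
    ports:
      - 8181:8181
{% endif %}
    volumes:
      - {{ hms_docker_apps_path }}/tautulli/config:/config
{% endif %}
```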
Hello, thanks for this repo!
I'm pretty new to Ansible & co, and the playbook crashes without giving me much information (even with -vvv).
TASK [Ensure HMS-Docker role] ****************************************************************************************************************************************************************************************
task path: /home/ubuntu/ansible-hms-docker/hms-docker.yml:21
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404 `" && echo ansible-tmp-1681553216.6158442-12645315198404="` echo /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404 `" ) && sleep 0'
Using module file /usr/lib/python3/dist-packages/ansible/modules/utilities/logic/import_role.py
<localhost> PUT /root/.ansible/tmp/ansible-local-30876cpcutv58/tmpm7m_ukh2 TO /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/AnsiballZ_import_role.py
<localhost> PUT /root/.ansible/tmp/ansible-local-30876cpcutv58/tmpuvspups9 TO /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/args
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/ /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/AnsiballZ_import_role.py /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/args && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/AnsiballZ_import_role.py /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/args && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1681553216.6158442-12645315198404/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
I'm on an Ubuntu Server 20.04 fresh install, and I filled in all mandatory fields in the advanced config except VPN (do I really need a VPN for Transmission?).
What can I do to fix that? Googling didn't help much.
Sorry if the question is stupid. Thanks again!
Hello, very nice project.
I have used the standard configuration with the home.local example.
I can reach Traefik on home.local:8080, but all other subdomains (plex.home.local, etc.) are not working.
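Since .local names are not served by public DNS, each subdomain needs its own resolution entry on every client machine; a sketch of a client-side hosts file, assuming the server sits at 192.168.1.10:

```text
# /etc/hosts on the client (or C:\Windows\System32\drivers\etc\hosts)
192.168.1.10  home.local traefik.home.local plex.home.local sonarr.home.local
```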
How can I add some of my other network services to Traefik, such as Home Assistant and Blue Iris?
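Services that do not run in this compose stack can be routed through Traefik's file provider. A sketch of a dynamic-configuration file, with placeholder hostnames and IPs — this assumes Traefik is told to watch a directory via providers.file.directory in its static config:

```yaml
# dynamic/homeassistant.yml (name and addresses are examples)
http:
  routers:
    homeassistant:
      rule: "Host(`homeassistant.home.local`)"
      service: homeassistant-svc
  services:
    homeassistant-svc:
      loadBalancer:
        servers:
          - url: "http://192.168.1.20:8123"
```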
On Debian 12 I get the following error while running make:
make advanced
⚠ This will overwrite all existing files in './vars/custom', are you sure? [y/n] > /bin/sh: 2: read: arg count
/bin/sh: 3: [[: not found
[OK] Copying files...
The Makefile's statements are (apparently) run with /bin/sh, but [[ is only available in bash and similar shells.
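A common fix, assuming GNU make and bash installed at /bin/bash, is forcing the Makefile to use bash instead of the default /bin/sh:

```make
# At the top of the Makefile: run recipe lines with bash so that
# bashisms like `read -p` and [[ ... ]] work on Debian, where sh is dash.
SHELL := /bin/bash
```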
I just pulled the latest changes to be up to date, but running make check or make apply I get the following error:
ERROR! couldn't resolve module/action 'community.docker.docker_compose_v2'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/hms/ansible-hms-docker/roles/hmsdocker/tasks/main.yml': line 108, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Ensure containers defined in compose file.
^ here
make: *** [Makefile:44: check] Error 4
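This error means the community.docker collection providing docker_compose_v2 is missing or too old (the module was added in community.docker 3.6.0, if I recall correctly). A sketch of a requirements file for it:

```yaml
# collections/requirements.yml (file name is a convention, not mandated)
collections:
  - name: community.docker
    version: ">=3.6.0"
```

Installing then works with `ansible-galaxy collection install -r requirements.yml`, or directly with `ansible-galaxy collection install community.docker --force`.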
How can i specify in the default.yml on each container if i want to have internal or external access?
I know i can edit docker-compose.yml.j2 file and change "middlewares=internal-ipwhitelist" to my needs but how can i do this in the vars file with something like
For Internal access
nzbget:
  enabled: yes
  directory: yes
  traefik: yes
  expose: no
or for External access
nzbget:
  enabled: yes
  directory: yes
  traefik: yes
  expose: yes
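For what it's worth, the container map shown elsewhere in this thread already carries a per-container flag named expose_to_public, so the vars-file equivalent of the above may just be (treating the exact key name as an assumption about your playbook version):

```yaml
nzbget:
  enabled: yes
  directory: yes
  traefik: yes
  expose_to_public: no   # internal only; set to "yes" for external access
```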
Hey ahembree,
Having an issue running ansible for the first time.
This is the error:
TASK [hmsdocker : Ensure containers defined in compose file.] *********************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_compose) module: env_file Supported parameters include: api_version, build, ca_cert, client_cert, client_key, debug, definition, dependencies, docker_host, files, hostname_check, nocache, project_name, project_src, pull, recreate, remove_images, remove_orphans, remove_volumes, restarted, scale, services, ssl_version, state, stopped, timeout, tls, tls_hostname, validate_certs"}
I'm using the advanced config file.
Which parameter is it having an issue with?
Thanks
During the task
TASK [hmsdocker : Ensure containers defined in compose file.]
This error appears for all services, such as:
ERROR: for bazarr Cannot start service bazarr: error while creating mount source path '/opt/hms-docker/apps/bazarr/config': mkdir /opt/hms-docker: read-only file system
The permissions on these directories are:
ls -la /opt/hms-docker/
total 28
drwxr-xr-x 3 root root 4096 Oct 21 14:28 .
drwxr-xr-x 4 root root 4096 Oct 21 14:27 ..
drwxr-xr-x 13 root root 4096 Oct 21 14:28 apps
-rw-r--r-- 1 root docker 11765 Oct 21 14:28 docker-compose.yml
-rw-r----- 1 root docker 533 Oct 21 14:28 .env
It seems like it can make the directories, like /opt/hms-docker/apps/bazarr/config, but something else is having permission issues.
I tried to clear this out with rm -rf /opt/hms-docker and re-running make apply, but I get the same issue. I tried sudo make apply and also tried modifying the secrets_env_group, as you can see above, but none of that helped.
When I enable Authentik, it's not auto-creating the key and pgpassword files.
I've tried to run the playbook with both ssl enabled and disabled.
It fails on the following error. I've trawled through the config files and cannot find a reference to ssl_version, unless I missed it.
Ubuntu server LTS, fresh install but fully up-to-date.
TASK [hmsdocker : Ensure env] ***************************************************************************************************************************
ok: [localhost]
TASK [hmsdocker : Ensure docker-compose.yml file.] ******************************************************************************************************
ok: [localhost]
TASK [hmsdocker : Ensure containers defined in compose file.] *******************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Configuration error - kwargs_from_env() got an unexpected keyword argument 'ssl_version'"}
PLAY RECAP **********************************************************************************************************************************************
localhost : ok=24 changed=0 unreachable=0 failed=1 skipped=44 rescued=0 ignored=0
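This error typically means the old docker-compose v1 Python package is paired with a docker SDK >= 7.0, which removed the ssl_version argument that compose v1 still passes to kwargs_from_env(). One hedged workaround is to pin the SDK below 7, e.g. via a pip constraints file:

```
# constraints.txt (hypothetical file name)
docker<7.0
```

then reinstall with `pip3 install -c constraints.txt docker docker-compose`. Upgrading the playbook to the docker_compose_v2 module avoids the issue entirely, since it shells out to the compose plugin instead of using the v1 Python package.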
Getting this error while trying to apply
TASK [hmsdocker : Get public IP from Transmission VPN container.] ******************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["docker", "exec", "transmission", "curl", "-s", "icanhazip.com"], "delta": "0:00:00.012615", "end": "2024-05-10 17:25:08.981082", "msg": "non-zero return code", "rc": 1, "start": "2024-05-10 17:25:08.968467", "stderr": "Error response from daemon: Container 06f3a5e442579ae024a0e61fe54fe92ced31ad9a2dec096b69006bf9e290468e is restarting, wait until the container is running", "stderr_lines": ["Error response from daemon: Container 06f3a5e442579ae024a0e61fe54fe92ced31ad9a2dec096b69006bf9e290468e is restarting, wait until the container is running"], "stdout": "", "stdout_lines": []}
The template for transmission includes two logging entries, which ends up causing a parsing error.
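For reference, a compose service may only carry the logging key once; the duplicate entries need to be merged into a single block along these lines (driver and option values are placeholders, not the playbook's actual settings):

```yaml
transmission:
  # ... rest of the service definition ...
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "3"
```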
Hi, so I've edited my config to use Cloudflare, and after re-running the playbook I get this error:
TASK [hmsdocker : Get public IP from Transmission VPN container.] **************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["docker", "exec", "transmission", "curl", "-s", "icanhazip.com"], "delta": "0:00:12.489838", "end": "2023-06-29 21:02:35.376014", "msg": "non-zero return code", "rc": 52, "start": "2023-06-29 21:02:22.886176", "stderr": "WARNING: Error loading config file: /root/.docker/config.json: read /root/.docker/config.json: is a directory", "stderr_lines": ["WARNING: Error loading config file: /root/.docker/config.json: read /root/.docker/config.json: is a directory"], "stdout": "", "stdout_lines": []}
I never had this problem before, despite tweaking the config pretty often.
Reverting what I've changed didn't change anything.
Hope that you can help me, thanks!
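That warning usually means something (often a bind mount to a file path that did not exist yet) created ~/.docker/config.json as a directory instead of a file. A cleanup sketch, assuming that is the cause here:

```shell
# If the Docker CLI config was wrongly created as a directory, remove it so
# the CLI can recreate it as a regular file on the next docker login/run.
cfg="$HOME/.docker/config.json"
if [ -d "$cfg" ]; then
  rm -rf "$cfg"
fi
echo "done"
```

After removing it, re-running the playbook should let curl inside the container execute without the config warning.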
Traefik is deprecating the Pilot Dashboard and Pilot Plugins starting with version 2.7 (currently, this repo is running 2.6.1).
Starting in 2.9, these features will be removed.
Due to these changes, Traefik Pilot configuration will be removed from the repo and Traefik will be bumped to 2.7 to start the deprecation process.
Additional info on this deprecation can be found here: https://doc.traefik.io/traefik/v2.7/deprecation/features/
I have edited the files roles/hmsdocker/defaults/main.yml and templates/docker-compose.yml.j2 to include a MaxMind ID and key for GeoIP.
docker-compose.yml.j2
authentik-geoipupdate:
  container_name: authentik-geoipupdate
  image: "maxmindinc/geoipupdate:latest"
  networks:
    - authentik_net
  volumes:
    - "authentik_geoip:/usr/share/GeoIP"
  env_file:
    - .env
  environment:
    GEOIPUPDATE_ACCOUNT_ID: "{{ authentik_maxmind_id }}"
    GEOIPUPDATE_LICENSE_KEY: "{{ authentik_maxmind_key }}"
    GEOIPUPDATE_EDITION_IDS: "GeoLite2-City"
    GEOIPUPDATE_FREQUENCY: "8"
roles/hmsdocker/defaults/main.yml
#######################################################################
### Authentik settings
# This OR the option in the container map will enable or disable the Authentik container
authentik_enabled: yes
authentik_provision_config: no
authentik_api_token: ""
authentik_host: "http://authentik-server:9000" # leave this as default if you're using the default hms-docker configuration (really only used if you have a separate authentik server) default: "http://authentik-server:9000"
authentik_external_host: 'https://{{ hms_docker_container_map["authentik"]["proxy_host_rule"] }}.{{ hms_docker_domain }}' # This needs to match the host rule that routes traffic to the Authentik container
authentik_maxmind_id: "CHANGEME-MAXMIND-ID" # MaxMind.com ID
authentik_maxmind_key: "CHANGEME-MAXMIND-KEY" # Maxmind.com Secret Key
### End of authentik settings
#######################################################################
When running make apply, something tries to install pip3 packages (I think docker and docker-compose).
But this fails on my Debian 12 system with the following error:
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.11/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
I could install python3-docker system-wide but not python3-docker-compose (it does not exist).
I tried creating a virtualenv, but the playbook uses /usr/bin/python3 directly, so it doesn't matter.
I tried using pipx, but I get the following error when running pipx install docker-compose:
Fatal error from pip prevented installation. Full pip output in file:
/root/.local/pipx/logs/cmd_2023-09-14_19.06.36_pip_errors.log
pip seemed to fail to build package:
PyYAML<6,>=3.10
Some possibly relevant errors from pip install:
error: subprocess-exited-with-error
AttributeError: cython_sources
Error installing docker-compose.
I'm lost... Not sure what to do...
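One workaround for Debian 12's PEP 668 guard that does not require changing the playbook itself: create a venv for the SDKs and point Ansible at its interpreter via the standard ansible_python_interpreter variable (the venv path here is an arbitrary choice):

```shell
# Hypothetical workaround: install the Docker SDKs into a venv and make
# Ansible use that venv's Python instead of /usr/bin/python3.
python3 -m venv "$HOME/hms-venv"
# "$HOME/hms-venv/bin/pip" install docker docker-compose       # needs network
# ansible-playbook -i inventory --connection local \
#   -e ansible_python_interpreter="$HOME/hms-venv/bin/python3" hms-docker.yml
"$HOME/hms-venv/bin/python3" -c 'import sys; print(sys.prefix)'
```

Setting ansible_python_interpreter in the inventory instead of on the command line makes the change permanent.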
I keep movies and TV shows on separate SMB shares, and browsing through this playbook it looks like you can only mount one share, which then gets used to create multiple folders inside that share. Having multiple shares doesn't seem like just an edge case.
Are there any plans to implement this?
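If this were implemented, the vars could plausibly take a list of shares instead of a single mount; a sketch of what that structure might look like (these variable names are hypothetical, not current playbook options):

```yaml
hms_docker_media_shares:        # hypothetical var, not in the playbook today
  - name: movies
    remote: "//nas.local/movies"
    local: /mnt/movies
  - name: tv
    remote: "//nas.local/tvshows"
    local: /mnt/tvshows
```

In the meantime, mounting both shares manually and bind-mounting them under the single configured media path is a common workaround.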
Greetings,
Looks like when enabling Plex for external access, it's stuck using the media net for outbound traffic. I have the port NATted in my firewall, and an ACL set to allow the traffic through to the Docker host IP. An nmap scan shows that port as closed/filtered. Wondering if Traefik is screwing me up somehow.