
jimsgarage's People

Contributors

arcao, cyberops7, firef4rt, fnmeyer, fodurrr, jamesturland, jannesaarinen, jlengelbrecht, koloblicin, martyn-v, mofelee, nillestimothy, pegoku, pgumpoldsberger, ratnose, rc14193, ricardowec51, samuelnitsche, shockstruck, stianjosok, wuast94


jimsgarage's Issues

Connection errors on tcp port 8080

Hello Jim - I am running the rke2.sh script and getting connection-refused errors on the IPv6 loopback interface. Everything else runs OK, but the script ends without installing correctly. This is on a Proxmox host. I have 3 masters and 3 workers (I added an extra worker). Many thanks for this great resource.

dial tcp [::1]:8080: connect: connection refused

E1215 11:29:58.378448    4193 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
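In case it helps others who hit the same message: it usually just means kubectl has no kubeconfig and is falling back to localhost:8080. A minimal check, assuming RKE2's default kubeconfig path on the first server (adjust if the script copied it to ~/.kube/config):

# Point kubectl at the RKE2 admin kubeconfig (may need sudo/root to read it)
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes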

Kubernetes Script

When testing the Kubernetes one-click script I notice the following:

error: unknown command "k3s-ha" for "kubectl"

Is this cause for concern?

K3s failing to switch back to master

One node (K3s-02) tends to go offline temporarily and K3s-01 picks up for it (this shows in Rancher). It then doesn't come back online; even after restarting the project completely I'm still having issues.

K3S-Deploy not working on Debian12

Hi,

The default Debian 12 installation doesn't include "iptables" and "sudo", so the installation fails with the error:

Error: error received processing command: Process exited with status 127
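A hedged workaround until the script handles this itself: install the two missing packages on each fresh Debian 12 node before running the deploy (as root, since sudo itself may not be there yet):

# Run as root on every node the script will touch
apt-get update
apt-get install -y sudo iptables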

Rancher not deploying

I've been trying to use k3s.sh, the lite version. The Rancher pods just keep restarting.

NAME READY STATUS RESTARTS AGE
rancher-79f586f445-955jl 0/1 Running 12 (19s ago) 40m
rancher-79f586f445-gs7dt 0/1 Running 11 (6m25s ago) 40m
rancher-79f586f445-hp8rj 0/1 Running 14 (6m18s ago) 60m
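If it helps with debugging, the restart reason is usually visible in the pod events and previous logs; a rough starting point, assuming Rancher sits in the usual cattle-system namespace (pod name taken from the output above):

# Show events for one restarting pod and the logs from its previous crash
kubectl -n cattle-system describe pod rancher-79f586f445-hp8rj
kubectl -n cattle-system logs rancher-79f586f445-hp8rj --previous --tail=50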

K3S-Deploy: Issues on Ubuntu Server 22.04

Had a few issues when running the K3S deploy script (k3s.sh). I'm trying to describe my steps to manually handle the issues in the hope that it will help the magicians find workarounds and improve the scripts. :)

Context

Using k3s deploy on VMs with Ubuntu Server 22.04 minimal installation.

VMs are set up with 4GB RAM and 4 processors, and are running on a mini PC with 32GB of RAM running the XCP-NG hypervisor.

  • Created an admin VM (4GB + 4 processors)
  • Updated apt
  • Installed XCP-NG guest tools
  • Installed ssh
  • Created ssh key
  • Shut down the VM, and created a snapshot
  • From the snapshot, created 5 new VMs
  • Went through each VM to edit the hostname and Netplan IP address
  • Shut down the newly created VMs and snapshotted them
  • Turned all VMs on

Issues

At this point I expected to have everything ready to run the script, but the script ended up in a loop with an error stating that it could not reach the API on localhost. At first I thought it was a firewall config issue, but it turns out there were errors when running the script.

The script kept going all the way through despite the errors, so in order to find the issues I had to check every log message and eventually track down the failing commands in the script.

After some time, I noticed the k3sup command in the script failed because sudo over SSH requires the user password. I haven't found a way around this, so I ended up changing the sudoers config on all VMs to not require a password for my username (sudo visudo, and below the entry for root I added a line for my user ending in NOPASSWD: ALL).
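For reference, a rough sketch of an equivalent change done as a sudoers.d drop-in rather than editing the main file (k3sadmin is a placeholder for your own username):

# k3sadmin is a placeholder username; the drop-in avoids editing /etc/sudoers directly
echo 'k3sadmin ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/k3sadmin
sudo chmod 440 /etc/sudoers.d/k3sadmin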

Warning: This is not an ideal solution because it essentially opens a big security hole.

Another command that was also failing: kubectl apply -f ~/ipAddressPool.yaml. This was leading to nginx not being reachable (the external IP displays as pending). Manually running it again solved the issue.

Sorry for messy notes. Hope this is helpful. :)

http-external entrypoint should redirect to https, not https-external (I think)

Hello, I was following your examples, and found that a problem is created when you redirect http-external to https-external. When an external user tries to access http://example.com, some browsers (tested on mobile Safari) redirect to https://example.com:444, which won't get through the firewall since only 443 is port-forwarded. This issue is not necessarily apparent because most browsers will try https first and get to the https-external entrypoint on the first try. But if you explicitly call http, the redirect doesn't (always?) work.

The solution I came up with is to redirect http-external to https. My instinct was telling me this opens up the same security hole that we were solving in the first place, but I tested this with local DNS entries, and it seems to hold. I suspect what is happening is that the redirect does not put the client straight through to the https entrypoint, but instead tells the client to make a new request using the https format (i.e., rewrites the URL to https://example.com on port 443). This then comes back into the firewall and gets converted to port 444 before it hits Traefik again. It seems to work.

I'll admit half of the reason I'm posting this is to let you know, and the other half of it is for you to let me know why this really is a security flaw and why I shouldn't use it.

Request for Assistance with GitHub Project: RKE2 HA Stack Ansible Playbook

Dear Jim,

I hope this message finds you well.

My name is Rob Wingate, and I am an avid follower of your YouTube channel and GitHub tutorials, especially those focused on homelab projects and open-source guides. Your content has been incredibly insightful and inspiring, and I want to express my gratitude for all the valuable knowledge you share.

I am currently working on a GitHub project aimed at creating an Ansible playbook to automate the deployment of a robust and highly scalable High Availability (HA) RKE2 cluster. The project can be found here: https://github.com/imaginestack/rke2-ha-stack.

My goal is to include all the essential components for load balancing, service discovery, and certificate management. As of now, I am focusing on getting RKE2 with 2 master nodes (etcd, control plane), Kube-VIP, and Cilium working together before moving on to cert-manager and Traefik.

I have spent over 50 hours debugging this setup, particularly struggling with the asynchronous process flow involving Cilium and Kube-VIP. It feels like a catch-22 situation, and I am finding it quite challenging to get everything working seamlessly.

I would be immensely grateful if you could find some time to review my project and provide any guidance or suggestions. Your expertise would be incredibly valuable to me, and I believe your insights could help overcome the hurdles I am currently facing.

Thank you very much for considering my request. I look forward to any assistance you can provide.

Best regards,

Rob

k3s deploy error nginx address not working

I am getting an error during the deployment of the script:

Error from server (InternalError): error when creating "/home/ubuntu/ipAddressPool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": no endpoints available for service "webhook-service"

I am assuming my problems have to do with this error; if I go to the nginx address I get no connection. I tried a little bit of problem solving with my limited knowledge and found the data in the screenshots, but I am not sure how to fix these problems.
[Screenshots (209), (210), (211), (212) attached]
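In case it helps, that webhook error usually means the MetalLB controller (which serves the webhook) was not up yet when the IPAddressPool was applied. A hedged way to check and retry, assuming the deployment name from the metallb-native manifest:

# Wait for the MetalLB controller, then re-apply the pool the script tried to create
kubectl -n metallb-system get pods
kubectl -n metallb-system wait --for=condition=available deployment/controller --timeout=300s
kubectl apply -f /home/ubuntu/ipAddressPool.yaml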

How to fix: Environment variables BACKUP_CRON and CHECK_CRON are mutually exclusive. Please fix your configuration. Exiting.

Whatever combination I put inside BACKUP_CRON, CHECK_CRON and PRUNE_CRON, I am still getting the same error. Did you get the same? How do I fix that?
https://github.com/JamesTurland/JimsGarage/blob/2d553a0115f94897bab273cb9a98fae6fc8717cd/restic/docker-compose.yml#L10C7-L10C7

restic-check | Environment variables BACKUP_CRON and CHECK_CRON are mutually exclusive. Please fix your configuration. Exiting.
restic-prune | Repository found.

Thanks in advance

K3sup script fails, requiring a password.

I am testing the install script but using Unraid as the hypervisor rather than Proxmox. AFAIK this shouldn't make a difference; it just changes the creation of the base VMs.

The issue I'm seeing is:

sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required

This is returned when the install part of the script runs (i.e. # Step 1: Bootstrap First k3s Node). Looking at the issue here, it appears that sudo is a requirement for k3sup. This would mean the script shouldn't work, but it obviously does. What am I missing?
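If anyone wants to check the same thing on their setup, a quick hedged test of whether a node accepts passwordless sudo over SSH ($user and $certName follow the script's variables; $node is a placeholder for a node IP):

# Should print "ok" without prompting; a password prompt here means k3sup's sudo step will fail
ssh -i ~/.ssh/$certName $user@$node 'sudo -n true && echo ok'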

Traefik + PiHole Script not working RKE2

I'm trying to run the script on my WSL Ubuntu distro. The deploy command is creating the Helm and Manifest folders, but there is nothing inside them.

Before running the script:
[screenshot]

After running the script:
[screenshot]

Terminal output:
personal@AMADOR-PC:~$ ./deploy.sh
[ASCII art banner]
Traefik, Cert-Manager, and PiHole

          https://youtube.com/@jims-garage

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
unzip is already the newest version (6.0-26ubuntu3.1).
The following packages were automatically installed and are no longer required:
ieee-data python3-argcomplete python3-distutils python3-dnspython python3-lib2to3 python3-libcloud python3-lockfile
python3-netaddr python3-pycryptodome python3-requests-toolbelt python3-selinux python3-simplejson
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 1411k 0 1411k 0 0 1260k 0 --:--:-- 0:00:01 --:--:-- 1260k
Archive: master.zip
d57cbaa
creating: /home/personal/jimsgarage/JimsGarage-main/
inflating: /home/personal/jimsgarage/JimsGarage-main/.pre-commit-config.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Authelia/
creating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Authelia/
inflating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Authelia/configuration.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Authelia/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Authelia/users_database.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Nginx/
inflating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Nginx/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Traefik/
inflating: /home/personal/jimsgarage/JimsGarage-main/Authelia/Traefik/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Authentik/
inflating: /home/personal/jimsgarage/JimsGarage-main/Authentik/.env
creating: /home/personal/jimsgarage/JimsGarage-main/Authentik/Web-Proxies/
linking: /home/personal/jimsgarage/JimsGarage-main/Authentik/Web-Proxies/.env -> ../.env
inflating: /home/personal/jimsgarage/JimsGarage-main/Authentik/Web-Proxies/authentik-docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Authentik/Web-Proxies/example-nginx-docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Authentik/Web-Proxies/traefik-conf.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Authentik/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Cloudflare-Tunnel/
inflating: /home/personal/jimsgarage/JimsGarage-main/Cloudflare-Tunnel/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Cloudflare-Tunnel/macvlan
creating: /home/personal/jimsgarage/JimsGarage-main/Code-Server/
inflating: /home/personal/jimsgarage/JimsGarage-main/Code-Server/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Crowdsec/
creating: /home/personal/jimsgarage/JimsGarage-main/Crowdsec/Traefik/
inflating: /home/personal/jimsgarage/JimsGarage-main/Crowdsec/Traefik/config.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Crowdsec/Traefik/traefik.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Crowdsec/acquis.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Crowdsec/docker-compose.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Frigate/
inflating: /home/personal/jimsgarage/JimsGarage-main/Frigate/config.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/Frigate/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/GPU_passthrough/
inflating: /home/personal/jimsgarage/JimsGarage-main/GPU_passthrough/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Gotify/
inflating: /home/personal/jimsgarage/JimsGarage-main/Gotify/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/
creating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/Part-2/
inflating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/Part-2/mibs.txt
inflating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/Part-2/prometheus.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/Part-2/telegraf.conf
inflating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/prometheus.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/Grafana-Monitoring/telegraf.conf
creating: /home/personal/jimsgarage/JimsGarage-main/Headscale/
creating: /home/personal/jimsgarage/JimsGarage-main/Headscale/Tailscale-Client/
inflating: /home/personal/jimsgarage/JimsGarage-main/Headscale/Tailscale-Client/docker-compose,yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Headscale/config.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Headscale/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Headscale/with-Traefik/
inflating: /home/personal/jimsgarage/JimsGarage-main/Headscale/with-Traefik/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Homelab-Buyer's-Guide/
inflating: /home/personal/jimsgarage/JimsGarage-main/Homelab-Buyer's-Guide/Q3-2023.md
creating: /home/personal/jimsgarage/JimsGarage-main/Homepage/
creating: /home/personal/jimsgarage/JimsGarage-main/Homepage/Homepage/
inflating: /home/personal/jimsgarage/JimsGarage-main/Homepage/Homepage/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Homepage/Homepage/services.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Immich/
inflating: /home/personal/jimsgarage/JimsGarage-main/Immich/.env
inflating: /home/personal/jimsgarage/JimsGarage-main/Immich/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Immich/hwaccel.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Jellyfin/
inflating: /home/personal/jimsgarage/JimsGarage-main/Jellyfin/docker-compose.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Jitsi/
inflating: /home/personal/jimsgarage/JimsGarage-main/Jitsi/.env
inflating: /home/personal/jimsgarage/JimsGarage-main/Jitsi/docker-compose.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/Jitsi/gen-passwords.sh
creating: /home/personal/jimsgarage/JimsGarage-main/Keycloak/
inflating: /home/personal/jimsgarage/JimsGarage-main/Keycloak/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Cloud-Init/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Cloud-Init/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/Portainer/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/Portainer/default-headers.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/Portainer/ingress.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/Portainer/values.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/WireGuard-Easy/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/WireGuard-Easy/default-headers.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/WireGuard-Easy/deployment.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/WireGuard-Easy/ingress.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/WireGuard-Easy/ingressRouteUDP.yaml
extracting: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Create-manifest-helm/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Docker-Kubernetes-Data-Migration/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Docker-Kubernetes-Data-Migration/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/Gotify/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/Gotify/default-headers.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/Gotify/deployment.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/Gotify/ingress.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/Grafana/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/Grafana/fleet.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/Grafana/values.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/GitOps/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/K3S-Deploy/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/K3S-Deploy/k3s.sh
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/K3S-Deploy/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Longhorn/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Longhorn/longhorn-K3S.sh
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Longhorn/longhorn-RKE2.sh
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Longhorn/longhorn.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Longhorn/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/RKE2/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/RKE2/k3s
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/RKE2/rke2.sh
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Rancher-Deployment/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Rancher-Deployment/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/Certificates/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/Certificates/Production/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/Certificates/Production/jimsgarage-production.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/Issuers/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/Issuers/letsencrypt-production.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/Issuers/secret-cf-token.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Cert-Manager/values.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Dashboard/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Dashboard/ingress.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Dashboard/middleware.yaml

inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/Dashboard/secret-dashboard.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/default-headers.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Helm/Traefik/values.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Manifest/
creating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Manifest/PiHole/
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Manifest/PiHole/PiHole-Deployment.yaml

inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Manifest/PiHole/default-headers.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/Manifest/PiHole/ingress.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/deploy.sh
inflating: /home/personal/jimsgarage/JimsGarage-main/Kubernetes/Traefik-PiHole/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Logo/
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1 (1).mp4
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1 (1).png
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1 (2).png
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1 (3).png
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1 (4).png
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1 (5).png
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1.mp4
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim's Garage-1.png
inflating: /home/personal/jimsgarage/JimsGarage-main/Logo/Jim'sGarage-1(2).png
creating: /home/personal/jimsgarage/JimsGarage-main/Nextcloud/
inflating: /home/personal/jimsgarage/JimsGarage-main/Nextcloud/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Pihole/
inflating: /home/personal/jimsgarage/JimsGarage-main/Pihole/docker-compose.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/Pihole/ubuntu port 53 fix
creating: /home/personal/jimsgarage/JimsGarage-main/Portainer/
inflating: /home/personal/jimsgarage/JimsGarage-main/Portainer/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Proxmox-Backup-Server/
inflating: /home/personal/jimsgarage/JimsGarage-main/Proxmox-Backup-Server/readme.md
inflating: /home/personal/jimsgarage/JimsGarage-main/README.md
creating: /home/personal/jimsgarage/JimsGarage-main/Synapse/
inflating: /home/personal/jimsgarage/JimsGarage-main/Synapse/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Synapse/homeserver.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Synapse/mautrix-discord-bridge/
inflating: /home/personal/jimsgarage/JimsGarage-main/Synapse/mautrix-discord-bridge/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Synapse/mautrix-discord-bridge/example-config.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Synapse/mautrix-discord-bridge/example-registration.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Synapse/readme.md
creating: /home/personal/jimsgarage/JimsGarage-main/Torrent-VPN/
inflating: /home/personal/jimsgarage/JimsGarage-main/Torrent-VPN/docker-compose.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Traefik-Secure/
inflating: /home/personal/jimsgarage/JimsGarage-main/Traefik-Secure/config.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Traefik-Secure/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Traefik-Secure/traefik.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Traefik/
inflating: /home/personal/jimsgarage/JimsGarage-main/Traefik/docker-compose.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Traefik/traefik-config/
extracting: /home/personal/jimsgarage/JimsGarage-main/Traefik/traefik-config/acme.json
extracting: /home/personal/jimsgarage/JimsGarage-main/Traefik/traefik-config/config.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/Traefik/traefik-config/traefik.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Trilium/
inflating: /home/personal/jimsgarage/JimsGarage-main/Trilium/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/UptimeKuma/
inflating: /home/personal/jimsgarage/JimsGarage-main/UptimeKuma/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Vaultwarden/
inflating: /home/personal/jimsgarage/JimsGarage-main/Vaultwarden/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/
creating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/Hugo/
inflating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/Hugo/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/Hugo/site-build-command
creating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/Nginx/
inflating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/Nginx/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/WordPress/
inflating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/WordPress/.env
inflating: /home/personal/jimsgarage/JimsGarage-main/Web-Servers/WordPress/docker-compose.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/Wireguard/
inflating: /home/personal/jimsgarage/JimsGarage-main/Wireguard/docker-compose.yml
creating: /home/personal/jimsgarage/JimsGarage-main/Zitadel/
inflating: /home/personal/jimsgarage/JimsGarage-main/Zitadel/docker-compose.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Zitadel/example-zitadel-config.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Zitadel/example-zitadel-init-steps.yaml
inflating: /home/personal/jimsgarage/JimsGarage-main/Zitadel/example-zitadel-secrets.yaml
creating: /home/personal/jimsgarage/JimsGarage-main/rClone/
inflating: /home/personal/jimsgarage/JimsGarage-main/rClone/docker-compose.yml
creating: /home/personal/jimsgarage/JimsGarage-main/rClone/mount/
inflating: /home/personal/jimsgarage/JimsGarage-main/rClone/mount/docker-compose.yml
inflating: /home/personal/jimsgarage/JimsGarage-main/rClone/mount/windows_mount.bat
inflating: /home/personal/jimsgarage/JimsGarage-main/rClone/remote-upload
inflating: /home/personal/jimsgarage/JimsGarage-main/rClone/sync_script
creating: /home/personal/jimsgarage/JimsGarage-main/restic/
inflating: /home/personal/jimsgarage/JimsGarage-main/restic/docker-compose.yml
finishing deferred symbolic links:
/home/personal/jimsgarage/JimsGarage-main/Authentik/Web-Proxies/.env -> ../.env
cp: cannot stat '/home/personal/jimsgarage/Traefik-tlsmain/Kubernetes/Traefik-PiHole/*': No such file or directory

Authelia server is not starting

Hello,

I am trying to set this up on my machine but am unable to because I always get the message below. Please help.

dinesh.gupta@vm1:~/james-turland-authelia/Authelia$ docker compose up
[+] Running 2/0
 ✔ Container authelia  Created                                                                                                                                                      0.0s 
 ✔ Container redis     Created                                                                                                                                                      0.0s 
Attaching to authelia, redis
redis     | 1:C 01 Dec 2023 13:11:12.458 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis     | 1:C 01 Dec 2023 13:11:12.458 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis     | 1:C 01 Dec 2023 13:11:12.458 * Redis version=7.2.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis     | 1:C 01 Dec 2023 13:11:12.458 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis     | 1:M 01 Dec 2023 13:11:12.459 * monotonic clock: POSIX clock_gettime
redis     | 1:M 01 Dec 2023 13:11:12.460 * Running mode=standalone, port=6379.
redis     | 1:M 01 Dec 2023 13:11:12.460 * Server initialized
redis     | 1:M 01 Dec 2023 13:11:12.460 * Loading RDB produced by version 7.2.3
redis     | 1:M 01 Dec 2023 13:11:12.460 * RDB age 132 seconds
redis     | 1:M 01 Dec 2023 13:11:12.460 * RDB memory usage when created 0.83 Mb
redis     | 1:M 01 Dec 2023 13:11:12.460 * Done loading RDB, keys loaded: 0, keys expired: 0.
redis     | 1:M 01 Dec 2023 13:11:12.460 * DB loaded from disk: 0.000 seconds
redis     | 1:M 01 Dec 2023 13:11:12.460 * Ready to accept connections tcp
authelia  | time="2023-12-01T13:11:12+05:30" level=error msg="Configuration: authentication_backend: you must ensure either the 'file' or 'ldap' authentication backend is configured"
authelia  | time="2023-12-01T13:11:12+05:30" level=error msg="Configuration: access control: 'default_policy' option 'deny' is invalid: when no rules are specified it must be 'two_factor' or 'one_factor'"
authelia  | time="2023-12-01T13:11:12+05:30" level=error msg="Configuration: storage: configuration for a 'local', 'mysql' or 'postgres' database must be provided"
authelia  | time="2023-12-01T13:11:12+05:30" level=error msg="Configuration: storage: option 'encryption_key' is required"
authelia  | time="2023-12-01T13:11:12+05:30" level=error msg="Configuration: notifier: you must ensure either the 'smtp' or 'filesystem' notifier is configured"
authelia  | time="2023-12-01T13:11:12+05:30" level=fatal msg="Can't continue due to the errors loading the configuration"
authelia exited with code 0
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: authentication_backend: you must ensure either the 'file' or 'ldap' authentication backend is configured"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: access control: 'default_policy' option 'deny' is invalid: when no rules are specified it must be 'two_factor' or 'one_factor'"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: storage: configuration for a 'local', 'mysql' or 'postgres' database must be provided"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: storage: option 'encryption_key' is required"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: notifier: you must ensure either the 'smtp' or 'filesystem' notifier is configured"
authelia  | time="2023-12-01T13:11:13+05:30" level=fatal msg="Can't continue due to the errors loading the configuration"
authelia exited with code 0
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: authentication_backend: you must ensure either the 'file' or 'ldap' authentication backend is configured"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: access control: 'default_policy' option 'deny' is invalid: when no rules are specified it must be 'two_factor' or 'one_factor'"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: storage: configuration for a 'local', 'mysql' or 'postgres' database must be provided"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: storage: option 'encryption_key' is required"
authelia  | time="2023-12-01T13:11:13+05:30" level=error msg="Configuration: notifier: you must ensure either the 'smtp' or 'filesystem' notifier is configured"
authelia  | time="2023-12-01T13:11:13+05:30" level=fatal msg="Can't continue due to the errors loading the configuration"
authelia exited with code 1
authelia  | time="2023-12-01T13:11:14+05:30" level=error msg="Configuration: authentication_backend: you must ensure either the 'file' or 'ldap' authentication backend is configured"
authelia  | time="2023-12-01T13:11:14+05:30" level=error msg="Configuration: access control: 'default_policy' option 'deny' is invalid: when no rules are specified it must be 'two_factor' or 'one_factor'"
authelia  | time="2023-12-01T13:11:14+05:30" level=error msg="Configuration: storage: configuration for a 'local', 'mysql' or 'postgres' database must be provided"
authelia  | time="2023-12-01T13:11:14+05:30" level=error msg="Configuration: storage: option 'encryption_key' is required"
authelia  | time="2023-12-01T13:11:14+05:30" level=error msg="Configuration: notifier: you must ensure either the 'smtp' or 'filesystem' notifier is configured"
authelia  | time="2023-12-01T13:11:14+05:30" level=fatal msg="Can't continue due to the errors loading the configuration"

Thanks in advance.

Deployment generation is 33360, but latest observed generation is 33359

When doing your rancher dashboard tutorial everything went great. One thing I wanted to do was use metallb instead of the one that comes with k3s.

However, I get random interruptions when connecting to that IP, and I think it's because of this error from the deployment:

'Deployment generation is 33360, but latest observed generation is 33359'

This will just keep increasing higher and higher. Do you know what this means?

K3S-Deploy - Suggestions for enhancement

Below are some simple enhancements which will improve the robustness of the k3s.sh script.

End script immediately on error

Note: This will provide a cleaner log as to what caused the error.

Insert at the top of the script:

set -e  # Exit immediately if a command exits with a non-zero status.

Update and upgrade system packages, with lock mitigation support.

Add:

# Update and upgrade system packages, with lock mitigation support
attempt_limit=10
attempt_delay_seconds=3
for ((attempt=1; attempt<=attempt_limit; attempt++)); do
    if sudo apt-get update && sudo apt-get upgrade -y; then
        echo "Package list updated and packages upgraded successfully."
        break # Success
    elif ((attempt == attempt_limit)); then
        echo "Failed to update and upgrade packages within $attempt_limit attempts."
        exit 1 # Failure after all attempts
    else
        echo "Attempt $attempt of $attempt_limit failed. Retrying in $attempt_delay_seconds seconds..."
        sleep $attempt_delay_seconds
    fi
done

Synchronize NTP to ensure consistent time across nodes

Note: k3sup and other downloads may fail if time is not synchronized between VM snapshots, so this is important.

Insert:

# Install policycoreutils for each node
for newnode in "${all[@]}"; do
  ssh $user@$newnode -i ~/.ssh/$certName sudo su <<EOF
  sudo timedatectl set-ntp off  # ***** This has been inserted *****
  sudo timedatectl set-ntp on  # ***** This has been inserted *****
  NEEDRESTART_MODE=a apt install policycoreutils -y
  exit
EOF
  echo -e " \033[32;5mPolicyCoreUtils installed!\033[0m"
done

Add robust wait on "Install Metallb"

Append:

# Step 8: Install Metallb
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
kubectl wait --for=condition=ready pod -l app=metallb --namespace=metallb-system --timeout=300s   # ***** This has been appended *****
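A possible follow-up to the wait above: the validating webhook is served by the controller deployment, so it may also be worth blocking on its rollout before the script applies any IPAddressPool (deployment name assumed from the metallb-native manifest):

kubectl -n metallb-system rollout status deployment/controller --timeout=300s   # ***** This could also be appended *****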

After Crowdsec installation containers can't be reached via Traefik reverse proxy

As the title says, I can no longer connect to my Docker containers via their domain names and only get an error message (404 page not found). I did not install the NGINX container, so I already saw this error before when typing in the URL of my bare domain name without a subdomain in front of it.

In my Crowdsec log, I only have one type of error message which I don't think applies to the error that is occurring, but I am sharing the lines all the same:

time="23-08-2023 21:31:48" level=warning msg="failed to run filter : invalid character 'i' in literal true (expecting 'r') (1:1)\n | UnmarshalJSON(evt.Line.Raw, evt.Unmarshaled, \"traefik\") in [\"\", nil]\n | ^" id=cold-dust name=child-crowdsecurity/traefik-logs stage=s01-parse
time="23-08-2023 21:31:48" level=error msg="UnmarshalJSON : invalid character 'i' in literal true (expecting 'r')"

Traefik spits out these lines:

time="2023-08-23T00:55:57+02:00" level=error msg="Error while starting server: accept tcp [::]:80: use of closed network connection" entryPointName=http
time="2023-08-23T00:56:34+02:00" level=info msg="Configuration loaded from file: /traefik.yml"
time="2023-08-23T23:02:53+02:00" level=error msg="accept tcp [::]:443: use of closed network connection" entryPointName=https
time="2023-08-23T23:02:53+02:00" level=error msg="accept tcp [::]:80: use of closed network connection" entryPointName=http
time="2023-08-23T23:02:53+02:00" level=error msg="close tcp [::]:80: use of closed network connection" entryPointName=http
time="2023-08-23T23:02:53+02:00" level=error msg="close tcp [::]:443: use of closed network connection" entryPointName=https

headscale config file not matching version

Hi,

While setting up Headscale, I noticed that the Headscale developers deliberately do not use the latest tag.

Since the config.yaml has version-specific settings, this leads to a non-working Docker container.

Their current recommendation for the container deployment is: v0.23.0-alpha5

They additionally changed the command from headscale serve to serve, and also the volume mounts.

version: "3.9"
services:
  headscale:
    container_name: headscale
    image: headscale/headscale:0.23.0-alpha5
    restart: unless-stopped
    command: serve
    volumes:
      - ./config:/etc/headscale
      - /lib:/var/lib/headscale
      - /run:/var/run/headscale

The config.yaml should be pulled from the documentation with the correct tag.
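For example, something along these lines would fetch a config matching the pinned image tag (assuming the example config still lives at config-example.yaml in the headscale repo, and the ./config mount from the compose file above):

# Pull the example config for the exact version used in the compose file
curl -fsSL -o ./config/config.yaml \
  https://raw.githubusercontent.com/juanfont/headscale/v0.23.0-alpha5/config-example.yaml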

Traefik secure issues

Hi, I followed your YouTube guide here https://youtu.be/IBlZgrwc1T8?si=QMjnL0tmoqBh6piI and used the Traefik-Secure docs here https://github.com/JamesTurland/JimsGarage/tree/main/Traefik-Secure

I made all the necessary changes to the yml files and got to the "log back in to the Traefik dashboard" step (about 8 mins 16 seconds into the video), and no matter what port I use, e.g.

192.168.0.7:80
192.168.0.7:81
192.168.0.7:443
192.168.0.7:444

I get a 404 page not found error.

docker-compose

version: '3'

services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
       proxy:
    ports:
      - 80:80
      - 81:81 # external http
      - 443:443
      - 444:444 # external https
    environment:
      - [email protected]
      - CF_DNS_API_TOKEN=sn5LXLvygyfyfyfyf
      # - CF_API_KEY=YOU_API_KEY
      # be sure to use the correct one depending on if you are using a token or key
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /home/user/docker/traefik/traefik.yml:/traefik.yml:ro
      - /home/user/docker/traefik/acme.json:/acme.json
      - /home/user/docker/traefik/config.yml:/config.yml:ro
      - /home/user/docker/traefik/logs:/var/log/traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http" # restricts dashboard to internal entrypoint
      - "traefik.http.routers.traefik.rule=Host(`traefik-dashboard.firesand.xyz`)" # if you want a internal domain, get the wildcard cert for it and then choos traefik-dashboard.home.firesand.xyz or what you want
      - "traefik.http.middlewares.traefik-auth.basicauth.users=user:$$apr1$$xbeynWpH$$nEqvtGTGgS4/"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.fires.xyz`)" # if you want a internal domain, get the wildcard cert for it and then choos traefik-dashboard.home.firesand.xyz or what you want
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      #- "traefik.http.routers.traefik-secure.tls.domains[0].main=home.fires.xyz" # If you want *.home.fires.xyz subdomain or something else, you have to get the certifcates at first
      #- "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.home.fires.xyz" # get a wildcard certificat for your .home.fires.xyz
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=fires.xyz" #if you use the .home.firesand.xyz entry you have to change the [0] into [1]
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.fires.xyz" # same here, change 0 to 1
      - "traefik.http.routers.traefik-secure.service=api@internal"

networks:
  proxy:
    external: true

traefik.yml

api:
  dashboard: true
  debug: true
entryPoints:
  http:
    address: ":80"
    http:
      middlewares:
        #- crowdsec-bouncer@file
      redirections:
        entrypoint:
          to: https
          scheme: https
  https:
    address: ":443"
    http:
      middlewares:
        #- crowdsec-bouncer@file
  http-external:
    address: ":81"
    http:
      middlewares:
       # - crowdsec-bouncer@file
      redirections:
        entrypoint:
          to: https-external
          scheme: https
  https-external:
    address: ":444"
    http:
      middlewares:
        #- crowdsec-bouncer@file

serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    filename: /config.yml
certificatesResolvers:
  cloudflare:
    acme:
      email: [email protected]
      storage: acme.json
      dnsChallenge:
        provider: cloudflare
        #disablePropagationCheck: true # uncomment this if you have issues pulling certificates through cloudflare, By setting this flag to true disables the need to wait for the propagation of the TXT record to all authoritative name servers.
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"

log:
  level: "INFO"
  filePath: "/var/log/traefik/traefik.log"
accessLog:
  filePath: "/var/log/traefik/access.log"

I have changed the sensitive parts (names, passwords, etc.).

Any help would be great.

Thanks

Can't run headscale with traefik

Can you help me configure headscale and headscale-ui to work with Traefik using my domain? I have a self-hosted headscale running on Debian 11, installed with dpkg and managed only via commands. Thanks in advance.

Cloud Init


The command qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:vm-5000-disk-0 is wrong.
It must be qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:5000/vm-5000-disk-0.raw

Or four commands:

qm create 1000 \
 --memory 4196 \
 --balloon 0 \
 --cores 2 \
 --sockets 2 \
 --cpu cputype=host \
 --numa 1 \
 --name ubuntu-cloud \
 --net0 virtio,bridge=vmbr0 \
 --agent enabled=1,type=virtio \
 --ostype l26 \
 --sshkeys /root/.ssh/my_key.pub \
 --ciuser root \
 --cipassword "123qweASD" \
 --serial0 socket \
 --vga serial0 \
 --ipconfig0 "ip=dhcp"
 
qm importdisk 1000 /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img --format qcow2 local
qm set 1000 --scsihw virtio-scsi-pci --scsi0 local:1000/vm-1000-disk-0.qcow2  --ide2 local:cloudinit  --boot c --bootdisk scsi0
qm template 1000
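Either way, a quick hedged check that the disk and cloud-init drive actually got attached (using VM 1000 from the commands above; adjust the ID and storage name to your setup):

# Confirm the imported disk shows up on the storage and in the VM config
pvesm list local | grep 1000
qm config 1000 | grep -E 'scsi0|ide2|boot'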

Running the k3s.sh from Ubuntu 22.04 and destination Raspberry PI nodes

I successfully ran your script, deploying K3S onto 7 Raspberry Pi nodes, thanks a lot ...

I just made the following modifications:
I updated the version of K3S to be v1.28.4+k3s2
I added a new variable called admuser to identify the user ID of the Ubuntu server running the script; the user on the Raspberry Pi nodes is "pi"
I check whether the cert files exist:

Move SSH certs to ~/.ssh and change permissions

if [ -f /home/$admuser/$certName ]; then
  cp /home/$admuser/{$certName,$certName.pub} /home/$admuser/.ssh
fi
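Since the comment mentions changing permissions, a small hedged addition that might belong in the same block ($admuser and $certName follow my modified script's variables):

# Restrict the copied key files so SSH will accept them
chmod 600 /home/$admuser/.ssh/$certName
chmod 644 /home/$admuser/.ssh/$certName.pub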

In-video commands - are they posted somewhere?

Trying to step through your Traefik (Secure Everything with SSL) video and reproduce the same result. For the step to create the user/password for the basic auth, is this command available here on GitHub somewhere? Just curious.. I could get it out of the video; it would just be convenient if it was here on GitHub.

Thanks so much for sharing all that you do. A wonderful learning experience.

Error from server (NotFound): services "name=rancher-lb" not found

Hi,

On line 231 of the rke2.sh script

kubectl get svc name=rancher-lb -n cattle-system

This command returns an error

debian@rke3-prod-admin:~$ kubectl get svc name="rancher-lb" -n cattle-system
Error from server (NotFound): services "name=rancher-lb" not found

I think the command should be

kubectl get svc --name=rancher-lb -n cattle-system
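For what it's worth, kubectl get takes the service name as a positional argument rather than a --name flag, so the script may simply want something like:

kubectl get svc rancher-lb -n cattle-system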

keycloak not connecting to postgres

I tried the docker-compose.yml file from this repo. It worked, but after shutting down the containers and deleting them, all my changes were gone.

I assumed that Keycloak was not connecting to Postgres. I checked the database, and no tables had been created.

After searching, I found that all Keycloak environment variables now have a KC_ prefix. I found this example, and it worked.
https://keycloak.discourse.group/t/setup-keycloak-version-21-0-2-with-postgres-using-docker/21345
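For anyone else hitting this, a minimal sketch of the style of variables that worked for me (values are placeholders; the KC_DB_* names come from the current Keycloak docs and replace the old DB_* style):

# Placeholder credentials and hostnames; start-dev is used here just to keep the sketch short
docker run --name keycloak -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=changeme \
  -e KC_DB=postgres \
  -e KC_DB_URL=jdbc:postgresql://postgres:5432/keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=changeme \
  quay.io/keycloak/keycloak:latest start-dev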

Thanks for all your great tutorials on Youtube.

Regards
Holger

Tailscale container example improvement

I stumbled upon a few things with your Tailscale container example that I'd like you to know about.

Source: docker-compose.yaml

After reading the following docs

I'm under the impression that access to the /dev/net/tun device (via the volume mount) could likely be removed, as it shouldn't be used.

This is because the environment variable TS_USERSPACE is not set and defaults to true. The point of userspace networking is to work without /dev/net/tun.
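A minimal sketch of what I mean, assuming the official tailscale/tailscale image and a placeholder auth key; with userspace networking there should be no need for the tun device mount or extra capabilities:

# Userspace networking: no /dev/net/tun and no NET_ADMIN capability required
docker run -d --name tailscale \
  -e TS_AUTHKEY=tskey-auth-PLACEHOLDER \
  -e TS_USERSPACE=true \
  -e TS_STATE_DIR=/var/lib/tailscale \
  -v tailscale-state:/var/lib/tailscale \
  tailscale/tailscale:latest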

I'm still in the process of optimizing my own setup. I'll try to share an update based on my experience (possibly with small additional tweaks suggested) soon. But I'm writing this issue now, just in case I forget.

k3s.sh script not working on Jammy ?

I've tried to use the latest version of the script and I'm getting a few errors, the main one being that the script hangs on this line:

while [[ $(kubectl get pods -l app=nginx -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do
sleep 1
done

Any idea how to debug / help improve the script? Happy to help.
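One hedged idea for making this step fail loudly instead of hanging forever (assuming the nginx test deployment keeps the app=nginx label the script already uses):

# Bail out after 5 minutes instead of looping indefinitely
kubectl wait --for=condition=ready pod -l app=nginx --timeout=300s || {
  echo "nginx pods never became Ready"; kubectl get pods -l app=nginx -o wide; exit 1; }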

This is what I get from my kubectl get nodes


╰─➤  kubectl get nodes
E1220 16:26:41.114483    3859 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1220 16:26:41.122258    3859 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1220 16:26:41.124219    3859 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1220 16:26:41.125862    3859 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME            STATUS     ROLES                       AGE     VERSION
k3s-master-01   Ready      control-plane,etcd,master   7m31s   v1.26.10+k3s2
k3s-master-02   NotReady   control-plane,etcd,master   7m14s   v1.26.10+k3s2
k3s-master-03   Ready      control-plane,etcd,master   6m57s   v1.26.10+k3s2
k3s-worker-01   Ready      <none>                      6m49s   v1.26.10+k3s2
k3s-worker-02   Ready      <none>                      6m40s   v1.26.10+k3s2

Longhorn Debian 12 Fails

Running the longhorn script on all Debian 12 nodes results in the following continuous loop:

longhorn-driver-deployer-6db5964bd7-dsfpt   0/1     Init:0/1           0             3m12s
longhorn-manager-54b9r                      0/1     CrashLoopBackOff   4 (81s ago)   3m12s
longhorn-manager-n482r                      0/1     CrashLoopBackOff   4 (73s ago)   3m12s
longhorn-manager-wx77w                      0/1     CrashLoopBackOff   5 (4s ago)    3m12s
longhorn-ui-686ddb449f-29bgg                1/1     Running            0             3m11s
longhorn-ui-686ddb449f-lfmmw                1/1     Running            0             3m12s
longhorn-manager-54b9r                      0/1     Running            5 (87s ago)   3m18s
longhorn-manager-54b9r                      0/1     Error              5 (88s ago)   3m19s
longhorn-manager-54b9r                      0/1     CrashLoopBackOff   5 (2s ago)    3m20s
longhorn-manager-n482r                      0/1     Running            5 (91s ago)   3m30s
longhorn-manager-n482r                      0/1     Error              5 (92s ago)   3m31s
longhorn-manager-n482r                      0/1     CrashLoopBackOff   5 (2s ago)    3m32s
longhorn-manager-wx77w                       0/1     Error              6 (2m51s ago)   5m59s
longhorn-manager-54b9r                      0/1     Error              6 (2m41s ago)   5m59s
longhorn-manager-54b9r                      0/1     CrashLoopBackOff   6 (2s ago)      6m1s
longhorn-manager-wx77w                      0/1     CrashLoopBackOff   6 (4s ago)      6m3s
longhorn-manager-n482r                      0/1     Error              6 (2m55s ago)   6m25s
longhorn-manager-n482r                      0/1     CrashLoopBackOff   6 (6s ago)      6m31s

I see the following: invalid capacity 0 on image filesystem
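Not sure it's the cause here, but on minimal Debian installs longhorn-manager often crash-loops because node prerequisites are missing; a hedged check to run on each node, plus a way to get the real error out of the manager logs:

# Longhorn expects iscsid and the NFS client utilities on every node
sudo apt-get update && sudo apt-get install -y open-iscsi nfs-common
sudo systemctl enable --now iscsid
# Then read the actual failure reason from the manager pods
kubectl -n longhorn-system logs -l app=longhorn-manager --tail=50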

No tls: certResolver?

I was following your examples and trying to figure out what to do with the

tls:
  certResolver:

entries (calling the appropriate certificateResolvers) I have in my https entrypoint once I create the second https-external entrypoint. Should I duplicate the entry in both? Leave it in only one or the other? Tweak it in the second one? Basically, I want to use the same certificates for the new entrypoint, and I don't want to create duplicate certificates.

I notice that your examples do not include a tls: certResolver entry in your entrypoints, so I'm wondering if I'm doing it wrong. But when I remove that entry from my internal https entrypoint, the webpages will not resolve. So for now I have duplicated my entry in both https entrypoints. This seems to be working, but I'd like to better understand why you've done it the way you have.

Thanks.

Dangerous command in k3s.sh

The command in the script,
echo "StrictHostKeyChecking no" > ~/.ssh/config
destroys your existing SSH config file. :(

Please change to
sed -i '1s/^/StrictHostKeyChecking no\n/' ~/.ssh/config
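Another option that avoids both clobbering the file and inserting duplicate lines on re-runs (a sketch, not tested against the rest of the script):

# Append the option only if it is not already present, creating the file if needed
touch ~/.ssh/config
grep -qxF 'StrictHostKeyChecking no' ~/.ssh/config || echo 'StrictHostKeyChecking no' >> ~/.ssh/config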

Error with MetalLB on RKE2 script

I'm getting the following error when I run the RKE2 script:

Error from server (InternalError): error when creating "ipAddressPool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": endpoints "webhook-service" not found
Error from server (InternalError): error when creating "https://raw.githubusercontent.com/JamesTurland/JimsGarage/main/Kubernetes/RKE2/l2Advertisement.yaml": Internal error occurred: failed calling webhook "l2advertisementvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-l2advertisement?timeout=10s": endpoints "webhook-service" not found

Any Idea?

Issue with authelia

Hi, I use the same config but am getting an error with Authelia. Can you please advise how to fix it?

time="2023-08-15T00:15:51+01:00" level=error msg="Configuration: authentication_backend: you must ensure either the 'file' or 'ldap' authentication backend is configured" authelia | time="2023-08-15T00:15:51+01:00" level=error msg="Configuration: access control: 'default_policy' option 'deny' is invalid: when no rules are specified it must be 'two_factor' or 'one_factor'" authelia | time="2023-08-15T00:15:51+01:00" level=error msg="Configuration: storage: configuration for a 'local', 'mysql' or 'postgres' database must be provided" authelia | time="2023-08-15T00:15:51+01:00" level=error msg="Configuration: storage: option 'encryption_key' is required" authelia | time="2023-08-15T00:15:51+01:00" level=error msg="Configuration: notifier: you must ensure either the 'smtp' or 'filesystem' notifier is configured" authelia | time="2023-08-15T00:15:51+01:00" level=fatal msg="Can't continue due to the errors loading the configuration"

Hard coded vipInterface rke2.sh

Hi,

It looks like the vipInterface is hard-coded on line 98 of the rke2.sh script:

curl -sL https://raw.githubusercontent.com/JamesTurland/JimsGarage/main/Kubernetes/RKE2/k3s |  vipAddress=$vip vipInterface=eth0 sh | sudo tee /var/lib/rancher/rke2/server/manifests/kube-vip.yaml

Should be

curl -sL https://raw.githubusercontent.com/JamesTurland/JimsGarage/main/Kubernetes/RKE2/k3s |  vipAddress=$vip vipInterface=$interface sh | sudo tee /var/lib/rancher/rke2/server/manifests/kube-vip.yaml

Gluetun - Windscribe - private key is not valid: wgtypes: failed to parse base64-encoded key

Hi All, I'm totally new to all of this so please keep that in mind. I've spent the day following Jim's video in relation to setting up a stack which will filter everything through a VPN. I'm almost there but failed at the VPN hurdle on my QNAP NAS using their version of Docker. I've got a paid Windscribe VPN subscription, have got my wireguard credentials and have read a dozen sites or more to try and adapt the compose file to fix this, but I'm getting the error:

2023-12-28T16:00:10Z INFO [routing] default route found: interface eth0, gateway 172.29.0.1, assigned IP 172.29.0.2 and family v4
2023-12-28T16:00:10Z INFO [routing] local ethernet link found: eth0
2023-12-28T16:00:10Z INFO [routing] local ipnet found: 172.29.0.0/22
2023-12-28T16:00:10Z INFO [firewall] enabling...
2023-12-28T16:00:10Z INFO [firewall] enabled successfully
2023-12-28T16:00:12Z INFO [storage] merging by most recent 17685 hardcoded servers and 17685 servers read from /gluetun/servers.json
2023-12-28T16:00:12Z ERROR VPN settings: Wireguard settings: private key is not valid: wgtypes: failed to parse base64-encoded key: illegal base64 data at input byte 0
2023-12-28T16:00:12Z INFO Shutdown successful

My compose file reads like this:

environment:
  - VPN_SERVICE_PROVIDER=windscribe
  - VPN_TYPE=wireguard
  - WIREGUARD_PRIVATE_KEY=redacted 44 character private key=
  - WIREGUARD_ADDRESSES=redacted/32
  - WIREGUARD_PRESHARED_KEY=redacted 44 character key=
  - SERVER_REGIONS=United Kingdom
  - SERVER_CITIES=Edinburgh
  - HTTPPROXY=on
  - SERVER_HOSTNAMES=edi-369-wg.whiskergalaxy.com:443
  # Timezone for accurate log times
  - TZ=Europe/London
  # Server list updater
  # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
  - UPDATER_PERIOD=24h

I'm throwing myself open to anyone who can help, please. I've tried 3 private keys obtained through Windscribe so far and always get the same outcome; everything looks fine, but I'm missing something.
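One hedged sanity check that narrows this down: a valid WireGuard key is 32 bytes of base64 (44 characters ending in '='), so pasting the key into something like the line below should print 32; anything else, or a decode error, usually points to quoting or stray whitespace in the compose file:

# Replace the placeholder with the key exactly as it appears in the compose file
printf '%s' 'PASTE_PRIVATE_KEY_HERE' | base64 -d | wc -c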

Many thanks
