
k8s-mediaserver-operator's Introduction

k8s-mediaserver-operator

Your all-in-one resource for your media needs!

I am so happy to announce the first release of the k8s-mediaserver-operator, a project that brings together some of the mainstream tools for your media needs.

The Custom Resource implemented by the operator allows you to create a fully working and complete media server on Kubernetes, based on:

Plex Media Server - A complete and fully functional media server that lets you browse your movies, TV series, podcasts, and video streams in a web UI.

Jellyfin - An alternative to the proprietary Emby and Plex that provides media from a dedicated server to end-user devices via multiple apps.

Sonarr - A TV series and show tracker that integrates with download managers to search for and retrieve TV series, organize them, schedule notifications when an episode comes up, and much more.

Radarr - The same as Sonarr, but for movies!

Jackett - An API interface that makes your life easy when interacting with torrent trackers.

Prowlarr - An indexer manager/proxy built on the popular *arr .net/reactjs base stack to integrate with your various PVR apps. Prowlarr supports management of both Torrent Trackers and Usenet Indexers.

Transmission - A fast, easy and reliable torrent client.

Sabnzbd - A free and easy binary newsreader.

All container images used by the operator come from linuxserver.io.

Each of the components can be enabled or disabled if you already have something in place in your lab!
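For example, if you already run Plex and Jellyfin elsewhere in your lab, the CR spec can simply switch them off (an illustrative fragment of the K8SMediaServer spec; the full parameter list is documented below):

spec:
  plex:
    enabled: false
  jellyfin:
    enabled: false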

Introduction

I started working on this project because I was tired of using the 'containerized' version with docker/podman-compose, and I wanted to experiment a bit with both Helm and operators.

It is quite simple to use and very minimalistic, with customizations strictly related to usability and access rather than deep configuration, although more may be added in the future!

Each container has an init container that initializes the configuration on the PV before the actual pod starts, which avoids having to restart the pods.
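In practice the pattern looks roughly like this (an illustrative deployment fragment, not the operator's exact manifest; image, command, and paths are placeholders):

initContainers:
  - name: config-setup
    image: docker.io/linuxserver/sonarr
    # Seed the config subPath on the shared PV once, so the main
    # container comes up with its configuration already in place
    command: ["/bin/sh", "-c", "cp -nr /defaults/. /config/ || true"]
    volumeMounts:
      - name: mediaserver-volume
        mountPath: /config
        subPath: config/sonarr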

QuickStart

The operator and the CR are already configured with some default settings to make it easy to jump right in.

All you need is:

  • A namespace where you want to put your Custom Resource and all the pods it will spawn
  • The ability to provision an RWX PV to store configurations, downloads, and related data (suggested > 200 GB). A PersistentVolume or a StorageClass for dynamically provisioned volumes is REQUIRED (see below for NFS, and the static PV sketch right after this list)
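If your cluster has no dynamic RWX provisioner, one option is to pre-create a static PV yourself and let the chart's PVC bind to it. A minimal NFS-backed sketch (server, path, and size are placeholders; exact binding depends on your storage setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mediaserver-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100
    path: /export/mediaserver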
1. First, install the CRD and the operator:

AMD/Intel:

kubectl apply -f k8s-mediaserver-operator.yml

ARM - Raspberry Pi:

kubectl apply -f k8s-mediaserver-operator-arm64.yml
2. Install the custom resource with the default values:

kubectl apply -f k8s-mediaserver.yml

In seconds, you will be ready to use your applications!
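You can watch everything come up with (substitute your own namespace):

kubectl get pods -n mediaserver -w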

With default settings, your applications will be available at these URLs:

| Service | Link |
| --- | --- |
| Sonarr | http://k8s-mediaserver.k8s.test/sonarr |
| Radarr | http://k8s-mediaserver.k8s.test/radarr |
| Transmission | http://k8s-mediaserver.k8s.test/transmission |
| Jackett | http://k8s-mediaserver.k8s.test/jackett |
| Prowlarr | http://k8s-mediaserver.k8s.test/prowlarr |
| Jellyfin | http://k8s-jelly.k8s.test/ |
| Plex | http://k8s-plex.k8s.test/ |
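Note that these default *.k8s.test hostnames must resolve to your ingress controller. If your DNS does not cover them, a quick local workaround is an /etc/hosts entry on your client machine (the IP below is a placeholder for your ingress controller's address):

192.168.1.240 k8s-mediaserver.k8s.test k8s-plex.k8s.test k8s-jelly.k8s.test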
3. (Optional) Use custom values:

If you want to use your custom setup for all the services:

  • Copy the default values file: cp ./helm-charts/k8s-mediaserver/values.yaml my-values.yaml
  • Make all the changes you want in the new my-values.yaml file

With this values file saved in the top-level directory of this repo, running the command below will add the resources to your cluster under the Helm release name k8s-mediaserver:

helm install -f my-values.yaml k8s-mediaserver ./helm-charts/k8s-mediaserver/

To make changes to the deployment:

helm upgrade -f my-values.yaml k8s-mediaserver ./helm-charts/k8s-mediaserver/
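To preview the rendered manifests without touching the cluster, the usual Helm workflow applies:

helm template -f my-values.yaml k8s-mediaserver ./helm-charts/k8s-mediaserver/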


The mediaserver CR

The CR is quite simple to configure. I tried to keep the number of parameters low to avoid confusion, while still allowing enough customization to fit the resource into your cluster.

General config

| Config path | Meaning | Default |
| --- | --- | --- |
| general.ingress_host | The hostname to use in the ingress definition; this is the hostname where the applications will be exposed | k8s-mediaserver.k8s.test |
| general.plex_ingress_host | The hostname to use for Plex, as it must be exposed on a dedicated / path | k8s-plex.k8s.test |
| general.jellyfin_ingress_host | The hostname to use for Jellyfin, as it must be exposed on a dedicated / path | k8s-jelly.k8s.test |
| general.image_tag | The image tag to use (arm32v7-latest, arm64v8-latest, development) | latest |
| general.pgid | The GID for the process | 1000 |
| general.puid | The UID for the process | 1000 |
| general.nodeSelector | Default node selector for all the pods; per-service nodeSelectors are merged against this | {} |
| general.storage.customVolume | Set to true if you want to supply your own volume and not use a PVC | false |
| general.storage.pvcName | Name of the persistentVolumeClaim configured in the deployments | mediaserver-pvc |
| general.storage.accessMode | Access mode for the mediaserver PVC, for single-node cases | ReadWriteMany |
| general.storage.pvcStorageClass | Specifies a storageClass for the PVC | "" |
| general.storage.size | Size of the persistentVolume | 50Gi |
| general.storage.subPaths.tv | Default subpath for the TV series folder on the used storage | media/tv |
| general.storage.subPaths.movies | Default subpath for the movies folder on the used storage | media/movies |
| general.storage.subPaths.downloads | Default root path for downloads for both sabnzbd and transmission | downloads |
| general.storage.subPaths.transmission | Default subpath for transmission downloads | general.storage.subPaths.downloads/transmission |
| general.storage.subPaths.sabnzbd | Default subpath for sabnzbd downloads | general.storage.subPaths.downloads/sabnzbd |
| general.storage.subPaths.config | Default subpath for all config files on the used storage | config |
| general.storage.volumes | Custom volume to be mounted for all services | {} |
| general.ingress.ingressClassName | References an IngressClass resource with additional Ingress configuration | "" |
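For example, general.nodeSelector can pin every pod to one architecture while a single service adds its own constraint on top (the labels below are illustrative):

general:
  nodeSelector:
    kubernetes.io/arch: amd64
plex:
  container:
    nodeSelector:
      disktype: ssd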

Plex

| Config path | Meaning | Default |
| --- | --- | --- |
| plex.enabled | Flag to enable Plex | true |
| plex.claim | IMPORTANT: the token from your account, needed to claim the server | CHANGEME |
| plex.replicaCount | Number of replicas serving Plex | 1 |
| plex.container.nodeSelector | Node selector for the Plex pods | {} |
| plex.container.port | The port in use by the container | 32400 |
| plex.container.image | The image used by the container | docker.io/linuxserver/plex |
| plex.container.tag | The tag used by the container | null |
| plex.service.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) | ClusterIP |
| plex.service.port | The port assigned to the service | 32400 |
| plex.service.nodePort | In case of service.type NodePort, the nodePort to use | "" |
| plex.service.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| plex.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| plex.ingress.enabled | If true, creates the ingress resource for the application | true |
| plex.ingress.annotations | Additional annotations, if needed | {} |
| plex.ingress.path | The path where the application is exposed | /plex |
| plex.ingress.tls.enabled | If true, TLS is enabled | false |
| plex.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| plex.resources | Limits and requests for the container | {} |
| plex.volume | If set, Plex creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |
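At a minimum, set the claim token before deploying, for example (the token value is a placeholder; claim tokens come from your Plex account):

plex:
  claim: "claim-XXXXXXXXXXXXXXXXXXXX"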

Jellyfin

| Config path | Meaning | Default |
| --- | --- | --- |
| jellyfin.enabled | Flag to enable Jellyfin | true |
| jellyfin.replicaCount | Number of replicas serving Jellyfin | 1 |
| jellyfin.container.nodeSelector | Node selector for the Jellyfin pods | {} |
| jellyfin.container.port | The port in use by the container | 8096 |
| jellyfin.container.image | The image used by the container | docker.io/linuxserver/jellyfin |
| jellyfin.container.tag | The tag used by the container | null |
| jellyfin.service.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) | ClusterIP |
| jellyfin.service.port | The port assigned to the service | 8096 |
| jellyfin.service.nodePort | In case of service.type NodePort, the nodePort to use | "" |
| jellyfin.service.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| jellyfin.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| jellyfin.ingress.enabled | If true, creates the ingress resource for the application | true |
| jellyfin.ingress.annotations | Additional annotations, if needed | {} |
| jellyfin.ingress.path | The path where the application is exposed | /jellyfin |
| jellyfin.ingress.tls.enabled | If true, TLS is enabled | false |
| jellyfin.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| jellyfin.resources | Limits and requests for the container | {} |
| jellyfin.volume | If set, Jellyfin creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |

Sonarr

| Config path | Meaning | Default |
| --- | --- | --- |
| sonarr.enabled | Flag to enable Sonarr | true |
| sonarr.container.port | The port in use by the container | 8989 |
| sonarr.container.nodeSelector | Node selector for the Sonarr pods | {} |
| sonarr.container.image | The image used by the container | docker.io/linuxserver/sonarr |
| sonarr.container.tag | The tag used by the container | null |
| sonarr.service.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) | ClusterIP |
| sonarr.service.port | The port assigned to the service | 8989 |
| sonarr.service.nodePort | In case of service.type NodePort, the nodePort to use | "" |
| sonarr.service.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| sonarr.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| sonarr.ingress.enabled | If true, creates the ingress resource for the application | true |
| sonarr.ingress.annotations | Additional annotations, if needed | {} |
| sonarr.ingress.path | The path where the application is exposed | /sonarr |
| sonarr.ingress.tls.enabled | If true, TLS is enabled | false |
| sonarr.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| sonarr.resources | Limits and requests for the container | {} |
| sonarr.volume | If set, Sonarr creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |

Radarr

| Config path | Meaning | Default |
| --- | --- | --- |
| radarr.enabled | Flag to enable Radarr | true |
| radarr.container.port | The port in use by the container | 7878 |
| radarr.container.nodeSelector | Node selector for the Radarr pods | {} |
| radarr.container.image | The image used by the container | docker.io/linuxserver/radarr |
| radarr.container.tag | The tag used by the container | null |
| radarr.service.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) | ClusterIP |
| radarr.service.port | The port assigned to the service | 7878 |
| radarr.service.nodePort | In case of service.type NodePort, the nodePort to use | "" |
| radarr.service.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| radarr.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| radarr.ingress.enabled | If true, creates the ingress resource for the application | true |
| radarr.ingress.annotations | Additional annotations, if needed | {} |
| radarr.ingress.path | The path where the application is exposed | /radarr |
| radarr.ingress.tls.enabled | If true, TLS is enabled | false |
| radarr.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| radarr.resources | Limits and requests for the container | {} |
| radarr.volume | If set, Radarr creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |

Jackett

| Config path | Meaning | Default |
| --- | --- | --- |
| jackett.enabled | Flag to enable Jackett | true |
| jackett.container.port | The port in use by the container | 9117 |
| jackett.container.nodeSelector | Node selector for the Jackett pods | {} |
| jackett.container.image | The image used by the container | docker.io/linuxserver/jackett |
| jackett.container.tag | The tag used by the container | null |
| jackett.service.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) | ClusterIP |
| jackett.service.port | The port assigned to the service | 9117 |
| jackett.service.nodePort | In case of service.type NodePort, the nodePort to use | "" |
| jackett.service.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| jackett.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| jackett.ingress.enabled | If true, creates the ingress resource for the application | true |
| jackett.ingress.annotations | Additional annotations, if needed | {} |
| jackett.ingress.path | The path where the application is exposed | /jackett |
| jackett.ingress.tls.enabled | If true, TLS is enabled | false |
| jackett.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| jackett.resources | Limits and requests for the container | {} |
| jackett.volume | If set, Jackett creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |

Prowlarr

| Config path | Meaning | Default |
| --- | --- | --- |
| prowlarr.enabled | Flag to enable Prowlarr | true |
| prowlarr.container.port | The port in use by the container | 9696 |
| prowlarr.container.nodeSelector | Node selector for the Prowlarr pods | {} |
| prowlarr.container.image | The image used by the container | docker.io/linuxserver/prowlarr |
| prowlarr.container.tag | The tag used by the container | develop |
| prowlarr.service.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) | ClusterIP |
| prowlarr.service.port | The port assigned to the service | 9696 |
| prowlarr.service.nodePort | In case of service.type NodePort, the nodePort to use | "" |
| prowlarr.service.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| prowlarr.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| prowlarr.ingress.enabled | If true, creates the ingress resource for the application | true |
| prowlarr.ingress.annotations | Additional annotations, if needed | {} |
| prowlarr.ingress.path | The path where the application is exposed | /prowlarr |
| prowlarr.ingress.tls.enabled | If true, TLS is enabled | false |
| prowlarr.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| prowlarr.resources | Limits and requests for the container | {} |
| prowlarr.volume | If set, Prowlarr creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |

Transmission

| Config path | Meaning | Default |
| --- | --- | --- |
| transmission.enabled | Flag to enable Transmission | true |
| transmission.container.port.utp | The port in use by the container | 9091 |
| transmission.container.nodeSelector | Node selector for the Transmission pods | {} |
| transmission.container.port.peer | The port in use by the container for peer connections | 51413 |
| transmission.container.image | The image used by the container | docker.io/linuxserver/transmission |
| transmission.container.tag | The tag used by the container | null |
| transmission.service.utp.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) for Transmission itself | ClusterIP |
| transmission.service.utp.port | The port assigned to the service for Transmission itself | 9091 |
| transmission.service.utp.nodePort | In case of service.type NodePort, the nodePort to use for Transmission itself | "" |
| transmission.service.utp.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| transmission.service.peer.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) for the peer port | ClusterIP |
| transmission.service.peer.port | The port assigned to the service for the peer port | 51413 |
| transmission.service.peer.nodePort | In case of service.type NodePort, the nodePort to use for the peer port | "" |
| transmission.service.peer.nodePortUDP | In case of service.type NodePort, the nodePort to use for the peer port UDP service | "" |
| transmission.service.peer.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| transmission.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| transmission.ingress.enabled | If true, creates the ingress resource for the application | true |
| transmission.ingress.annotations | Additional annotations, if needed | {} |
| transmission.ingress.path | The path where the application is exposed | /transmission |
| transmission.ingress.tls.enabled | If true, TLS is enabled | false |
| transmission.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| transmission.config.auth.enabled | Enables authentication for Transmission | false |
| transmission.config.auth.username | Username for Transmission | "" |
| transmission.config.auth.password | Password for Transmission | "" |
| transmission.resources | Limits and requests for the container | {} |
| transmission.volume | If set, Transmission creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |
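For example, a values fragment enabling Transmission's authentication (the credentials are placeholders):

transmission:
  config:
    auth:
      enabled: true
      username: admin
      password: changeme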

Sabnzbd

| Config path | Meaning | Default |
| --- | --- | --- |
| sabnzbd.enabled | Flag to enable Sabnzbd | true |
| sabnzbd.container.nodeSelector | Node selector for the Sabnzbd pods | {} |
| sabnzbd.container.port.http | The http port in use by the container | 8080 |
| sabnzbd.container.port.https | The https port in use by the container | 9090 |
| sabnzbd.container.image | The image used by the container | docker.io/linuxserver/sabnzbd |
| sabnzbd.container.tag | The tag used by the container | null |
| sabnzbd.service.http.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) for Sabnzbd itself | ClusterIP |
| sabnzbd.service.http.port | The port assigned to the service for Sabnzbd itself | 8080 |
| sabnzbd.service.http.nodePort | In case of service.type NodePort, the nodePort to use for Sabnzbd itself | "" |
| sabnzbd.service.http.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| sabnzbd.service.https.type | The kind of Service (ClusterIP/NodePort/LoadBalancer) for the https port | ClusterIP |
| sabnzbd.service.https.port | The port assigned to the service for the https port | 9090 |
| sabnzbd.service.https.nodePort | In case of service.type NodePort, the nodePort to use for the https port | "" |
| sabnzbd.service.https.extraLBService | If true, creates an additional LoadBalancer service with an '-lb' suffix (requires a cloud provider or MetalLB) | false |
| sabnzbd.service.extraLBService.annotations | Instead of using extraLBService as a bool, you can use it as a map to define annotations on the load balancer | null |
| sabnzbd.ingress.enabled | If true, creates the ingress resource for the application | true |
| sabnzbd.ingress.annotations | Additional annotations, if needed | {} |
| sabnzbd.ingress.path | The path where the application is exposed | /sabnzbd |
| sabnzbd.ingress.tls.enabled | If true, TLS is enabled | false |
| sabnzbd.ingress.tls.secretName | Name of the secret holding the certificates for the secure ingress | "" |
| sabnzbd.resources | Limits and requests for the container | {} |
| sabnzbd.volume | If set, Sabnzbd creates a PVC for its config volume; otherwise the config is stored under general.storage.subPaths.config | {} |

Helpful use-cases

Using a cluster-external NFS server

This assumes that you have a pre-configured NFS server set up on your network that is accessible from all nodes. If it is not accessible by all nodes, pods will not enter ready state when scheduled on nodes that do not have NFS access.

To add an NFS volume to each resource, edit the K8SMediaServer CR to match the snippet below. Change the server: and path: values to match your NFS server.

general:
  storage:
    customVolume: true
    volumes:
      nfs:
        server: { SERVER-IP }
        path: /mount/path/on/nfs/server/

This is equivalent to running mount {SERVER-IP}:/mount/path/on/nfs/server ... in each container, where ... differs per resource, as defined in the templates directory. In addition, you should edit your subPaths so that, when they are appended to the path: specified in values.yaml, they map to the directories you intend.

Adding annotations to the extra load balancer

If you need an extra load balancer on any service, you can either enable it like this:

plex:
  service:
    extraLBService: true

or like this, if you need to add annotations to it (for example, to have a cloud provider configure the load balancer):

plex:
  service:
    extraLBService:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags:

About the project

This project is intended as an exercise, and absolutely for fun. This is not intended to promote piracy.

Also, feel free to contribute and extend it!


k8s-mediaserver-operator's Issues

Feature Request: NZB via sabnzbd or nzbget

Is your feature request related to a problem? Please describe.
While this helm chart works well for torrents, a lot of the Plex community uses Usenet instead. Being able to deploy sabnzbd or nzbget via a k8s helm chart would be preferred by a large number of users in the community.

Describe the solution you'd like
I'd suggest converting the current jackett + transmission setup to a bool that can enable and disable the deployment of those resources, and then setting up something similar for sabnzbd or nzbget. This would allow users to select one over the other, or both.

In general, I would suggest this for all services. (I have a plex server on baremetal already, and wouldn't want it deployed in k8s due to the hardware required to run a library of the size I already run.)

Describe alternatives you've considered
Obviously manually deploying my own containers based on https://hub.docker.com/r/linuxserver/sabnzbd is an option, but not quite as clean as an all-in-one helm chart would be.

[BR] New releases remove previous docker images and tags

Describe the bug
Every time a new version of the image is pushed, the previous versions disappear from quay.io, leading to errors such as:

Failed to pull image "quay.io/kubealex/k8s-mediaserver-operator:v0.6": rpc error: code = Unknown desc = Error response from daemon: unknown: Tag v0.6 was deleted or has expired. To pull, revive via time machine

The history on quay.io for this image indicates that the previous tags are indeed being deleted: https://quay.io/repository/kubealex/k8s-mediaserver-operator?tab=history

Expected behavior
There are two usual behaviors that are expected when using semver and docker images:

  • Previous tags should be immutable and not purged on new versions.
  • Multiple tags should be attached to each release. For example, for 0.6.1 (the current latest), we should expect the tags latest, 0, 0.6 and 0.6.1. Users would then be able to specify a tagged version and expect it to continue working throughout new releases.

Additional context
This seems like something that should be modified in the docker-build process in the Makefile but I haven't figured out why it is purging previous images.

Nginx subpath URLs do not work

Describe the bug
The subpath URL doesn't work with the default settings.

To Reproduce
spec:
  general:
    ingress_host: media.labs.rvsharma.com

  radarr:
    enabled: true
    container:
      nodeSelector: {}
      port: 7878
    service:
      type: ClusterIP
      port: 7878
      nodePort:
      extraLBService: false
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: nginx
      path: /radarr

Expected behavior
media.labs.example.com/radarr

logs

ingress-nginx/ingress-nginx-nginx-ingress-8555b45c5c-jncnx[ingress-nginx-nginx-ingress]: 2022/09/15 18:59:43 [error] 344#344: *17678 open() "/etc/nginx/html/radarr" failed (2: No such file or directory), client: 192.168.0.113, server: media.labs.rvsharma.com, request: "HEAD /radarr HTTP/1.1", host: "media.labs.example.com"
ingress-nginx/ingress-nginx-nginx-ingress-8555b45c5c-jncnx[ingress-nginx-nginx-ingress]: 192.168.0.113 - - [15/Sep/2022:18:59:43 +0000] "HEAD /radarr HTTP/1.1" 404 0 "-" "curl/7.79.1" "-"

Environment:

  • K8s version: 1.21+
  • CNI Plugin: calico
  • CSI Type: longhorn

Additional context
Plex is working fine on the subdomain.

[BR] Readiness probes based on TCP should use httpGet instead.

Describe the bug
TCP Readiness probes are not suitable for integration tests, as the port is ready way before the tool is properly initialized.

To Reproduce
Steps to reproduce the behavior:

  • Install the operator
  • Install the CRD
  • Trying to reach the tools will fail

Expected behavior
No failure on tests

Does Helm Install alone work without installing this project as an operator?

Hi,

I see that this project is a k8s operator, based on the name & instructions. However, the README does speak about using Helm. I see a helm-charts directory and I am wondering if I can solely install it with Helm, without having to install this as an operator.

I personally disagree with the design choice of using an operator rather than making it a helm app. However, I understand that the author did this as a learning exercise.

Thanks,
Chinmaya

[DISC] Plex Client Setup

Setup:
AMD64-based nodes running K3s (with Flannel), Longhorn, Nginx Ingress, and MetalLB, with an NFS OMV.
Note: there is no external/public dns setup involved here.

Thank you and your team for working on this operator. It is a lifesaver!
I am facing a challenge in using a Plex app with the server.
I can access the Plex media server locally (using a web browser) via ingress at plex.domain.com.
However, the Plex app on a phone, TV, or tablet on the local network does not discover the Plex media server.

I expanded what is considered "local network" in the Network settings to include my IP pool for the router's DNS server, and the custom hostname is set to plex.domain.com.

From what I can tell in the client app logs, the plex pod is broadcasting its cluster IP.
In other distributions, ADVERTISE_IP could be leveraged for LB-style approaches. How would you manage it with an ingress?

Is there a configuration i am missing?

[GR] LoadBalancerIP

Describe the bug
loadBalancerIP: "static ip" cannot be applied to services; the load balancer assigns a different IP.

Steps to reproduce the behavior:

Set loadBalancerIP on all services in the k8s-mediaserver.yml file:
type: LoadBalancer
loadBalancerIP: Static-IP

kubectl apply -f k8s-mediaserver.yml

Expected behavior
All the services of the pods must have the same external IP (= the static IP).

[BR] No PVC defined for Prowlarr in chart

Describe the bug
When configuring a custom volume for Prowlarr, the deployment is properly configured but the PVC is missing from the storage-resources.yml file.

To Reproduce
Deploy a K8SMediaServer with Prowlarr configuration:

prowlarr:
  enabled: true
  volume:
    name: pvc-prowlarr-volume
    accessModes: ReadWriteOnce
    storage: 2Gi

Additional context
I did the initial push for both the volumes and prowlarr, I can contribute the fix if you'd like.

[BR] Cannot use general.storage.volumes

Describe the bug
The operator seems to ignore general.storage.volumes

To Reproduce
Steps to reproduce the behavior:

  • Install the operator
  • Create a K8SMediaserver resource with this in spec.general.storage:
...
spec:
  general:
    storage:
      pvcName: mediaserver-pvc
      size: 5Gi
      pvcStorageClass: "nfs-pvcname-storageclass"
      subPaths:
        tv: media/tv
        movies: media/movies
        downloads: downloads    
        transmission: transmission
        sabnzbd: sabnzbd
        config: config    
      volumes:
        persistentVolumeClaim:
          claimName: ext01-mmedia
...

Expected behavior
I would expect the operator to create a Deployment for plex that mounts a new volume using nfs-pvcname-storageclass and also an existing volume whose PVC name is ext01-mmedia.

Environment:

  • K8s version: v1.22.6+k3s1
  • CNI Plugin: flannel
  • CSI Type:

[FR] Add an nginx security/protection container in the mix

Is your feature request related to a problem? Please describe.
The only problem is that nothing is protected on the network; as all the 'Ingress' controllers are on the LB for the K8S cluster, anyone on the network can hit any of them without issue.

Describe the solution you'd like
Use the same domain but have some basic auth security before being able to access the content.

Describe alternatives you've considered
Transmission has some auth but it's considered not very good, and sonarr/radarr don't have any AFAIK.
Plex and Jellyfin (which is what I am using) both have their own auth attached.

Additional context

Personally, I added this myself on top of the operator by using the Bitnami nginx helm chart with image: docker.io/bitnami/nginx:1.23.3-debian-11-r17

          volumeMounts:
            - name: nginx-server-block
              mountPath: /opt/bitnami/nginx/conf/server_blocks
            - name: nginx-htpasswd
              readOnly: true
              mountPath: /etc/nginx/passwords

So we load a secret with the user/pass based on how nginx generates accounts as per https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/ (using .htaccess)

I did this all manually, but it could be built in, to 'auto-generate an admin account and password for you and print it to the console on first start or something'.
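For reference, a minimal sketch of creating such a secret by hand (htpasswd comes from apache2-utils; the names and namespace are illustrative):

htpasswd -cB ./.htpasswd admin
kubectl create secret generic nginx-htpasswd --from-file=./.htpasswd -n mediaserver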

Example of a location block for the applications in the nginx.conf

location ^~ /radarr/ {
    proxy_pass http://radarr.mediaserver.svc.cluster.local:7878;
    proxy_pass_request_headers on;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    auth_basic "Restricted Content";
    auth_basic_user_file /etc/nginx/passwords/.htpasswd;
}

[BR] Error when enabling AdditionalLB for Jellyfin

Describe the bug
When adding an additional LB to jellyfin I got:

template: k8s-mediaserver/templates/jellyfin-resources.yml:111:23: executing "k8s-mediaserver/templates/jellyfin-resources.yml" at <.Values.jellyfin.service.extraLBService.annotations>: can't evaluate field annotations in type interface {}

To Reproduce
Steps to reproduce the behavior:

  • create a copy of the default values called my-values.yaml and activate the additional load balancer for jellyfin
  • run this command in the main folder to execute the templating:
    helm template -f my-values.yaml k8s-mediaserver ./helm-charts/k8s-mediaserver/ > k8s-mediaserver.yml

Expected behavior
The templating should succeed; the error should not appear.

[BR] unable to use NFS mount as per read me

When adding NFS details to my-values.yaml, the installation breaks:

helm install -f my-values.yaml k8s-mediaserver ./helm-charts/k8s-mediaserver/

Error: INSTALLATION FAILED: YAML parse error on k8s-mediaserver/templates/jackett-resources.yml: error converting YAML to JSON: yaml: line 59: did not find expected key

my-values.yaml

general:
  storage:
    customVolume: true
  volumes:
    nfs:
      server: 192.168.0.5
      path: /opt/plexmedia/
plex:
  enabled: false

Pods launch fine if the overrides are omitted.

[FR]Configure URL Base

I've been struggling to set this up for a while, with sonarr and radarr just loading blank pages. I ended up having to deploy everything, jump on the NFS server, update the URL base in the configs, and then restart the containers.

It would be great if we could set this in the my-values.yaml file.

[FR] Bump operator-sdk version

We are quite behind with the operator-sdk version (1.25).
Bumping the image will solve many CVEs and open the road to implement multi-arch builds #81

[BR]

Describe the bug
I'm not sure this is a bug, but don't think it's a feature request. The ingress isn't being created on my implementation, and I just wanted to know if this is expected. If I need to set up an ingress on my own, that's fine but I'm not seeing it attached. I'm also not seeing the PVC, but I'm new to helm charts and kubernetes overall.

[FR] Helm repository

Is your feature request related to a problem? Please describe.
There is a helm template located in this repository. My deployment is done via ArgoCD that requires a helm repository.

Describe the solution you'd like
Have this repository also contain the helm repository; this should be possible with GitHub raw.

Describe alternatives you've considered
None

Additional context
Add any other context or screenshots about the feature request here.

[BR] Folder is not writable by user abc on K3s using longhorn

Folder is not writable by user abc on K3s using longhorn
On a k3s cluster of Pis using SSD drives and Longhorn for Kubernetes storage, when I try any action that writes files from Sonarr I get the error:

[Warn] SonarrErrorPipeline: Invalid request Validation failed: 
 -- Path: Folder is not writable by user abc

Checking folders and permissions, I saw that only a few folders are writable by the abc user. Which folder should Sonarr use as its rootFolder?

To Reproduce
Steps to reproduce the behavior:

  • enable general and sonarr
  • try to add a new series
  • check sonarr pod logs

Expected behavior
The folder for the added series is created.

Screenshots
If applicable, add screenshots to help explain your problem.

Environment:

  • K8s version: k3s latest
  • CNI Plugin:
  • CSI Type:

Additional context
Add any other context about the problem here.

I've tried to set the user and group of the sonarr pod to 1000, but that doesn't let the pod start.

Folder permissions:

drwxr-xr-x   1 root root  4096 Jan 23 23:22 .
drwxr-xr-x   1 root root  4096 Jan 23 23:22 ..
drwxr-xr-x   1 abc  abc   4096 Jan 17 13:43 app
lrwxrwxrwx   1 root root     7 Mar 16  2022 bin -> usr/bin
drwxr-xr-x   2 root root  4096 Apr 15  2020 boot
drwxr-xr-x   2 root root 12288 Aug 29 20:09 command
drwxrwxrwx   4 abc  abc   4096 Jan 23 23:23 config
drwxr-xr-x   1 abc  abc   4096 Jan 10 04:47 defaults
drwxr-xr-x   5 root root   340 Jan 23 23:22 dev
-rwxrwxr-x   1 root root  9252 Jan 10 04:47 docker-mods
-rwxrwxr-x   1 root root    33 Nov  1 14:07 donate.txt
drwxrwxrwx   4 abc  abc   4096 Jan 23 22:42 downloads
drwxrwxr-x   1 root root  4096 Jan 23 23:22 etc
drwxr-xr-x   2 root root  4096 Apr 15  2020 home
-rwxr-xr-x   1 root root   907 Aug 29 20:09 init
lrwxrwxrwx   1 root root     7 Mar 16  2022 lib -> usr/lib
drwxr-xr-x   2 root root  4096 Mar 16  2022 media
drwxr-xr-x   2 root root  4096 Mar 16  2022 mnt
drwxr-xr-x   2 root root  4096 Mar 16  2022 opt
drwxr-xr-x   6 root root  4096 Aug 29 20:09 package
dr-xr-xr-x 299 root root     0 Jan 23 23:22 proc
drwx------   2 root root  4096 Mar 16  2022 root
drwxr-xr-x   1 root root  4096 Jan 23 23:22 run
lrwxrwxrwx   1 root root     8 Mar 16  2022 sbin -> usr/sbin
drwxr-xr-x   2 root root  4096 Mar 16  2022 srv
dr-xr-xr-x  12 root root     0 Jan 23 23:22 sys
drwxrwxrwt   1 root root  4096 Jan 23 23:22 tmp
drwxrwxrwx   2 root root  4096 Jan 23 22:38 tv
drwxrwxr-x   1 root root  4096 Jan 10 04:47 usr
drwxr-xr-x   1 root root  4096 Mar 16  2022 var

[FR] Add Helm Repo

Is your feature request related to a problem? Please describe.
It's annoying when Helm Charts aren't available online so they can be added with helm repo add.

Describe the solution you'd like
Add Helm repo for this project backed by GitHub pages.

Describe alternatives you've considered
Not using this Operator or its Chart.

[BR] Nginx Ingress does not allow reuse of the same host name.

Describe the bug
Nginx-ingress is reporting that the host name has been used in another ingress, and as such it cannot be allocated to another ingress.

To Reproduce
Steps to reproduce the behavior:

  • deploy the chart
  • try to access the ingresses.

Expected behavior
All ingresses should work.

Screenshots
If applicable, add screenshots to help explain your problem.

Environment:

  • K8s version: K3S Latest
  • CNI Plugin: Flannel with Nginx ingress and metalLb
  • CSI Type: Longhorn?

Additional context
Add any other context about the problem here.

Can't init Plex media server

First of all, thank you @kubealex so much for these awesome configurations! It saved me so much time!

Describe the bug
I don't get Plex's initial "server setup wizard" and can't connect to the server, which should run on plex.[MY_MASTER_NODE_IP].nip.io.
I'm able to see all the dashboards (including Plex's) and successfully connected Sonarr & Radarr to Jackett & Transmission,
but when testing the connection from Sonarr & Radarr to Plex, it fails.

To Reproduce
Steps to reproduce the behavior:
those are the exact changes:
kubealex:master...ilanKushnir:master

Expected behavior

  • To set up the Plex media server via the initial wizard; instead, the wizard is skipped and after login I see the Plex main page
  • To connect to the created server successfully via Sonarr and Radarr; it failed with both ports 32400 and 80

Screenshots
Plex dashboard after login
Failing to connect in Sonarr (also tried port 80)

Environment:

  • K8s version: K3S

[FR] - Allow Auto Unpack of Multi Rar torrents.

Is your feature request related to a problem? Please describe.
Add unrar capabilities to Transmission so that sonarr/radarr can auto-move torrents that are packaged as multi-part RAR files.

Describe the solution you'd like
By updating the rtorrent config file and adding a ConfigMap shell script with the correct permissions, you can have a small shell script run to unrar the files once downloading has completed.

Describe alternatives you've considered
none

Additional context
I've got it working locally with direct modifications to the output YAMLs; changes are needed to the Operator to allow the config updates and Volumes/Mounts. I am happy to do a PR for this work.

[FR] Having different PV for each tool

Is your feature request related to a problem? Please describe.
When using the current setup, all the tools are configured directly with the same persistent volume, where the media, configs and apparently sqlite databases are created. This causes issues for some storage backends that might not be well suited to run databases, such as NFS or SMB shares.

Describe the solution you'd like
I'd like to add a specific PV/PVC for each tool, in addition to the main "media" PV where all the movies and shows are stored.

Describe alternatives you've considered
There is already the additional PV field under spec.general.storage.volumes but this is not really helpful to set specific mounts on specific tools.

Additional context
Jackett, transmission, plex and sabnzbd seem to work as they all just rely on flat files, while sonarr and radarr run sqlite instances and rely on those.

I'm opening this issue for the discussion; I'll bring the changes (or at least the way I see them) in a PR if we define a good backward-compatible way to do this.

[BR] Docker image tags set to devel/latest

How do we know that the latest images are stable, both on their own and in concert with one another? That will depend entirely on timing. For some users it will work, for some it won't because breaking changes have been released (Sonarr v4 coming up for example), or because of bugs in stable releases.

I think it would make sense to pin the docker image versions for every release, and test to see that it works decently. Alternatively, if it takes too much time, add a warning at the top of the README saying that you need to pin these versions yourself. This is why helm charts have an appVersion for example, indicating a strong coupling between chart version X and app version Y.

The operator and the CR are already configured with some default settings to make it easy to jump right in.

Texts like these imply the very opposite, that it comes configured out of the box.
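For illustration, pinning could look like this in the values file (the tag is a placeholder; pick a real tag from linuxserver.io):

sonarr:
  container:
    tag: "<pinned-version>"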

[FR] add testing to each release

As we started using actions, it would be great to have some tests running.

I am actually working on that, keeping this issue as a reference.

  • deploy on kind
  • add ingress
  • add tests

[FR] guide on how exactly to use with helm including example of PV setup

Hey hey.

Maybe I'm just being thick, but I'm not sure I understand how exactly to set this up correctly with PV claims etc. Does every service need a PV? I found myself writing out a very lengthy helm config by hand, and after several dozen attempts I still don't have all the properties configured.
It's not clear to me whether the media should be shared on a created PV, via some other means such as NFS, or both.
An example of how to do that would be useful. Thanks in advance.
Happy to be told where to go, though 😅

[FR] Helm Only and Additional Options

Is your feature request related to a problem? Please describe.
Operators seem to add an additional level of complexity and in this case it seems it only adds reconciliation to the core Helm chart.

I was wondering if we could break the core chart out here and even modularize the individual components?

To me the overall product could be a superchart, and all of the individual components could be their own versioned charts.

I'd also be interested in helping add additional features like VPN, and Jellyfin compatibility.

Describe the solution you'd like
Break out the Helm chart into versioned releases by component. Create a superchart for the current set of components.

Minimally support additional features even if only by examples of kustomization with side-cars.

Describe alternatives you've considered
Downloading and installing directly from the git repo. I plan to use ArgoCD anyway, so technically I could point it to the helm chart here without a fully release versioned chart, but why not add a helm repo index to the repository and make releases more official?

Additional context
I think the helm operator was a cool way to add reconciliation to helm charts, but operators add a large layer of complexity.

Other systems such as ArgoCD and Flux are becoming the dominant path for Helm installation with reconciliation anyway.

Thanks for putting this together! It's super cool, and I look forward to deploying it.

[BR] Applying k8s-mediaserver.yml doesn't reliably create anything

Describe the bug
When running kubectl apply -f k8s-mediaserver.yml within the default namespace, it sometimes immediately begins to create all of the expected deployments, services, pods, etc. Other times it simply outputs "k8smediaserver.kubealex.com/k8smediaserver created" and does nothing else.

To Reproduce
Steps to reproduce the behavior:

  • Run kubectl apply -f k8s-mediaserver.yml
  • Run kubectl get all
  • Verify that no services, pods, replicasets, etc. are being created:
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/kubernetes ClusterIP 10.152.183.1 443/TCP 29h

Expected behavior
The following should be created and viewable via running kubectl get all after running kubectl apply -f k8s-mediaserver.yml:
NAME READY STATUS RESTARTS AGE
pod/transmission-55c766c788-k72ps 0/1 Pending 0 106s
pod/jackett-5fc4cccd5d-q4bcq 0/1 Pending 0 105s
pod/radarr-bccc8b58f-ggwc7 0/1 Pending 0 106s
pod/sabnzbd-7fd7fb4fd5-2dj8p 0/1 Pending 0 106s
pod/plex-694949d5d7-flx29 0/1 Pending 0 106s
pod/sonarr-756d7cfb87-95cgn 0/1 Pending 0 106s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 443/TCP 29h
service/sabnzbd-https ClusterIP 10.152.183.172 9090/TCP 109s
service/sonarr ClusterIP 10.152.183.67 8989/TCP 109s
service/radarr ClusterIP 10.152.183.42 7878/TCP 108s
service/transmission ClusterIP 10.152.183.136 9091/TCP 108s
service/transmission-peer-tcp ClusterIP 10.152.183.126 51413/TCP 107s
service/transmission-peer-udp ClusterIP 10.152.183.17 51413/UDP 107s
service/jackett ClusterIP 10.152.183.144 9117/TCP 107s
service/plex ClusterIP 10.152.183.70 32400/TCP 107s
service/sabnzbd ClusterIP 10.152.183.221 8080/TCP 106s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/transmission 0/1 1 0 106s
deployment.apps/radarr 0/1 1 0 106s
deployment.apps/sabnzbd 0/1 1 0 106s
deployment.apps/plex 0/1 1 0 106s
deployment.apps/sonarr 0/1 1 0 106s
deployment.apps/jackett 0/1 1 0 106s

NAME DESIRED CURRENT READY AGE
replicaset.apps/transmission-55c766c788 1 1 0 106s
replicaset.apps/sabnzbd-7fd7fb4fd5 1 1 0 106s
replicaset.apps/radarr-bccc8b58f 1 1 0 106s
replicaset.apps/plex-694949d5d7 1 1 0 106s
replicaset.apps/sonarr-756d7cfb87 1 1 0 106s
replicaset.apps/jackett-5fc4cccd5d 1 1 0 106s

Environment:

  • K8s version: v1.24.0-2+59bbb3530b6769 (running via microk8s)

[FR] Add Ombi to the Stack

Is your feature request related to a problem? Please describe.
Sonarr and Radarr are great, but they require their own local logins, and they are not easy endpoints for "end-users" to access.

Describe the solution you'd like

Ombi unifies them both and allows multiple users to request movies and shows by logging in with their Plex credentials, get updates on when their requested content is available, and see what has already been requested.
https://fleet.linuxserver.io/image?name=linuxserver/ombi

Describe alternatives you've considered
I only have experience with Ombi in my own "non k8 stack"

Additional context
Nothing else

[FR] - Passwords as Secrets

Is your feature request related to a problem? Please describe.

The Transmission rpc-password is shown as plain text in the ConfigMap.

Describe the solution you'd like
The rpc-password should be base64-encoded and added as a Secret, referenced from the config.

[FR] Support Overseerr

Is your feature request related to a problem? Please describe.
Overseerr is a slick unified UI sitting on top of Sonarr and Radarr, tightly integrated with Plex. Sort of analogous to Jackett/Prowlarr but for media discovery and download rather than for configuration.

Describe the solution you'd like
Add an Overseerr service: https://overseerr.dev/
There's a linuxserver image as well: https://hub.docker.com/r/linuxserver/overseerr

Describe alternatives you've considered
Haven't seen any comparable alternatives.

[FR] Performance guidance

Is your feature request related to a problem? Please describe.
My plex server is quite slow and unresponsive.

Describe the solution you'd like
I would like to get some guidance on what the minimum/best specs are for deploying this in the main cloud providers. (Azure/AWS/...)

This can be broken down into multiple sections:

  • Storage
  • Networking
  • VM size

Questions I want to raise:

  • Would it help to replicate some of the pods?
  • Would it be better to have your downloads on a second node?
  • How to minimize read/write operations?

Error: unknown flag: --metrics-bind-address

Describe the bug
After running kubectl apply -f k8s-mediaserver-operator-arm64.yml on a raspberry pi cluster at home, I see the operator pod displaying a CrashLoopBackOff error. Investigating the logs I see the following error: Error: unknown flag: --metrics-bind-address. I am running K3s on the cluster but I don't think that should be an issue with this error? When deleting the apply, removing the line in the yaml, and running again, I get a similar error with Error: unknown flag: --leader-elect. Seems the args may not be valid?

To Reproduce
Steps to reproduce the behavior:

  • Run the code on a raspberry pi

Expected behavior
Pod launches without error

[BR]

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

Expected behavior
A clear and concise description of what you expected to happen.

Expecting to get to any of the referenced links:
http://k8s-mediaserver.k8s.test/sonarr
http://k8s-mediaserver.k8s.test/radarr
http://k8s-mediaserver.k8s.test/transmission
http://k8s-mediaserver.k8s.test/jackett

http://k8s-plex.k8s.test/

I get a DNS_PROBE_FINISHED_NXDOMAIN error (failed to reach the resource). Could this be a missing DNS or hosts entry? In the k8s dashboard all seems well; pods and svcs get deployed.

I am certainly a noob with K8s and have a gap in my knowledge on the networking components.

Screenshots
If applicable, add screenshots to help explain your problem.

Environment:

  • K8s version:
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
  • CNI Plugin:
  • CSI Type:

Additional context
Add any other context about the problem here.

[FR] Simplify with defaults

The CR is getting bigger and bigger, would like to provide a more concise one as an example, adding defaults in the chart.

[BR] Unable to connect to Plex

Describe the bug
Plex won't let me connect in any form

To Reproduce
Steps to reproduce the behavior:

  • Download the helm charts
  • Create an application on ArgoCD

Expected behavior
Plex would be reachable via the UI

Screenshots

Environment:
Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.13-3+b0496756fa948e", GitCommit:"b0496756fa948e718d67351ed8e5293c3a28f0b8", GitTreeState:"clean", BuildDate:"2022-06-08T10:21:43Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/arm64"}

Additional context
Here is my values file:

# Default values for k8s-mediaserver.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

general:
  ingress_host: k8s-mediaserver.k8s.test
  plex_ingress_host: k8s-plex.k8s.test
  image_tag: latest
  podDistribution: cluster # can be "spread" or "cluster"
  #UID to run the process with
  puid: 1000
  #GID to run the process with
  pgid: 1000
  #Persistent storage selections and pathing
  storage:
    customVolume: true  #set to true if not using a PVC (must provide volume below)
    pvcName: mediaserver-pvc
    accessMode: ""
    size: 1500Gi
    pvcStorageClass: ""
    # the path starting from the top level of the pv you're passing. If your share is server.local/share/, then tv is server.local/share/media/tv
    subPaths:
      tv: media/tv
      movies: media/movies
      downloads: downloads
      transmission: transmission
      sabnzbd: sabnzbd
      config: config
    volumes:
      hostPath:
        path: /media/ryuunosukeds3/Raspberry_HD/Docker/movies
  ingress:
    ingressClassName: ""
  nodeSelector: {}

sonarr:
  enabled: true
  container:
    image: docker.io/linuxserver/sonarr
    nodeSelector: {}
    port: 8989
  service:
    type: LoadBalancer
    targetPort: 80
    port: 8989
    nodePort:
    extraLBService: false
    extraLBAnnotations: {}
    # Defines an additional LB service, requires cloud provider service or MetalLB
  ingress:
    enabled: false
    annotations: {}
    path: /sonarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
    #name: pvc-sonarr-config
    #storageClassName: longhorn
    #annotations:
    #  my-annotation/test: my-value
    #labels:
    #  my-label/test: my-other-value
    #accessModes: ReadWriteOnce
    #storage: 5Gi
    #selector: {}

radarr:
  enabled: true
  container:
    image: docker.io/linuxserver/radarr
    nodeSelector: {}
    port: 7878
  service:
    type: LoadBalancer
    targetPort: 80
    port: 7878
    nodePort:
    # Defines an additional LB service, requires cloud provider service or MetalLB
    extraLBService: false
    extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /radarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
    #name: pvc-radarr-config
    #storageClassName: longhorn
    #annotations: {}
    #labels: {}
    #accessModes: ReadWriteOnce
    #storage: 5Gi
    #selector: {}

jackett:
  enabled: true
  container:
    image: docker.io/linuxserver/jackett
    nodeSelector: {}
    port: 9117
  service:
    type: LoadBalancer
    targetPort: 80
    port: 9117
    nodePort:
    extraLBService: false
    extraLBAnnotations: {}
    # Defines an additional LB service, requires cloud provider service or MetalLB
  ingress:
    enabled: false
    annotations: {}
    path: /jackett
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-jackett-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

transmission:
  enabled: true
  container:
    image: docker.io/linuxserver/transmission
    nodeSelector: {}
    port:
      utp: 9091
      peer: 51413
  service:
    utp:
      type: LoadBalancer
      targetPort: 80
      port: 9091
      # if type is NodePort, nodePort must be set
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
    peer:
      type: LoadBalancer
      targetPort: 51413
      port: 51413
      # if type is NodePort, nodePort and nodePortUDP must be set
      nodePort:
      nodePortUDP:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /transmission
    tls:
      enabled: false
      secretName: ""
  config:
    auth:
      enabled: false
      username: ""
      password: ""
  resources: {}
  volume: {}
  #  name: pvc-transmission-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

sabnzbd:
  enabled: true
  container:
    image: docker.io/linuxserver/sabnzbd
    nodeSelector: {}
    port:
      http: 8080
      https: 9090
  service:
    http:
      type: LoadBalancer
      targetPort: 80
      port: 8080
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
    https:
      type: LoadBalancer
      targetPort: 9090
      port: 9090
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /sabnzbd
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-plex-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

prowlarr:
  enabled: true
  container:
    image: docker.io/linuxserver/prowlarr
    tag: develop
    nodeSelector: {}
    port: 9696
  service:
    type: LoadBalancer
    targetPort: 80
    port: 9696
    nodePort:
    extraLBService: false
    extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /prowlarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-prowlarr-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

plex:
  enabled: true
  claim: "REDACTED"
  replicaCount: 1
  container:
    image: docker.io/linuxserver/plex
    nodeSelector: {}
    port: 32400
  service:
    type: LoadBalancer
    targetPort: 80
    port: 32400
    nodePort:
    # Defines an additional LB service, requires cloud provider service or MetalLB
    extraLBService: false
    extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    tls:
      enabled: false
      secretName: ""
  resources: {}
  #  limits:
  #    cpu: 100m
  #    memory: 100Mi
  #  requests:
  #    cpu: 100m
  #    memory: 100Mi
  volume: {}
  #  name: pvc-plex-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

Application file:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: k8s-mediaserver-operator
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: k8s-mediaserver-operator
    name: in-cluster
  project: default
  source:
    path: k8s-mediaserver-operator/helm
    repoURL: https://github.com/RyuunosukeDS3/argocd.git
    targetRevision: main
    helm:
      valueFiles:
        - values.yaml
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

[FR] Support native multi-arch build in workflow

Is your feature request related to a problem? Please describe.
Not related to a problem, the idea is to improve the build process and avoid multi-manifests for the operator.

Describe the solution you'd like
Standardized multi-arch build in workflow.

[FR] Jellyfin

Really nice work!
I wonder if it is possible to add Jellyfin also.

I will try to add it.
I'm not as good as you at Kubernetes, so maybe I will need help.

Thank you
