charts's People

Contributors

bartversluijs, blafry, bootc, crisu1710, doriath, gmartinez-sisti, gtsiam, irizzant, ironashram, itspngu, ivanovoleg, jcmoraisjr, jfcoz, knackaron, lvergergsk, manics, merusso, mhyllander, michalschott, morishiri, naseemkullah, nathanaelytj, petergardfjall, ptzianos, scukerman, taxilian, timdeluxe, tristanpeers, vvbogdanov87, xianlubird


charts's Issues

Add a dedicated setting for the "publish-service" argument to the chart

As discussed on Slack, it would make sense to have a way to set the "publish-service" argument with a key in values.yaml. This option tells the controller which Service to fetch its public IP from, so that it can in turn populate the status.loadBalancer.ingress[].ip field(s) of the Ingress resource(s) it manages.

This can currently be achieved manually by adding publish-service: <namespace>/<service> to controller.extraArgs in values.yaml. That workaround is awkward: the installation namespace is only passed to Helm at install time, and the service name is rendered by the chart itself, so users must predict part of the rendered output and hard-code it into their values file instead of having an idempotent option.

Care should be taken not to enable publish-service by default, so that existing setups don't break. The rendering logic should also ensure that any value already set for publish-service via controller.extraArgs takes precedence over, and is thus preserved against, the dedicated setting this issue calls for.

The following is a suggested syntax for the proposed setting, inspired by how ingress-nginx handles the same issue.

# values.yaml

controller:
  # Allows customization of the source of the IP address or FQDN to report
  # in the ingress status field. If disabled (default), the status field remains 
  # unpopulated. If enabled, it reads the information provided by the service,
  # unless pathOverride is specified.
  publishService:
    # Enable 'publishService' or not
    enabled: false
    # Allows overriding of the publish service to bind to
    # Must be <namespace>/<service_name>
    pathOverride: ""

PSP scheduled for removal

kubernetes/kubernetes#97171

The PodSecurityPolicy API is deprecated in 1.21, and will no longer be served starting in 1.25.

For some cloud vendors, such as Azure AKS, this timeline is even stricter.

Most of these fields can be set directly in the Deployment manifest, either as the pod-level securityContext or as a per-container securityContext. Maybe they can be moved there?
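A minimal values sketch of the pod- and container-level equivalents (the key names follow the Kubernetes PodSpec API; whether the chart passes all of them through is an assumption):

controller:
  podSecurityContext:          # pod-level, replaces PSP runAsUser/fsGroup rules
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001
  securityContext:             # per-container, replaces PSP privilege rules
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop: ["ALL"]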

v0.15.0-alpha.1 chart can't bind socket 80

Hi, it seems like the default chart parameters don't work for a non-root user:

cannot bind socket (Permission denied) for [0.0.0.0:80]

I tried to override the HTTP port, but the rendered haproxy.cfg still binds port :80:

# # # # # # # # # # # # # # # # # # #
#
#     HTTP frontend
#
frontend _front_http
    mode http
    bind :80

Config:

controller:
  containerPorts:
    http: 8080
    https: 4443
  service:
    httpsPorts: []
    httpPorts:
      - port: 8080
        targetPort: 44331
      - port: 4443
        targetPort: http
  config:
    config-sections: |-
      frontend _front_http_redirect
        mode http
        bind :44331        
        # ...
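For context: an unprivileged process cannot bind ports below 1024 without extra capabilities, so even with correct port overrides a non-root haproxy needs either high ports everywhere or the NET_BIND_SERVICE capability. A hedged sketch (whether the chart forwards this setting to the haproxy container is an assumption):

controller:
  securityContext:
    capabilities:
      add: ["NET_BIND_SERVICE"]  # lets a non-root haproxy bind :80/:443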

Configuration of service port and targetPort

I was just bitten by a configuration issue that occurred when I upgraded from the incubator version of this chart. In the incubator version of the chart, the service ports were configured like this:

  service:
    httpPorts:
      - port: 80
    httpsPorts:
      - port: 443
      - port: 8443

In the current chart, you are required to also specify the targetPort, like this:

  service:
    httpPorts:
      - port: 80
        targetPort: http
    httpsPorts:
      - port: 443
        targetPort: https
      - port: 8443
        targetPort: https

Unless I'm mistaken, haproxy is only listening on two ports: http (80) and https (443). Since I didn't notice this difference between the charts and kept my original configuration, Kubernetes silently assumed that targetPort is the same as port. This of course broke things, since haproxy is not listening on port 8443.

The incubator chart assumed that all ports under the httpPorts key should have targetPort: http, and all ports under the httpsPorts key should have targetPort: https. Is there any reason why the targetPort is required now? It seems to be redundant.

haproxy-ingress fails with 'cannot list resource "ingresses" in API group "networking.k8s.io'

I've installed haproxy-ingress on k3s v1.19.3+k3s3, on a fresh cluster.

    helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
    helm repo update

    helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
    --create-namespace --namespace=ingress-controller \
    --set controller.hostNetwork=true \
    --set controller.kind=DaemonSet \
    --set controller.image.tag=v0.11-beta.3

I got an error that I managed to fix by following the guide here: kubernetes/ingress-nginx#4296 (comment) .

E1116 19:43:38.325095       7 reflector.go:178] pkg/mod/k8s.io/client-go@…/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:haproxy-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope

When using a DaemonSet with hostNetwork, is this a bug?

  • command
helm install haproxy-ingress haproxy-ingress-0.13.3.tgz  --create-namespace --namespace ingress-controller  -f haproxy-ingress-values.yaml
  • haproxy-ingress-values.yaml
controller:
  daemonset:
    useHostPort: true
    hostPorts:
      http: 8090
      https: 4433
  #service:
  #  httpPorts:
  #    - port: 8090
  #      targetPort: http
  #  httpsPorts:
  #    - port: 4433
  #      targetPort: https
  haproxy:
    image:
      repository: harbor.enmotech.com/library/haproxy-ingress
      tag: v0.13.3
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  kind: DaemonSet
  nodeSelector: 
    haproxy-ingress-controller: "true"
  • the error
Error: DaemonSet.apps "haproxy-ingress" is invalid: [spec.template.spec.containers[0].ports[0].containerPort: Invalid value: 80: must match `hostPort` when `hostNetwork` is true, spec.template.spec.containers[0].ports[1].containerPort: Invalid value: 443: must match `hostPort` when `hostNetwork` is true]
  • question
    • When hostNetwork is used, is hostPort applied automatically?
    • Is this a bug? If not, when using a DaemonSet with hostNetwork, how do I change the ports it listens on? I don't want it to listen on 80 and 443!
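The error message itself points at the fix: when hostNetwork is true, each containerPort must equal its hostPort. Since the chart exposes controller.containerPorts (see the v0.15.0-alpha.1 issue above), aligning them with the host ports should render a valid DaemonSet (a sketch, not verified against this chart version):

controller:
  containerPorts:   # must match daemonset.hostPorts under hostNetwork
    http: 8090
    https: 4433
  daemonset:
    useHostPort: true
    hostPorts:
      http: 8090
      https: 4433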

create an IngressClass resource

Hi,

it would be nice if the chart allowed creating an IngressClass resource, something like:

controller:
  ingressClass: haproxy  # existing .Values
  ingressClassResource:  # new .Values
    enabled: false
    controller: haproxy-ingress.github.io/controller

and

{{- if .Values.controller.ingressClassResource.enabled }}
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: {{ .Values.controller.ingressClass }}
spec:
  controller: {{ .Values.controller.ingressClassResource.controller }}
{{- end }}
  • This would also mean adding --controller-class={{ .Values.controller.ingressClassResource.controller }} to the pod template.

ref:
https://haproxy-ingress.github.io/docs/configuration/keys/#class-matter
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L98
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-ingressclass.yaml

What do you think?

Helm waits forever

I am using this ingress controller in a manner similar to #23. I'm using controller.hostNetwork=true and thus do not require the Service resource that this chart creates. My values config is as follows:

controller:
  hostNetwork: true
  extraArgs:
    "watch-ingress-without-class":
  kind: DaemonSet
  stats:
    enabled: true
  daemonset:
    useHostPort: true
  nodeSelector:
    role: ingress-controller

When deploying with Helm, the --wait flag (which I need so the command blocks until the controller is ready) causes Helm to wait forever on a condition that will never happen:

$ helm upgrade --install haproxy-ingress haproxy-ingress/haproxy-ingress  --create-namespace --namespace ingress-controller  --version 0.14.2  -f values.yaml --debug --wait
history.go:56: [debug] getting history for release haproxy-ingress
upgrade.go:142: [debug] preparing upgrade for haproxy-ingress
upgrade.go:150: [debug] performing update for haproxy-ingress
upgrade.go:322: [debug] creating upgraded release for haproxy-ingress
client.go:396: [debug] checking 11 resources for changes
client.go:684: [debug] Patch PodDisruptionBudget "haproxy-ingress" in namespace ingress-controller
client.go:675: [debug] Looks like there are no changes for ServiceAccount "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for ConfigMap "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for ConfigMap "haproxy-ingress-tcp"
client.go:675: [debug] Looks like there are no changes for ClusterRole "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for ClusterRoleBinding "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for Role "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for RoleBinding "haproxy-ingress"
client.go:684: [debug] Patch Service "haproxy-ingress" in namespace ingress-controller
client.go:675: [debug] Looks like there are no changes for Service "haproxy-ingress-stats"
client.go:684: [debug] Patch DaemonSet "haproxy-ingress" in namespace ingress-controller
upgrade.go:394: [debug] waiting for release haproxy-ingress resources (created: 0 updated: 11  deleted: 0)
wait.go:48: [debug] beginning wait for 11 resources with timeout of 5m0s
ready.go:258: [debug] Service does not have load balancer ingress IP address: ingress-controller/haproxy-ingress
... (message is repeated many times) ...
ready.go:258: [debug] Service does not have load balancer ingress IP address: ingress-controller/haproxy-ingress
upgrade.go:434: [debug] warning: Upgrade "haproxy-ingress" failed: timed out waiting for the condition
Error: UPGRADE FAILED: timed out waiting for the condition
helm.go:84: [debug] timed out waiting for the condition
UPGRADE FAILED
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:202
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@…/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@…/command.go:1044
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@…/command.go:968
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_arm64.s:1172

This goes on for some time until Helm gives up and returns an error, despite everything working correctly.

Not using --wait is not an option here, as I need to wait until the DaemonSet is actually up and healthy.
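Helm's --wait considers a Service of type LoadBalancer ready only once it has been assigned an ingress IP, which never happens in a pure hostNetwork setup with no cloud load balancer. One workaround, assuming controller.service.type is exposed (it appears in other values files in these issues), is to avoid the LoadBalancer type entirely:

controller:
  service:
    type: ClusterIP   # nothing for --wait to block on; traffic arrives via hostNetwork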

How to use it as a DaemonSet with hostNetwork? Bug in the chart?

Hi,

I'm trying to figure out how to use haproxy-ingress as a DaemonSet with hostNetwork so it listens on 80 and 443 on all my nodes.

I have the following config:

    controller:
      hostNetwork: true
      kind: DaemonSet
      daemonset:
        useHostPort: true
      config:
        use-proxy-protocol: "true"

And I get a LoadBalancer type service listening on high ports.

 k get svc 
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
haproxy-ingress   LoadBalancer   10.43.190.56   <pending>     80:31738/TCP,443:30543/TCP   13m

Broken support for some v1.25+ clusters

HAProxy ingress installed fine on a v1.25 cluster in our test system; however, on another v1.25 cluster it shows this error:

Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: resource mapping not found for name: "haproxy-ingress" namespace: "" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"

It seems like it is picking up v1beta1 by mistake instead of v1; the semverCompare logic may be misinterpreting version numbers.
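Rather than comparing version strings, the template could ask the cluster whether the API is actually served. A sketch using Helm's built-in capabilities check (this is standard Helm templating; how it fits the chart's existing conditionals is an assumption):

{{- if .Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy" }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
# ... PSP spec rendered only where the API still exists ...
{{- end }}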

haproxy metrics exposed through defaultBackend via ingress

I have an issue where the HAProxy / haproxy-ingress metrics are exposed through the ingress on the /metrics path. They shouldn't be exposed externally by the defaultBackend.

Values:

  defaultBackend:
    enabled: true
    resources:
      limits:
        cpu: 20m
        memory: 20Mi
      requests:
        cpu: 20m
        memory: 20Mi
    replicaCount: 2
  controller:
    haproxy:
      enabled: enabled
      image:
        repository: haproxy
        tag: "2.3.12"
        pullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 2
        memory: 512Mi
      requests:
        cpu: 200m
        memory: 128Mi
    autoscaling:
      enabled: true
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
    serviceMonitor:
      enabled: true
    ingressClass: haproxy
    podAnnotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "10254"
          topologyKey: "kubernetes.io/hostname"
    extraArgs:
      publish-service: "haproxy-ingress/haproxy-ingress"
    securityContext:
      sysctls:
      - name: net.ipv4.ip_local_port_range
        value: "1024 65535"
    livenessProbe:
      failureThreshold: 10
    service:
      type: LoadBalancer  
      labels:
        app.kubernetes.io/release: haproxy-ingress
    stats:
      enabled: true
    metrics:
      enabled: true
      embaded: false
    config:
      timeout-keep-alive: 620s
      config-global: |
        tune.bufsize 32768
        tune.maxrewrite 8192
      config-defaults: |
        option redispatch
      securityContext:
        privileged: true

Metrics are available at:

http://ingress-domain.com/metrics

When an Ingress object has the path /metrics defined, that path displays the application's metrics.

Support sidecar containers

Hi!

I would like to add a sidecar container to the ingress controller pod, but I'm currently unable to do so via this chart.
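Many ingress charts expose a raw pass-through list for this. A hypothetical equivalent here (the key name controller.extraContainers is a suggestion, not an existing value) could look like:

controller:
  extraContainers:
    - name: log-shipper            # example sidecar
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/haproxy.log"]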

Automagic support for arm64 deployments

The image for the default-backend defaults to k8s.gcr.io/defaultbackend-amd64 in the Helm chart, which naturally fails when deploying on arm64. It would be nice to have it point to a manifest list covering all supported architectures (at least amd64/arm64).
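Until a multi-arch manifest exists, a per-architecture override should work, since per-arch images are published alongside the amd64 one (a sketch; the defaultBackend.image.repository/tag keys and exact tag are assumptions to verify):

defaultBackend:
  image:
    repository: k8s.gcr.io/defaultbackend-arm64  # arm64 counterpart of the amd64 default
    tag: "1.5"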

Missing resource for RBAC ClusterRole

I was getting the following from the haproxy-ingress pod logs.

E0105 11:11:35.101979 6 reflector.go:127] pkg/mod/k8s.io/client-go@…/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:haproxy-ingress" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope

I had to edit the ClusterRole manually and change the following from

 - apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses

to

- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  - ingressclasses

Not sure if I was missing something in my values.yaml file, though nothing was obvious to me.

Syslog logging doesn't work in a dual-stack or IPv6-only cluster

In a dual-stack (or IPv6-only) cluster, localhost resolves to ::1. HAProxy is configured to log to localhost:514 and dutifully sends log messages to [::1]:514, but the whereisaaron/kube-syslog-sidecar listens only on the IPv4 0.0.0.0 address. Thus, the messages go nowhere:

10:01:39.195717 IP6 ::1.48755 > ::1.514: SYSLOG local0.info, length: 258
10:01:39.195739 IP6 ::1 > ::1: ICMP6, destination unreachable, unreachable port, ::1 udp port 514, length 314

A quick workaround for dual-stack clusters would be to change the localhost address to 127.0.0.1, but that wouldn't help IPv6-only clusters. Ideally the latter image would switch to using :: to listen, or have a configurable listen address.

Another potential workaround that I've just thought of is to use the "external" HAProxy, which can instead log to its own stdout. I'll give that a try and report back.
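For the dual-stack workaround, haproxy-ingress documents a syslog-endpoint ConfigMap key, so pinning the IPv4 loopback should be possible via values (a sketch; placing the key under controller.config follows the other config examples in these issues):

controller:
  config:
    syslog-endpoint: "127.0.0.1:514"  # force IPv4 loopback so the sidecar receives logs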

cp: can't create directory '/etc/haproxy/lua': Permission denied

Hello and thanks for your work on this ingress controller.

We tested with many different settings but we keep getting the following error:

  • installation

helm upgrade --install -n haproxy-ingress haproxy-ingress haproxy-ingress/haproxy-ingress --version 0.12.6 -f values.yaml

  • error
[root@xxxxxxxxxxxxx-01 haproxy]# kubectl -n haproxy-ingress logs haproxy-ingress-f7b8dc97f-znlq4 haproxy-ingress
cp: can't create directory '/etc/haproxy/lua': Permission denied
NAME                                               READY   STATUS             RESTARTS   AGE
haproxy-ingress-7f64d48f4b-ml24c                   0/2     CrashLoopBackOff   10         4m43s
  • values.yaml

values-lab.yaml.txt

Disable all services and use host based network

Hi jcmorais,

I started with haproxy-ingress using my own k8s YAML files, and have now switched to Helm. In this new context, I noticed that the chart creates some Services. I want to work 100% with host-based networking and no NAT, so I don't need the LoadBalancer Service that is being created: haproxy-ingress binds directly to the host's ports. I have already set "config:" (my ConfigMap) and hostNetwork to true, and it worked. But I don't need this Service and I don't know how to disable its creation. Can you help me?
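If the chart gained a toggle for this, it might look like the following (hypothetical key; the chart does not necessarily expose it today):

controller:
  service:
    enabled: false   # hypothetical: skip rendering the Service entirely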

Reload HAProxy configuration without pod restart

Hi!

I'm using Flux v2 to deploy the haproxy-ingress chart to my cluster. I've noticed that each time I change values destined for the ConfigMap, the ingress controller pods are recreated. AFAIK Flux just issues helm upgrade on values changes, so I don't think it's related to Flux itself, hence posting here.

I was wondering how I could apply HAProxy config changes without pod recreation? For example, I want to enable logs and have the new configuration reloaded by HAProxy itself.

Thanks.
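The usual cause of this behavior is a checksum annotation on the pod template, as in the common pattern from the Helm docs sketched below (this is the generic pattern, not necessarily this chart's exact template). When the rendered ConfigMap changes, the checksum changes and the pods roll. Since the controller watches its ConfigMap and reloads HAProxy itself, dropping such an annotation, or making it optional, would allow in-place reloads:

spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}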

support for topologyspreadconstraint?

Hi,

Could you add support for topologySpreadConstraints for the Deployment in the Helm chart? In my bare-metal cluster I often want to keep processing spread out among systems as much as possible -- and I definitely want to ensure that I don't end up with all of my ingress controllers on the same system in case it goes down somehow -- so I often use something like this in my Deployments:

      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            k8s-app: myappname

That tells the scheduler to try to keep the same number of pods on each node, +/- 1 (maxSkew). Something similar can be done with podAntiAffinity, but then if the controller autoscales to more instances than I have nodes, the extra pods can't be scheduled. I don't see any way to add this with Helm, though, so to use it I have to edit the Deployment after deploying the chart -- and that will become a problem when/if I want to upgrade.

I could make the change and submit a PR, but I have zero experience with golang or writing Helm charts, and I honestly have no idea how to do it. (A sketch of the usual pass-through pattern follows.)
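For reference, the usual approach is a raw pass-through rendered with toYaml in the Deployment template (the value key name is a suggestion, not something the chart currently has):

{{- with .Values.controller.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
{{- end }}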

log sidecar log external IP?

Hi Guys

I've enabled sidecar logging (controller.logs.enabled: true), but in the request logs I see the IP address of the haproxy-ingress pod (10.42.3.133 in my case) rather than the external client IP. I run the ingress on two different types of clusters:

  1. Behind AWS application load balancer (which sets X-Forwarded-For)
  2. On bare metal using a floating IP (Hetzner Cloud) with no load balancer

I tried setting this configuration, but nothing changed:

controller:
  config:
    forwardfor: ifmissing

Any ideas?

IPv6 healthzPort is wrong

In an IPv6-only cluster, HAProxy's healthz endpoint listens on port 10254 (the controller's default healthz port); where this value comes from is a bit unclear.

Also, the readiness probe configuration uses :10254 instead of [::]:10254.

PSP, ClusterRole fixes and other additions

Hi,

I want to propose some changes.

Fixes:

  • There were problems when using PodSecurityPolicy because of a wrong PSP apiGroup name. Set the proper name and rewrote it a bit.
  • Added ingressclasses to the resources allowed by the ClusterRole, because the operator was throwing errors about it.

Additions:

  • Separate PSP for the default backend.
  • Added an optional ServiceMonitor (see the sketch below).

Tested all of this on Kubernetes 1.18.10 with PSP enabled.

Can I make a PR on this?
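For reference, the optional ServiceMonitor can be as small as this (a sketch; the helper names, selector labels, and port name are assumptions about the chart's conventions):

{{- if .Values.controller.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "haproxy-ingress.fullname" . }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "haproxy-ingress.name" . }}
  endpoints:
    - port: metrics     # assumes the metrics Service names its port "metrics"
      interval: 30s
{{- end }}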
