haproxy-ingress / charts
HAProxy Ingress helm charts
License: Apache License 2.0
As discussed on Slack,
it would make sense to have a way to set the "publish-service" argument with a key in values.yaml; this option tells the controller where to fetch its public IP from so it can in turn populate the status.loadBalancer.ingress[].ip
field(s) of the Ingress resource(s) it manages.
This can currently be achieved manually by adding publish-service: <namespace>/<service> to controller.extraArgs in values.yaml. This is not the greatest workaround: the installation namespace is passed to Helm and the service name is rendered by the chart itself, so users must anticipate parts of the rendered output and hardcode them in the values file, rather than relying on an idempotent option.
Care should be taken not to enable/set publish-service by default, so as not to break existing setups. The rendering should also ensure that any value already set for publish-service via controller.extraArgs takes precedence over, and is thus preserved against, the dedicated setting this issue calls for.
The following is a suggested syntax for the proposed setting, inspired by how ingress-nginx handles the same issue.
# values.yaml
controller:
  # Allows customization of the source of the IP address or FQDN to report
  # in the ingress status field. If disabled (default), the status field remains
  # unpopulated. If enabled, it reads the information provided by the service,
  # unless pathOverride is specified.
  publishService:
    # Enable 'publishService' or not
    enabled: false
    # Allows overriding of the publish service to bind to
    # Must be <namespace>/<service_name>
    pathOverride: ""
The PodSecurityPolicy API is deprecated in 1.21, and will no longer be served starting in 1.25.
For some cloud vendors, like Azure AKS, this timeline is even stricter.
Most of these fields can be set directly in the deployment manifest, either as the pod-level securityContext or as a per-container securityContext. Maybe they can be moved there?
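To illustrate the migration path, here is a hedged sketch of what the moved settings might look like in values form. The key names (podSecurityContext, securityContext) are illustrative of the Kubernetes fields, not necessarily the chart's actual API:

```yaml
controller:
  # Pod-level settings formerly enforced by the PodSecurityPolicy
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001
  # Per-container hardening
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop: ["ALL"]
      add: ["NET_BIND_SERVICE"]  # needed to bind :80/:443 as non-root
```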
Hi, it seems the default chart parameters don't work for a non-root user:
cannot bind socket (Permission denied) for [0.0.0.0:80]
I tried to override the HTTP port, but in the generated haproxy.cfg I still got port :80:
# # # # # # # # # # # # # # # # # # #
# #
#   HTTP frontend
#
frontend _front_http
    mode http
    bind :80
Config:
controller:
  containerPorts:
    http: 8080
    https: 4443
  service:
    httpsPorts: []
    httpPorts:
      - port: 8080
        targetPort: 44331
      - port: 4443
        targetPort: http
  config:
    config-sections: |-
      frontend _front_http_redirect
          mode http
          bind :44331
      # ...
I was just bitten by a configuration issue that occurred when I upgraded from the incubator version of this chart. In the incubator version of the chart, the service ports were configured like this:
service:
  httpPorts:
    - port: 80
  httpsPorts:
    - port: 443
    - port: 8443
In the current chart, you are required to also specify the targetPort, like this:
service:
  httpPorts:
    - port: 80
      targetPort: http
  httpsPorts:
    - port: 443
      targetPort: https
    - port: 8443
      targetPort: https
Unless I'm mistaken, haproxy is only listening on two ports, http (80) and https (443). Since I didn't notice this difference between the charts and still had my original configuration, kubernetes silently assumed that targetPort is the same as port. This of course broke things, since haproxy is not listening on port 8443.
The incubator chart assumed that all ports under the httpPorts key should have targetPort: http, and all ports under the httpsPorts key should have targetPort: https. Is there any reason why the targetPort is required now? It seems to be redundant.
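For reference, the silent defaulting above is standard Kubernetes Service behavior, not chart-specific: when targetPort is omitted, it defaults to the value of port. A minimal illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - port: 8443
      # targetPort omitted: Kubernetes defaults it to 8443,
      # a port haproxy is not actually listening on
```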
I've installed haproxy-ingress on k3s v1.19.3+k3s3.
A fresh cluster.
helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
helm repo update
helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
--create-namespace --namespace=ingress-controller \
--set controller.hostNetwork=true \
--set controller.kind=DaemonSet \
--set controller.image.tag=v0.11-beta.3
I got an error that I managed to fix by following the guide here: kubernetes/ingress-nginx#4296 (comment) .
E1116 19:43:38.325095 7 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:haproxy-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
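The linked fix amounts to granting the controller's ServiceAccount list/watch access to ingresses in the networking.k8s.io API group. A sketch of the ClusterRole rule it adds (adapt names to your release; verbs are the usual read set, not copied from the linked comment):

```yaml
- apiGroups:
    - networking.k8s.io
  resources:
    - ingresses
  verbs:
    - get
    - list
    - watch
```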
helm install haproxy-ingress haproxy-ingress-0.13.3.tgz --create-namespace --namespace ingress-controller -f haproxy-ingress-values.yaml
controller:
  daemonset:
    useHostPort: true
    hostPorts:
      http: 8090
      https: 4433
  #service:
  #  httpPorts:
  #    - port: 8090
  #      targetPort: http
  #  httpsPorts:
  #    - port: 4433
  #      targetPort: https
  haproxy:
    image:
      repository: harbor.enmotech.com/library/haproxy-ingress
      tag: v0.13.3
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  kind: DaemonSet
  nodeSelector:
    haproxy-ingress-controller: "true"
Error: DaemonSet.apps "haproxy-ingress" is invalid: [spec.template.spec.containers[0].ports[0].containerPort: Invalid value: 80: must match `hostPort` when `hostNetwork` is true, spec.template.spec.containers[0].ports[1].containerPort: Invalid value: 443: must match `hostPort` when `hostNetwork` is true]
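As the error says, with hostNetwork: true Kubernetes requires containerPort to equal hostPort, so the custom hostPorts above need matching containerPorts. A sketch, using the controller.containerPorts keys shown elsewhere in this document:

```yaml
controller:
  containerPorts:
    http: 8090
    https: 4433
  daemonset:
    useHostPort: true
    hostPorts:
      http: 8090
      https: 4433
```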
PodDisruptionBudget prevents the cluster from upgrading when controller.replicaCount = 1. I suggest changing the controller.replicaCount default value to 2.
Hi,
it would be nice if the chart allowed creating an IngressClass resource, something like:
controller:
  ingressClass: haproxy      # existing .Values
  ingressClassResource:      # new .Values
    enabled: false
    controller: haproxy-ingress.github.io/controller
and
{{- if .Values.controller.ingressClassResource.enabled }}
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: {{ .Values.controller.ingressClass }}
spec:
  controller: {{ .Values.controller.ingressClassResource.controller }}
{{- end }}
and pass --controller-class {{ .Values.controller.ingressClassResource.controller }} to the pod template. Refs:
https://haproxy-ingress.github.io/docs/configuration/keys/#class-matter
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L98
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-ingressclass.yaml
What do you think?
I am using this ingress controller in a manner similar to #23. I'm using controller.hostNetwork=true and thus do not require the Service resource that this chart creates. My values config is as follows:
controller:
  hostNetwork: true
  extraArgs:
    "watch-ingress-without-class":
  kind: DaemonSet
  stats:
    enabled: true
  daemonset:
    useHostPort: true
  nodeSelector:
    role: ingress-controller
Deploying with Helm using --wait, which I need so the command blocks until the controller is ready, causes Helm to wait forever on a condition that will never happen:
$ helm upgrade --install haproxy-ingress haproxy-ingress/haproxy-ingress --create-namespace --namespace ingress-controller --version 0.14.2 -f values.yaml --debug --wait
history.go:56: [debug] getting history for release haproxy-ingress
upgrade.go:142: [debug] preparing upgrade for haproxy-ingress
upgrade.go:150: [debug] performing update for haproxy-ingress
upgrade.go:322: [debug] creating upgraded release for haproxy-ingress
client.go:396: [debug] checking 11 resources for changes
client.go:684: [debug] Patch PodDisruptionBudget "haproxy-ingress" in namespace ingress-controller
client.go:675: [debug] Looks like there are no changes for ServiceAccount "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for ConfigMap "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for ConfigMap "haproxy-ingress-tcp"
client.go:675: [debug] Looks like there are no changes for ClusterRole "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for ClusterRoleBinding "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for Role "haproxy-ingress"
client.go:675: [debug] Looks like there are no changes for RoleBinding "haproxy-ingress"
client.go:684: [debug] Patch Service "haproxy-ingress" in namespace ingress-controller
client.go:675: [debug] Looks like there are no changes for Service "haproxy-ingress-stats"
client.go:684: [debug] Patch DaemonSet "haproxy-ingress" in namespace ingress-controller
upgrade.go:394: [debug] waiting for release haproxy-ingress resources (created: 0 updated: 11 deleted: 0)
wait.go:48: [debug] beginning wait for 11 resources with timeout of 5m0s
ready.go:258: [debug] Service does not have load balancer ingress IP address: ingress-controller/haproxy-ingress
... (message is repeated many times) ...
ready.go:258: [debug] Service does not have load balancer ingress IP address: ingress-controller/haproxy-ingress
upgrade.go:434: [debug] warning: Upgrade "haproxy-ingress" failed: timed out waiting for the condition
Error: UPGRADE FAILED: timed out waiting for the condition
helm.go:84: [debug] timed out waiting for the condition
UPGRADE FAILED
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:202
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:1044
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:968
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_arm64.s:1172
This goes on for some time until Helm gives up and returns an error, despite everything working correctly.
Not using --wait is not an option here, as I need to wait until the daemonset is actually up and healthy.
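Since --wait blocks on the LoadBalancer Service acquiring an ingress IP, one possible workaround (assuming the chart honors controller.service.type, as the values shown elsewhere in this document suggest) is to switch the unused Service to ClusterIP, leaving nothing for Helm to wait on:

```yaml
controller:
  hostNetwork: true
  kind: DaemonSet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP  # no LB ingress IP for --wait to block on
```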
Hi,
I'm trying to figure out how to use haproxy-ingress as a DaemonSet with hostNetwork so it listens on 80 and 443 on all my nodes.
I have the following config:
controller:
  hostNetwork: true
  kind: DaemonSet
  daemonset:
    useHostPort: true
  config:
    use-proxy-protocol: "true"
And I get a LoadBalancer type service listening on high ports.
k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress LoadBalancer 10.43.190.56 <pending> 80:31738/TCP,443:30543/TCP 13m
HAProxy ingress installed fine on v1.25 cluster in our test system, however in another v1.25, it is showing this error:
Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: resource mapping not found for name: "haproxy-ingress" namespace: "" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
It seems like it is picking up v1beta1 by mistake instead of v1; semverCompare could be buggy when interpreting version numbers.
I'd like to use a different syslog container (which exposes a different port than the standard 514). Currently the port is hardcoded.
I have an issue where the metrics of haproxy / haproxy-ingress are exposed through the ingress on path /metrics. They shouldn't be exposed externally by the defaultBackend.
Values:
defaultBackend:
  enabled: true
  resources:
    limits:
      cpu: 20m
      memory: 20Mi
    requests:
      cpu: 20m
      memory: 20Mi
  replicaCount: 2
controller:
  haproxy:
    enabled: enabled
    image:
      repository: haproxy
      tag: "2.3.12"
      pullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 2
        memory: 512Mi
      requests:
        cpu: 200m
        memory: 128Mi
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
  serviceMonitor:
    enabled: true
  ingressClass: haproxy
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
  topologyKey: "kubernetes.io/hostname"
  extraArgs:
    publish-service: "haproxy-ingress/haproxy-ingress"
  securityContext:
    sysctls:
      - name: net.ipv4.ip_local_port_range
        value: "1024 65535"
  livenessProbe:
    failureThreshold: 10
  service:
    type: LoadBalancer
    labels:
      app.kubernetes.io/release: haproxy-ingress
  stats:
    enabled: true
  metrics:
    enabled: true
    embaded: false
  config:
    timeout-keep-alive: 620s
    config-global: |
      tune.bufsize 32768
      tune.maxrewrite 8192
    config-defaults: |
      option redispatch
securityContext:
  privileged: true
Metrics are available at http://ingress-domain.com/metrics. When an ingress object has the path /metrics defined, it displays the app's metrics.
Hi!
I would like to add a sidecar container to the ingress but I'm currently unable to do so via this chart.
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+;
The chart should at some point use the new API version, or use a capabilities check to select the API.
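A common pattern for the capabilities check is to select the API version at render time in the template. A sketch, using Helm 3's .Capabilities.APIVersions:

```yaml
{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1/ClusterRoleBinding" }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: rbac.authorization.k8s.io/v1beta1
{{- end }}
kind: ClusterRoleBinding
```

Note that with helm template (offline rendering) the API list is not populated from a live cluster, so the fallback branch may render unless --api-versions is passed.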
The image for the default-backend defaults to k8s.gcr.io/defaultbackend-amd64 in the helm chart, which naturally fails when deploying on arm64. It would be nice to have it point to a manifest list with all supported architectures (at least amd64/arm64).
I was getting the following from the haproxy-ingress pod logs.
E0105 11:11:35.101979 6 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-controller:haproxy-ingress" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
I had to edit the ClusterRole manually and change the following from
- apiGroups:
    - extensions
    - networking.k8s.io
  resources:
    - ingresses
to
- apiGroups:
    - extensions
    - networking.k8s.io
  resources:
    - ingresses
    - ingressclasses
Not sure if I was missing something in my values.yaml file though nothing was obvious to me.
In a dual-stack (or IPv6-only) cluster, localhost resolves to ::1. HAProxy is configured to log to localhost:514 and dutifully sends log messages to [::1]:514, but the whereisaaron/kube-syslog-sidecar listens only on the IPv4 0.0.0.0 address. Thus, the messages go nowhere:
10:01:39.195717 IP6 ::1.48755 > ::1.514: SYSLOG local0.info, length: 258
10:01:39.195739 IP6 ::1 > ::1: ICMP6, destination unreachable, unreachable port, ::1 udp port 514, length 314
A quick workaround for dual-stack clusters would be to change the localhost address to 127.0.0.1, but that wouldn't help IPv6-only clusters. Ideally the latter image would switch to listening on ::, or have a configurable listen address.
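The 127.0.0.1 workaround can likely be expressed through the chart's config section rather than patching anything, assuming haproxy-ingress's syslog-endpoint configuration key applies here:

```yaml
controller:
  logs:
    enabled: true
  config:
    # Force the IPv4 loopback instead of letting "localhost" resolve to ::1
    syslog-endpoint: "127.0.0.1:514"
```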
Another potential workaround that I've just thought of is to use the "external" HAProxy, which can instead log to its own stdout. I'll give that a try and report back.
Hello guys,
Just curious: is there a reason to maintain both of these charts?
https://github.com/haproxytech/kubernetes-ingress
Cheers!
Hello and thanks for your work on this ingress controller.
We tested with many different settings but we keep getting the following error:
helm upgrade --install -n haproxy-ingress haproxy-ingress haproxy-ingress/haproxy-ingress --version 0.12.6 -f values.yaml
[root@xxxxxxxxxxxxx-01 haproxy]# kubectl -n haproxy-ingress logs haproxy-ingress-f7b8dc97f-znlq4 haproxy-ingress
cp: can't create directory '/etc/haproxy/lua': Permission denied
NAME READY STATUS RESTARTS AGE
haproxy-ingress-7f64d48f4b-ml24c 0/2 CrashLoopBackOff 10 4m43s
Hi jcmorais,
I started with haproxy ingress using my own k8s yml files, and now I've started using Helm. In this new context, I realized that some services are created. I want to work 100% host-network based, with no NAT, so I don't need the LoadBalancer service that is being created: the haproxy ingress binds directly to the host's ports.
I have already set "config:" (my config map) and hostNetwork to true, and it worked. But I don't need this service, and I don't know how to disable its creation.
Can you help me?
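I'm not aware of a documented toggle for this in the chart as described here. If one were added, it would presumably look like the following (controller.service.enabled is a hypothetical key, mirroring ingress-nginx's approach):

```yaml
controller:
  hostNetwork: true
  kind: DaemonSet
  daemonset:
    useHostPort: true
  service:
    enabled: false  # hypothetical: skip rendering the Service entirely
```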
Hi!
I'm using Fluxv2 to deploy the haproxy-ingress chart to my cluster. I've noticed that each time I change values that are destined for the ConfigMap, the ingress controller pods are recreated. AFAIK Flux just issues helm upgrade on values change, so I think it's not related to Flux itself, hence posting here.
I was wondering how could I apply HAProxy config changes without pod recreation? For example, I want to enable logs and have new configuration reloaded by HAProxy itself.
Thanks.
Hi,
Could you add support for topologySpreadConstraints for the Deployment in the helm chart? In my bare metal cluster I often want to keep processing as spread out amongst systems as possible, and I definitely want to ensure that I don't end up with all of my ingress controllers on the same system in case it goes out somehow, so I often use something like this in my Deployment:
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        k8s-app: myappname
That tells it to try to keep the same number of pods on each node, +/- 1 (maxSkew). Something similar can be done with podAffinity by setting an antiAffinity, but then if it autoscales to more instances than I have nodes the processes can't be started up. I don't see any way to add this with helm, though, so to use it I have to edit the deployment after deploying the chart -- and that'll become a problem when/if I want to upgrade.
I could make the change and submit a PR, but I have zero experience with golang or writing helm charts and I honestly have no idea how to do it.
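Chart-side support would presumably be a simple passthrough value rendered into the pod spec. A sketch of the template change, where controller.topologySpreadConstraints is the proposed (not existing) key and the indentation depth is an assumption about the chart's deployment template:

```yaml
      {{- with .Values.controller.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```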
Hi Guys
I've enabled sidecar logging with controller.logs.enabled: true, but in the request logs I see the IP address of the haproxy ingress pod (10.42.3.133 in my case) and not the external IP. I run the ingress on two different types of clusters:
I tried setting this configuration, but nothing changed:
controller:
  config:
    forwardfor: ifmissing
Any ideas?
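One thing to check: forwardfor only controls the X-Forwarded-For header inside HAProxy; when traffic reaches the pod through a Service, the client IP is typically lost to SNAT before HAProxy ever sees it. On clusters where the controller is published via a LoadBalancer or NodePort Service, setting externalTrafficPolicy to Local preserves the source address. A hedged sketch, assuming the chart passes this field through to the Service spec:

```yaml
controller:
  service:
    type: LoadBalancer
    externalTrafficPolicy: Local  # preserve client source IP (no SNAT)
```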
On IPv6-only, HAProxy's /healthz endpoint is listening on port 10254. Where this port comes from is a bit unclear. Also, the readiness probe configuration uses :10254 instead of [::]:10254.
Today, controller-hpa.yaml requires both CPU and memory targets to be specified. These could be made optional, rendered only when set.
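Making them optional would mean guarding each metric in the template. A sketch of how controller-hpa.yaml could do this (the value key names are assumptions modeled on the autoscaling values shown elsewhere in this document):

```yaml
metrics:
  {{- with .Values.controller.autoscaling.targetCPUUtilizationPercentage }}
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: {{ . }}
  {{- end }}
  {{- with .Values.controller.autoscaling.targetMemoryUtilizationPercentage }}
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: {{ . }}
  {{- end }}
```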
Hi,
I want to propose some changes.
Fixes:
Additions:
Tested all this on 1.18.10 kubernetes version with PSP enabled.
Can I make a PR on this?
Hi @jcmoraisjr ,
How can I change the ports here for lb-tcp-80 and lb-tcp-443?
The reason I want to change them is that I have another nginx running in my kube-system namespace with the same ports, and because of this I can't deploy it on the default ports.
See #31