
nghttpx-ingress-lb's Introduction

nghttpx Ingress Controller

This is an Ingress Controller which uses nghttpx as an L7 load balancer.

The nghttpx ingress controller was initially created based on the nginx ingress controller.

Docker images

The official Docker images since v0.67.0 are available at GitHub Container Registry. The images for older releases can be found at Docker Hub.

Requirements

If the --internal-default-backend flag is set to false, a default backend service is necessary:

Any backend web server will suffice, as long as it returns some kind of error code for any request.
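
For illustration, a minimal default backend could look like the following sketch. The image and names here are assumptions, not part of this project; any server returning an error status works:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: registry.k8s.io/defaultbackend-amd64:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
spec:
  selector:
    k8s-app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080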

Deploy the Ingress controller

Load balancers are created via a Deployment or DaemonSet:

$ kubectl create -f examples/default/service-account.yaml
$ kubectl create -f examples/default/rc-default.yaml

IngressClass

This controller supports the IngressClass resource. The default IngressClass controller name is "zlab.co.jp/nghttpx". It honors the ingressclass.kubernetes.io/is-default-class annotation.

This controller no longer supports the deprecated kubernetes.io/ingress.class annotation.

The default behavior around IngressClass does not follow the standard rule, for historical reasons. The nghttpx ingress controller processes Ingress resources which do not have .spec.ingressClassName specified, and it interprets the default IngressClass in its own way: if an Ingress resource has no .spec.ingressClassName, but the default IngressClass does not point to the nghttpx ingress controller, the resource is not processed. The standard rule is that an Ingress resource without .spec.ingressClassName should be ignored, and only resources explicitly designated to the controller via IngressClass should be processed. The --require-ingress-class flag enforces this rule. Obviously, it completely changes which resources this controller processes: you need to set .spec.ingressClassName on all Ingress resources in your cluster, and create the default IngressClass resource to ensure that Ingress.spec.ingressClassName is defaulted properly.
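
For example, a default IngressClass designating this controller could be written as the following sketch (the resource name nghttpx is an arbitrary choice; the controller string is the default given above):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nghttpx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: zlab.co.jp/nghttpx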

networking.k8s.io/v1 Ingress

This controller only recognizes the Service backend. It ignores pathType and behaves as if ImplementationSpecific were specified. Hosts in .spec.tls are also ignored.

HTTP

First we need to deploy some application to publish. To keep this simple, we will use the echoheaders app, which just returns information about the HTTP request as output:

kubectl create deployment echoheaders --image=registry.k8s.io/echoserver:1.10

Now we expose the same application through two different Services (so we can create different Ingress rules):

kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y

Next we create a couple of Ingress rules:

kubectl create -f examples/ingress.yaml
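
For reference, rules along these lines would produce the echomap Ingress; this is only a sketch, and the actual examples/ingress.yaml in the repository may differ:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-x
            port:
              number: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-y
            port:
              number: 80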

Check that the Ingress rules are defined:

$ kubectl get ing
NAME      HOSTS                     ADDRESS         PORTS   AGE
echomap   foo.bar.com,bar.baz.com   192.168.0.1     80      1m11s

Check that nghttpx is running with the defined Ingress rules:

$ LBIP=$(kubectl get node `kubectl get po -l name=nghttpx-ingress-lb --namespace=kube-system --template '{{range .items}}{{.spec.nodeName}}{{end}}'` --template '{{range $i, $n := .status.addresses}}{{if eq $n.type "ExternalIP"}}{{$n.address}}{{end}}{{end}}')
$ curl $LBIP/foo -H 'Host: foo.bar.com'

The above command might not work properly. In that case, check the Ingress resource's .Status.LoadBalancer.Ingress field. The nghttpx Ingress controller periodically (every 30 to 60 seconds) writes its IP address there.

TLS

You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS:

apiVersion: v1
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: kubernetes.io/tls

You can create this kind of secret using the kubectl create secret tls subcommand.
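
For example (the file names here are placeholders for your own certificate and key):

$ kubectl create secret tls testsecret --cert=tls.crt --key=tls.key --namespace=default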

Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the load balancer using TLS:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  defaultBackend:
    service:
      name: s1
      port:
        number: 80

If TLS is configured for a service and it is accessed via cleartext HTTP, those requests are redirected to the HTTPS URI. If the --default-tls-secret flag is used, all cleartext HTTP requests are redirected to the HTTPS URI. This behaviour is configurable via the path-config annotation.

TLS OCSP stapling

By default, nghttpx performs an OCSP request to the OCSP responder for each certificate. This requires that the controller Pod is allowed to make outbound connections to that server. If there are several Ingress controllers, this method is not efficient, since each controller performs its own OCSP requests.

With the --fetch-ocsp-resp-from-secret flag, the controller fetches OCSP responses from the TLS Secrets described above. Although the OCSP responses have to be stored in these Secrets in a separate step, and updated regularly, they are shared among all controllers, which makes this efficient for large deployments.

Note that the controller currently has no facility to store or update OCSP responses in a TLS Secret; it only fetches them from there.

The key for the OCSP response in a TLS Secret is tls.ocsp-resp by default. It can be changed with the --ocsp-resp-key flag. The OCSP response stored in the TLS Secret must be DER encoded.

Sharing TLS ticket keys

By default, each nghttpx instance encrypts TLS tickets with its own key. This means that if several nghttpx ingress controller instances are running, TLS session resumption might not work when a new connection goes to a different instance. With the --share-tls-ticket-key flag, the controller generates a TLS ticket key in the Secret specified by --nghttpx-secret, which is shared by all controllers. This ensures that all nghttpx instances use the same encryption key, enabling stable TLS session resumption.
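
The relevant controller arguments might look like the following sketch (the args placement and entrypoint are assumptions; adapt them to your own manifest):

apiVersion: apps/v1
kind: DaemonSet
...
spec:
  template:
    spec:
      containers:
      - image: zlabjp/nghttpx-ingress-controller:latest
        args:
        - /nghttpx-ingress-controller
        - --share-tls-ticket-key
        - --nghttpx-secret=nghttpx-km
        ...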

HTTP/3 (Experimental)

In order to enable the experimental HTTP/3 feature, run the controller with the --http3 flag. The controller will create and maintain a Secret, specified by the --nghttpx-secret flag, which contains QUIC keying materials, in the same namespace as the controller Pod. The controller maintains the Secret as a whole, and it should not be altered by an external tool or user. The keying materials are rotated, and a new key generated, at the interval specified by the --quic-secret-period flag. nghttpx listens on the UDP port specified by the --nghttpx-https-port flag.

Warning

As of v0.66.0, this Secret has been integrated into the one specified by the --nghttpx-secret flag, and the --quic-keying-materials-secret flag has been removed. The default value has also changed: previously it was nghttpx-quic-km, but it is now nghttpx-km. To migrate from the previous release, copy the Secret nghttpx-quic-km to nghttpx-km before upgrading nghttpx-ingress-controller to v0.66.0, then upgrade. The keying materials are now rotated, and a new key generated, every 4 hours by default. The new key is first placed at the end of the list; in the next rotation, it is moved to the first position and is used for encryption.

HTTP/3 requires the extra capabilities to load eBPF program. Add the following capabilities to the nghttpx-ingress-controller container:

apiVersion: apps/v1
kind: DaemonSet
...
spec:
  template:
    spec:
      containers:
      - image: zlabjp/nghttpx-ingress-controller:latest
        ...
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            - SYS_RESOURCE
        ...

PROXY protocol support - preserving ClientIP addresses

If you are running nghttpx-ingress-lb behind a LoadBalancer, you might want to preserve the client IP addresses accessing your Kubernetes cluster.

As an example, we use a deployment on Kubernetes on AWS.

In order to use all nghttpx features, especially upstream HTTP/2 forwarding and TLS SNI, the only way to deploy nghttpx-ingress-lb is to use an AWS ELB (Classic Load Balancer) in TCP mode and let nghttpx do the TLS termination, because:

  • AWS ELB does not handle HTTP/2 at all.
  • AWS ALB does not handle upstream HTTP/2.

Therefore, using an X-Forwarded-For header does not work, and you have to rely on the PROXY protocol feature.

Enable PROXY-protocol on external LoadBalancer

You can enable the PROXY protocol manually on the external AWS ELB which forwards traffic to your nghttpx-ingress-lb (see http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html), or you can let Kubernetes handle this for you, like so:

# Kubernetes LoadBalancer ELB configuration, which forwards traffic
# on ports 80 and 443 to an nghttpx-ingress-lb controller.
# Kubernetes enabled the PROXY protocol on the AWS ELB.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  name: nghttpx-ingress
  namespace: kube-system
spec:
  selector:
    k8s-app: nghttpx-ingress-lb
  type: LoadBalancer
  ports:
  - name: http-ingress
    port: 80
  - name: tls-ingress
    port: 443

Enable PROXY-protocol on nghttpx-ingress-lb

Once the external LoadBalancer has the PROXY protocol enabled, you have to enable the PROXY protocol on nghttpx-ingress-lb as well, by specifying an additional launch parameter, --proxy-proto=true.

See the examples/proxyproto subdirectory for a working deployment or use:

# Deploy nghttpx-ingress-lb behind LoadBalancer with PROXY protocol and RBAC enabled.
kubectl apply -f examples/proxyproto/

Default backend

The default backend is used when the request does not match any given rules. The default backend must be set via a command-line flag of the nghttpx Ingress controller if the --internal-default-backend flag is set to false (see below). It can be overridden by specifying Ingress.Spec.Backend. If multiple Ingress resources have .Spec.Backend, one of them is used, but which one is undefined. The default backend never requires TLS.

If --internal-default-backend is set to true, which is the default, the controller configures nghttpx itself to act as the default backend. In this case, no default backend service is necessary.

Services without selectors

nghttpx supports Services without selectors.

Logs

The access and error logs of nghttpx are written to stdout and stderr, respectively. They can be configured with the accesslog-file and errorlog-file options. No log file rotation is configured by default.

Ingress status

By default, the nghttpx Ingress controller periodically writes the addresses of its Pods into the status of all Ingress resources. If multiple nghttpx Ingress controllers are running, the controller first gets all Pods with the same labels as its own, and writes all of their addresses into the Ingress status.

If a Service is specified via the --publish-service flag, the external IPs and load balancer addresses of that Service are written into the Ingress resource instead.

Additional configurations

nghttpx supports additional configurations via Ingress Annotations.

ingress.zlab.co.jp/backend-config annotation

nghttpx-ingress-controller understands the ingress.zlab.co.jp/backend-config key in an Ingress's .metadata.annotations to configure a particular backend. Its value is a serialized YAML or JSON dictionary. The configuration is done per service port (.spec.rules[*].http.paths[*].backend.service.port). The keys under the root dictionary are service names (.spec.rules[*].http.paths[*].backend.service.name). Each value is itself a dictionary whose keys are service ports (.spec.rules[*].http.paths[*].backend.service.port.name, or .spec.rules[*].http.paths[*].backend.service.port.number if .port.name is not specified). The innermost value is a dictionary which can contain the following key/value pairs:

  • proto: Specify the application protocol used for this service port. The value is of type string, and it should be either h2, or http/1.1. Use h2 to use HTTP/2 for backend connection. This is optional, and defaults to "http/1.1".

  • tls: Specify whether or not TLS is used for this service port. This is optional, and defaults to false.

  • sni: Specify SNI hostname for TLS connection. This is used to validate server certificate.

  • dns: Specify whether backend host name should be resolved dynamically.

  • weight: Specify the weight of backend selection. The nghttpx ingress controller can aggregate multiple services under a single host and path pattern. The weight specifies how frequently this service is selected compared to the other services aggregated under the same pattern: a service with weight 3 is used 3 times more frequently than one with weight 1. Using this setting, one can send more or less traffic to a particular service. This is useful, for example, if one wants to send 80% of traffic to Service A and the remaining 20% to Service B. The value must be in [1, 256], inclusive.
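
As a sketch of such an 80/20 split, the following aggregates two hypothetical services, svc-a and svc-b (names and ports are assumptions), under the same host and path, with weights 4 and 1:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: weighted
  annotations:
    ingress.zlab.co.jp/backend-config: |
      svc-a:
        80:
          weight: 4
      svc-b:
        80:
          weight: 1
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: svc-a
            port:
              number: 80
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: svc-b
            port:
              number: 80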

The following example specifies HTTP/2 as backend connection for service "greeter", and service port "50051":

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: greeter
  annotations:
    ingress.zlab.co.jp/backend-config: '{"greeter": {"50051": {"proto": "h2"}}}'
spec:
  rules:
  - http:
      paths:
      - path: /helloworld.Greeter/
        pathType: ImplementationSpecific
        backend:
          service:
            name: greeter
            port:
              number: 50051

Or in YAML:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: greeter
  annotations:
    ingress.zlab.co.jp/backend-config: |
      greeter:
        50051:
          proto: h2
spec:
  rules:
  - http:
      paths:
      - path: /helloworld.Greeter/
        pathType: ImplementationSpecific
        backend:
          service:
            name: greeter
            port:
              number: 50051

The controller also understands the ingress.zlab.co.jp/default-backend-config annotation. It supplies default values for missing fields in backend-config, on a per-field basis, within the same Ingress resource. It contains a single dictionary with the same key/value pairs. It is useful when the same set of backend-config values is required for lots of services.

For example, if a pair of service and port has a backend-config like so:

{"sni": "www.example.com"}

And the default-backend-config looks like so:

{"proto": "h2", "sni": "example.com"}

The final backend-config becomes like so:

{"proto": "h2", "sni": "www.example.com"}

A value specified explicitly in an individual backend-config always takes precedence.
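
Putting the example above together, such a pair of annotations could be written like this (svc-a and port 80 are hypothetical names):

metadata:
  annotations:
    ingress.zlab.co.jp/default-backend-config: '{"proto": "h2", "sni": "example.com"}'
    ingress.zlab.co.jp/backend-config: '{"svc-a": {"80": {"sni": "www.example.com"}}}'

With this, svc-a port 80 ends up with {"proto": "h2", "sni": "www.example.com"}.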

Note that Ingress allows regular expressions in .spec.rules[*].http.paths[*].path, but nghttpx does not support them.

ingress.zlab.co.jp/path-config annotation

nghttpx-ingress-controller understands the ingress.zlab.co.jp/path-config key in an Ingress's .metadata.annotations to allow additional configuration per host and path pattern. Its value is a serialized YAML or JSON dictionary. The configuration is done per host and path pattern. The keys under the root dictionary are concatenations of host and path; for example, if the host is "www.example.com" and the path is "/foo", the key is "www.example.com/foo". For convenience, if "www.example.com" is specified as a key, it is normalized to "www.example.com/". Each value is a dictionary which can contain the following key/value pairs:

  • mruby: Specify an mruby script which is invoked when the given pattern is selected. For mruby scripting, see the nghttpx manual page.

  • affinity: Specify session affinity method. Specifying ip enables client IP based session affinity. Specifying cookie enables cookie-based session affinity. Specifying none or omitting this key disables session affinity.

    If cookie is specified, additional configuration is required. See affinityCookieName, affinityCookiePath, and affinityCookieSecure fields.

  • affinityCookieName: Specify the name of the cookie to use. This field is required if cookie is set in the affinity field.

  • affinityCookiePath: Specify the path attribute of the cookie. This is optional; if not set, the cookie path is not set.

  • affinityCookieSecure: Specify whether Secure attribute of cookie is added, or not. Omitting this field, specifying empty string, or specifying "auto" sets Secure attribute if client connection is TLS encrypted. If "yes" is specified, Secure attribute is always added. If "no" is specified, Secure attribute is always omitted.

  • affinityCookieStickiness: Specify the stickiness of the session cookie. If loose is given, which is the default, the affinity might break if an existing backend server is removed or a new backend server is added. If strict is given and the designated backend server is removed, the request is forwarded to a new server as if it were a new request. However, adding a new backend server does not cause breakage.

  • readTimeout: Specify the read timeout. If specified, it overrides the global backend read timeout set by --backend-read-timeout. You can use the string representation of time used in Go (e.g., 5s, 5m).

  • writeTimeout: Specify the write timeout. If specified, it overrides the global backend write timeout set by --backend-write-timeout. You can use the string representation of time used in Go (e.g., 5s, 5m).

  • redirectIfNotTLS: Specify whether cleartext HTTP request is redirected to HTTPS if TLS is configured. This defaults to true.

  • doNotForward: Do not forward a request to a backend server; it is assumed that the response is generated by an mruby script. .spec.rules[*].http.paths[*].backend is a required field, so a placeholder service must be given (it is never accessed). Note that .spec.rules[*].http.paths[*].backend.resource does not work as a placeholder.
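
For instance, cookie-based session affinity could be configured like this sketch (the Ingress name, service, and cookie name are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: session-app
  annotations:
    ingress.zlab.co.jp/path-config: |
      www.example.com/:
        affinity: cookie
        affinityCookieName: nghttpxlb
        affinityCookieSecure: auto
        affinityCookieStickiness: strict
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: session-app
            port:
              number: 80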

Here is an example which rewrites the request path from "/pub/foo" to "/foo" using mruby:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: greeter
  annotations:
    ingress.zlab.co.jp/path-config: |
      www.example.com/pub/foo:
        readTimeout: 5m
        mruby: |
          class App
            def on_req(env)
              env.req.path = "/foo"
            end
          end
          App.new
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /pub/foo
        pathType: ImplementationSpecific
        backend:
          service:
            name: bar
            port:
              number: 80

The controller also understands the ingress.zlab.co.jp/default-path-config annotation. It supplies default values for missing fields in path-config, on a per-field basis, within the same Ingress resource. It contains a single dictionary with the same key/value pairs. It is useful when the same configuration is shared by lots of patterns.

A value specified explicitly in an individual path-config always takes precedence.

Custom nghttpx configuration

Using a ConfigMap, it is possible to customize the nghttpx defaults. The content of the configuration is specified under the nghttpx-conf key. All nghttpx options can be used to customize its behavior. See the FILES section of the nghttpx(1) manual page for the configuration file syntax.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nghttpx-ingress-lb
data:
  nghttpx-conf: |
    log-level=INFO
    accesslog-file=/dev/null

nghttpx historically strips an incoming X-Forwarded-Proto header field and adds its own. To change this behaviour, use the combination of no-add-x-forwarded-proto and no-strip-incoming-x-forwarded-proto. For example, in order to retain the incoming X-Forwarded-Proto header field, add no-strip-incoming-x-forwarded-proto=yes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nghttpx-ingress-lb
data:
  nghttpx-conf: |
    no-strip-incoming-x-forwarded-proto=yes

nghttpx ingress controller, by default, overrides the following default configuration:

  • workers: set to the number of cores that nghttpx uses.

Users can override workers using the ConfigMap.
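
For example, the following pins nghttpx to 4 workers instead of the per-core default (the value 4 is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nghttpx-ingress-lb
data:
  nghttpx-conf: |
    workers=4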

Since the mruby-file option takes a path to an mruby script file, the user would have to include the mruby script in the image or mount an external volume. To make it easier to specify an mruby script, the user can instead write the script under the nghttpx-mruby-file-content key, like so:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nghttpx-ingress-lb
data:
  nghttpx-mruby-file-content: |
    class App
      def on_req(env)
        env.req.path = "/apps#{env.req.path}"
      end
    end

    App.new

The controller saves the content, and an mruby-file option referring to the saved file is added to the configuration. Read the MRUBY SCRIPTING section of the nghttpx(1) manual page for the mruby API.

MRUBY Scripting

In addition to the basic mrbgems included by mruby, this Ingress controller adds the following mrbgems for convenience:

Troubleshooting

TBD

Debug

Using the --v flag it is possible to increase the level of logging. In particular:

  • --v=2 shows, as a diff, the changes in the nghttpx configuration
I0323 04:39:16.552830       8 utils.go:90] nghttpx configuration diff /etc/nghttpx/nghttpx.conf
--- current
+++ new
@@ -1,7 +1,41 @@
-# A very simple nghttpx configuration file that forces nghttpx to start.
+accesslog-file=/dev/stdout
+include=/etc/nghttpx/nghttpx-backend.conf

  • --v=3 shows details about the Service, Ingress rule, and Endpoint changes, and dumps the nghttpx configuration in JSON format

Limitations

  • When no TLS is configured, nghttpx still listens on port 443 for cleartext HTTP.
  • TLS configuration is not bound to the specific service. In general, all proxied services are accessible via TLS.
  • .spec.rules[*].http.paths[*].pathType is ignored and treated as if ImplementationSpecific were specified. Consult the nghttpx manual for the path matching rules.

Building from source

Build nghttpx-ingress-controller binary:

$ make controller

Build and push docker images:

$ make push

LICENSE

The MIT License (MIT)

Copyright (c) 2016  Z Lab Corporation
Copyright (c) 2017  nghttpx Ingress controller contributors

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This repository contains the code which has the following license notice:

Copyright 2015 The Kubernetes Authors. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

nghttpx-ingress-lb's People

Contributors

balboah, dependabot[bot], tatsuhiro-t, weitzj


nghttpx-ingress-lb's Issues

Use SIGHUP configuration reload feature of upcoming nghttpx 1.14.0

Currently, we use SIGUSR2 + SIGQUIT to restart nghttpx with new configuration.
But it requires some scripting, because we have to know that the new nghttpx instance is ready before sending SIGQUIT to the original nghttpx process.

Now SIGUSR2 + SIGQUIT is overkill, since we don't expect to replace the nghttpx binary.
The upcoming nghttpx 1.14.0 has a simple configuration reloading feature using the traditional SIGHUP. We can use this to simplify the process.

Allow users to specify the location of accesslogs

Currently the access logs are being printed to stdout.

This floods external logging services, such as Stackdriver, with access logs, and thus important system logs are lost.

It would be greatly beneficial if users could specify the location of access logs to mitigate this issue.

Failed to start after changing the tls cert

Hi,

We tried to change certificate of ingress controller. Then we rolled back to previous cert.

The pods started for a few minutes, then failed with the following error:
Liveness probe failed: HTTP probe failed with statuscode: 500

I can see from the log the following


2018-05-24T03:34:40.182Z 15 15 2c0d95ed NOTICE (shrpx_log.cc:697) Worker process: [16] exited normally with status 0; exit status 0
2018-05-24T03:34:40.461Z 15 15 2c0d95ed FATAL (shrpx_tls.cc:884) SSL_CTX_check_private_key failed: error:140A80BE:SSL routines:SSL_CTX_check_private_key:no private key assigned
2018-05-24T03:34:40.461Z 15 15 2c0d95ed NOTICE (shrpx_log.cc:697) Worker process: [25] exited normally with status 100; exit status 1
2018-05-24T03:34:40.461Z 15 15 2c0d95ed NOTICE (shrpx.cc:4210) Shutdown momentarily
I0524 03:34:40.462316       1 command.go:63] nghttpx exited
E0524 03:34:41.175608       1 command.go:234] Could not get nghttpx configRevision: Get http://127.0.0.1:10902/api/v1beta1/configrevision: dial tcp 127.0.0.1:10902: getsockopt: connection refused
E0524 03:34:42.175629       1 command.go:234] Could not get nghttpx configRevision: Get http://127.0.0.1:10902/api/v1beta1/configrevision: dial tcp 127.0.0.1:10902: getsockopt: connection refused
E0524 03:34:43.175678       1 command.go:234] Could not get nghttpx configRevision: Get http://127.0.0.1:10902/api/v1beta1/configrevision: dial tcp 127.0.0.1:10902: getsockopt: connection refused

and ..

W0524 03:39:15.273710       1 controller.go:977] service ms-cd4f4acf/ms-5d773a89-b9b2-4e28-8536-dab146ec39ad does no have any active endpoints
E0524 03:39:15.273725       1 controller.go:853] Could not create backend for Ingress ms-cd4f4acf/ms-5d773a89-b9b2-4e28-8536-dab146ec39ad: no backend service port found for service ms-cd4f4acf/ms-5d773a89-b9b2-4e28-8536-dab146ec39ad
I0524 03:39:30.449762       1 main.go:298] Received SIGTERM, shutting down
I0524 03:39:30.449785       1 controller.go:1211] Commencing shutting down
I0524 03:39:30.449989       1 controller.go:1466] Remove this address from all Ingress.Status.LoadBalancer.Ingress.
E0524 03:39:30.450018       1 controller.go:1303] Could not remove address from LoadBalancerIngress: Node ip-172-20-56-52.eu-central-1.compute.internal has no external IP
I0524 03:39:30.450038       1 controller.go:1251] Shutting down nghttpx loadbalancer controller

Since the first log mentioned something about private key. I checked

Do you have any idea on what could be the issue?
Thanks

Does nghttpx ingress intercept errors?

Hello,

I currently have tensorflow serving deployed in a container and I've noticed that where there are any prediction errors, the actual error stack is not returned to the client when using nghttpx ingress. The following are my observations (all aspects/environment is kept constant except for the usage of an intermediate ingress):

1. Client Request --> Load Balancer --> Ingress --> Container (Tensorflow-serving)
Observation: Error is obscured from the client, a generic error message is received
Error Received:
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INTERNAL, details="Received RST_STREAM with error code 2")

2. Client Request --> Load Balancer --> Container (Tensorflow-serving)
Observation: Detailed error stack is returned to client
Error Received:
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="Matrix size-incompatible: In[0]: [3592,10], In[1]: [3592,10]
[[Node: MatMul = MatMul[T=DT_FLOAT, _output_shapes=[[?,10]], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_x_0_0, Variable/read)]]")

Thank you!

echoheaders example on Google Container Engine

Hi,
I've been trying and failing to get the echoheaders example to work on Google Container Engine. I'm at a bit of a loss to explain why - the controller allocates a public IP to the ingress, but when I run curl it just times out. I've run the example on minikube successfully.

I've added the kubernetes.io/ingress.class: "nghttpx" annotation to the pod spec, to prevent conflicts with the glbc controller (it was flipping ip before I did this).

I've attached the log from the controller - I'd be really grateful if someone could take a look and see if something is going wrong there. Controller deployment spec is also attached.

One other question - does nghttpx support ssl passthrough (http2)? I want to terminate ssl in Google Cloud Endpoints.

ingress-gke.txt

rc-default.yaml.txt

Thanks

How to configure SSL Passthrough?

I have recently been using nghttpx for grpcs. I can secure an Ingress by specifying a secret, but SSL passthrough can't succeed. What is the function of ingress.zlab.co.jp/backend-config: '{"peer0-tdpowf36e1": {"7051": {"proto": "h2", "tls": true, "sni": "xxxx"}}}'? Thanks

Does this lb support traefik's PathPrefixStrip like feature

In traefik, I can annotate with something like traefik.someApi.frontend.rule: 'PathPrefixStrip:/some-prefix', and the load balancer will match on the path with the prefix some-prefix, but will strip it before forwarding it to the container.

Tried this resource but cannot find something related, that I could use with:

data:
  nghttpx-conf: |

For my use case, need to be able to strip filing, public and admin from all three rules below:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hmda-platform-ingress
  annotations:
    ingress.zlab.co.jp/backend-config: '{"hmda-platform-api": {"transport": {"proto": "h2"}}}'
spec:
  rules:
  - http:
      paths:
      - path: /hmda/filing
        backend:
          serviceName: hmda-platform-api
          servicePort: filing
      - path: /hmda/admin
        backend:
          serviceName: hmda-platform-api
          servicePort: admin
      - path: /hmda/public
        backend:
          serviceName: hmda-platform-api
          servicePort: public

Utilize nghttpx configrevision API to know whether reloading has finished

Currently, controller just waits for few seconds to reasonably make sure that nghttpx has finished reloading. Using configrevision API, controller can know whether nghttpx has finished reloading or not, killing too much waiting time.

We should call the API repeatedly until we get a new configRevision value. We should also add a timeout so that we eventually give up waiting for the reload.

nghttpx health monitor port and API port should be configurable

Because of CNI's inability to handle hostPort at the time of this writing, we are forced to use the host network to deploy the Ingress controller, which uses host ports.
That means that any ports used by the Ingress pod could conflict with ports already in use on the host.
It would be better to make all ports used by the nghttpx Ingress controller configurable so that the user can change them to resolve the conflict.

how to debug the routing process

Hi,

I have an issue where a valid request gets a 404 instead of being routed to the corresponding pod.
I can see the pod's backend entry in /etc/nghttpx/nghttpx-backend.conf.

I can see this in the access log:
"POST /tensorflow.serving.PredictionService/Predict HTTP/2" 404 21 "-" "grpc-python/1.4.0 grpc-c/4.0.0 (windows; chttp2; gregarious)"

Do you have tips on how to debug further? Is there a particular log pattern I can search for to find out why it returned a 404?

Thanks

Permission denied error while running the container as a non root user

@tatsuhiro-t @balboah @weitzj
This is not exactly an issue; rather, I am looking for a solution for my requirement:
We have enabled Pod Security Policy (PSP) in the cluster, and due to the unprivileged PSP, the nghttpx controller is forced to run as a non-root user.
"securityContext": {
  "runAsUser": 65534,
  "fsGroup": 65534
}

Container creation failed with the error: mkdir etc/nghttpx: permission denied.

I added the flag --nghttpx-conf-dir=/tmp to use /tmp instead of /etc/nghttpx. With this change, the nghttpx controller pod was created successfully.

When I then tried creating the Ingress, it failed with the error:
failed to write TLS private key: open /tmp/tls/nghttpx139340113: permission denied

Please note that everything works perfectly fine when the pod is assigned the privileged pod security policy and run as the root user.
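One possible workaround (a sketch, not verified against this controller) is to mount a writable emptyDir volume at the directory the controller writes to, so the non-root user can create files and subdirectories there; the path follows the error messages above:

```yaml
spec:
  containers:
  - name: nghttpx-ingress-lb
    securityContext:
      runAsUser: 65534
    volumeMounts:
    - name: nghttpx-conf
      mountPath: /etc/nghttpx   # writable dir for generated config and TLS files
  volumes:
  - name: nghttpx-conf
    emptyDir: {}                # owned by the pod, writable by the non-root user
```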

Utilize nghttpx redirect-if-no-tls parameter in backend option

The upcoming nghttpx release will have a redirect-if-no-tls parameter in the backend option.
If an Ingress resource contains TLS, add this parameter automatically.

An Ingress resource might not contain TLS because default-tls-secret is used instead. In that case, all backend options will have the redirect-if-no-tls parameter.
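In nghttpx configuration syntax, the parameter would be appended to a generated backend line; a hand-written example of what such a line could look like (address, port, and pattern are placeholders):

```
backend=10.0.0.1,8080;example.com/;proto=h2;redirect-if-no-tls
```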

DaemonSet pods restarting because they are unhealthy

I took the DaemonSet file as-is and created it (after creating the service account as well).
It runs, the logs look good, it reloads new configurations from the Ingress, and it listens on its ports, but then it restarts. On kubectl describe pod I get:
Killing Killing container with id docker://a413363a4465f4202234c1f6050e7e06e529cca4596a9d4c01e07d6083ff8a23:pod "nghttpx-ingress-controller-g3j9g_kube-system(906226bb-3d89-11e7-afd4-06d73bd0514b)" container "nghttpx-ingress-lb" is unhealthy, it will be killed and re-created.

I wonder whether this was tested. I also tried the latest tag for the image from Docker Hub,
but that fails as well.
k8s version: 1.6.1
pod network: Calico

ingress for grpc service

Hi,

I was able to use this as an ingress controller for my gRPC service. It handles TLS termination correctly, but my RPC subscription, which is an indefinite stream, disconnects after some time with the following message on the client:

WARNING: RPC failed: Status{code=INTERNAL, description=Connection closed with unknown cause, cause=null}

Any idea what might be causing this? Do I need to configure something?

Pro/con in description

I see in the README "nghttpx ingress controller is created based on nginx ingress controller.", which makes sense, but looking at this repo and knowing it has fewer people behind it, I wonder: why would I want to use nghttpx instead of the Nginx Ingress Controller, given that Nginx also supports HTTP/2?

Maybe you could include a short pro/con list next to that statement.

Please add ingress.zlab.co.jp/default-backend-config

Currently, the default backend config is hardcoded to use HTTP/1.1.

Imagine that I have hundreds of services using the h2 proto.
It would be great to have ingress.zlab.co.jp/default-backend-config so that I don't need to keep appending entries to ingress.zlab.co.jp/backend-config.

Consider adding dns to PortBackendConfig

Without DNS activated, my understanding is that scaling services becomes an issue (if we want to bypass kube-proxy).

apiVersion: v1
kind: Service
metadata:
  name: grpc-server-service
spec:
  clusterIP: None
  ports:
  - name: http
    port: 1984
  selector:
    app: grpc-server
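With a headless service like the one above, per-backend DNS resolution could be requested through the proposed field; a sketch of the annotation, assuming a boolean dns key in PortBackendConfig (the key name is this proposal's, not a confirmed API):

```yaml
metadata:
  annotations:
    # "dns": true would make nghttpx resolve the service name at run time
    # instead of using endpoint addresses captured at config generation.
    ingress.zlab.co.jp/backend-config: '{"grpc-server-service": {"1984": {"proto": "h2", "dns": true}}}'
```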

how load balancing works in nghttpx ingress

Hi,

I have a Deployment with replicas set to 2. I can see that nghttpx-backend.conf has two entries for this deployment, i.e., one for each pod. FYI, it is TensorFlow Serving configured with the h2 proto.

However, every incoming request is always served by the first pod.
May I know how nghttpx does load balancing internally? Perhaps something is missing in my configuration.

Thanks & regards

Docker registry containers broken since v0.17.0

I have been testing an ingress rule using the public Docker registry containers.

v0.16.0 - successful ingress
v0.17.0 - no ingress
latest - no ingress

Sample rule

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-http
  namespace: kube-system
spec:
  tls:
  - hosts:
    - k8.example.com
    secretName: dashboard-ingress-cert
  rules:
  - host: k8.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

V17 logs

I0116 01:58:15.014730       1 main.go:98] Using build: https://github.com/zlabjp/nghttpx-ingress-lb.git - git-b85951b
I0116 01:58:15.016609       1 util.go:149] ns=default, podName=nghttpx-ingress-lb-k5cn0
I0116 01:58:16.070461       1 main.go:138] Validated default/default-http-backend as the default backend
I0116 01:58:16.071331       1 controller.go:998] Starting nghttpx loadbalancer controller
I0116 01:58:16.071578       1 command.go:42] Starting nghttpx process...
I0116 01:58:16.081849       1 controller.go:554] Deferring sync till endpoints controller has synced
I0116 01:58:16.081885       1 controller.go:578] Deferring sync till endpoints controller has synced
I0116 01:58:17.085019       1 command.go:78] change in configuration detected. Reloading...
I0116 01:58:17.092263       1 controller.go:609] Updating loadbalancer kube-system/dashboard-http with IP 10.10.2.102

V16 logs

I0116 01:59:30.856956       1 main.go:98] Using build: https://github.com/zlabjp/nghttpx-ingress-lb.git - git-e5e4476
I0116 01:59:30.858759       1 util.go:149] ns=default, podName=nghttpx-ingress-lb-q67h4
I0116 01:59:31.304366       1 main.go:138] Validated default/default-http-backend as the default backend
I0116 01:59:31.305703       1 controller.go:1012] Starting nghttpx loadbalancer controller
I0116 01:59:31.307263       1 command.go:42] Starting nghttpx process...
I0116 01:59:31.321562       1 controller.go:564] Deferring sync till endpoints controller has synced
I0116 01:59:31.323846       1 controller.go:588] Deferring sync till endpoints controller has synced
I0116 01:59:32.337265       1 command.go:78] change in configuration detected. Reloading...
I0116 01:59:32.340710       1 controller.go:619] Updating loadbalancer kube-system/dashboard-http with IP 10.10.2.102

Write nghttpx error log to dedicated file

Currently, the nghttpx error log is written to the pod's log. Since the nghttpx error log is not structured per line, it could confuse a log collector or parser. It would be better to write the nghttpx error log to its own file, say, /var/log/nghttpx/error.log.

support protocol H1 and H2

Hi,

I'm trying to serve both gRPC and HTTP for the same service on the same port (443).
Basically, the backend service will do the following:

  • if the request is gRPC, it passes it through to the gRPC server (same pod, different container)
  • if the request is HTTP, it converts the HTTP request to a gRPC request and sends it to that gRPC server.

If I set proto to h2, gRPC requests are fine but HTTP/1.1 requests fail:

assert response.successful
       |        |
       |        false
       Response{protocol=http/1.1, code=502, message=Bad Gateway, url=....

If I set proto to http/1.1, HTTP requests are fine but gRPC requests fail:

main] ERROR com.sap.mlf.tfs.TfsJavaClient - request RPC failed: {0}
io.grpc.StatusRuntimeException: UNIMPLEMENTED: HTTP status code 404
invalid content-type: text/plain; charset=utf-8
headers: Metadata(:status=404,content-type=text/plain; charset=utf-8,x-content-type-options=nosniff,date=Tue, 03 Jul 2018 16:09:07 GMT,content-length=10,server=nghttpx,via=1.1 nghttpx)
DATA-----------------------------
Not Found

Is there a way to make it work for both H1 and H2?

thanks

request to add publish-service flag

From your implementation (v0.26.0), I see that the ingress load balancer IP is updated with the pod's cluster IP.
Somehow, this doesn't work with our dev cluster setup; my pod doesn't have this cluster IP.

I still don't understand the whole concept of how the ingress status (status -> loadBalancer -> ingress -> ip/hostname) is used.
But I tried to modify the code to do a similar thing to what I found in the ingress-nginx setup: setting
the load balancer hostname from the publish-service, which is an AWS ELB in my case.
And somehow my setup works with this change.

Can we have the same publish-service flag in the nghttpx ingress controller?

https://github.com/kubernetes/ingress-nginx/blob/master/cmd/nginx/flags.go

publishSvc = flags.String("publish-service", "",
			`Service fronting the ingress controllers. Takes the form namespace/name. 
The controller will set the endpoint records on the ingress objects to reflect those on the service.`)

Thanks

OCSP in TLS Secret

Currently, nghttpx performs OCSP queries using the fetch-ocsp-response script included in the image.
The script contacts the status server described in the certificate directly, so this is not efficient if multiple nghttpx servers perform OCSP queries individually. It would be better to share the OCSP response between servers.

The idea is that we obtain the OCSP response somehow and store it in the same Secret which includes the TLS server certificate and private key. The Ingress controller gets the OCSP response from the Secret and configures nghttpx so that it can fetch this OCSP response using a customized fetch-ocsp-response.
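A sketch of such a Secret, assuming a hypothetical tls.ocsp-resp key (the key name and Secret name here are illustrative, not an implemented API):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-com-tls
type: kubernetes.io/tls
data:
  tls.crt: ""        # base64-encoded PEM certificate chain
  tls.key: ""        # base64-encoded PEM private key
  tls.ocsp-resp: ""  # base64-encoded DER OCSP response (hypothetical key)
```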

mruby script via ConfigMap

Currently, we cannot provide an mruby script via ConfigMap. This is because mruby-file takes a path to a file, so we would have to mount a volume for it, which is not ideal.
Perhaps it would be better to add a new key to the ConfigMap which takes the content of an mruby script file. The controller would then save it to a file inside the container, and nghttpx could load that file via the mruby-file option.
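A sketch of what the ConfigMap could look like, assuming a hypothetical nghttpx-mruby-file-content key next to the existing nghttpx-conf key (the new key name is this proposal's, not an implemented one):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nghttpx-ingress-lb
data:
  nghttpx-conf: |
    workers=4
  # Hypothetical key: the controller would write this content to a file
  # inside the container and pass its path to nghttpx via mruby-file.
  nghttpx-mruby-file-content: |
    class App
      def on_req(env)
      end
    end
    App.new
```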

Add PROXY protocol support for frontend servers

If you are running nghttpx-ingress-lb behind a LoadBalancer, you might want to preserve the client IP addresses accessing your Kubernetes cluster.

As an example we are using a deployment on a Kubernetes on AWS.

In order to use all nghttpx features, especially upstream HTTP/2 forwarding
and TLS SNI, the only way to deploy nghttpx-ingress-lb is to use an
AWS ELB (Classic Load Balancer) in TCP mode and let nghttpx do the
TLS termination, because:

Application Load Balancers provide native support for HTTP/2 with HTTPS listeners. You can send up to 128 requests in parallel using one HTTP/2 connection. The load balancer converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group using the round robin routing algorithm. Because HTTP/2 uses front-end connections more efficiently, you might notice fewer connections between clients and the load balancer. Note that you can't use the server-push feature of HTTP/2.

Therefore, using the X-Forwarded-For header does not work, and you have to rely
on the PROXY protocol feature.

This can be enabled via https://nghttp2.org/documentation/nghttpx-howto.html#enable-proxy-protocol
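Per the linked howto, in nghttpx configuration terms this is the proxyproto parameter on a frontend line; for example (address and port here are placeholders):

```
frontend=*,443;proxyproto
```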

Reload configuration when certificates have changed

Currently, the controller does not reload the configuration if a certificate has changed but its Secret name stays the same.
This is because we decide whether to reload by comparing the new configuration to the old one, and we use the Secret resource name as the file name, so we cannot detect the change.
We should reload the configuration in this case as well.
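One way to make the existing old-vs-new comparison notice certificate rotation is to derive the file name from a digest of the certificate content as well as the Secret name; the function and path layout below are illustrative, not the controller's actual code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// certPath derives an on-disk file name for a TLS secret from both its
// name and a digest of its content.  Because the path changes whenever
// the PEM data changes, a plain configuration comparison now detects
// certificate rotation even when the Secret name stays the same.
func certPath(dir, secretName string, pemData []byte) string {
	sum := sha256.Sum256(pemData)
	return fmt.Sprintf("%s/%s-%x.crt", dir, secretName, sum[:8])
}

func main() {
	a := certPath("/etc/nghttpx/tls", "example-tls", []byte("cert-v1"))
	b := certPath("/etc/nghttpx/tls", "example-tls", []byte("cert-v2"))
	fmt.Println(a != b) // rotated content yields a different path
}
```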

backendconfig API endpoint returned unsuccessful status code 413

I tried to add more Ingresses and got the following:

I1130 07:57:22.531942       1 command.go:139] Issuing API request http://127.0.0.1:10902/api/v1beta1/backendconfig
127.0.0.1 - - [30/Nov/2017:07:57:22 +0000] "POST /api/v1beta1/backendconfig HTTP/1.1" 413 31 "-" "Go-http-client/1.1"
E1130 07:57:22.532430       1 controller.go:686] failed to issue backend replace request: backendconfig API endpoint returned unsuccessful status code 413

What is the limit on the number of entries we can put in nghttpx-backend.conf?
Is it possible to increase it?
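The 413 suggests the nghttpx API endpoint is rejecting the request body as too large. nghttpx has an api-max-request-body option that controls this limit; if that can be set through the controller's ConfigMap, raising it might look like the sketch below (the value is illustrative):

```yaml
data:
  nghttpx-conf: |
    # Raise the maximum request body size accepted by the nghttpx API
    # endpoint, which otherwise returns 413 for large backend configs.
    api-max-request-body=33554432
```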

Reload leaves defunct process

We found that when nghttpx reloaded its configuration, it left a defunct process behind. The defunct one is the neverbleed process. The root cause is that nghttpx does not wait for neverbleed to complete when it terminates its parent process. This bug was fixed in commit nghttp2/nghttp2@85ba33c.

SNI fails after upgrade - SSL_ERROR_BAD_CERT_DOMAIN

When using multiple ingress rules with different certificates, only one certificate is ever served.
Example:
- ingress rule 1: foo.bar.com using certificate *.bar.com
- ingress rule 2: bar.foo.com using certificate *.foo.com

curl bar.foo.com

The client fails with SSL_ERROR_BAD_CERT_DOMAIN and receives the certificate for *.bar.com

curl -kv bar.foo.com 

The client receives traffic when certificate validation is skipped

This works on 0.18, but the same rules fail on 0.19.
Is this an SNI processing issue?
