
kube-rbac-proxy's Introduction

kube-rbac-proxy

Docker Repository on Quay

NOTE: This project is alpha stage. Flags, configuration, behavior and design may change significantly in following releases.

The kube-rbac-proxy is a small HTTP proxy for a single upstream that can perform RBAC authorization against the Kubernetes API using SubjectAccessReview.

In Kubernetes clusters without NetworkPolicies, any Pod can send requests to every other Pod in the cluster. This proxy was developed to restrict access to only those clients that present a valid, RBAC-authorized token or client TLS certificate.

Current and Future Deprecation of Flags / Features

The project is seeking to be accepted as a Kubernetes project, and therefore we need to align more closely with Kubernetes. As a result, we use more of the Kubernetes code and need to deprecate some features while introducing others.

Features are deprecated partly because Kubernetes no longer supports them and partly because they are not best practice. An example of "not a best practice" is offering an insecure listener; an example of an upstream deprecation is some of the logging flags.

The project states above that it is alpha and that "flags, configuration, and behavior" can change significantly. Nevertheless, the project has so far been treated like a production-v1 project: no breaking changes were introduced.

We will introduce a feature branch called sig-auth-acceptance that shows how kube-rbac-proxy will change.

Maintenance

We try to keep the current release secure by making updates when necessary, but this is best effort.

An update of Kubernetes from v0.25.2 to v0.25.5 was rolled back as it removed the --logtostderr flag.

Usage

The kube-rbac-proxy supports all glog flags for logging purposes. To use the kube-rbac-proxy, there are a few flags you may want to set:

  • --upstream: This is the upstream you want to proxy to.
  • --config-file: This file specifies details of the SubjectAccessReview you want to be performed on a request. For example, it can require that the entity performing a request must be allowed to perform a get on the Deployment called my-frontend-app, and it can also configure whether SubjectAccessReviews are rewritten based on the request.
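
As a rough illustration (a minimal sketch only; the authorization/resourceAttributes layout follows the configuration format shown in the static-authorization proposal quoted later in this document, and all concrete values are placeholders), a config file scoping the SubjectAccessReview to such a Deployment could look like this:

authorization:
  resourceAttributes:
    apiGroup: apps            # placeholder values describing the protected resource
    apiVersion: v1
    resource: deployments
    namespace: default
    name: my-frontend-app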

See the examples/ directory for example deployments and configurations.

All command line flags:

$ kube-rbac-proxy -h
The kube-rbac-proxy is a small HTTP proxy for a single upstream
that can perform RBAC authorization against the Kubernetes API using SubjectAccessReview.

Usage:
  kube-rbac-proxy [flags]

Kube-rbac-proxy flags:

      --allow-paths strings                         Comma-separated list of paths against which kube-rbac-proxy pattern-matches the incoming request. If the request doesn't match, kube-rbac-proxy responds with a 404 status code. If omitted, the incoming request path isn't checked. Cannot be used with --ignore-paths.
      --auth-header-fields-enabled                  When set to true, kube-rbac-proxy adds auth-related fields to the headers of http requests sent to the upstream
      --auth-header-groups-field-name string        The name of the field inside a http(2) request header to tell the upstream server about the user's groups (default "x-remote-groups")
      --auth-header-groups-field-separator string   The separator string used for concatenating multiple group names in a groups header field's value (default "|")
      --auth-header-user-field-name string          The name of the field inside a http(2) request header to tell the upstream server about the user's name (default "x-remote-user")
      --auth-token-audiences strings                Comma-separated list of token audiences to accept. By default a token does not have to have any specific audience. It is recommended to set a specific audience.
      --client-ca-file string                       If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
      --config-file string                          Configuration file to configure kube-rbac-proxy.
      --http2-disable                               Disable HTTP/2 support
      --http2-max-concurrent-streams uint32         The maximum number of concurrent streams per HTTP/2 connection. (default 100)
      --http2-max-size uint32                       The maximum number of bytes that the server will accept for frame size and buffer per stream in a HTTP/2 request. (default 262144)
      --ignore-paths strings                        Comma-separated list of paths against which kube-rbac-proxy pattern-matches the incoming request. If the request matches, it will proxy the request without performing an authentication or authorization check. Cannot be used with --allow-paths.
      --insecure-listen-address string              [DEPRECATED] The address the kube-rbac-proxy HTTP server should listen on.
      --kube-api-burst int                          kube-api burst value; needed when kube-api-qps is set
      --kube-api-qps float32                        queries per second to the api, kube-client starts client-side throttling, when breached
      --kubeconfig string                           Path to a kubeconfig file, specifying how to connect to the API server. If unset, in-cluster configuration will be used
      --oidc-ca-file string                         If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
      --oidc-clientID string                        The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
      --oidc-groups-claim string                    Identifier of groups in JWT claim, by default set to 'groups' (default "groups")
      --oidc-groups-prefix string                   If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
      --oidc-issuer string                          The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
      --oidc-sign-alg stringArray                   Supported signing algorithms, default RS256 (default [RS256])
      --oidc-username-claim string                  Identifier of the user in JWT claim, by default set to 'email' (default "email")
      --oidc-username-prefix string                 If provided, the username will be prefixed with this value to prevent conflicts with other authentication strategies.
      --proxy-endpoints-port int                    The port to securely serve proxy-specific endpoints (such as '/healthz'). Uses the host from the '--secure-listen-address'.
      --secure-listen-address string                The address the kube-rbac-proxy HTTPs server should listen on.
      --tls-cert-file string                        File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert)
      --tls-cipher-suites strings                   Comma-separated list of cipher suites for the server. Values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). If omitted, the default Go cipher suites will be used
      --tls-min-version string                      Minimum TLS version supported. Value must match version names from https://golang.org/pkg/crypto/tls/#pkg-constants. (default "VersionTLS12")
      --tls-private-key-file string                 File containing the default x509 private key matching --tls-cert-file.
      --tls-reload-interval duration                The interval at which to watch for TLS certificate changes, by default set to 1 minute. (default 1m0s)
      --upstream string                             The upstream URL to proxy to once requests have successfully been authenticated and authorized.
      --upstream-ca-file string                     The CA the upstream uses for TLS connection. This is required when the upstream uses TLS and its own CA certificate
      --upstream-client-cert-file string            If set, the client will be used to authenticate the proxy to upstream. Requires --upstream-client-key-file to be set, too.
      --upstream-client-key-file string             The key matching the certificate from --upstream-client-cert-file. If set, requires --upstream-client-cert-file to be set, too.
      --upstream-force-h2c                          Force h2c to communicate with the upstream. This is required when the upstream speaks h2c (HTTP/2 cleartext, the insecure variant of HTTP/2) only. For example, a go-grpc server in insecure mode, such as helm's tiller without TLS, speaks h2c only

Global flags:

  -h, --help                     help for kube-rbac-proxy
      --version version[=true]   --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version

How to update Go dependencies

To update the Go dependencies run make update-go-deps.

This might be useful to do during a release.

Why?

You may ask yourself, why not just use the Kubernetes apiserver proxy functionality? There are two reasons why this proxy makes sense. The first is to take load off of the Kubernetes API server, so it can serve the cluster components rather than client requests. The second and more important reason is that this proxy is intended to be a sidecar that accepts incoming HTTP requests. This way, one can ensure that a request is truly authorized, instead of an application being accessible simply because an entity has network access to it.

Motivation

I developed this proxy in order to be able to protect Prometheus metrics endpoints. In a scenario where an attacker obtains full control over a Pod, that attacker would have the ability to discover a lot of information about the workload, as well as the current load of the respective workload. This information could originate, for example, from the node-exporter and kube-state-metrics. Both of those metric sources can commonly be found in Prometheus monitoring stacks on Kubernetes.

This project was created specifically to solve the above problem; however, I felt there is a larger need for such a proxy in general.

How does it work?

On an incoming request, kube-rbac-proxy first figures out which user is performing the request. The kube-rbac-proxy supports using client TLS certificates as well as tokens. In the case of a client certificate, the certificate is simply validated against the configured CA. In the case of a bearer token being presented, the authentication.k8s.io API is used to perform a TokenReview.

Once a user has been authenticated, the authorization.k8s.io API is used to perform a SubjectAccessReview, in order to authorize the respective request and ensure the authenticated user has the required RBAC roles.
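
For illustration, granting a client the permission that such a SubjectAccessReview checks for a /metrics endpoint can be done with standard Kubernetes RBAC. A minimal sketch, with the ServiceAccount name and namespace as placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
- nonResourceURLs: ["/metrics"]   # the non-resource path checked in the SubjectAccessReview
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
- kind: ServiceAccount
  name: prometheus-k8s     # placeholder: the client that should be allowed to scrape
  namespace: monitoring    # placeholder namespace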

Notes on ServiceAccount token security

Note that when using tokens for authentication, the receiving side can use the token to impersonate the client. Only use token authentication when the receiving side is already more highly privileged, or when the token itself is very low privileged, such as when the only roles bound to it are for authorization purposes with this project. Passing around highly privileged tokens is a security risk and is not recommended.

This project was built to protect metrics of cluster components. These cluster components are much more highly privileged than the Prometheus Pod, so if those Pods were to use the token provided by Prometheus, it would actually be lower privileged. It is not recommended to use this method for non-infrastructure components.

For better security properties, use mTLS for authentication instead; for user authentication, other methods have yet to be added.

Why are NetworkPolicies not enough?

There are a couple of reasons why the existence of NetworkPolicies may not cover the same use case(s):

  • NetworkPolicies are not available in all providers, installers and distros.
  • NetworkPolicies do not apply to Pods with host networking enabled; the Prometheus node-exporter use case this project was created for requires host networking.
  • Once TLS/OIDC is supported, the kube-rbac-proxy can be used to perform AuthN/AuthZ on users.

Differentiation to Envoy/Istio

This project is not intended to compete with Envoy or Istio. Although on the surface they seem similar, the goals and usage complement each other. It's perfectly ok to use Envoy as the ingress point of traffic of a Pod, which then forwards traffic to the kube-rbac-proxy, which in turn proxies to the actual serving application.

Additionally, to my knowledge Envoy neither has nor plans Kubernetes-specific RBAC/AuthZ support (maybe it shouldn't even). My knowledge may very well be incomplete, so please point it out if it is. After all, I'm happy if I don't have to maintain more code, but as long as this serves a purpose to me and no other project can provide it, I'll maintain this.

Testing

To run tests locally, you need to have kind installed. The tests use the default kind cluster, so be aware that they will override your default kind cluster.

The command to execute the tests is: make test-local.

Roadmap

PRs are more than welcome!

  • Tests

kube-rbac-proxy's People

Contributors

alanthur, andylibrian, brancz, dependabot[bot], dgrisonnet, ibihim, jan--f, jeffdyoung, jpiper, marpio, metalmatze, mumoshu, nabokihms, nickytd, novegit, oguzozan, paulfantom, pensu, petersutter, retocode, s-urbaniak, samze, simonpasquier, squat, sthaha, stlaz, vpnachev, wespanther, xcoulon, yselkowitz


kube-rbac-proxy's Issues

k8s 1.16 authentication failed

I use kube-rbac-proxy as a sidecar of node-exporter, deployed in a 1.16 k8s cluster. The following problem appears:

kubectl logs node-exporter-9g5h7 -f kube-rbac-proxy

E0506 10:47:52.120746   39528 auth.go:218] Unable to authenticate the request due to an error: Post https://10.42.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: tls: failed to parse certificate from server: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local."
E0506 10:48:07.124592   39528 auth.go:218] Unable to authenticate the request due to an error: Post https://10.42.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: tls: failed to parse certificate from server: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local."
E0506 10:48:22.121446   39528 auth.go:218] Unable to authenticate the request due to an error: Post https://10.42.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: tls: failed to parse certificate from server: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local."
E0506 10:48:37.122947   39528 auth.go:218] Unable to authenticate the request due to an error: Post https://10.42.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: tls: failed to parse certificate from server: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local."
E0506 10:48:52.125219   39528 auth.go:218] Unable to authenticate the request due to an error: Post https://10.42.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: tls: failed to parse certificate from server: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local."

The image used is kube-rbac-proxy: v0.3.0.

The ClusterRole and ClusterRoleBinding are also correct.

apiserver's runtime-config parameter also includes authentication.k8s.io/v1beta1=true.

This configuration is good in a 1.8 cluster.

May I know what the reason is?

rewrites example does not work

I ran this example on OCP 4.4.

But I got this error: proxy.go:86] Bad Request. The request or configuration is malformed
From the code, I found that the client URL is not right: it should have a query parameter such as
namespace="default", or it will raise an error like the one above. So I am confused about what this rewrite means. What is the expected behaviour here?

Sweet32 still allowed after upgrading to kube-rbac-proxy 0.4.1 and setting minimum TLS version to 1.2

After upgrading kube-rbac-proxy to version 0.4.1 and setting the minimum TLS version to 1.2 with

 --tls-min-version=VersionTLS12 

openssl s_client -connect [redacted]:9100 -cipher DES-CBC3-SHA is still able to connect and verify.

SSL handshake has read 2025 bytes and written 425 bytes
---
New, TLSv1/SSLv3, Cipher is DES-CBC3-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : DES-CBC3-SHA

openssl s_client -connect [redacted]:9100 -cipher ECDHE-RSA-DES-CBC3-SHA has the same behavior as listed above, of course with the cipher being ECDHE-RSA-DES-CBC3-SHA instead of DES-CBC3-SHA.

For more context on how I am using kube-rbac-proxy: it is being used as part of an OpenShift monitoring deployment in front of node_exporter.
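
One possible mitigation (a sketch, not an authoritative fix): since the 3DES suites are still valid TLS 1.2 ciphers, --tls-min-version alone does not exclude them, but the documented --tls-cipher-suites flag can be given an explicit allow-list that omits them. Example container args, with listen and upstream addresses as placeholders:

- --secure-listen-address=0.0.0.0:9100
- --upstream=http://127.0.0.1:9101/
- --tls-min-version=VersionTLS12
# explicit allow-list without any 3DES (Sweet32-affected) suites
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384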

Adding Power support for image quay.io/brancz/kube-rbac-proxy at quay.io

Hi All,

I have a requirement to use the kube-rbac-proxy image on the Power (ppc64le) architecture for a Knative deployment. However, the images available here - https://quay.io/repository/brancz/kube-rbac-proxy and here https://quay.io/repository/coreos/kube-rbac-proxy?tab=tags - have support for "amd64" only, as seen below:

docker image inspect quay.io/brancz/kube-rbac-proxy:v0.4.0 | grep Arch
"Architecture": "amd64",

I was able to build this image locally on a Power machine using "make container"

docker images | grep kube-rbac-proxy
quay.io/brancz/kube-rbac-proxy v0.4.0 6e9cecf4d991 41 minutes ago 41.2MB

docker image inspect quay.io/brancz/kube-rbac-proxy:v0.4.0 | grep Arch
"Architecture": "ppc64le",

I was looking for help in making the image at quay.io multi-arch, thank you.

Support client certificate authenticated upstreams

One use case we have is to transform a client-certificate-authenticated upstream into a bearer-token-authenticated endpoint, in order to avoid having to distribute the client certificates and instead rely on Kubernetes service accounts for the distribution.
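
For context, the --upstream-client-cert-file and --upstream-client-key-file flags listed in the usage section above are aimed at this kind of setup. A rough sketch of the container args, with all paths and addresses as placeholders:

- --secure-listen-address=0.0.0.0:8443
- --upstream=https://127.0.0.1:8444/                    # upstream that requires client certificates
- --upstream-ca-file=/etc/upstream/ca.crt               # CA used to verify the upstream's serving certificate
- --upstream-client-cert-file=/etc/upstream/client.crt
- --upstream-client-key-file=/etc/upstream/client.key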

failed to start container "kube-rbac-proxy"

kubectl describe pod prometheus-operator-74d99cb4c8-mhft5 -n monitoring shows the error below:

"Error: failed to start container "kube-rbac-proxy": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "chdir to cwd ("/home/nonroot") set in config.json failed: permission denied": unknown"

My Kubernetes version is 1.19.3 and my Docker version is 19.03.13. If someone has met this issue, please give me some advice, thanks

Kernel Panic when deploying kube-prometheus

prometheus-operator version: 0.19.0
OS: CentOS 7.2
Kernel: 3.10.0-327.el7.x86_64
Docker: 17.12.0-ce
K8S: v1.9.4
PID: 14683 TASK: ffff88157121ae00 CPU: 1 COMMAND: "kube-rbac-proxy"
PANIC: "BUG: unable to handle kernel NULL pointer dereference at 0000000000000010"

      KERNEL: /usr/lib/debug/lib/modules/3.10.0-327.el7.x86_64/vmlinux
    DUMPFILE: vmcore  [PARTIAL DUMP]
        CPUS: 12
        DATE: Mon May  7 14:14:46 2018
      UPTIME: 6 days, 03:45:15
LOAD AVERAGE: 1.28, 1.14, 1.13
       TASKS: 981
    NODENAME: ***
     RELEASE: 3.10.0-327.el7.x86_64
     VERSION: #1 SMP Thu Nov 19 22:10:57 UTC 2015
     MACHINE: x86_64  (2394 Mhz)
      MEMORY: 159.9 GB
       PANIC: "BUG: unable to handle kernel NULL pointer dereference at 0000000000000010"
         PID: 14683
     COMMAND: "kube-rbac-proxy"
        TASK: ffff88157121ae00  [THREAD_INFO: ffff881500c54000]
         CPU: 1
       STATE: TASK_INTERRUPTIBLE (PANIC)

crash> bt
PID: 14683  TASK: ffff88157121ae00  CPU: 1   COMMAND: "kube-rbac-proxy"
 #0 [ffff881500c578d8] machine_kexec at ffffffff81051beb
 #1 [ffff881500c57938] crash_kexec at ffffffff810f2542
 #2 [ffff881500c57a08] oops_end at ffffffff8163e1a8
 #3 [ffff881500c57a30] no_context at ffffffff8162e2b8
 #4 [ffff881500c57a80] __bad_area_nosemaphore at ffffffff8162e34e
 #5 [ffff881500c57ac8] bad_area at ffffffff8162e6c7
 #6 [ffff881500c57af0] __do_page_fault at ffffffff81641035
 #7 [ffff881500c57b48] do_page_fault at ffffffff81641113
 #8 [ffff881500c57b70] page_fault at ffffffff8163d408
    [exception RIP: rb_next+1]
    RIP: ffffffff812f94b1  RSP: ffff881500c57c28  RFLAGS: 00010046
    RAX: 0000000000000000  RBX: 0000000000000000  RCX: 0000000000000000
    RDX: 0000000000000001  RSI: ffff88140e5dfa28  RDI: 0000000000000010
    RBP: ffff881500c57c70   R8: 0000000000000000   R9: 0000000000000001
    R10: 0000000000000001  R11: 00000000ffffffff  R12: ffff88140e5de600
    R13: 0000000000000000  R14: 0000000000000000  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff881500c57c30] pick_next_task_fair at ffffffff810bf539
#10 [ffff881500c57c78] __schedule at ffffffff8163a10a
#11 [ffff881500c57cd8] schedule at ffffffff8163a909
#12 [ffff881500c57ce8] futex_wait_queue_me at ffffffff810e2464
#13 [ffff881500c57d28] futex_wait at ffffffff810e2fd9
#14 [ffff881500c57e70] do_futex at ffffffff810e506e
#15 [ffff881500c57f08] sys_futex at ffffffff810e55a0
#16 [ffff881500c57f80] system_call_fastpath at ffffffff81645909
    RIP: 0000000000458483  RSP: 000000c42007a000  RFLAGS: 00010202
    RAX: 00000000000000ca  RBX: ffffffff81645909  RCX: 00000000004584de
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 000000c42006c948
    RBP: 000000c420079e50   R8: 0000000000000000   R9: 0000000000000000
    R10: 0000000000000000  R11: 0000000000000286  R12: 0000000000000000
    R13: 0000000000000011  R14: 00000000000000f1  R15: 000000000042e8e0
    ORIG_RAX: 00000000000000ca  CS: 0033  SS: 002b
crash>

vmcore_dmesg

[365635.516154] IPVS: __ip_vs_del_service: enter
[432594.743524] IPVS: __ip_vs_del_service: enter
[520022.344780] IPVS: __ip_vs_del_service: enter
[520022.344928] IPVS: __ip_vs_del_service: enter
[520063.680327] IPVS: __ip_vs_del_service: enter
[520063.680467] IPVS: __ip_vs_del_service: enter
[520063.680600] IPVS: __ip_vs_del_service: enter
[520063.680728] IPVS: __ip_vs_del_service: enter
[520776.756450] IPVS: __ip_vs_del_service: enter
[520776.756581] IPVS: __ip_vs_del_service: enter
[520776.756705] IPVS: __ip_vs_del_service: enter
[520776.756825] IPVS: __ip_vs_del_service: enter
[521268.152968] IPVS: __ip_vs_del_service: enter
[521268.153112] IPVS: __ip_vs_del_service: enter
[521268.153235] IPVS: __ip_vs_del_service: enter
[521268.153354] IPVS: __ip_vs_del_service: enter
[532348.631722] IPVS: Creating netns size=2040 id=54
[532348.979970] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[532348.996958] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[532349.054506] device vethf0dd1377 entered promiscuous mode
[532349.054553] cni0: port 12(vethf0dd1377) entered forwarding state
[532349.054563] cni0: port 12(vethf0dd1377) entered forwarding state
[532351.484432] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[532351.484491] IP: [<ffffffff812f94b1>] rb_next+0x1/0x50
[532351.484529] PGD 280b0c0067 PUD 14bcbe4067 PMD 0
[532351.484564] Oops: 0000 [#1] SMP
[532351.484589] Modules linked in: ipt_REJECT vxlan ip6_udp_tunnel udp_tunnel xt_statistic xt_recent ip_vs_sh xt_multiport iptable_mangle cfg80211 rfkill ipip tunnel4 ip_tunnel dummy xt_ipvs ip_set_hash_ip ip_set_hash_net xt_comment xt_mark xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netlink iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter xt_conntrack nf_nat bridge stp llc overlay() ip_vs_rr ip_vs nf_conntrack libcrc32c xt_set ip_set nfnetlink intel_powerclamp coretemp intel_rapl kvm_intel sg kvm iTCO_wdt iTCO_vendor_support pcspkr sb_edac edac_core ipmi_ssif mei_me lpc_ich i2c_i801 mei acpi_pad acpi_power_meter shpchp ipmi_si mfd_core ipmi_msghandler ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic crct10dif_pclmul crct10dif_common
[532351.485126]  crc32_pclmul crc32c_intel ast syscopyarea ghash_clmulni_intel sysfillrect sysimgblt i2c_algo_bit drm_kms_helper aesni_intel lrw gf128mul glue_helper ttm ablk_helper cryptd drm megaraid_sas ixgbe dca i2c_core mdio ptp pps_core wmi dm_mirror dm_region_hash dm_log dm_mod
[532351.485316] CPU: 1 PID: 14683 Comm: kube-rbac-proxy Tainted: G               ------------ T 3.10.0-327.el7.x86_64 #1
[532351.485377] Hardware name: Lenovo BJLENOVOV2G3F42-20A/B900G3_10G, BIOS B1.01 08/22/2015
[532351.485425] task: ffff88157121ae00 ti: ffff881500c54000 task.ti: ffff881500c54000
[532351.485469] RIP: 0010:[<ffffffff812f94b1>]  [<ffffffff812f94b1>] rb_next+0x1/0x50
[532351.485517] RSP: 0018:ffff881500c57c28  EFLAGS: 00010046
[532351.485550] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[532351.485592] RDX: 0000000000000001 RSI: ffff88140e5dfa28 RDI: 0000000000000010
[532351.485634] RBP: ffff881500c57c70 R08: 0000000000000000 R09: 0000000000000001
[532351.485677] R10: 0000000000000001 R11: 00000000ffffffff R12: ffff88140e5de600
[532351.485719] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[532351.485762] FS:  000000c42006c890(0000) GS:ffff88142fc40000(0000) knlGS:0000000000000000
[532351.485810] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[532351.485845] CR2: 0000000000000010 CR3: 00000024f0778000 CR4: 00000000001407e0
[532351.485887] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[532351.485930] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[532351.485972] Stack:
[532351.485987]  ffff881500c57c70 ffffffff810bf539 ffff881500c57c60 ffff88142fc54780
[532351.486039]  ffff88157121b3e0 ffff88142fc54780 0000000000000001 ffff881500c57de0
[532351.486090]  ffffc90028ab0f80 ffff881500c57cd0 ffffffff8163a10a ffff88157121ae00
[532351.486142] Call Trace:
[532351.486165]  [<ffffffff810bf539>] ? pick_next_task_fair+0x129/0x1d0
[532351.486207]  [<ffffffff8163a10a>] __schedule+0x12a/0x900
[532351.486243]  [<ffffffff8163a909>] schedule+0x29/0x70
[532351.486278]  [<ffffffff810e2464>] futex_wait_queue_me+0xc4/0x120
[532351.486317]  [<ffffffff810e2fd9>] futex_wait+0x179/0x280
[532351.486353]  [<ffffffff811d15c2>] ? __mem_cgroup_commit_charge+0x152/0x390
[532351.486396]  [<ffffffff810e506e>] do_futex+0xfe/0x5b0
[532351.486433]  [<ffffffff8108fddb>] ? recalc_sigpending+0x1b/0x50
[532351.486471]  [<ffffffff810e55a0>] SyS_futex+0x80/0x180
[532351.486506]  [<ffffffff81645909>] system_call_fastpath+0x16/0x1b
[532351.486542] Code: e5 48 85 c0 75 07 eb 19 66 90 48 89 d0 48 8b 50 10 48 85 d2 75 f4 48 8b 50 08 48 85 d2 75 eb 5d c3 31 c0 5d c3 0f 1f 44 00 00 55 <48> 8b 17 48 89 e5 48 39 d7 74 3b 48 8b 47 08 48 85 c0 75 0e eb
[532351.489908] RIP  [<ffffffff812f94b1>] rb_next+0x1/0x50
[532351.491555]  RSP <ffff881500c57c28>
[532351.493195] CR2: 0000000000000010
crash>

Cannot Debug context canceled Error

Hi,

I have been getting this error from the kube-rbac-proxy sidecar and I am really struggling to understand the reason behind it. kube-rbac-proxy is a sidecar to this daemonset:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: node-exporter
  name: node-exporter
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - args:
        - --web.listen-address=127.0.0.1:9101
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
        - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
        - --no-collector.wifi
        image: registry.redhat.io/openshift3/prometheus-node-exporter:v3.11
        imagePullPolicy: Always
        name: node-exporter
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /host/proc
          name: proc
        - mountPath: /host/sys
          name: sys
        - mountPath: /host/root
          mountPropagation: HostToContainer
          name: root
          readOnly: true
      - args:
        - --v=10
        - --logtostderr=true
        - --secure-listen-address=:9100
        - --upstream=http://127.0.0.1:9101/
        - --tls-cert-file=/etc/tls/private/tls.crt
        - --tls-private-key-file=/etc/tls/private/tls.key
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
        image: registry.redhat.io/openshift3/ose-kube-rbac-proxy:v3.11
        imagePullPolicy: Always
        name: kube-rbac-proxy
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: https
          protocol: TCP
        resources:
          limits:
            cpu: 20m
            memory: 40Mi
          requests:
            cpu: 10m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: node-exporter-tls
      dnsPolicy: ClusterFirst
      hostNetwork: true
      hostPID: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: node-exporter
      serviceAccountName: node-exporter
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /proc
          type: ""
        name: proc
      - hostPath:
          path: /sys
          type: ""
        name: sys
      - hostPath:
          path: /
          type: ""
        name: root
      - name: node-exporter-tls
        secret:
          defaultMode: 420
          secretName: node-exporter-tls
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate

The logs I can see in the pod are these:

I0305 15:58:58.660116   52302 main.go:186] Valid token audiences: 
I0305 15:58:58.660216   52302 main.go:248] Reading certificate files
I0305 15:58:58.660264   52302 reloader.go:98] reloading key /etc/tls/private/tls.key certificate /etc/tls/private/tls.crt
I0305 15:58:58.660465   52302 main.go:281] Starting TCP socket on :9100
I0305 15:58:58.857782   52302 main.go:288] Listening securely on :9100
I0305 15:59:23.358521   52302 request.go:1017] Request Body: {"kind":"TokenReview","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"token":"TOKEN"},"status":{"user":{}}}
I0305 15:59:23.358722   52302 round_trippers.go:423] curl -k -v -XPOST  -H "User-Agent: kube-rbac-proxy/v0.0.0 (linux/ppc64le) kubernetes/$Format" -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "Authorization: Bearer  BEARER" 'https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews'
I0305 15:59:23.886863   52302 round_trippers.go:443] POST https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews 201 Created in 528 milliseconds
I0305 15:59:23.886914   52302 round_trippers.go:449] Response Headers:
I0305 15:59:23.886922   52302 round_trippers.go:452]     Cache-Control: no-store
I0305 15:59:23.886928   52302 round_trippers.go:452]     Content-Type: application/json
I0305 15:59:23.886934   52302 round_trippers.go:452]     Content-Length: 1287
I0305 15:59:23.886939   52302 round_trippers.go:452]     Date: Thu, 05 Mar 2020 15:59:23 GMT
I0305 15:59:23.887001   52302 request.go:1017] Response Body: {"kind":"TokenReview","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"token":"TOKEN"},"status":{"authenticated":true,"user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"464a5b41-00a8-11ea-b65b-40f2e95c5cac","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]}}}
I0305 15:59:23.957965   52302 proxy.go:199] kube-rbac-proxy request attributes: attrs=0
I0305 15:59:23.958155   52302 request.go:1017] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/metrics","verb":"get"},"user":"system:serviceaccount:openshift-monitoring:prometheus-k8s","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"],"uid":"464a5b41-00a8-11ea-b65b-40f2e95c5cac"},"status":{"allowed":false}}
I0305 15:59:23.958291   52302 round_trippers.go:423] curl -k -v -XPOST  -H "User-Agent: kube-rbac-proxy/v0.0.0 (linux/ppc64le) kubernetes/$Format" -H "Authorization: Bearer BEARER" -H "Accept: application/json, */*" -H "Content-Type: application/json" 'https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews'
I0305 15:59:23.959469   52302 round_trippers.go:443] POST https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews 201 Created in 1 milliseconds
I0305 15:59:23.959482   52302 round_trippers.go:449] Response Headers:
I0305 15:59:23.959490   52302 round_trippers.go:452]     Cache-Control: no-store
I0305 15:59:23.959498   52302 round_trippers.go:452]     Content-Type: application/json
I0305 15:59:23.959507   52302 round_trippers.go:452]     Content-Length: 575
I0305 15:59:23.959514   52302 round_trippers.go:452]     Date: Thu, 05 Mar 2020 15:59:23 GMT
I0305 15:59:23.959536   52302 request.go:1017] Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/metrics","verb":"get"},"user":"system:serviceaccount:openshift-monitoring:prometheus-k8s","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"],"uid":"464a5b41-00a8-11ea-b65b-40f2e95c5cac"},"status":{"allowed":true,"reason":"RBAC: allowed by ClusterRoleBinding \"prometheus-k8s\" of ClusterRole \"prometheus-k8s\" to ServiceAccount \"prometheus-k8s/openshift-monitoring\""}}
2020/03/05 15:59:33 http: proxy error: context canceled
I0305 15:59:53.357908   52302 proxy.go:199] kube-rbac-proxy request attributes: attrs=0
2020/03/05 16:00:03 http: proxy error: context canceled
I0305 16:00:23.258756   52302 proxy.go:199] kube-rbac-proxy request attributes: attrs=0
2020/03/05 16:00:33 http: proxy error: context canceled
I0305 16:00:53.258463   52302 proxy.go:199] kube-rbac-proxy request attributes: attrs=0
2020/03/05 16:01:03 http: proxy error: context canceled

This error seems to be coming from the golang.org/x/net library, but I cannot understand how to investigate it further.
The strange thing is that on one of the (identical in terms of setup/configuration) nodes the proxy works without a problem.
I would be very grateful if someone could help me with this one.

exiting because of error: log: cannot create log: open /tmp/kube-rbac-proxy/xxx: permission denied

I had the following errors with v0.4.1 on one of my worker nodes (other nodes have no problem). It causes node-exporter to crash. I have no idea what the root cause is.

18770 main.go:213] Generating self signed cert as no cert is provided

2019-11-21T07:35:31.199545675Z log: exiting because of error: log: cannot create log: open /tmp/kube-rbac-proxy.centos01.unknownuser.log.INFO.20191121-073531.18770: permission denied

My deployment yaml is like

kind: DaemonSet
apiVersion: apps/v1
metadata:
  # ...
spec:
  # ...
  template:
    # ...
    spec:
      volumes:
      # ...
      containers:
        - name: node-exporter
          # ...
        - name: kube-rbac-proxy
          image: 'quay.io/coreos/kube-rbac-proxy'
          args:
            - '--secure-listen-address=$(IP):9100'
            - '--upstream=http://127.0.0.1:9100/'
          ports:
            - name: https
              hostPort: 9100
              containerPort: 9100
              protocol: TCP
          env:
            - name: IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          resources:
            limits:
              cpu: 20m
              memory: 40Mi
            requests:
              cpu: 10m
              memory: 20Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeSelector:
        beta.kubernetes.io/os: linux
      serviceAccountName: node-exporter
      serviceAccount: node-exporter
      hostNetwork: true
      hostPID: true
      securityContext:
        runAsUser: 65534
        runAsNonRoot: true
      imagePullSecrets:
        - name: qingcloud
      schedulerName: default-scheduler
      # ...

Could not authenticate with Kube-Rbac-Proxy

Hi All,
I have a Custom Resource controller on my Kubernetes cluster with a webhook to validate the CRD parameters. I am able to create CRD resources with this setup; the API server sends the incoming request to the webhook for validation, and the webhook sends a response back to the API server. Now I have placed kube-rbac-proxy between the API server and the webhook.
Now all incoming CRD create requests go to kube-rbac-proxy and are proxied upstream to the webhook.
The following options are enabled on kube-rbac-proxy:
- "--secure-listen-address=0.0.0.0:8444"
- "--upstream=https://127.0.0.1:9000"
- "--config-file=/etc/kube-rbac-proxy/config-file.yaml"
- "--tls-cert-file=/etc/webhook/certs/cert.pem"
- "--tls-private-key-file=/etc/webhook/certs/key.pem"
- "--logtostderr=true"
- "--v=10"

But now when I create the CRD, kube-rbac-proxy is not authenticating it. I am getting the following error from the API server: "the server has asked for the client to provide credentials".

I am getting the following logs from kube-rbac-proxy:

**************8 new request **********************
request details &{POST /?timeout=30s HTTP/2.0 2 0 map[Accept:[application/json, */*] Accept-Encoding:[gzip] Content-Length:[6385] Content-Type:[application/json] User-Agent:[kube-apiserver-admission]] 0xc0003461e0 <nil> 6385 [] false cicd-template-ac.cicd-template-ac.svc:443 map[] map[] <nil> map[] 10.233.64.0:40972 /?timeout=30s 0xc0001104d0 <nil> <nil> 0xc0004c2100}
request in detail - header map[Accept:[application/json, */*] Accept-Encoding:[gzip] Content-Length:[6385] Content-Type:[application/json] User-Agent:[kube-apiserver-admission]]
request in detail - tls &{771 true false 49199 h2 true cicd-template-ac.cicd-template-ac.svc [] [] [] [] 0x63c190 [60 117 206 32 108 144 236 139 255 80 211 188]}
request in detail - tls peer certs []
request details - tls server name cicd-template-ac.cicd-template-ac.svc
request details -  host cicd-template-ac.cicd-template-ac.svc:443
request details - form map[]
request details - post form map[]
TLS in detail ---------
request in detail - tls peer certs []
request in detail - server name cicd-template-ac.cicd-template-ac.svc
request in detail -cipher suite  49199
request in detail did resume-  false
request in detail handshake complete-  true
request in detail - nego proto h2
request in detail - ocsp resp  []
request in detail - cert stamp  []
request in detail - tls unique [60 117 206 32 108 144 236 139 255 80 211 188]
request in detail - verified chain []
request in detail - version 771
TLS in detail end ---------
inside beaer token auth request--------------------------------
token: auth header nil
websocket auth ---------d000000000000
websocket: not a type request
request authentication status -  <nil> false <nil>

Why is kube-rbac-proxy authentication failing?
I am using the same key and certificate with which the API server and webserver work fine, so I hope there is no problem with the certificates.
Can you please share some thoughts on why I get the error "the server has asked for the client to provide credentials" when I apply the CRD? Why is kube-rbac-proxy not accepting the certificate?

Thanks,
Kannan V

Protection in case of attacker obtaining full control over a Pod

Hi

Thanks for the great tool. After experimenting with the project, there's one thing I'm not quite sure about. IIUC, kube-rbac-proxy runs as a reverse proxy alongside the component being protected. If an attacker obtains full control over a Pod (e.g. Prometheus), then it can still use the token inside the Pod to access other endpoints (e.g. kube-state-metrics), in which case the information is leaked anyway. Is there anything I'm missing here? Or maybe this introduction only refers to Pods apart from Prometheus.

I developed this proxy in order to be able to protect Prometheus metrics endpoints. In a scenario, where an attacker might obtain full control over a Pod, that attacker would have the ability to discover a lot of information about the workload as well as the current load of the respective workload.

enhancement: static authorization

To reduce API server load I suggest implementing a static authorization scheme. If configured, then:

  1. token access reviews would still be performed towards API server
  2. subject access reviews would not be issued but rather validated locally against a static configuration

According to @deads2k this helps tremendously in reducing API server load when kube-rbac-proxy is used in high-volume scraping targets like nodes (via node-exporter) or kubelets.

A prototype of a static authorizer is available for initial introspection of the idea at https://github.com/openshift/kube-rbac-proxy/pull/43/files.

The suggestion is to introduce a static array into the current configuration struct, i.e.:

type Config struct {
	Rewrites               *SubjectAccessReviewRewrites `json:"rewrites,omitempty"`
	ResourceAttributes     *ResourceAttributes          `json:"resourceAttributes,omitempty"`
	ResourceAttributesFile string                       `json:"-"`
	Static                 []StaticAuthorizationConfig   `json:"static,omitempty"`
}

type StaticAuthorizationConfig struct {
	User            UserConfig `json:"user,omitempty"`
	Verb            string     `json:"verb,omitempty"`
	Namespace       string     `json:"namespace,omitempty"`
	APIGroup        string     `json:"apiGroup,omitempty"`
	APIVersion      string     `json:"apiVersion,omitempty"`
	Resource        string     `json:"resource,omitempty"`
	Subresource     string     `json:"subresource,omitempty"`
	Name            string     `json:"name,omitempty"`
	ResourceRequest bool       `json:"resourceRequest,omitempty"`
	Path            string     `json:"path,omitempty"`
}

type UserConfig struct {
	Name   string   `json:"name,omitempty"`
	UID    string   `json:"uid,omitempty"`
	Groups []string `json:"groups,omitempty"`
}

This would then allow enabling static authorization declaratively for non-resource requests like so:

  config.yaml: |-
    "authorization":
      "static": [
        {
          "user":
            "name": "system:serviceaccount:monitoring:prometheus-k8s"
          "path": "/metrics"
          "verb": "get"
        }
      ]

The static authorizer could also be useful to enforce a rewrite. For example, to authorize a static user and enforce authorization to a specific namespace, one could configure:

  config.yaml: |-
    "authorization":
      "resourceAttributes":
        "namespace": "{{ .Value }}"
      "rewrites":
        "byQueryParameter":
          "name": "namespace"
      "static": [
        {
          "user":
            "name": "system:serviceaccount:monitoring:prometheus-k8s"
          "namespace": "my-namespace"
        }
      ]

I am happy to file a PR but was wondering if the maintainers have any overall objections to adding such functionality here.

cc @simonpasquier @brancz @paulfantom @lilic @deads2k

Crashes if container filesystem is readonly

I'm trying to run Prometheus in a cluster with a PodSecurityPolicy with readOnlyRootFilesystem: true. This makes the Docker copy-on-write filesystem read-only for all containers. (EmptyDirs and other volumes are still RW).

The issue is that kube-rbac-proxy seems to be trying to write logs to /tmp, and crashes when trying to do so because /tmp is read-only.

$ k -n monitoring logs kube-state-metrics-6f84ccbb7d-k25x6 kube-rbac-proxy-self
I0925 11:08:52.535386       1 main.go:166] Generating self signed cert as no cert is provided
log: exiting because of error: log: cannot create log: open /tmp/kube-rbac-proxy.kube-state-metrics-6f84ccbb7d-k25x6.unknownuser.log.INFO.20180925-110852.1: read-only file system

Ideally kube-rbac-proxy shouldn't log to /tmp, just to stdout/stderr.
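
A possible workaround, assuming the glog flags mentioned in the usage section are available in the deployed version, is to send logs to stderr so that nothing is written under /tmp. A sketch of the container args, with addresses as placeholders:

- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8081/
- --logtostderr=true        # log to stderr instead of files under /tmp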

kube-rbac-proxy segfaults when no client-ca is configured but client-cert auth is attempted

When a 3rd party attempts to use client-certificate authentication, kube-rbac-proxy will panic if it does not have any client CAs configured.

The observed panic:

goroutine 7 [running]:
net/http.(*conn).serve.func1(0xc00041e000)
	/usr/lib/golang/src/net/http/server.go:1824 +0x153
panic(0x1585460, 0x22749a0)
	/usr/lib/golang/src/runtime/panic.go:971 +0x499
k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).VerifyOptions(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:220 +0x58
k8s.io/apiserver/pkg/authentication/request/x509.(*Authenticator).AuthenticateRequest(0xc000504738, 0xc00042c200, 0xa65, 0x418b00, 0x0, 0x0)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/authentication/request/x509/x509.go:116 +0x87
k8s.io/apiserver/pkg/authentication/request/union.(*unionAuthRequestHandler).AuthenticateRequest(0xc00004bb60, 0xc00042c200, 0x414688, 0xc0000197c8, 0x249a006a, 0x31b6d74a11797403)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/authentication/request/union/union.go:56 +0xa8
k8s.io/apiserver/pkg/authentication/group.(*AuthenticatedGroupAdder).AuthenticateRequest(0xc000631e80, 0xc00042c200, 0xc0000198e8, 0x14, 0x20, 0x1)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/authentication/group/authenticated_group_adder.go:40 +0x55
github.com/brancz/kube-rbac-proxy/pkg/authn.(*DelegatingAuthenticator).AuthenticateRequest(0xc000504750, 0xc00042c200, 0xc0000198e8, 0xc0000198fb, 0xc0000198c0, 0xc0000198d0)
	/go/src/github.com/brancz/kube-rbac-proxy/pkg/authn/delegating.go:69 +0x3e
github.com/brancz/kube-rbac-proxy/pkg/proxy.(*kubeRBACProxy).Handle(0xc000115780, 0x192dbb0, 0xc0005362a0, 0xc00042c200, 0x17ec5b8)
	/go/src/github.com/brancz/kube-rbac-proxy/pkg/proxy/proxy.go:71 +0xa3
main.main.func1(0x192dbb0, 0xc0005362a0, 0xc00042c200)
	/go/src/github.com/brancz/kube-rbac-proxy/main.go:250 +0x132
net/http.HandlerFunc.ServeHTTP(0xc00004bbe0, 0x192dbb0, 0xc0005362a0, 0xc00042c200)
	/usr/lib/golang/src/net/http/server.go:2069 +0x44
net/http.(*ServeMux).ServeHTTP(0xc0001157c0, 0x192dbb0, 0xc0005362a0, 0xc00042c200)
	/usr/lib/golang/src/net/http/server.go:2448 +0x1ad
net/http.serverHandler.ServeHTTP(0xc0005369a0, 0x192dbb0, 0xc0005362a0, 0xc00042c200)
	/usr/lib/golang/src/net/http/server.go:2887 +0xa3
net/http.(*conn).serve(0xc00041e000, 0x1930020, 0xc000474d00)
	/usr/lib/golang/src/net/http/server.go:1952 +0x8cd
created by net/http.(*Server).Serve
	/usr/lib/golang/src/net/http/server.go:3013 +0x39b
2021/07/14 10:05:45 http: panic serving 10.128.2.12:51994: runtime error: invalid memory address or nil pointer dereference

Self generated cert for the kube-rbac-proxy only generates CN with timestamp

When running kube-rbac-proxy with only secure-listen-address and upstream set, the generated certificate's CN contains only a Unix timestamp; the hostname is missing from the CN:
subject=/CN=@1554404348
issuer=/CN=@1554404348

Expected:
CommonName: fmt.Sprintf("%s-ca@%d", host, time.Now().Unix())

Request for moving this project repo under kubernetes-sigs

We (Kubebuilder team) are planning to use kube-rbac-proxy for adding authn/authz support to the metrics endpoint exposed by the controllers written using Kubebuilder. We would like to avoid forking the repo and maintaining our own container images.

kube-rbac-proxy is a very useful project, and having the repo and the container images available under kubernetes will benefit other OSS projects. Have you considered moving it under kubernetes-sigs?

Does it support HTTPS?

From the documentation it seems that it provides authorization and authentication of the client (Prometheus in this case) but does not provide end-to-end TLS encryption of the communication.
Is my understanding correct?

Request client certificates

Hello. First of all, thank you for this proxy. It works like a charm.

I have a problem: there are no client certificates in the request. The problem seems to be related to the server TLS configuration. I think we need to change the TLS config option ClientAuth to RequestClientCert (the default value is NoClientCert).

srv.TLSConfig.ClientAuth = tls.RequestClientCert

The main idea behind this is to extract every client cert and validate it using the x509 authenticator on the application side.

Related PR.
Please forgive me if I get it wrong.

Support regex-based paths ignoring for RBAC authz/authn

Hi there!

I'd like to propose a new feature which would allow ignoring authn/authz for specific paths based on a regex. This would allow handling more complex authn/authz logic than the ignore-paths flag does.

I would suggest introducing an ignore-regex flag, which would accept a single Go-style regex and exclude any path that matches the regex from authn/authz. This exclusion would be in addition to the ignore-paths exclusion, meaning that if a path matches one of the paths from ignore-paths it will be excluded as well. Currently a path has to be equal to one of the paths from ignore-paths to be excluded.

Example usage would be specifying --ignore-regex='/path/(.)*' to exclude all subpaths from authn/authz.

Please let me know your thoughts. Once we agree on the solution I'll be happy to submit a PR.

Thanks,
Tomasz

Service accounts blocked in mirror pods

Use of ServiceAccounts is blocked in static (mirror) pods such as kube-scheduler. How can kube-rbac-proxy be used in this case? There is mention of using certificates for authentication, but kube-rbac-proxy fails with main.go:329] cannot find Service Account in pod to build in-cluster rest config
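
One hedged possibility, based on the --kubeconfig flag documented in the usage section (so the proxy does not rely on an in-cluster ServiceAccount), is to mount a kubeconfig with suitable credentials into the static pod. All names, paths and addresses below are placeholders:

containers:
- name: kube-rbac-proxy
  args:
  - --secure-listen-address=0.0.0.0:8443            # placeholder port
  - --upstream=http://127.0.0.1:10251/              # placeholder upstream
  - --kubeconfig=/etc/kube-rbac-proxy/kubeconfig    # file-based credentials for the TokenReview/SubjectAccessReview calls
  volumeMounts:
  - mountPath: /etc/kube-rbac-proxy
    name: kubeconfig
    readOnly: true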

unknown flag: --auth-token-audiences

I deployed node-exporter with kube-rbac-proxy as a sidecar, but kube-rbac-proxy logs 'unknown flag: --auth-token-audiences'. If another container of mine wants to get node-exporter metrics, what should I do?

Pass Authorization header with bearer token to upstream app

Is there a way in kube-rbac-proxy to pass the Authorization header with the bearer token to the upstream application? I can see only the option to pass the user and groups, but not the whole Authorization header. I guess it's removed by apiserver authentication https://github.com/kubernetes/apiserver/blob/master/pkg/authentication/request/bearertoken/bearertoken.go#L58

If it is really not possible at the moment, would you be interested in this contribution? Are there any security issues with it?
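
For reference, the --auth-header-* flags in the usage section above forward the authenticated identity (user and groups) to the upstream, though not the original bearer token. A minimal sketch of the container args, with addresses as placeholders:

- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8081/
- --auth-header-fields-enabled=true
- --auth-header-user-field-name=x-remote-user       # defaults, shown explicitly for clarity
- --auth-header-groups-field-name=x-remote-groups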

Support workloads with http probes

To secure an application using kube-rbac-proxy, the application needs to bind on localhost. When the Pod is not running in the host network, the application is not reachable for the kubelet to perform health checks.
To my knowledge there is no way to configure the kubelet to use service account tokens when performing HTTP probes, making it impossible to use kube-rbac-proxy with health checks without adding yet another proxy just for them.

I see two possible ways to solve it:

  • authorize the kubelet HTTP probe to "/healthz" (not sure if it's possible; it could be risky to allow access to nonResourceURLs on all Pods)
  • allow skipping authorization for whitelisted endpoints in kube-rbac-proxy (see the sketch below)
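
A sketch of the second option using the --ignore-paths flag documented in the usage section; the health path and the addresses are assumptions about where the upstream serves its health endpoint:

- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8081/
- --ignore-paths=/healthz       # proxy /healthz without authn/authz so kubelet probes can succeed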

Error after compiling to ARM architecture

Since this image is a requirement for deploying some CoreOS prometheus-operator images, I tried compiling it on an ARM64 board. I changed the GOARCH and the compilation went fine, but when executing I get:

root@kubemaster1:/home/rock64/go/src/github.com/brancz/kube-rbac-proxy# export PATH=$PATH:/usr/local/go/bin
root@kubemaster1:/home/rock64/go/src/github.com/brancz/kube-rbac-proxy# export GOPATH=/home/rock64/go
root@kubemaster1:/home/rock64/go/src/github.com/brancz/kube-rbac-proxy# export PATH=$PATH:$GOPATH/bin
root@kubemaster1:/home/rock64/go/src/github.com/brancz/kube-rbac-proxy# make build
>> building for linux/arm to _output/linux/arm/kube-rbac-proxy
root@kubemaster1:/home/rock64/go/src/github.com/brancz/kube-rbac-proxy# ./_output/linux/arm/kube-rbac-proxy
F0225 20:59:00.638916    6257 main.go:107] Failed to build Kubernetes rest-config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
goroutine 1 [running]:
github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog.stacks(0x442008db00, 0x442017a000, 0xc0, 0x114)
	/home/rock64/go/src/github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog/glog.go:766 +0xb0
github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog.(*loggingT).output(0x13e2fe0, 0x4400000003, 0x44200c82c0, 0x13415e6, 0x7, 0x6b, 0x0)
	/home/rock64/go/src/github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog/glog.go:717 +0x2b8
github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog.(*loggingT).printf(0x13e2fe0, 0x3, 0xbdeea1, 0x2a, 0x4420029e08, 0x1, 0x1)
	/home/rock64/go/src/github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog/glog.go:655 +0x108
github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog.Fatalf(0xbdeea1, 0x2a, 0x4420029e08, 0x1, 0x1)
	/home/rock64/go/src/github.com/brancz/kube-rbac-proxy/vendor/github.com/golang/glog/glog.go:1145 +0x54
main.main()
	/home/rock64/go/src/github.com/brancz/kube-rbac-proxy/main.go:107 +0x678

The same happens on ARM32 and ARM64.

Support audience validation

Kubernetes now has a TokenRequest API, through which a ServiceAccount can create new tokens that are scoped to audiences. For example, this can be used by Prometheus to create a new token that is purely meant for talking to kubelets. This is desirable, because otherwise the kubelet could simply impersonate the Prometheus server by reusing the ServiceAccount token that was handed to it in plaintext.

Passing audiences to the TokenReview API is not implemented yet, so implementing support for this in the kube-rbac-proxy is currently blocked on this merging in Kubernetes. (RE: kubernetes/kubernetes#62692)
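
For reference, the flag list above now includes --auth-token-audiences. A hedged sketch of how a client could request an audience-scoped token with a standard projected ServiceAccount token volume; the audience name is a placeholder and must match the proxy's --auth-token-audiences value:

volumes:
- name: kube-rbac-proxy-token
  projected:
    sources:
    - serviceAccountToken:
        audience: kube-rbac-proxy    # placeholder audience
        expirationSeconds: 3600
        path: token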

Proxy starts listening and then stops?

I was debating where to put this issue (prometheus-operator or here).
I am using Kubernetes 1.11.5.

I expect the kube-rbac-proxy-main container to listen on 8443 and the kube-rbac-proxy-self container to listen on 9443.
It starts listening, but after a while it just stops and prints no error.

$ kubectl exec -ti --namespace=monitoring kube-state-metrics-7f8cbb777-4jk9b sh

~ $ netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:8081          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:8082          0.0.0.0:*               LISTEN      -
tcp        0      0 10.42.4.17:42178        10.43.0.1:443           ESTABLISHED -
tcp        0      0 10.42.4.17:42174        10.43.0.1:443           ESTABLISHED -
tcp        2      0 :::8443                 :::*                    LISTEN      1/kube-rbac-proxy
tcp        2      0 :::9443                 :::*                    LISTEN      -
tcp      142      0 ::ffff:10.42.4.17:8443  ::ffff:10.42.6.14:57838 ESTABLISHED -
tcp      142      0 ::ffff:10.42.4.17:9443  ::ffff:10.42.6.14:50506 ESTABLISHED -
tcp      142      0 ::ffff:10.42.4.17:9443  ::ffff:10.42.6.14:50594 ESTABLISHED -
tcp      142      0 ::ffff:10.42.4.17:8443  ::ffff:10.42.6.14:57748 ESTABLISHED -
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node PID/Program name    Path
~ $ netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:8081          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:8082          0.0.0.0:*               LISTEN      -
tcp        0      0 10.42.4.17:42856        10.43.0.1:443           ESTABLISHED -
tcp        0      0 10.42.4.17:42174        10.43.0.1:443           ESTABLISHED -
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node PID/Program name    Path

We use pretty standard jsonnet from kube-prometheus with no modifications to kube-state-metrics.
The generated kube-state-metrics Deployment looks like this:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: kube-state-metrics
  name: kube-state-metrics
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      containers:
      - args:
        - --secure-listen-address=:8443
        - --tls-cipher-suites=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - --upstream=http://127.0.0.1:8081/
        image: quay.io/coreos/kube-rbac-proxy:v0.4.0
        name: kube-rbac-proxy-main
        ports:
        - containerPort: 8443
          name: https-main
        resources:
          limits:
            cpu: 20m
            memory: 40Mi
          requests:
            cpu: 10m
            memory: 20Mi
      - args:
        - --secure-listen-address=:9443
        - --tls-cipher-suites=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - --upstream=http://127.0.0.1:8082/
        image: quay.io/coreos/kube-rbac-proxy:v0.4.0
        name: kube-rbac-proxy-self
        ports:
        - containerPort: 9443
          name: https-self
        resources:
          limits:
            cpu: 20m
            memory: 40Mi
          requests:
            cpu: 10m
            memory: 20Mi
      - args:
        - --host=127.0.0.1
        - --port=8081
        - --telemetry-host=127.0.0.1
        - --telemetry-port=8082
        image: quay.io/coreos/kube-state-metrics:v1.5.0
        name: kube-state-metrics
        resources:
          limits:
            cpu: 100m
            memory: 150Mi
          requests:
            cpu: 100m
            memory: 150Mi
      - command:
        - /pod_nanny
        - --container=kube-state-metrics
        - --cpu=100m
        - --extra-cpu=2m
        - --memory=150Mi
        - --extra-memory=30Mi
        - --threshold=5
        - --deployment=kube-state-metrics
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/coreos/addon-resizer:1.0
        name: addon-resizer
        resources:
          limits:
            cpu: 50m
            memory: 30Mi
          requests:
            cpu: 10m
            memory: 30Mi
      nodeSelector:
        beta.kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: kube-state-metrics

I tried to add --logtostderr,
and then I see this during startup:

kube-state-metrics-597fc98779-wx84d kube-rbac-proxy-self I0118 14:53:27.868174       1 main.go:209] Generating self signed cert as no cert is provided
kube-state-metrics-597fc98779-wx84d kube-rbac-proxy-main I0118 14:53:27.467248       1 main.go:209] Generating self signed cert as no cert is provided
kube-state-metrics-597fc98779-wx84d kube-rbac-proxy-main I0118 14:53:42.469645       1 main.go:242] Listening securely on :8443
kube-state-metrics-597fc98779-wx84d kube-rbac-proxy-self I0118 14:53:36.268832       1 main.go:242] Listening securely on :9443
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.271446       1 main.go:80] Using default collectors
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.271542       1 main.go:88] Using all namespace
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.271554       1 main.go:124] metric white-blacklisting: blacklisting the following items: 
kube-state-metrics-597fc98779-wx84d kube-state-metrics W0118 14:53:28.271571       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.272776       1 main.go:166] Testing communication with server
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.284837       1 main.go:171] Running with Kubernetes cluster version: v1.11. git version: v1.11.5. git tree state: clean. commit: 753b2dbc622f5cc417845f0ff8a77f539a4213ea. platform: linux/amd64
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.284860       1 main.go:173] Communication with server successful
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.285139       1 main.go:182] Starting kube-state-metrics self metrics server: 127.0.0.1:8082
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.371624       1 builder.go:112] Active collectors: configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,jobs,limitranges,namespaces,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets
kube-state-metrics-597fc98779-wx84d kube-state-metrics I0118 14:53:28.371649       1 main.go:208] Starting metrics server: 127.0.0.1:8081
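
Since the proxy containers disappear without logging an error, a first diagnostic step is to check whether they were restarted and why, for example whether they were OOM-killed given the tight 40Mi limits in the Deployment above. A sketch, reusing the pod name from the netstat session (adjust pod name and namespace as needed):

kubectl -n monitoring logs kube-state-metrics-7f8cbb777-4jk9b -c kube-rbac-proxy-main --previous
kubectl -n monitoring get pod kube-state-metrics-7f8cbb777-4jk9b \
  -o jsonpath='{.status.containerStatuses[?(@.name=="kube-rbac-proxy-main")].lastState.terminated.reason}'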

kube-rbac-proxy keeps restarting

Hi

I have GKE v1.17.14-gke.1600 with node-exporter:v0.18.1 and kube-rbac-proxy:v0.4.1.

Recently the node-exporter pods started to CrashLoopBackOff due to kube-rbac-proxy.
From the logs of the kube-rbac-proxy container I see the following:

I 2021-02-17T09:25:46.844609Z Generating self signed cert as no cert is provided 
I 2021-02-17T09:26:46.444021Z Starting TCP socket on [10.132.0.22]:9100 
F 2021-02-17T09:26:46.444366Z failed to listen on secure address: listen tcp 10.132.0.22:9100: bind: cannot assign requested address 
I 2021-02-17T09:26:50.245642Z Generating self signed cert as no cert is provided 
I 2021-02-17T09:27:36.644473Z Starting TCP socket on [10.132.0.57]:9100 
I 2021-02-17T09:27:36.644671Z Listening securely on [10.132.0.57]:9100 
I 2021-02-17T11:25:00.832301Z Generating self signed cert as no cert is provided 
I 2021-02-17T11:25:27.932392Z Starting TCP socket on [10.132.0.57]:9100 
F 2021-02-17T11:25:28.031565Z failed to listen on secure address: listen tcp 10.132.0.57:9100: bind: cannot assign requested address 
I 2021-02-17T11:25:32.433448Z Generating self signed cert as no cert is provided 
I 2021-02-17T11:26:06.232816Z Starting TCP socket on [10.132.0.62]:9100 
I 2021-02-17T11:26:06.331483Z Listening securely on [10.132.0.62]:9100 
I 2021-02-17T11:34:16.441464Z Generating self signed cert as no cert is provided 
I 2021-02-17T11:34:50.542939Z Starting TCP socket on [10.132.0.62]:9100 
F 2021-02-17T11:34:50.640542Z failed to listen on secure address: listen tcp 10.132.0.62:9100: bind: cannot assign requested address 
I 2021-02-17T11:34:53.743324Z Generating self signed cert as no cert is provided 
I 2021-02-17T11:35:21.040676Z Starting TCP socket on [10.132.0.36]:9100 
I 2021-02-17T11:35:21.040981Z Listening securely on [10.132.0.36]:9100 
I 2021-02-17T11:53:09.543923Z Generating self signed cert as no cert is provided 
I 2021-02-17T11:53:39.643516Z Starting TCP socket on [10.132.0.66]:9100 
I 2021-02-17T11:53:39.745283Z Listening securely on [10.132.0.66]:9100 

It seems like it tries to bind to port 9100 on the pod IP, fails to do that, and keeps restarting and getting a new IP until it finally works.
Any idea?

Thanks
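
A quick way to see which address the proxy is being told to bind to, and where that value comes from, is to look at the DaemonSet arguments (a diagnostic sketch; the DaemonSet and namespace names are the kube-prometheus defaults and may differ in your setup):

kubectl -n monitoring get daemonset node-exporter -o yaml | grep -E -- '--secure-listen-address|fieldPath'

In the kube-prometheus manifests the listen address is usually derived from the pod IP via the downward API (an IP environment variable populated from status.podIP). If that injected value no longer matches the IP the pod actually has when the proxy starts, it could explain exactly the bind error shown above.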

Run the process as non-root

Hello, thanks for sharing this project. Is there any special requirement for running the process as root?

Can we build the Dockerfile with

USER nonroot:nonroot

safely?

We're running kube-rbac-proxy in an environment with PodSecurityPolicy enabled and want to restrict it to non-root.

arm64 image pulled shows amd64 as its arch

I pulled the arm64 image by digest as below:

docker pull quay.io/brancz/kube-rbac-proxy@sha256:a178cadf8a6de431e8a72a08f77e0cf377efcabd3a9bedbd8f9f6501947596e7


But docker inspect shows it's amd64:
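For reference, a sketch of how to check the architecture of the locally pulled image and what is advertised behind the same reference (requires a Docker CLI with manifest support):

docker inspect --format '{{.Os}}/{{.Architecture}}' quay.io/brancz/kube-rbac-proxy@sha256:a178cadf8a6de431e8a72a08f77e0cf377efcabd3a9bedbd8f9f6501947596e7
docker manifest inspect quay.io/brancz/kube-rbac-proxy@sha256:a178cadf8a6de431e8a72a08f77e0cf377efcabd3a9bedbd8f9f6501947596e7

If the digest points at a manifest list, docker pull resolves it to the host's architecture (amd64 here); if it points at a per-architecture image, the second command shows a single image manifest rather than a platform list.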

Token Auth is fine, but User Auth (cert) is not processed

Hey there.

If I understood it right, kube-rbac-proxy works with both tokens (ServiceAccount subjects in RBAC) and signed certificates (User subjects in RBAC). I'm having trouble authenticating with just certs.

This proxy was developed in order to restrict requests to only those Pods, that present a valid and RBAC authorized token or client TLS certificate.

I just deployed kube-rbac-proxy as a sidecar to a prometheus-node-exporter. The plan is to protect the /metrics endpoint.

I followed the example, and token auth works well enough:

# TOKEN=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )
# curl -k --header "Authorization: Bearer $TOKEN" https://kubernetes-node:9100/metrics
go_gc_duration_seconds{quantile="0"} 4.3349e-05
...

I enabled verbose logging to see what it does. When I try with tokens as above, a lot of messages are generated:

POST https://master.kubernetes/apis/authentication.k8s.io/v1beta1/tokenreviews 
...
POST https://master.kubernetes/apis/authorization.k8s.io/v1beta1/subjectaccessreviews 201 Created in 2 milliseconds

When I try with a simple CA-signed certificate via curl, it generates no messages and gives me 'Unauthorized':

# curl -k https://kubernetes-node:9100/metrics --cert ./tls.crt --key ./tls.key --cacert ./tls.ca
Unauthorized

No logs at all are generated.

I made sure I'm using the correct certificates and set this parameter:

      - --client-ca-file=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | openssl x509 -noout -subject
subject= /CN=X-Homologacao

openssl x509 -in ./tls.ca | openssl x509 -noout -issuer 
issuer= /CN=X-Homologacao
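
Two additional checks that may help narrow this down (a sketch reusing the same file names as above):

# Does the client certificate actually chain to the CA the proxy was given via --client-ca-file?
openssl verify -CAfile /var/run/secrets/kubernetes.io/serviceaccount/ca.crt ./tls.crt

# What subject does the client certificate present? The CN is what would normally show up
# as the user name in the SubjectAccessReview, so the RBAC binding has to reference it.
openssl x509 -in ./tls.crt -noout -subject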

My master is protected by that very same CA:

# systemctl cat kube-apiserver.service
...
--client-ca-file=/etc/kubernetes/ssl/ca.pem
...

# md5sum /etc/kubernetes/ssl/ca.pem
2e03ecc30fe8f662760781a59a5941c0  /etc/kubernetes/ssl/ca.pem
# md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 
2e03ecc30fe8f662760781a59a5941c0  /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

My curl with certs against the API server works just fine:

curl -k https://master.hom.estaleiro.serpro:443/api --cert tls.crt --key ./tls.key --cacert ./tls.ca
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
...

Am I doing this wrong or is it a bug?

Not able to authenticate via kube-rbac-proxy

I am developing an admission controller that is protected by kube-rbac-proxy. I have the admission controller on the '/' endpoint and Prometheus metrics on the '/metrics' endpoint. I am able to get the Prometheus metrics using a bearer token, but I get an authentication error on the admission controller: the proxy expects a bearer token, but the incoming admission request doesn't contain one. How can I set up kube-rbac-proxy so that both the Prometheus metrics and the admission controller endpoint are available?
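
A reproduction sketch of the two behaviors described above (the service name, port, and payload file are placeholders, not taken from the actual setup):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Metrics endpoint with an explicit bearer token: authenticates and gets authorized
curl -k -H "Authorization: Bearer $TOKEN" https://<webhook-service>:8443/metrics
# Admission endpoint as it is called in practice: no bearer token is attached, so the proxy rejects it
curl -k -X POST -H "Content-Type: application/json" --data @admissionreview.json https://<webhook-service>:8443/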

x509 - cannot parse dnsName

The current version v0.3.0 on quay.io is compiled with a version of Go that causes issues with intermediate certs. This affects people running prometheus-operator from the generated manifests.

Logs from the proxy running in a node-exporter pod:

config.go:330] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: error reading /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: x509: cannot parse dnsName "Self-Signed Root CA for all Dev Kubernetes Clusters"

The golang issue: golang/go#23995 (comment)

As a result Kubernetes go client has experienced these issues: kubernetes/client-go#371.

I can confirm rebuilding kube-rbac-proxy with make container (now using go v1.10.3) fixes the issue.

Tested on Kubernetes version:

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

http/2(h2, h2c) and gRPC support

@brancz Hi, thanks for sharing a very inspiring project 👍

I'm doing a POC to add support for gRPC, but I'm not really sure whether it is feasible.
I'd greatly appreciate it if you could leave comments on the state of my POC.
Thanks!

https://github.com/mumoshu/kube-rbac-proxy/tree/grpc-support

In a nutshell:

  • It runs a gRPC proxy instead of the http(s) one when a specific flag is passed.
  • In gRPC we don't have a request path / resource name, but we do have a "method name" instead.
    • Given that, my idea is to treat the gRPC method name as a nonResourceURL in K8s RBAC policy, so that we can authorize the method call with RBAC.
  • I'm relying on a bearer token passed via the authorization header (= metadata in gRPC?) to be used for authentication.

My end goal / intended use case for adding gRPC support to kube-rbac-proxy is to achieve authn/authz for Helm/Tiller. I don't want to implement yet another RBAC system inside Helm/Tiller, but rather reuse K8s RBAC instead.

helm/helm#1918 (comment)

Ideally, kube-rbac-proxy could authenticate the client and authorize the RPC call with RBAC. Once authorized, kube-rbac-proxy could add the authn result to the metadata, to be used by Tiller (the upstream of kube-rbac-proxy) to "impersonate" the user. Tiller could then CRUD k8s resources as that user.
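
To make the nonResourceURL idea a bit more concrete, a sketch of how such an authorization could be expressed and checked with existing Kubernetes tooling (the method path and user are illustrative, not taken from the POC branch):

# gRPC calls are HTTP/2 POSTs, so the check uses the "post" verb on the full method path
kubectl auth can-i post /hapi.services.tiller.ReleaseService/InstallRelease --as=jane@example.com
# A matching ClusterRole rule would look roughly like:
#   nonResourceURLs: ["/hapi.services.tiller.ReleaseService/*"]
#   verbs: ["post"]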

WDYT?

Unable to authenticate the request due to an error

info

kubernetes version: 1.12.4
kernel: 3.10.0-514.el7.x86_64
kube-rbac-proxy: 0.4.1(kube-prometheus deploy)

steps

The cluster was stable for several weeks. Today a k8s node rebooted; after that, kube-rbac-proxy on this node started logging errors, and the Prometheus data from this node also ended up in a wrong state.

The kubernetes service ClusterIP is 10.250.0.1.

node-exporter pod, kube-rbac-proxy container log:

I0409 03:35:18.299596  100015 main.go:213] Generating self signed cert as no cert is provided
I0409 03:35:18.793646  100015 main.go:243] Starting TCP socket on 10.76.3.28:9100
I0409 03:35:18.793816  100015 main.go:250] Listening securely on 10.76.3.28:9100
E0409 03:37:01.503950  100015 webhook.go:106] Failed to make webhook authenticator request: Post https://10.250.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs
E0409 03:37:01.504454  100015 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.250.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs
E0409 03:37:31.505852  100015 webhook.go:106] Failed to make webhook authenticator request: Post https://10.250.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs
E0409 03:37:31.505891  100015 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.250.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs
E0409 03:38:01.504253  100015 webhook.go:106] Failed to make webhook authenticator request: Post https://10.250.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs
E0409 03:38:01.504282  100015 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.250.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs
E0409 03:38:31.503503  100015 webhook.go:106] Failed to make webhook authenticator request: Post https://10.250.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs

prometheus adapter log:

I0409 08:43:04.366679       1 adapter.go:91] successfully using in-cluster auth
F0409 08:43:04.371165       1 adapter.go:252] unable to install resource metrics API: unable to construct dynamic discovery mapper: unable to populate initial set of REST mappings: Get https://10.250.0.1:443/api?timeout=32s: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs

prometheus log:

level=warn ts=2019-04-09T09:16:54.049769503Z caller=main.go:274 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2019-04-09T09:16:54.049886154Z caller=main.go:321 msg="Starting Prometheus" version="(version=2.8.0, branch=HEAD, revision=59369491cfdfe8dcb325723d6d28a837887a07b9)"
level=info ts=2019-04-09T09:16:54.049918597Z caller=main.go:322 build_context="(go=go1.11.5, user=root@4c4d5c29b71f, date=20190312-07:46:58)"
level=info ts=2019-04-09T09:16:54.049944177Z caller=main.go:323 host_details="(Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 prometheus-k8s-1 (none))"
level=info ts=2019-04-09T09:16:54.049970323Z caller=main.go:324 fd_limits="(soft=65536, hard=65536)"
level=info ts=2019-04-09T09:16:54.050003044Z caller=main.go:325 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-04-09T09:16:54.05173364Z caller=main.go:640 msg="Starting TSDB ..."
level=info ts=2019-04-09T09:16:54.051798186Z caller=web.go:418 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-04-09T09:16:54.057115202Z caller=main.go:655 msg="TSDB started"
level=info ts=2019-04-09T09:16:54.057275495Z caller=main.go:724 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-04-09T09:16:54.061502423Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-09T09:16:54.062367489Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-09T09:16:54.063110591Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-09T09:16:54.065357392Z caller=kubernetes.go:191 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=error ts=2019-04-09T09:16:54.071039788Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:264: Failed to list *v1.Pod: Get https://10.250.0.1:443/api/v1/namespaces/monitoring/pods?limit=500&resourceVersion=0: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs"
level=error ts=2019-04-09T09:16:54.071265466Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:264: Failed to list *v1.Pod: Get https://10.250.0.1:443/api/v1/namespaces/kube-system/pods?limit=500&resourceVersion=0: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs"
level=error ts=2019-04-09T09:16:54.07145928Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:262: Failed to list *v1.Endpoints: Get https://10.250.0.1:443/api/v1/namespaces/monitoring/endpoints?limit=500&resourceVersion=0: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs"
level=error ts=2019-04-09T09:16:54.071658761Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:263: Failed to list *v1.Service: Get https://10.250.0.1:443/api/v1/namespaces/kube-system/services?limit=500&resourceVersion=0: x509: cannot validate certificate for 10.250.0.1 because it doesn't contain any IP SANs"
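
One way to confirm which SANs the serving certificate behind 10.250.0.1 actually carries (a diagnostic sketch, run from a node or pod that can reach the service IP):

echo | openssl s_client -connect 10.250.0.1:443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If the service ClusterIP is missing from the IP SANs, the API server serving certificate would need to be reissued to include it (or clients pointed at a name that is present in the SANs).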

Thanks for any suggestions.
