
insightCloudSec | RBAC Tool For Kubernetes

Kubernetes RBAC

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.

Permissions are purely additive (there are no “deny” rules).

A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in. ClusterRole, by contrast, is a non-namespaced resource. ClusterRoles have several uses. You can use a ClusterRole to:

  • define permissions on namespaced resources and be granted within individual namespace(s)
  • define permissions on namespaced resources and be granted across all namespaces
  • define permissions on cluster-scoped resources

If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.
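For illustration, a minimal sketch (names are hypothetical) of the difference: a Role carries metadata.namespace, while a ClusterRole does not and can be bound cluster-wide or per-namespace.

```yaml
# Namespaced: grants read access to Pods inside the "dev" namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Cluster-scoped: no namespace field; grant it per-namespace with a RoleBinding
# or across all namespaces with a ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-cluster
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```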

rbac-tool simplifies querying and creating RBAC policies.

Install

Standalone

Download the latest from the release page

curl https://raw.githubusercontent.com/alcideio/rbac-tool/master/download.sh | bash

kubectl plugin // krew

$ kubectl krew install rbac-tool

rbac-tool

A collection of Kubernetes RBAC tools to sugar coat Kubernetes RBAC complexity


Usage:
  rbac-tool [command]

Available Commands:
  analysis        Analyze RBAC permissions and highlight overly permissive principals, risky permissions, etc.
  auditgen        Generate RBAC policy from Kubernetes audit events
  bash-completion Generate bash completion. source <(rbac-tool bash-completion)
  generate        Generate Role or ClusterRole and reduce the use of wildcards
  help            Help about any command
  lookup          RBAC Lookup by subject (user/group/serviceaccount) name
  policy-rules    RBAC List Policy Rules For subject (user/group/serviceaccount) name
  show            Generate ClusterRole with all available permissions from the target cluster
  version         Print rbac-tool version
  visualize       A RBAC visualizer
  who-can         Shows which subjects have RBAC permissions to perform an action
  whoami          Shows the subject for the current context with which one authenticates with the cluster
  
Flags:
  -h, --help      help for rbac-tool
  -v, --v Level   number for the log level verbosity

Use "rbac-tool [command] --help" for more information about a command.

rbac-tool viz

A Kubernetes RBAC visualizer that generates a graph in dot file format or in HTML format.


By default 'rbac-tool viz' connects to the local cluster (pointed to by kubeconfig) and creates a RBAC graph of the actively running workloads in all namespaces except kube-system.

See run options on how to render specific namespaces, other clusters, etc.

#Render Locally
rbac-tool viz --outformat dot && cat rbac.dot | dot -Tpng > rbac.png  && open rbac.png

# Render Online
https://dreampuf.github.io/GraphvizOnline

Examples:

# Scan the cluster pointed by the kubeconfig context 'myctx'
rbac-tool viz --cluster-context myctx
# Scan and create a PNG image from the graph
rbac-tool viz --outformat dot --exclude-namespaces=soemns && cat rbac.dot | dot -Tpng > rbac.png && google-chrome rbac.png
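As a rough sketch of what the dot output looks like (structure inferred from the tool's output; node and namespace names below are hypothetical), and how it feeds into Graphviz:

```shell
# Hand-written stand-in for the file 'rbac-tool viz --outformat dot' emits.
cat > rbac.dot <<'EOF'
digraph rbac {
  subgraph cluster_ns {
    label = "namespace: default";
    "ServiceAccount/default" -> "Role/pod-reader" [label="RoleBinding/read-pods"];
  }
}
EOF
# Render locally if graphviz is installed:
# dot -Tpng rbac.dot > rbac.png
grep -c 'digraph' rbac.dot
```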

rbac-tool show

Generate sample ClusterRole with all available permissions from the target cluster.

rbac-tool reads the available API Groups and resources from the Kubernetes discovery API and, based on the command line options, generates an explicit ClusterRole with the available resource permissions. Examples:

# Generate a ClusterRole with all the available permissions for core and apps api groups
rbac-tool show  --for-groups=,apps

rbac-tool analysis

Analyze RBAC permissions and highlight overly permissive principals, risky permissions, etc. The command supports a custom analysis rule set, as well as custom exceptions (global and per-rule).

The default rule set can be found here

Examples:

# Analyze the cluster pointed by the kubeconfig context 'myctx' with the internal analysis rule set
rbac-tool analysis --cluster-context myctx
# Analyze the cluster pointed by kubeconfig with the the provided analysis rule set
rbac-tool analysis --config myruleset.yaml

rbac-tool lookup

Look up the Roles/ClusterRoles attached to a User/ServiceAccount/Group, with or without a regex

Examples:

# Search All Service Accounts
rbac-tool lookup
# Search Service Accounts that match myname exactly
rbac-tool lookup myname
# Search All Service Accounts that contain myname
rbac-tool lookup -e '.*myname.*'
# Lookup System Accounts (all accounts that start with system: )
rbac-tool lookup -e '^system:'
  SUBJECT                                         | SUBJECT TYPE | SCOPE       | NAMESPACE   | ROLE                                                                 | BINDING
+-------------------------------------------------+--------------+-------------+-------------+----------------------------------------------------------------------+---------------------------------------------------+
  system:anonymous                                | User         | Role        | kube-public | kubeadm:bootstrap-signer-clusterinfo                                 | kubeadm:bootstrap-signer-clusterinfo
  system:authenticated                            | Group        | ClusterRole |             | system:basic-user                                                    | system:basic-user
  system:authenticated                            | Group        | ClusterRole |             | system:public-info-viewer                                            | system:public-info-viewer
  system:authenticated                            | Group        | ClusterRole |             | system:discovery                                                     | system:discovery
  system:bootstrappers:kubeadm:default-node-token | Group        | ClusterRole |             | kubeadm:get-nodes                                                    | kubeadm:get-nodes
  system:bootstrappers:kubeadm:default-node-token | Group        | ClusterRole |             | system:node-bootstrapper                                             | kubeadm:kubelet-bootstrap
  system:bootstrappers:kubeadm:default-node-token | Group        | ClusterRole |             | system:certificates.k8s.io:certificatesigningrequests:nodeclient     | kubeadm:node-autoapprove-bootstrap
  system:bootstrappers:kubeadm:default-node-token | Group        | Role        | kube-system | kube-proxy                                                           | kube-proxy
  system:bootstrappers:kubeadm:default-node-token | Group        | Role        | kube-system | kubeadm:nodes-kubeadm-config                                         | kubeadm:nodes-kubeadm-config
  system:bootstrappers:kubeadm:default-node-token | Group        | Role        | kube-system | kubeadm:kubelet-config                                               | kubeadm:kubelet-config
  system:kube-controller-manager                  | User         | ClusterRole |             | system:kube-controller-manager                                       | system:kube-controller-manager
...
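The -e flag takes a regular expression over subject names; the matching semantics can be illustrated with plain grep -E (the subject names below are hypothetical):

```shell
# A stand-in list of subjects, as rbac-tool lookup would enumerate them.
subjects='system:anonymous
system:authenticated
myname
app-myname-sa'
# Anchored prefix match, as in: rbac-tool lookup -e '^system:'
printf '%s\n' "$subjects" | grep -E '^system:'
# Substring match, as in: rbac-tool lookup -e '.*myname.*'
printf '%s\n' "$subjects" | grep -E '.*myname.*'
```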

rbac-tool who-can

Shows which subjects have RBAC permissions to perform an action denoted by VERB on an object denoted as (KIND | KIND/NAME | NON-RESOURCE-URL)

  • VERB is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc.
  • KIND is a Kubernetes resource kind. Shortcuts and API groups will be resolved, e.g. 'po' or 'deploy'.
  • NAME is the name of a particular Kubernetes resource.
  • NON-RESOURCE-URL is a partial URL that starts with "/".

Examples:

# Who can read ConfigMap resources
rbac-tool who-can get cm

# Who can watch Deployments
rbac-tool who-can watch deployments.apps

# Who can read the Kubernetes API endpoint /apis
rbac-tool who-can get /apis

# Who can read a secret resource by the name some-secret
rbac-tool who-can get secret/some-secret

rbac-tool policy-rules

List Kubernetes RBAC policy rules for a given User/ServiceAccount/Group with or without regex

Examples:

# List policy rules for system unauthenticated group
rbac-tool policy-rules -e '^system:unauth'

Output:

  TYPE  | SUBJECT                | VERBS | NAMESPACE | API GROUP | KIND | NAMES | NONRESOURCEURI                              
+-------+------------------------+-------+-----------+-----------+------+-------+--------------------------------------------+
  Group | system:unauthenticated | get   | *         | -         | -    | -     | /healthz,/livez,/readyz,/version,/version/  

Leveraging JMESPath to filter and transform RBAC Policy rules.

For example: Who Can Read Secrets

rbac-tool policy-rules -o json  | jp "[? @.allowedTo[? (verb=='get' || verb=='*') && (apiGroup=='core' || apiGroup=='*') && (resource=='secrets' || resource == '*')  ]].{name: name, namespace: namespace, kind: kind}"

See https://jmespath.org/

rbac-tool auditgen

Generate RBAC policy from Kubernetes audit events. The audit source format can be:

  • a Kubernetes List object that contains Audit Events
  • newline-separated Audit Event objects

The audit source can be a file, a directory, or an HTTP URL.

rbac-tool auditgen -f audit.log

This command is based on this prior work.
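The core idea can be sketched in shell: collapse the (verb, resource) pairs observed across audit events into a distinct set, where each unique pair becomes a rule in the generated policy. Real audit events are full JSON objects; the flattened two-column lines below are a stand-in.

```shell
# Hypothetical flattened audit records: "<verb> <resource>" per line.
events='get pods
list pods
get pods
get secrets
list pods'
# Deduplicate the observed (verb, resource) pairs -- the resulting set is
# what an auditgen-style tool would turn into policy rules.
printf '%s\n' "$events" | sort -u
```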

rbac-tool gen

Examples are the simplest way to describe how rbac-tool gen can help:

  • Generate a ClusterRole policy that allows to read everything except secrets and services
  • Generate a Role policy that allows create,update,get,list (read/write) everything except secrets, services, ingresses, networkpolicies
  • Generate a Role policy that allows create,update,get,list (read/write) everything except statefulsets

rbac-tool generates a RBAC Role or ClusterRole resource while reducing the use of wildcards, and supports deny semantics for specific Kubernetes clusters.

rbac-tool whoami

Shows the subject for the current context with which one authenticates with the cluster.

Examples:

rbac-tool whoami --cluster-context myctx

How rbac-tool gen works?

rbac-tool reads the available API Groups and resources from the Kubernetes discovery API; these represent the "world" of resources. Based on the command line options, it generates an explicit Role/ClusterRole that avoids wildcards by expanding them against the available "world" resources.
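The expansion can be sketched as set subtraction (the resource names below are hypothetical; the real tool queries the live discovery API):

```shell
# The discovered "world" of resources, standing in for the discovery API response.
world='pods
configmaps
secrets
services'
# Deny list from --deny-resources; allowed = world minus denied, spelled out
# explicitly as resource names instead of a '*' wildcard.
printf '%s\n' "$world" | grep -vx -e 'secrets' -e 'services'
```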

Command Line Examples

Examples generated against Kubernetes cluster v1.16 deployed using KIND.

Generate a ClusterRole policy that allows to read everything except secrets and services

rbac-tool  gen  --deny-resources=secrets.,services. --allowed-verbs=get,list

Generate a Role policy that allows create,update,get,list (read/write) everything except secrets, services, networkpolicies in core,apps & networking.k8s.io API groups

rbac-tool  gen --generated-type=Role --deny-resources=secrets.,services.,networkpolicies.networking.k8s.io --allowed-verbs=* --allowed-groups=,extensions,apps,networking.k8s.io

Generate a Role policy that allows create,update,get,list (read/write) everything except statefulsets

rbac-tool  gen --generated-type=Role --deny-resources=apps.statefulsets --allowed-verbs=* 

Example Output

Generate a Role policy that allows create,update,get,list (read/write) everything except secrets, services, networkpolicies in core,apps & networking.k8s.io API groups

rbac-tool  gen --generated-type=Role --deny-resources=secrets.,services.,networkpolicies.networking.k8s.io --allowed-verbs=* --allowed-groups=,extensions,apps,networking.k8s.io
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: custom-role
  namespace: mynamespace
rules:
- apiGroups:
  - ""
  resources:
  - events
  - componentstatuses
  - podtemplates
  - namespaces
  - replicationcontrollers
  - persistentvolumes
  - configmaps
  - persistentvolumeclaims
  - resourcequotas
  - limitranges
  - nodes
  - bindings
  - serviceaccounts
  - pods
  - endpoints
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - replicasets
  - daemonsets
  - deployments
  - controllerrevisions
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - '*'

Command Line Reference

Generate Role or ClusterRole resource while reducing the use of wildcards.

rbac-tool reads from the Kubernetes discovery API the available API Groups and resources, 
and based on the command line options, generates an explicit Role/ClusterRole that avoids wildcards

Examples:

# Generate a Role with read-only (get,list) excluding secrets (core group) and ingresses (extensions group) 
rbac-tool gen --generated-type=Role --deny-resources=secrets.,ingresses.extensions --allowed-verbs=get,list

# Generate a Role with read-only (get,list) excluding secrets (core group) from core group, admissionregistration.k8s.io,storage.k8s.io,networking.k8s.io
rbac-tool gen --generated-type=ClusterRole --deny-resources=secrets., --allowed-verbs=get,list  --allowed-groups=,admissionregistration.k8s.io,storage.k8s.io,networking.k8s.io

Usage:
  rbac-tool generate [flags]

Aliases:
  generate, gen

Flags:
      --allowed-groups strings   Comma separated list of API groups we would like to allow '*' (default [*])
      --allowed-verbs strings    Comma separated list of verbs to include. To include all use '*' (default [*])
  -c, --cluster-context string   Cluster.use 'kubectl config get-contexts' to list available contexts
      --deny-resources strings   Comma separated list of resource.group
  -t, --generated-type string    Role or ClusterRole (default "ClusterRole")
  -h, --help                     help for generate

Contributing

Bugs

If you think you have found a bug please follow the instructions below.

  • Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate.
  • Open a new issue if a duplicate doesn't already exist.

Features

If you have an idea to enhance rbac-tool follow the steps below.

  • Open a new issue.
  • Remember users might be searching for your issue in the future, so please give it a meaningful title that helps others.
  • Clearly define the use case, using concrete examples.
  • Feel free to include any technical design for your feature.

Pull Requests

  • Your PR is more likely to be accepted if it focuses on just one change.
  • Please include a comment with the results before and after your change.
  • Your PR is more likely to be accepted if it includes tests.
  • You're welcome to submit a draft PR if you would like early feedback on an idea or an approach.

rbac-tool's People

Contributors

abirdcfly · austinpray-mixpanel · ciiiii · cr7258 · danielvoros-form3 · disasmwinnie · gadinaor · gadinaor-r7 · maxrink · ongyiren1994 · thomas-maurice


rbac-tool's Issues

Generate RBAC policy from Kubernetes API Server audit log

What would you like to be added:

  • Add ability to generate RBAC policy from Kubernetes API Server audit log
  • Generate policy for specific user/serviceaccount or multiple users/serviceaccounts

Why is this needed:

Reduce over permissive RBAC policies (star syndrome)

Role & ClusterRole labeling in v1.2.1

@gadinaor The screenshot was handcrafted; however, I've worked on this feature a bit today (from the master branch currently tagged at 1.2.1).

Unrelated: I noticed something a bit odd. The results of the lookup command differ between 1.2.0 and 1.2.1. 1.2.0 labels ClusterRoles as Roles when they are used in a namespaced scope (i.e. with a RoleBinding).

Is the change from 1.2.0 to 1.2.1 intentional or a regression?


Add metadata flags for name, namespace and annotations

What would you like to be added:

Add flags to customize:

  • Metadata.Name
  • Metadata.Namespace
  • Metadata.Annotations

Why is this needed:

For the rbac-tool gen and rbac-tool show commands it would be useful for automation to be able to customize the object metadata during role generation.

For example:

# Generate a ClusterRole with all the available permissions for core and apps api groups
rbac-tool show \
  --for-groups=,apps \
  --scope namespace \
  --name foo \
  --namespace bar \
  --annotations argocd.argoproj.io/sync-wave=2,rbac.authorization.kubernetes.io/autoupdate=true

With these flags it would be possible to generate fully functional roles without having to make modifications to the YAML after running the tool.

Visualizing RBAC incorrectly classifies ServiceAccount as missing

What happened:

See https://imgur.com/a/TpcIyRx

The sa/c-sa exists in the namespace as per this ..

 kubectl get sa,roles,rolebindings -n staranto
NAME                      SECRETS   AGE
serviceaccount/builder    2         5d17h
serviceaccount/c-sa       2         14m
serviceaccount/default    2         5d17h
serviceaccount/deployer   2         5d17h

NAME                                             AGE
role.rbac.authorization.k8s.io/role-core         15h
role.rbac.authorization.k8s.io/role-privileged   7m5s

NAME                                                                AGE
rolebinding.rbac.authorization.k8s.io/admin                         5d17h
rolebinding.rbac.authorization.k8s.io/c-sa-core-rolebinding         13m
rolebinding.rbac.authorization.k8s.io/c-sa-privileged-rolebinding   7m5s
rolebinding.rbac.authorization.k8s.io/system:deployers              5d17h
rolebinding.rbac.authorization.k8s.io/system:image-builders         5d17h
rolebinding.rbac.authorization.k8s.io/system:image-pullers          5d17h

What you expected to happen:

I expect the c-sa subject to be rendered in the namespace and not flagged as missing.

How to reproduce it (as minimally and precisely as possible):
rbac-tool viz --outformat dot --outfile rbac.dot --include-subjects c-sa

Anything else we need to know?:

Environment:
- Kubernetes version (use `kubectl version`): 
Client Version: v1.18.3
Server Version: v1.17.1+912792b
- Cloud provider or configuration:
OpenShift 4.4.9
- Install tools:
rbac-tool version
Version: 0.9.0
Commit: 3b08e35c143a8b7ecf3a43303bca1c7dfe19c837
- Others:
 dot -V
dot - graphviz version 2.43.0 (0)

v0.9.1 renders dot but names the output .html

What happened:

[130] % rbac-tool viz --outformat dot   
[alcide-rbactool] Namespaces included '*'
[alcide-rbactool] Namespaces excluded 'kube-system'
[alcide-rbactool] Connecting to cluster ''
[alcide-rbactool] Generating Graph and Saving as 'rbac.html'

[0] % head -2 rbac.html                                                                                   
digraph  {
        subgraph cluster_s296 {

What you expected to happen:
I expect the dot file to be named .dot :-D

Looks like it is connected to issue 8

How to reproduce it (as minimally and precisely as possible):
rbac-tool viz --outformat dot

Anything else we need to know?:

Dot itself renders the file fine; it looks like just a file naming error.

It would also be nice to see the version used, at least in -h

Environment:

  • Kubernetes version (use kubectl version): irrelevant, happens with 1.15.11 and also 1.18.x

Subresources support for generated rules

What would you like to be added:
It would be nice to add subresource support to the RBAC generation functionality.

Why is this needed:
It would make the generated rules useful =)
Right now I have to rewrite them manually after generation.

segmentation fault on who-can

What happened:
I'm getting segmentation fault on kubectl rbac-tool who-can create clusterrolebinding

What you expected to happen:
print out who can create clusterrolebinding

How to reproduce it (as minimally and precisely as possible):
not sure

Anything else we need to know?:

unexpected fault address 0x0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x4631bf]

goroutine 1 [running]:
runtime.throw({0x1535804?, 0x30?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/panic.go:1047 +0x5d fp=0xc000510938 sp=0xc000510908 pc=0x435afd
runtime.sigpanic()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/signal_unix.go:842 +0x2c5 fp=0xc000510988 sp=0xc000510938 pc=0x44b505
aeshashbody()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1366 +0x39f fp=0xc000510990 sp=0xc000510988 pc=0x4631bf
runtime.mapiternext(0xc0004f47c0)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/map.go:936 +0x2eb fp=0xc000510a00 sp=0xc000510990 pc=0x40fe2b
runtime.mapiterinit(0x1?, 0x7?, 0x1?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/map.go:863 +0x236 fp=0xc000510a20 sp=0xc000510a00 pc=0x40faf6
reflect.mapiterinit(0x146cd00?, 0xc0001283c0?, 0x4dfdc7?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/map.go:1375 +0x19 fp=0xc000510a48 sp=0xc000510a20 pc=0x45ff99
github.com/modern-go/reflect2.(*UnsafeMapType).UnsafeIterate(...)
	/home/runner/pkg/mod/github.com/modern-go/[email protected]/unsafe_map.go:112
github.com/json-iterator/go.(*sortKeysMapEncoder).Encode(0xc000432060, 0xc0000104f0, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_map.go:291 +0x236 fp=0xc000510bb8 sp=0xc000510a48 pc=0x7c37b6
github.com/json-iterator/go.(*placeholderEncoder).Encode(0x13a5e00?, 0x1767501?, 0xc00008b338?)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:332 +0x22 fp=0xc000510be0 sp=0xc000510bb8 pc=0x7bc3c2
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004324e0, 0x12bd69b?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc000510c58 sp=0xc000510be0 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc000432540, 0x900?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc000510d40 sp=0xc000510c58 pc=0x7d1a05
github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc00008b320?, 0xc000130960?, 0xc000510dd0?)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_optional.go:70 +0xb0 fp=0xc000510d90 sp=0xc000510d40 pc=0x7c8b90
github.com/json-iterator/go.(*placeholderEncoder).Encode(0x13a5e00?, 0xc0004f4601?, 0xc00008b338?)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:332 +0x22 fp=0xc000510db8 sp=0xc000510d90 pc=0x7bc3c2
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004e0360, 0x12ed259?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc000510e30 sp=0xc000510db8 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc0004e0420, 0xc0001306c0?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc000510f18 sp=0xc000510e30 pc=0x7d1a05
github.com/json-iterator/go.(*placeholderEncoder).Encode(0x13a5e00?, 0x7d0801?, 0xc00008b338?)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:332 +0x22 fp=0xc000510f40 sp=0xc000510f18 pc=0x7bc3c2
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004e06c0, 0x12bd603?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc000510fb8 sp=0xc000510f40 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc0004e0720, 0x135e5e0?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc0005110a0 sp=0xc000510fb8 pc=0x7d1a05
github.com/json-iterator/go.(*sliceEncoder).Encode(0xc0003768d0, 0xc0000df448, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_slice.go:38 +0x2e4 fp=0xc000511158 sp=0xc0005110a0 pc=0x7c9644
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004e14d0, 0x12c3059?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc0005111d0 sp=0xc000511158 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc0004e1620, 0x0?, 0xc00008b320)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc0005112b8 sp=0xc0005111d0 pc=0x7d1a05
github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc000202f00?, 0x0?, 0x0?)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_optional.go:70 +0xb0 fp=0xc000511308 sp=0xc0005112b8 pc=0x7c8b90
github.com/json-iterator/go.(*onePtrEncoder).Encode(0xc0003ce670, 0xc0000df3f0, 0xc0004e08d0?)
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:219 +0x82 fp=0xc000511340 sp=0xc000511308 pc=0x7bb982
github.com/json-iterator/go.(*Stream).WriteVal(0xc00008b320, {0x14098c0, 0xc0000df3f0})
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:98 +0x166 fp=0xc0005113b0 sp=0xc000511340 pc=0x7baca6
github.com/json-iterator/go.(*frozenConfig).Marshal(0xc000202f00, {0x14098c0, 0xc0000df3f0})
	/home/runner/pkg/mod/github.com/json-iterator/[email protected]/config.go:299 +0xc9 fp=0xc000511448 sp=0xc0005113b0 pc=0x7b1f29
k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x12a470f?, {0x175b5a0?, 0xc0000df3f0?}, {0x1752e20, 0xc00009fe30})
	/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/json/json.go:305 +0x6d fp=0xc0005114e0 sp=0xc000511448 pc=0xbe9c6d
k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc0003a2aa0, {0x175b5a0, 0xc0000df3f0}, {0x1752e20, 0xc00009fe30})
	/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/json/json.go:300 +0xfc fp=0xc000511540 sp=0xc0005114e0 pc=0xbe9b9c
k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc000381400, {0x175b550?, 0xc00008b260}, {0x1752e20, 0xc00009fe30})
	/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/versioning/versioning.go:244 +0x946 fp=0xc0005118c8 sp=0xc000511540 pc=0xbf7b86
k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc000381400, {0x175b550, 0xc00008b260}, {0x1752e20, 0xc00009fe30})
	/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/versioning/versioning.go:184 +0x106 fp=0xc000511928 sp=0xc0005118c8 pc=0xbf71e6
k8s.io/apimachinery/pkg/runtime.Encode({0x7fb15533bad8, 0xc000381400}, {0x175b550, 0xc00008b260})
	/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/codec.go:50 +0x64 fp=0xc000511968 sp=0xc000511928 pc=0x80f164
k8s.io/client-go/tools/clientcmd.Write(...)
	/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/loader.go:469
k8s.io/client-go/tools/clientcmd.WriteToFile({{0x0, 0x0}, {0x0, 0x0}, {0x0, 0xc000543c20}, 0xc000543c50, 0xc000543c80, 0xc000543cb0, {0xc0005500b0, ...}, ...}, ...)
	/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/loader.go:422 +0xa8 fp=0xc0005119e0 sp=0xc000511968 pc=0x1019aa8
k8s.io/client-go/tools/clientcmd.ModifyConfig({0x1769560, 0xc0003a3720}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0xc000542ea0}, 0xc000542ed0, 0xc000542f00, ...}, ...)
	/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/config.go:291 +0xcf8 fp=0xc000512108 sp=0xc0005119e0 pc=0x1015c78
k8s.io/client-go/tools/clientcmd.(*persister).Persist(0xc0004de240, 0xc000542210)
	/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/config.go:374 +0x11a fp=0xc0005121f8 sp=0xc000512108 pc=0x101661a
k8s.io/client-go/plugin/pkg/client/auth/oidc.(*oidcAuthProvider).idToken(0xc00012ab10)
	/home/runner/pkg/mod/k8s.io/[email protected]/plugin/pkg/client/auth/oidc/oidc.go:282 +0x966 fp=0xc0005123f8 sp=0xc0005121f8 pc=0xfe7666
k8s.io/client-go/plugin/pkg/client/auth/oidc.(*roundTripper).RoundTrip(0xc000182b10, 0xc00054c400)
	/home/runner/pkg/mod/k8s.io/[email protected]/plugin/pkg/client/auth/oidc/oidc.go:200 +0x67 fp=0xc000512500 sp=0xc0005123f8 pc=0xfe69a7
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0000544e0, 0xc00054c300)
	/home/runner/pkg/mod/k8s.io/[email protected]/transport/round_trippers.go:159 +0x350 fp=0xc0005125f8 sp=0xc000512500 pc=0xf52b90
net/http.send(0xc00054c200, {0x1755600, 0xc0000544e0}, {0x14d7960?, 0x4c0301?, 0x21de500?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:251 +0x5f7 fp=0xc0005127f0 sp=0xc0005125f8 pc=0x731f77
net/http.(*Client).send(0xc0004f8000, 0xc00054c200, {0x0?, 0xc000512898?, 0x21de500?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:175 +0x9b fp=0xc000512868 sp=0xc0005127f0 pc=0x7317fb
net/http.(*Client).do(0xc0004f8000, 0xc00054c200)
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:715 +0x8fc fp=0xc000512a58 sp=0xc000512868 pc=0x733b7c
net/http.(*Client).Do(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:581
k8s.io/client-go/rest.(*Request).request(0xc0001484b0, {0x1767c50, 0xc00004c320}, 0x1?)
	/home/runner/pkg/mod/k8s.io/[email protected]/rest/request.go:881 +0x51e fp=0xc000512c48 sp=0xc000512a58 pc=0xf7147e
k8s.io/client-go/rest.(*Request).Do(0x153570a?, {0x1767c50?, 0xc00004c320?})
	/home/runner/pkg/mod/k8s.io/[email protected]/rest/request.go:954 +0xc7 fp=0xc000512cf8 sp=0xc000512c48 pc=0xf72087
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroups(0xc000054540)
	/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:159 +0xae fp=0xc000512fd8 sp=0xc000512cf8 pc=0xf76a2e
k8s.io/client-go/discovery.ServerPreferredResources({0x176e1a0, 0xc000054540})
	/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:287 +0x42 fp=0xc0005137a8 sp=0xc000512fd8 pc=0xf77da2
k8s.io/client-go/discovery.(*DiscoveryClient).ServerPreferredResources.func1()
	/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:387 +0x25 fp=0xc0005137c8 sp=0xc0005137a8 pc=0xf78f65
k8s.io/client-go/discovery.withRetries(0x2, 0xc0005137f0)
	/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:450 +0x72 fp=0xc0005137e0 sp=0xc0005137c8 pc=0xf797b2
k8s.io/client-go/discovery.(*DiscoveryClient).ServerPreferredResources(0xc0003a3770?)
	/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:386 +0x3a fp=0xc000513810 sp=0xc0005137e0 pc=0xf78efa
github.com/alcideio/rbac-tool/pkg/kube.NewClient({0x0, 0x0})
	/home/runner/work/rbac-tool/rbac-tool/pkg/kube/client.go:60 +0x1a5 fp=0xc0005138e0 sp=0xc000513810 pc=0x101e225
github.com/alcideio/rbac-tool/cmd.NewCommandWhoCan.func1(0xc0004cf600?, {0xc0004de2a0?, 0x2?, 0x2?})
	/home/runner/work/rbac-tool/rbac-tool/cmd/whocan_cmd.go:122 +0x1fc fp=0xc000513da8 sp=0xc0005138e0 pc=0x129685c
github.com/spf13/cobra.(*Command).execute(0xc0004cf600, {0xc0004de260, 0x2, 0x2})
	/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:842 +0x67c fp=0xc000513e80 sp=0xc000513da8 pc=0x11966dc
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004ce000)
	/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x39d fp=0xc000513f38 sp=0xc000513e80 pc=0x1196cbd
github.com/spf13/cobra.(*Command).Execute(...)
	/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main()
	/home/runner/work/rbac-tool/rbac-tool/main.go:65 +0x1e fp=0xc000513f80 sp=0xc000513f38 pc=0x1297bbe
runtime.main()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:250 +0x212 fp=0xc000513fe0 sp=0xc000513f80 pc=0x438352
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000513fe8 sp=0xc000513fe0 pc=0x465c81

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000068fb0 sp=0xc000068f90 pc=0x438716
runtime.goparkunlock(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.forcegchelper()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:302 +0xad fp=0xc000068fe0 sp=0xc000068fb0 pc=0x4385ad
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000068fe8 sp=0xc000068fe0 pc=0x465c81
created by runtime.init.6
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:290 +0x25

goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000069790 sp=0xc000069770 pc=0x438716
runtime.goparkunlock(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.bgsweep(0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc0000697c8 sp=0xc000069790 pc=0x424e37
runtime.gcenable.func1()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:178 +0x26 fp=0xc0000697e0 sp=0xc0000697c8 pc=0x419a86
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000697e8 sp=0xc0000697e0 pc=0x465c81
created by runtime.gcenable
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:178 +0x6b

goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc000088000?, 0x1750558?, 0x0?, 0x0?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000069f70 sp=0xc000069f50 pc=0x438716
runtime.goparkunlock(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.(*scavengerState).park(0x21de720)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgcscavenge.go:389 +0x53 fp=0xc000069fa0 sp=0xc000069f70 pc=0x422e93
runtime.bgscavenge(0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgcscavenge.go:622 +0x65 fp=0xc000069fc8 sp=0xc000069fa0 pc=0x423485
runtime.gcenable.func2()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:179 +0x26 fp=0xc000069fe0 sp=0xc000069fc8 pc=0x419a26
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000069fe8 sp=0xc000069fe0 pc=0x465c81
created by runtime.gcenable
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:179 +0xaa

goroutine 5 [finalizer wait]:
runtime.gopark(0x438a97?, 0x49?, 0xe8?, 0xda?, 0xc000068770?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000068628 sp=0xc000068608 pc=0x438716
runtime.goparkunlock(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.runfinq()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mfinal.go:180 +0x10f fp=0xc0000687e0 sp=0xc000068628 pc=0x418b8f
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000687e8 sp=0xc0000687e0 pc=0x465c81
created by runtime.createfing
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mfinal.go:157 +0x45

goroutine 6 [chan receive]:
runtime.gopark(0xc00006a6d8?, 0x43e57b?, 0x20?, 0xa7?, 0x454245?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00006a6c8 sp=0xc00006a6a8 pc=0x438716
runtime.chanrecv(0xc000180000, 0xc00006a7a0, 0x1)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:583 +0x49b fp=0xc00006a758 sp=0xc00006a6c8 pc=0x406cdb
runtime.chanrecv2(0x12a05f200?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:447 +0x18 fp=0xc00006a780 sp=0xc00006a758 pc=0x406818
k8s.io/klog.(*loggingT).flushDaemon(0x0?)
	/home/runner/pkg/mod/k8s.io/[email protected]/klog.go:1010 +0x6a fp=0xc00006a7c8 sp=0xc00006a780 pc=0x50964a
k8s.io/klog.init.0.func1()
	/home/runner/pkg/mod/k8s.io/[email protected]/klog.go:411 +0x26 fp=0xc00006a7e0 sp=0xc00006a7c8 pc=0x507326
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006a7e8 sp=0xc00006a7e0 pc=0x465c81
created by k8s.io/klog.init.0
	/home/runner/pkg/mod/k8s.io/[email protected]/klog.go:411 +0xef

goroutine 7 [chan receive]:
runtime.gopark(0x1b17c9725b8?, 0x0?, 0x20?, 0xaf?, 0x454245?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00006aec8 sp=0xc00006aea8 pc=0x438716
runtime.chanrecv(0xc000114000, 0xc00006afa0, 0x1)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:583 +0x49b fp=0xc00006af58 sp=0xc00006aec8 pc=0x406cdb
runtime.chanrecv2(0x12a05f200?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:447 +0x18 fp=0xc00006af80 sp=0xc00006af58 pc=0x406818
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0?)
	/home/runner/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x6a fp=0xc00006afc8 sp=0xc00006af80 pc=0x6279ea
k8s.io/klog/v2.init.0.func1()
	/home/runner/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0x26 fp=0xc00006afe0 sp=0xc00006afc8 pc=0x625646
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006afe8 sp=0xc00006afe0 pc=0x465c81
created by k8s.io/klog/v2.init.0
	/home/runner/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xef

goroutine 8 [GC worker (idle)]:
runtime.gopark(0x5d8781874f?, 0x0?, 0x0?, 0x0?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00006b750 sp=0xc00006b730 pc=0x438716
runtime.gcBgMarkWorker()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc00006b7e0 sp=0xc00006b750 pc=0x41bbd1
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006b7e8 sp=0xc00006b7e0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25

goroutine 17 [GC worker (idle)]:
runtime.gopark(0x5d8784fecc?, 0x0?, 0x0?, 0x0?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000064750 sp=0xc000064730 pc=0x438716
runtime.gcBgMarkWorker()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc0000647e0 sp=0xc000064750 pc=0x41bbd1
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000647e8 sp=0xc0000647e0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25

goroutine 33 [GC worker (idle)]:
runtime.gopark(0x5d86fd0988?, 0x0?, 0x0?, 0x0?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00019a750 sp=0xc00019a730 pc=0x438716
runtime.gcBgMarkWorker()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc00019a7e0 sp=0xc00019a750 pc=0x41bbd1
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00019a7e8 sp=0xc00019a7e0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25

goroutine 34 [GC worker (idle)]:
runtime.gopark(0x5d87845fec?, 0x0?, 0x0?, 0x0?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00019af50 sp=0xc00019af30 pc=0x438716
runtime.gcBgMarkWorker()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc00019afe0 sp=0xc00019af50 pc=0x41bbd1
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00019afe8 sp=0xc00019afe0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25

goroutine 9 [select]:
runtime.gopark(0xc000064fa0?, 0x3?, 0x0?, 0x0?, 0xc000064f82?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000064e08 sp=0xc000064de8 pc=0x438716
runtime.selectgo(0xc000064fa0, 0xc000064f7c, 0x0?, 0x0, 0x0?, 0x1)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/select.go:328 +0x7bc fp=0xc000064f48 sp=0xc000064e08 pc=0x447a9c
net/http.setRequestCancel.func4()
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:397 +0x8b fp=0xc000064fe0 sp=0xc000064f48 pc=0x732e2b
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000064fe8 sp=0xc000064fe0 pc=0x465c81
created by net/http.setRequestCancel
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:396 +0x44a

goroutine 21 [IO wait]:
runtime.gopark(0x1d21?, 0xb?, 0x0?, 0x0?, 0x3?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000079618 sp=0xc0000795f8 pc=0x438716
runtime.netpollblock(0x4b2f85?, 0xa?, 0x0?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/netpoll.go:526 +0xf7 fp=0xc000079650 sp=0xc000079618 pc=0x4312d7
internal/poll.runtime_pollWait(0x7fb1554a5ef8, 0x72)
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/netpoll.go:305 +0x89 fp=0xc000079670 sp=0xc000079650 pc=0x4608e9
internal/poll.(*pollDesc).wait(0xc00011ca00?, 0xc000018a00?, 0x0)
	/opt/hostedtoolcache/go/1.19.9/x64/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc000079698 sp=0xc000079670 pc=0x4cd0b2
internal/poll.(*pollDesc).waitRead(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00011ca00, {0xc000018a00, 0x2500, 0x2500})
	/opt/hostedtoolcache/go/1.19.9/x64/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000079718 sp=0xc000079698 pc=0x4ce41a
net.(*netFD).Read(0xc00011ca00, {0xc000018a00?, 0xc0002bb280?, 0xc0000191df?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/fd_posix.go:55 +0x29 fp=0xc000079760 sp=0xc000079718 pc=0x5e5b29
net.(*conn).Read(0xc00011a0a0, {0xc000018a00?, 0x4b5?, 0xc0002bb280?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/net.go:183 +0x45 fp=0xc0000797a8 sp=0xc000079760 pc=0x5f3905
crypto/tls.(*atLeastReader).Read(0xc00063ff38, {0xc000018a00?, 0x0?, 0x479008?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:787 +0x3d fp=0xc0000797f0 sp=0xc0000797a8 pc=0x6df53d
bytes.(*Buffer).ReadFrom(0xc000536978, {0x1752f20, 0xc00063ff38})
	/opt/hostedtoolcache/go/1.19.9/x64/src/bytes/buffer.go:202 +0x98 fp=0xc000079848 sp=0xc0000797f0 pc=0x4794d8
crypto/tls.(*Conn).readFromUntil(0xc000536700, {0x1755820?, 0xc00011a0a0}, 0x1d26?)
	/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:809 +0xe5 fp=0xc000079888 sp=0xc000079848 pc=0x6df725
crypto/tls.(*Conn).readRecordOrCCS(0xc000536700, 0x0)
	/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:616 +0x116 fp=0xc000079c10 sp=0xc000079888 pc=0x6dcb76
crypto/tls.(*Conn).readRecord(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:582
crypto/tls.(*Conn).Read(0xc000536700, {0xc000666000, 0x1000, 0x744380?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:1315 +0x16f fp=0xc000079c80 sp=0xc000079c10 pc=0x6e2aef
bufio.(*Reader).Read(0xc000323920, {0xc0000faf20, 0x9, 0x7527c5?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/bufio/bufio.go:237 +0x1bb fp=0xc000079cb8 sp=0xc000079c80 pc=0x4fccfb
io.ReadAtLeast({0x1752dc0, 0xc000323920}, {0xc0000faf20, 0x9, 0x9}, 0x9)
	/opt/hostedtoolcache/go/1.19.9/x64/src/io/io.go:332 +0x9a fp=0xc000079d00 sp=0xc000079cb8 pc=0x471afa
io.ReadFull(...)
	/opt/hostedtoolcache/go/1.19.9/x64/src/io/io.go:351
net/http.http2readFrameHeader({0xc0000faf20?, 0x9?, 0xc000542030?}, {0x1752dc0?, 0xc000323920?})
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:1565 +0x6e fp=0xc000079d50 sp=0xc000079d00 pc=0x73c32e
net/http.(*http2Framer).ReadFrame(0xc0000faee0)
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:1829 +0x95 fp=0xc000079e00 sp=0xc000079d50 pc=0x73cb95
net/http.(*http2clientConnReadLoop).run(0xc000079f98)
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:8874 +0x130 fp=0xc000079f60 sp=0xc000079e00 pc=0x74f670
net/http.(*http2ClientConn).readLoop(0xc000538000)
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:8770 +0x6f fp=0xc000079fc8 sp=0xc000079f60 pc=0x74eb8f
net/http.(*http2Transport).newClientConn.func1()
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:7477 +0x26 fp=0xc000079fe0 sp=0xc000079fc8 pc=0x747866
runtime.goexit()
	/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000079fe8 sp=0xc000079fe0 pc=0x465c81
created by net/http.(*http2Transport).newClientConn
	/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:7477 +0xaaa

It also leaves behind a config.lock file that is not removed after the segfault.
The config file contains only one cluster; regular access to the cluster via kubectl works without any noticeable issues.

I actually have no clue yet where to start debugging

Environment:

  • Kubernetes version (use kubectl version): Client 1.27, Server 1.23
  • Cloud provider or configuration: OIDC with Tremolo OpenUnison
  • rbac-tool v1.14.4

sha256 mismatch

What happened:

Using the install via curl option, it fails to validate the checksum for rbac-tool_v1.1.1_linux_amd64:

$ curl https://raw.githubusercontent.com/alcideio/rbac-tool/master/download.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9331  100  9331    0     0  64798      0 --:--:-- --:--:-- --:--:-- 64798
alcideio/rbac-tool info checking GitHub for latest tag
alcideio/rbac-tool info found version: 1.1.1 for v1.1.1/linux/amd64
alcideio/rbac-tool err hash_sha256_verify checksum for '/tmp/tmp.rSW6XMKhUl/rbac-tool_v1.1.1_linux_amd64' did not verify 6916b6f609b027ccd7d6573a40f62492a84bc7445592805d6d3fc838f3e34dc4
ecdc8b365b8f9bb4303d194e777a9e7fdf3376158e3a2fb78cf7425007118a1d vs 6916b6f609b027ccd7d6573a40f62492a84bc7445592805d6d3fc838f3e34dc4
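For context on what "did not verify X vs Y" means: the installer recomputes the SHA-256 digest of the downloaded file and compares it against the published one. A minimal illustration of the same check (the file below is a stand-in, not the actual binary):

```shell
# Recompute a file's SHA-256 and check it against an expected digest --
# the same comparison hash_sha256_verify performs in download.sh.
# sha256sum -c reads "HASH  FILENAME" lines (two spaces) and verifies each file.
printf 'hello\n' > /tmp/checksum-demo
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  /tmp/checksum-demo" | sha256sum -c -
```

A mismatch like the one above usually means the release asset or its checksums file was updated out of step.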

What you expected to happen:
The binary should match the checksum

How to reproduce it (as minimally and precisely as possible):
See above

Anything else we need to know?:
Probably not.

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or configuration:
  • Install tools:
  • Others:

question: custom kubeconfig from cli argument

Hello,
for my setup I use multiple kubeconfig files, à la
kubectl --kubeconfig test-context.yaml get ns or kubectl --kubeconfig dev-context.yaml get ns, each defining its own set of contexts. There might be aliases like kubectltest or kubectldev to speed things up, as I use different contexts on a regular basis. The reason for not putting them into a single default kubeconfig is that clusters sometimes get regenerated, and it is easier for me to download the current config from Rancher after an update than to try to merge them into a single file (the default home-directory kubeconfig holds configs for e.g. my local k8s context/cluster, etc.).
I am trying to use rbac-tool and cannot combine it with a specific kubeconfig. There is a --cluster-context CLI switch, but it works within the current (default) contexts, and I want my specific config.
If I use kubectl --kubeconfig some.yaml rbac-tool viz, it says flags cannot be placed before plugin name: --kubeconfig.
What am I doing wrong, and how can I make it work?
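One workaround, assuming the standard client-go kubeconfig resolution applies here (not verified against rbac-tool specifically): kubectl refuses flags placed before the plugin name, but the KUBECONFIG environment variable is honored before any flag parsing happens:

```shell
# Point the plugin (or the standalone binary) at a specific kubeconfig
# via the environment instead of a flag before the plugin name.
KUBECONFIG=test-context.yaml kubectl rbac-tool viz
KUBECONFIG=dev-context.yaml rbac-tool lookup
```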

E0816 17:50:50.905695 32074 run.go:120] "command failed" err="unknown command \"rbac-tool\" for \"kubectl\""

What happened:

$ kubectl krew install rbac-tool
Updated the local copy of plugin index.
Installing plugin: rbac-tool
Installed plugin: rbac-tool
\
 | Use this plugin:
 | 	kubectl rbac-tool
 | Documentation:
 | 	https://github.com/alcideio/rbac-tool
/
WARNING: You installed plugin "rbac-tool" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
   
bash-5.0$ kubectl rbac-tool
E0816 17:50:50.905695   32074 run.go:120] "command failed" err="unknown command \"rbac-tool\" for \"kubectl\""

What you expected to happen:

Expected kubectl rbac-tool to run.

How to reproduce it (as minimally and precisely as possible):

Follow the steps I followed.

Anything else we need to know?:

Environment:

MacOS Monterey 12.2

  • Kubernetes version (use kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or configuration:
  • Install tools:
  • Others:

Add Json output for the 'rbac-tool lookup' command

What would you like to be added:
Json output, preferably in the following structure:

{
  "User": "User",
  "authorizedFor": {
    "objectName": "objectName",
    "objectType": "objectType",
    "Permission": "Permission"
  }
}



Why is this needed:
So it can be used in other systems to reflect permissions of users.

Binary seems broken resulting in segmentation fault on invocation (MacOS)

The utility throws a segmentation fault on a MacBook Pro (darwin/amd64).

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9446  100  9446    0     0  33843      0 --:--:-- --:--:-- --:--:-- 34727
alcideio/rbac-tool info checking GitHub for latest tag
alcideio/rbac-tool info found version: 1.13.0 for v1.13.0/darwin/amd64
alcideio/rbac-tool info installed ./bin/rbac-tool

❯ ./bin/rbac-tool version
[1]    20107 segmentation fault  ./bin/rbac-tool version

❯ ./bin/rbac-tool help
[1]    20243 segmentation fault  ./bin/rbac-tool help

❯ ./bin/rbac-tool
[1]    20310 segmentation fault  ./bin/rbac-tool

Cluster Analysis | Report | Report which resources are related to rule violations

What would you like to be added:

For each rule violation, provide the list of resources (Pod, Deployment, Job, ...) that use that service account.

Why is this needed:

It enables users to see the actual risks associated with a rule violation, not only the configuration-based violation.
It also helps users prioritize which rule/issue they'd like to attend to first.

rbac-tool vis with empty rulesText causes a nil pointer dereference / crash at rbac-tool/pkg/visualize/rbacviz.go:302

What happened:

I installed
[trutledge@localhost viscrash]$ rbac-tool version
Version: 0.10.0
Commit: 35e5db8
[trutledge@localhost viscrash]$

and ran

rbac-tool vis --cluster-context MYCLUSTER

And got

[trutledge@localhost viscrash]$ rbac-tool vis --cluster-context --redact--
[alcide-rbactool] Namespaces included '*'
[alcide-rbactool] Namespaces excluded 'kube-system'
[alcide-rbactool] Connecting to cluster '--redact--'
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1261885]

goroutine 1 [running]:
github.com/alcideio/rbac-tool/pkg/visualize.(*RbacViz).newRoleAndRulesNodePair(0xc000139c80, 0xc0003c24e0, 0xc00046c510, 0x9, 0xc00051eae0, 0x19, 0xc00046c5b0, 0x4, 0xc00051eb00, 0x13, ...)
/home/runner/work/rbac-tool/rbac-tool/pkg/visualize/rbacviz.go:302 +0x1f5
github.com/alcideio/rbac-tool/pkg/visualize.(*RbacViz).renderGraph(0xc000139c80, 0xc0002ca600)
/home/runner/work/rbac-tool/rbac-tool/pkg/visualize/rbacviz.go:204 +0x425
github.com/alcideio/rbac-tool/pkg/visualize.CreateRBACGraph(0xc0002ca600, 0x2a, 0xc00013dd30)
/home/runner/work/rbac-tool/rbac-tool/pkg/visualize/rbacviz.go:38 +0xef
github.com/alcideio/rbac-tool/cmd.NewCommandVisualize.func1(0xc000318b00, 0xc0001ef540, 0x0, 0x2, 0x0, 0x0)
/home/runner/work/rbac-tool/rbac-tool/cmd/visualize_cmd.go:66 +0x1da
github.com/spf13/cobra.(*Command).execute(0xc000318b00, 0xc0001ef500, 0x2, 0x2, 0xc000318b00, 0xc0001ef500)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:840 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc000318000, 0xc000072750, 0xc00013df50, 0x40576f)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:945 +0x317
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:885
main.main()
/home/runner/work/rbac-tool/rbac-tool/main.go:61 +0x2b
[trutledge@localhost viscrash]$

What you expected to happen:

Not crashing.

How to reproduce it (as minimally and precisely as possible):

Unsure.

Anything else we need to know?:

The nil comes from rbac-tool/pkg/visualize/rbacviz.go:

360         if rulesText == "" {
361                 return nil
362         }

I don't have enough context to share beyond that.
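A minimal sketch (all names hypothetical, only mirroring the snippet above) of the kind of nil guard at the call site that would avoid the dereference at rbacviz.go:302:

```go
package main

import "fmt"

// node stands in for the graph node type used by the visualizer.
type node struct{ label string }

// newRulesNode mirrors the behavior at rbacviz.go:360-362:
// an empty rulesText yields nil instead of a node.
func newRulesNode(rulesText string) *node {
	if rulesText == "" {
		return nil
	}
	return &node{label: rulesText}
}

// render shows the missing check: the caller must handle the nil
// return before dereferencing, otherwise it panics as in the report.
func render(rulesText string) string {
	n := newRulesNode(rulesText)
	if n == nil {
		return "(no rules)"
	}
	return n.label
}

func main() {
	fmt.Println(render(""))         // prints "(no rules)" instead of panicking
	fmt.Println(render("get pods")) // prints "get pods"
}
```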

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.6", GitCommit:"7015f71e75f670eb9e7ebd4b5749639d42e20079", GitTreeState:"clean", BuildDate:"2019-11-13T11:11:50Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or configuration:

On premises install.

  • Install tools:
  • Others:

no access to psp == viz fail even though showpsp=false passed

What happened:
I don't have access to PSPs. I ran viz with showpsp=false, and it still failed with an error about my lack of access to PSPs.

What you expected to happen:
viz should work as normal in the above scenario

How to reproduce it (as minimally and precisely as possible):
Run viz as a user who doesn't have PSP access

Anything else we need to know?:
PR with fix here #51

Environment:

  • Kubernetes version (use kubectl version): 1.23.0 client, 1.21.10 server
  • Cloud provider or configuration: AWS and Azure
  • Install tools:
  • Others:

policy-rules | Add CLI flag that enables merging duplicate or overlapping rules

The first 3 rules can be collapsed into 1 rule:

  TYPE           | SUBJECT       | VERBS | NAMESPACE   | API GROUP | KIND    | NAMES       | NONRESOURCEURI | ORIGINATED FROM                 
+----------------+---------------+-------+-------------+-----------+---------+-------------+----------------+--------------------------------+
  ServiceAccount | the-test-user | get   | policyrules | core      | *       |             |                | Roles>>policyrules/some-rules   
  ServiceAccount | the-test-user | get   | policyrules | core      | *       |             |                | Roles>>policyrules/more-rules   
  ServiceAccount | the-test-user | get   | policyrules | core      | secrets | some-secret |                | Roles>>policyrules/some-rules   
  ServiceAccount | the-test-user | get   | policyrules | core      | secrets |             |                | Roles>>policyrules/more-rules   
  ServiceAccount | the-test-user | list  | policyrules | core      | secrets | some-secret |                | Roles>>policyrules/some-rules   
  ServiceAccount | the-test-user | watch | policyrules | core      | secrets | some-secret |                | Roles>>policyrules/some-rules 
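For illustration, the single rule the first three rows could merge into might look like this (role and namespace names taken from the table above; a sketch, not tool output):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: some-rules-merged
  namespace: policyrules
rules:
- apiGroups: [""]    # core
  resources: ["*"]   # "get" on all core resources subsumes the narrower secrets rules
  verbs: ["get"]
```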

Why is this needed:
Having that functionality can reduce the number of rules one needs to review. It only refers to the actual, effective policy.

"show" command does not deduplicate apigroups with different versions

What happened:
When running "show" against a 1.23 cluster, I've noticed some RBAC rules are duplicated, namely "autoscaling" and "policy".
I took a look at why this is happening and found that basically the same groups with different versions get iterated over.

What you expected to happen:
Groups with different versions get merged
How to reproduce it (as minimally and precisely as possible):
run against a cluster that has multiple versions of resources
Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.23.13
  • Cloud provider or configuration:
  • Install tools:
  • Others:

Add subresources to generate

What would you like to be added:
Add the option to include subresources like pods/exec in generated RBAC files.
Why is this needed:
Sometimes you want to give people more granular permissions on certain things, and having a complete list of all available subresources in your RBAC files so you can easily do so would be nice.
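For reference, a subresource is written as resource/subresource in a rule. A hand-written example of what such generated output could include (role name and namespace assumed):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec-example
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods/exec"]  # subresource: permits kubectl exec into pods
  verbs: ["create"]
```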

Allow filtering for cluster-scoped/namespaced resources

What would you like to be added:
Add a flag to generate RBAC for only namespaced or only cluster-scoped resources,
e.g. rbac-tool show --scope=cluster or rbac-tool show --scope=namespace
Why is this needed:
To be able to grant all possible rights within a specific namespace while preventing use of those resources in other namespaces.
This would allow for more granular usage of the generated roles.

kubectl rbac-tool gen is written to stderr

What happened:

I tried to redirect the output to a file with the following command, but the result was displayed on the screen and test.yaml was empty.

kubectl rbac-tool gen --deny-resources=secrets. --allowed-verbs=get,list,watch > test.yaml

However, the following command works fine.

kubectl rbac-tool gen --deny-resources=secrets. --allowed-verbs=get,list,watch 2> test.yaml

I thought that writing the result to stderr might be a bug, but there might be circumstances that I don't know about. Why is it written to stderr?

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or configuration:
  • Install tools:
  • Others:

Show [Cluster]RoleBinding in rbac-tool lookup

Not sure if this is a question or an enhancement request, but I was a bit surprised to see that the rbac-tool lookup output doesn't show the corresponding [Cluster]RoleBindings associating the given ServiceAccount with the outputted [Cluster]Roles. I've looked at rbac-tool lookup --help but didn't see anything relevant. Is this not possible currently?

My use case is that I already know what [Cluster]Roles the ServiceAccount is associated with, but I don't know from which [Cluster]RoleBindings, if that makes sense.

Generate policy with allow instead of deny

Is there a way to generate a policy with something like --allowed-objects? I'd like to create a role with just one allowed resource instead of supplying a list of things to deny. As it stands, if I only want a policy with one allowed resource, I would have to feed in a list of every other resource to deny.

Ex -

rbac-tool  gen  --allowed-resources=pods. --allowed-verbs=get,list
rbac-tool  gen  --allowed-resources=pods.,services --allowed-verbs=get,list

instead of...

rbac-tool  gen  --deny-resources=secrets.,services.,serviceaccount.,pvc.,pv.,...(on and on) --allowed-verbs=get,list

rbac-tool support for Mac M1

I tried to install the plugin today and I got this error message on my M1 Mac

kubectl krew install rbac-tool
Updated the local copy of plugin index.
Installing plugin: rbac-tool
W0719 13:57:13.752477   51960 install.go:164] failed to install plugin "rbac-tool": plugin "rbac-tool" does not offer installation for this platform
F0719 13:57:13.752552   51960 root.go:79] failed to install some plugins: [rbac-tool]: plugin "rbac-tool" does not offer installation for this platform

It would be great if rbac-tool supported the Mac M1 (arm64) platform.

No visualization when run on openshift cluster

What happened:
The page generated seems to have data however the data is not visualized. Only the Legend can be seen.

What you expected to happen:
Visualized rbac controls.

How to reproduce it (as minimally and precisely as possible):
./bin/rbac-tool visualize

Anything else we need to know?:
No.

Environment:

  • Kubernetes version (use kubectl version):
    Server Version: 4.10.42
    Kubernetes Version: v1.23.12+8a6bfe4
  • Cloud provider or configuration:
    On-prem
  • Install tools:
    ?
  • Others:
    ?

ExclusionCount stats without explanation

What would you like to be added:
Add reasons and detailed information of ExclusionCount if possible

Why is this needed:
I get the ExclusionCount info in Stats but have no idea why or what it refers to.
For the "why": is it because I'm missing some permission? If yes, which permission exactly?
For the "what": what exactly is being excluded?

analyze: Failed to evaluate rules - no such key: allowedTo

What happened:

I'm trying to run kubectl rbac-tool analyze, but it is failing on all the rules that have allowedTo in them (which is all of the default ones).

What you expected to happen:

I expect it to analyze without erroring.

How to reproduce it (as minimally and precisely as possible):

$ kubectl rbac-tool analyze       
E0126 12:46:45.944039   30783 analysis.go:316] Failed to evaluate rule 'Secret Readers' - no such key: allowedTo
E0126 12:46:45.947001   30783 analysis.go:316] Failed to evaluate rule 'Workload Creators & Editors' - no such key: allowedTo
E0126 12:46:45.949413   30783 analysis.go:316] Failed to evaluate rule 'Identify Privileges Escalators - via impersonate' - no such key: allowedTo
E0126 12:46:45.952296   30783 analysis.go:316] Failed to evaluate rule 'Identify Privileges Escalators - via bind or escalate' - no such key: allowedTo
E0126 12:46:45.957127   30783 analysis.go:316] Failed to evaluate rule 'Storage & Data - Manipulate Cluster Shared Resources' - no such key: allowedTo
E0126 12:46:45.960619   30783 analysis.go:316] Failed to evaluate rule 'Networking - Manipulate Networking and Network Access related resources' - no such key: allowedTo
E0126 12:46:45.963584   30783 analysis.go:316] Failed to evaluate rule 'Installing or Modifying Admission Controllers' - no such key: allowedTo
E0126 12:46:45.967046   30783 analysis.go:316] Failed to evaluate rule 'Installing or Modifying Cluster Extensions (CRDs)' - no such key: allowedTo
E0126 12:46:45.973053   30783 analysis.go:316] Failed to evaluate rule 'Open Policy Agent (OPA) GateKeeper Administration' - no such key: allowedTo
AnalysisConfigInfo:
  Description: Rapid7 InsightCloudSec default RBAC analysis rules
  Name: InsightCloudSec
  Uuid: 9371719c-1031-468c-91ed-576fdc9e9f59
CreatedOn: "2022-01-26T12:46:45+09:00"
Findings: []
Stats:
  ExclusionCount: 0
  RuleCount: 9

Anything else we need to know?:

I also tried it on a vanilla 1.22 cluster and it worked, so I think this might be related to the cluster version. I'm not aware of a change to the RBAC model between those versions that would cause this, but of course I might have missed something.

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.20) exceeds the supported minor version skew of +/-1
  • Cloud provider or configuration:
    AWS/EKS

Some resources are not covered by "show", but exist in cluster

What happened:
Currently some resources, like events.events.k8s.io or nodes.metrics.k8s.io, don't show up in the "show" output, despite existing in the cluster

What you expected to happen:
"show" includes those API groups and resources
How to reproduce it (as minimally and precisely as possible):
Use a plain upstream cluster for events.events.k8s.io; use the metrics-server for metrics.k8s.io
Anything else we need to know?:
n/a
Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or configuration:
  • Install tools:
  • Others:

rbac-tool who-can create <custom_resource> fails with `memory budget exceeded` (1.3 GB usage)

What happened:

Running the following command within a k8s container fails:

$ rbac-tool who-can create  mysqlinstances.database.orange.com
[...]
Failed to run program - memory budget exceeded (6:24)
|        {  .Verb     in [Verb, "*"] and 
| .......................^

In htop, I see 6 processes with VIRT at 1.3 GB prior to the crash

What you expected to happen:

  • rbac-tool taking longer to produce output, but not crashing
  • a stack trace displayed to help diagnosis

How to reproduce it (as minimally and precisely as possible):

  • an openshift cluster with a large number of crds

Anything else we need to know?:

$ rbac-tool who-can create  mysqlinstances.database.orange.com -v 9
[...]
I0301 11:09:54.444305    1881 subject_permissions.go:72] {Kind:ServiceAccount APIGroup: Name:deployer [...]
Failed to run program - memory budget exceeded (6:24)
 |        {  .Verb     in [Verb, "*"] and 
 | .......................^

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or configuration:
  • Install tools:
  • Others:

rbac-tool crashes on Mac M1 (ARM)

What happened:
I installed rbac-tool using both krew and the binary download; both failed with the same error:

➜  ~ rbac-tool who-can
[1]    86368 killed     rbac-tool who-can
➜  ~ kubectl rbac-tool who-can
[1]    86841 killed     kubectl rbac-tool who-can
➜  ~ kubectl rbac-tool --help
[1]    86883 killed     kubectl rbac-tool --help
➜  ~ kubectl rbac-tool --help
[1]    86922 killed     kubectl rbac-tool --help
➜  ~ rbac-tool who-can
[1]    86936 killed     rbac-tool who-can

What you expected to happen:
rbac-tool should run successfully.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
➜  ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:58:09Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or configuration:
  • Install tools: krew and binary download.
  • Others:

Issues in rbac-tool visualize

  • ./rbac-tool viz creates an empty HTML file
  • ./rbac-tool viz --outformat dot still creates an HTML file (see output below):
[alcide-rbactool] Namespaces included '*'
[alcide-rbactool] Namespaces excluded 'kube-system'
[alcide-rbactool] Connecting to cluster ''
[alcide-rbactool] Generating Graph and Saving as 'rbac.html'
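The log suggests the output path is hard-wired to `rbac.html` regardless of `--outformat`. A minimal sketch of deriving the file name from the requested format instead (illustrative Python, not the tool's actual code):

```python
def output_name(base: str, outformat: str) -> str:
    """Pick the output file extension from the requested format,
    falling back to HTML for unknown formats."""
    ext = {"dot": "dot", "html": "html"}.get(outformat, "html")
    return f"{base}.{ext}"

print(output_name("rbac", "dot"))   # rbac.dot
print(output_name("rbac", "html"))  # rbac.html
```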

policy-rules | Add Roles and ClusterRoles columns

What would you like to be added:

When running
rbac-tool policy-rules {serviceAccount}
I would like two additional columns at the end that show, for each action, which Role or ClusterRole grants the right to perform it. For example:

(screenshot of the desired output with the extra Role/ClusterRole columns)

Why is this needed:
If you want to manage (in my case, remove) an action that a ServiceAccount can perform on a resource, it would be neat to see from which Role(s) or ClusterRole(s) this service account gets its rights.
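Until such columns exist, the roles granting a given verb on a resource can be found by filtering the raw role objects, remembering that RBAC verbs and resources match either literally or via the `"*"` wildcard. A hedged sketch in Python, run here against an inline sample standing in for `kubectl get clusterroles -o json` (the role names are made up for illustration):

```python
import json

# Sample standing in for `kubectl get clusterroles -o json` output
roles = json.loads("""
{"items":[
 {"metadata":{"name":"pod-deleter"},
  "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["delete"]}]},
 {"metadata":{"name":"viewer"},
  "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}
]}
""")

def grants(rule, verb, resource):
    # An RBAC rule grants access if its verbs and resources lists
    # contain either the requested value or the "*" wildcard
    return bool({verb, "*"} & set(rule.get("verbs", []))) and \
           bool({resource, "*"} & set(rule.get("resources", [])))

matching = [r["metadata"]["name"]
            for r in roles["items"]
            if any(grants(rule, "delete", "pods")
                   for rule in r.get("rules", []))]
print(matching)  # ['pod-deleter']
```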
