
It is now a requirement for clusters to run Kubernetes >=1.19.

Override labels that use the unregistered kubernetes.io prefix are deprecated. It'll soon be a requirement to use the kubeaudit.io prefix instead. Refer to this discussion for additional context.

kubeaudit ☁️ 🔒 💪

kubeaudit is a command line tool and a Go package that audits Kubernetes clusters for a variety of security concerns, such as:

  • run as non-root
  • use a read-only root filesystem
  • drop scary capabilities, don't add new ones
  • don't run privileged
  • and more!

tl;dr: kubeaudit makes sure you deploy secure containers!

Package

To use kubeaudit as a Go package, see the package docs.

The rest of this README will focus on how to use kubeaudit as a command line tool.

Command Line Interface (CLI)

Installation

Brew

brew install kubeaudit

Download a binary

Kubeaudit has official releases that are blessed and stable: Official releases

DIY build

Main may have newer features than the stable releases. If you need a newer feature not yet included in a release, make sure you're using the latest Go and run the following:

go get -v github.com/Shopify/kubeaudit

Start using kubeaudit with the Quick Start or view all the supported commands.

Kubectl Plugin

Prerequisite: kubectl v1.12.0 or later

With kubectl v1.12.0 introducing easy pluggability of external functions, kubeaudit can be invoked as kubectl audit by

  • running make plugin and having $GOPATH/bin available in your path.

or

  • renaming the binary to kubectl-audit and having it available in your path.
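The rename approach can be sketched as follows. kubectl discovers any executable named kubectl-<name> on your PATH as the plugin kubectl <name>; the stand-in binary below is only for illustration (in practice you would copy your kubeaudit build):

```shell
# kubectl exposes any executable named kubectl-<name> on PATH as `kubectl <name>`.
# A stand-in binary is used here for illustration; replace it with the real kubeaudit build.
mkdir -p "$HOME/.local/bin"
cp "$(command -v ls)" "$HOME/.local/bin/kubectl-audit"
chmod +x "$HOME/.local/bin/kubectl-audit"
export PATH="$HOME/.local/bin:$PATH"

# kubectl would now resolve `kubectl audit` to this executable:
command -v kubectl-audit
```

The directory used here is arbitrary; any directory already on your PATH works the same way.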

Docker

We no longer release images to Docker Hub (since Docker Hub sunset Free Team organizations). For the time being, old images are still available but may stop being available at any time. We will start publishing images to the GitHub Container Registry soon.

To run kubeaudit as a job in your cluster see Running kubeaudit in a cluster.

Quick Start

kubeaudit has three modes:

  1. Manifest mode
  2. Local mode
  3. Cluster mode

Manifest Mode

If a Kubernetes manifest file is provided using the -f/--manifest flag, kubeaudit will audit the manifest file.

Example command:

kubeaudit all -f "/path/to/manifest.yml"

Example output:

$ kubeaudit all -f "internal/test/fixtures/all_resources/deployment-apps-v1.yml"

---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: deployment
    namespace: deployment-apps-v1

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/container' should be added.
   Metadata:
      Container: container
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/container

-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
   Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.

-- [error] CapabilityShouldDropAll
   Message: Capability not set to ALL. Ideally, you should drop ALL capabilities and add the specific ones you need to the add list.
   Metadata:
      Container: container
      Capability: AUDIT_WRITE
...

If no errors with a given minimum severity are found, the following is returned:

All checks completed. 0 high-risk vulnerabilities found

Autofix

Manifest mode also supports autofixing all security issues using the autofix command:

kubeaudit autofix -f "/path/to/manifest.yml"

To write the fixed manifest to a new file instead of modifying the source file, use the -o/--output flag.

kubeaudit autofix -f "/path/to/manifest.yml" -o "/path/to/fixed"

To fix a manifest based on custom rules specified in a kubeaudit config file, use the -k/--kconfig flag.

kubeaudit autofix -k "/path/to/kubeaudit-config.yml" -f "/path/to/manifest.yml" -o "/path/to/fixed"

Cluster Mode

Kubeaudit can detect if it is running within a container in a cluster. If so, it will try to audit all Kubernetes resources in that cluster:

kubeaudit all

Local Mode

Kubeaudit will try to connect to a cluster using the local kubeconfig file ($HOME/.kube/config). A different kubeconfig location can be specified using the --kubeconfig flag. To specify a context of the kubeconfig, use the -c/--context flag.

kubeaudit all --kubeconfig "/path/to/config" --context my_cluster

For more information on kubernetes config files, see https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

Audit Results

Kubeaudit produces results with three levels of severity:

  • Error: A security issue or invalid Kubernetes configuration
  • Warning: A best practice recommendation
  • Info: Informational, no action required. This includes results that are overridden

The minimum severity level can be set using the -m/--minseverity flag.

By default kubeaudit will output results in a human-readable way. If the output is intended to be further processed, it can be set to output JSON using the --format json flag. To output results as logs (the previous default) use --format logrus. Some output formats include colors to make results easier to read in a terminal. To disable colors (for example, if you are sending output to a text file), you can use the --no-color flag.

You can generate a kubeaudit report in SARIF using the --format sarif flag. To write the SARIF results to a file, you can redirect the output with >. For example:

kubeaudit all -f path-to-my-file.yaml --format="sarif" > example.sarif

If there are results of severity level error, kubeaudit will exit with exit code 2. This can be changed using the --exitcode/-e flag.
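As a sketch of how that exit-code contract can be used in a script or CI step (the audit function below is a stand-in for a real kubeaudit invocation, since the actual findings depend on your manifests):

```shell
# Stand-in for `kubeaudit all -f manifest.yml`: pretend error-level results
# were found, so the command exits with the default error exit code 2.
audit() { return 2; }

if audit; then
  echo "no error-level results"
else
  status=$?
  if [ "$status" -eq 2 ]; then
    echo "error-level findings (exit code $status)"
  fi
fi
```

In a real pipeline you would replace the stand-in with the kubeaudit command itself and let the non-zero exit code fail the step.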

For all the ways kubeaudit can be customized, see Global Flags.

Commands

| Command | Description | Documentation |
| ------- | ----------- | ------------- |
| all | Runs all available auditors, or those specified using a kubeaudit config. | docs |
| autofix | Automatically fixes security issues. | docs |
| version | Prints the current kubeaudit version. | |

Auditors

Auditors can also be run individually.

| Command | Description | Documentation |
| ------- | ----------- | ------------- |
| apparmor | Finds containers running without AppArmor. | docs |
| asat | Finds pods using an automatically mounted default service account. | docs |
| capabilities | Finds containers that do not drop the recommended capabilities or add new ones. | docs |
| deprecatedapis | Finds any resource defined with a deprecated API version. | docs |
| hostns | Finds containers that have HostPID, HostIPC or HostNetwork enabled. | docs |
| image | Finds containers which do not use the desired version of an image (via the tag) or use an image without a tag. | docs |
| limits | Finds containers which exceed the specified CPU and memory limits or do not specify any. | docs |
| mounts | Finds containers that have sensitive host paths mounted. | docs |
| netpols | Finds namespaces that do not have a default-deny network policy. | docs |
| nonroot | Finds containers running as root. | docs |
| privesc | Finds containers that allow privilege escalation. | docs |
| privileged | Finds containers running as privileged. | docs |
| rootfs | Finds containers which do not have a read-only filesystem. | docs |
| seccomp | Finds containers running without Seccomp. | docs |

Global Flags

| Short | Long | Description |
| ----- | ---- | ----------- |
| | --format | The output format to use (one of "sarif", "pretty", "logrus", "json") (default is "pretty") |
| | --kubeconfig | Path to local Kubernetes config file. Only used in local mode (default is $HOME/.kube/config) |
| -c | --context | The name of the kubeconfig context to use |
| -f | --manifest | Path to the yaml configuration to audit. Only used in manifest mode. You may use - to read from stdin. |
| -n | --namespace | Only audit resources in the specified namespace. Not currently supported in manifest mode. |
| -g | --includegenerated | Include generated resources in the scan (such as Pods generated by Deployments). Use this flag if you would like kubeaudit to produce results for generated resources, for example if you have custom resources or want to catch orphaned resources whose owner resource no longer exists. |
| -m | --minseverity | Set the lowest severity level to report (one of "error", "warning", "info") (default is "info") |
| -e | --exitcode | Exit code to use if there are results with severity of "error". Conventionally, 0 is used for success and all non-zero codes for an error. (default is 2) |
| | --no-color | Don't use colors in the output (default is false) |

Configuration File

The kubeaudit config can be used for two things:

  1. Enabling only some auditors
  2. Specifying configuration for auditors

Any configuration that can be specified using flags for the individual auditors can be represented using the config.

The config has the following format:

enabledAuditors:
  # Auditors are enabled by default if they are not explicitly set to "false"
  apparmor: false
  asat: false
  capabilities: true
  deprecatedapis: true
  hostns: true
  image: true
  limits: true
  mounts: true
  netpols: true
  nonroot: true
  privesc: true
  privileged: true
  rootfs: true
  seccomp: true
auditors:
  capabilities:
    # add capabilities needed to the add list, so kubeaudit won't report errors
    allowAddList: ['AUDIT_WRITE', 'CHOWN']
  deprecatedapis:
    # If no versions are specified and the 'deprecatedapis' auditor is enabled, WARN
    # results will be generated for the resources defined with a deprecated API.
    currentVersion: '1.22'
    targetedVersion: '1.25'
  image:
    # If no image is specified and the 'image' auditor is enabled, WARN results
    # will be generated for containers which use an image without a tag
    image: 'myimage:mytag'
  limits:
    # If no limits are specified and the 'limits' auditor is enabled, WARN results
    # will be generated for containers which have no cpu or memory limits specified
    cpu: '750m'
    memory: '500Mi'

For more details about each auditor, including a description of the auditor-specific configuration in the config, see the Auditor Docs.

Note: The kubeaudit config is not the same as the kubeconfig file specified with the --kubeconfig flag, which refers to the Kubernetes config file (see Local Mode). Also note that only the all and autofix commands support using a kubeaudit config. It will not work with other commands.

Note: If flags are used in combination with the config file, flags will take precedence.

Override Errors

Security issues can be ignored for specific containers or pods by adding override labels. This means the auditor will produce info results instead of error results and the audit result name will have Allowed appended to it. The labels are documented in each auditor's documentation, but the general format for auditors that support overrides is as follows:

An override label consists of a key and a value.

The key is a combination of the override type (container or pod) and an override identifier which is unique to each auditor (see the docs for the specific auditor). The key can take one of two forms depending on the override type:

  1. Container overrides, which override the auditor for that specific container, are formatted as follows:
     container.kubeaudit.io/[container name].[override identifier]
  2. Pod overrides, which override the auditor for all containers within the pod, are formatted as follows:
     kubeaudit.io/[override identifier]
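Put together, a Deployment template with both override forms might look like the following sketch. The identifier allow-automount-service-account-token is assumed here to be the asat auditor's override identifier; confirm identifiers in the specific auditor docs before relying on them:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp          # hypothetical resource, for illustration only
spec:
  template:
    metadata:
      labels:
        # Pod override: applies to all containers in the pod.
        kubeaudit.io/allow-automount-service-account-token: "SomeReason"
        # Container override: applies only to the container named "myapp".
        container.kubeaudit.io/myapp.allow-automount-service-account-token: "SomeReason"
```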

If the value is set to a non-empty string, it will be displayed in the info result as the OverrideReason:

$ kubeaudit asat -f "auditors/asat/fixtures/service-account-token-true-allowed.yml"

---------------- Results for ---------------

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: replicationcontroller
    namespace: service-account-token-true-allowed

--------------------------------------------

-- [info] AutomountServiceAccountTokenTrueAndDefaultSAAllowed
   Message: Audit result overridden: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.
   Metadata:
      OverrideReason: SomeReason

As per the Kubernetes spec, the value must be 63 characters or less, and must be empty or begin and end with an alphanumeric character ([a-z0-9A-Z]), with dashes (-), underscores (_), dots (.), and alphanumerics in between.
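As a sketch, those value rules can be checked with a small shell function (the regex below is derived from the rules just stated, not taken from kubeaudit itself):

```shell
# Returns success if $1 is a valid label value: empty, or up to 63 characters
# that begin and end with an alphanumeric, with -, _ and . allowed in between.
valid_label_value() {
  [ -z "$1" ] && return 0
  printf '%s\n' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$'
}

valid_label_value "SomeReason"        && echo "SomeReason: valid"
valid_label_value "-starts-with-dash" || echo "-starts-with-dash: invalid"
```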

Multiple override labels (for multiple auditors) can be added to the same resource.

See the specific auditor docs for the auditor you wish to override for examples.

To learn more about labels, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

Contributing

If you'd like to fix a bug, contribute a feature or just correct a typo, please feel free to do so as long as you follow our Code of Conduct.

  1. Create your own fork!
  2. Get the source: go get github.com/Shopify/kubeaudit
  3. Go to the source: cd $GOPATH/src/github.com/Shopify/kubeaudit
  4. Add your forked repo as a fork: git remote add fork https://github.com/you-are-awesome/kubeaudit
  5. Create your feature branch: git checkout -b awesome-new-feature
  6. Install Kind
  7. Run the tests to see everything is working as expected: USE_KIND=true make test (to run tests without Kind: make test)
  8. Commit your changes: git commit -am 'Adds awesome feature'
  9. Push to the branch: git push fork
  10. Sign the Contributor License Agreement
  11. Submit a PR (All PRs must be labeled with 🐛 (Bug fix), ✨ (New feature), 📖 (Documentation update), or ⚠️ (Breaking changes) )
  12. ???
  13. Profit

Note that if you didn't sign the CLA before opening your PR, you can re-run the check by adding a comment to the PR that says "I've signed the CLA!"

kubeaudit's People

Contributors

aslafy-z, bvwells, catherinejones, csgregorian, cursedcoder, dani-santos-code, dependabot[bot], genevieveluyt, hazcod, itsgarcia, jcbbc, jerr, jinankjain, johscheuer, jonpulsifer, josedonizetti, klautcomputing, knisbet, lrakai, natalysheinin, nobletrout, nschhina, raffis, rxbchen, schnatterer, ser87ch, spiffyy99, superbrothers, tmlayton, withshubh


kubeaudit's Issues

Audit PodSecurityPolicy, AppArmor, and Seccomp

Running pods (if they're using PSP/AppArmor/Seccomp) will bear one of the following annotations:

metadata:
  annotations:
    # podsecuritypolicy
    kubernetes.io/psp: name

    # seccomp
    seccomp.security.alpha.kubernetes.io/pod: <profile>
    container.seccomp.security.alpha.kubernetes.io/<container name>: <profile>

    # apparmor
    apparmor.security.beta.kubernetes.io/pod: <profile>
    container.apparmor.security.beta.kubernetes.io/<container name>: <profile>

possible seccomp profiles:

  • docker/default
  • localhost/customprofilename
  • unconfined

possible apparmor profiles:

  • runtime/default
  • localhost/customprofilename
  • unconfined

pod security policies are referenced by their metadata.name

Wrong default config on Linux?

Versions:

Kubectl version

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Kubeaudit

v0.3.0
URL: https://github.com/Shopify/kubeaudit/releases/download/v0.3.0/kubeaudit_0.3.0_linux_amd64.tar.gz

When running kubectl-audit all
the following error is observed:

ERRO[0000] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined                                                           
panic: runtime error: invalid memory address or nil pointer dereference                                                                                                           
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x457818]                                                                                                            
                                                                                                                                                                                  
goroutine 1 [running]:                                                                                                                                                            
github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes.NewForConfig(0x0, 0x1, 0x1, 0x116de20)                                                                            
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes/clientset.go:399 +0x4e                                                          
github.com/Shopify/kubeaudit/cmd.kubeClient(0x0, 0x0, 0xc0000f39f0, 0x4d6fdd, 0x1049d60)                                                                                          
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/kubernetes.go:40 +0xe5                                                                                         
github.com/Shopify/kubeaudit/cmd.getResources(0xefb520, 0xc0002864c0, 0x0, 0x0, 0xffffffffffffffff)                                                                               
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/util.go:226 +0x9a                                                                                              
github.com/Shopify/kubeaudit/cmd.runAudit.func1(0x1a4fd20, 0x1a78dd0, 0x0, 0x0)                                                                                                   
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/util.go:294 +0x75                                                                                              
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).execute(0x1a4fd20, 0x1a78dd0, 0x0, 0x0, 0x1a4fd20, 0x1a78dd0)                                               
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:760 +0x2cc                                                                
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1a514e0, 0x1a514e0, 0xc0000f3f30, 0x1)                                                           
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:846 +0x2fd                                                                
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).Execute(0x1a514e0, 0x4056a0, 0xc000086058)                                                                  
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:794 +0x2b                                                                 
github.com/Shopify/kubeaudit/cmd.Execute()                                                                                                                                        
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/root.go:37 +0x2d                                                                                               
main.main()                                                                                                                                                                       
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/main.go:6 +0x20 

The solution is to define the configuration file location:
kubectl-audit all -c ~/.kube/config

NetworkPolicy check not implemented?

In https://github.com/Shopify/kubeaudit#audit-network-policies, the following is described:

It checks that every namespace should have a default deny network policy installed. 
See Kubernetes Network Policies for more information:

But the code https://github.com/Shopify/kubeaudit/blob/master/cmd/networkPolicies.go only iterates over existing NetworkPolicies and doesn't check whether a default-deny policy is set. Also, currently only the default allow-all policy is checked (which leads to a warning).

Client Version Not Available

version attempts to print the Kubernetes client version, but that information isn't available so it only prints:

INFO[0000] Kubernetes client version                     Major= Minor= Platform=darwin/amd64

This is because client-go reports its version with the function https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/pkg/version/version.go#L28-L30. We don't use the k8s builder that would set those at build time, so the values fall back to https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/pkg/version/base.go#L42-L62.

Since the imported client-go will always be the same for any build of kubeaudit, I think we should add build and platform info to the kubeaudit version and stop attempting to report the kubernetes client version.

Version info should move to version command

This

{"Major":"1","Minor":"7+","Platform":"linux/amd64","level":"info","msg":"Kubernetes server version","time":"2017-10-21T15:35:26-04:00"}
{"Major":"","Minor":"","Platform":"darwin/amd64","level":"info","msg":"Kubernetes client version","time":"2017-10-21T15:35:26-04:00"}

should only be shown when kubeaudit version is called and not every time kubeaudit -l is invoked.

RunAsNonRoot can be inherited from PodSecurityContext

The current check only covers the container SecurityContext, but RunAsNonRoot, RunAsUser, RunAsGroup and SELinuxOptions are all inherited from the PodSecurityContext unless they are defined explicitly per container.

Please consider adding PodSecurityContext to the list of checked values.

Ref:

func checkRunAsNonRoot(container Container, result *Result) {

Version Command Panics

kubeaudit version returns the wrong version number and then panics.

INFO[0000] Kubeaudit                                     Version=0.1.0
Running inside cluster, using the cluster config
ERRO[0000] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1059878]

#38 breaks :GoRename

#38 breaks :GoRename. I haven't figured out why, but since it was merged, :GoRename fails with:

/github.com/Shopify/kubeaudit/cmd/types.go|11| 10: expected type, found '=' (and 10 more errors)
/github.com/Shopify/kubeaudit/cmd/util.go|87| 16: undeclared name: Capability
/github.com/Shopify/kubeaudit/cmd/util.go|89| 16: undeclared name: Capability
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|57| 55: undeclared name: DeploymentList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|66| 56: undeclared name: StatefulSetList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|75| 54: undeclared name: DaemonSetList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|84| 48: undeclared name: PodList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|93| 66: undeclared name: ReplicationControllerList

We should find out why :)

Extra newline generated by autofix on manifest starting with comment after yaml separator

ISSUE TYPE
  • Bug Report
  • Feature Idea

BUG REPORT

SUMMARY

Extra newline is generated by Autofix on manifest starting with comment after yaml separator.

ENVIRONMENT
  • Kubeaudit version: 0.4.1 (branch autofix)
  • Kubeaudit install method: -
STEPS TO REPRODUCE

Create a manifest file with the following structure

---
#This is a comment 3
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: null
spec:
  rules: #This is a comment 1
  - http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
        path: /testpath
status:
  loadBalancer: {}
#This is a comment 5

run

kubeaudit autofix -f /path/to/manifest.yml
EXPECTED RESULTS

There should not be an extra newline after the yaml separator.

ACTUAL RESULTS

changes the file to

---
  
#This is a comment 3
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: null
spec:
  rules: #This is a comment 1
  - http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
        path: /testpath
status:
  loadBalancer: {}
#This is a comment 5

Update all dependencies

Kubeaudit currently has a couple of older dependencies. It would be great to make kubeaudit run on the newer versions of all its dependencies.

New (binary) release?

I would be interested in kubeaudit, but the latest release is from November 2017. Do you have plans to cut a new release in the near future and provide binaries to download?

Refactoring of tests

Once #19 is merged, we could get rid of all the fakeaudit/fakeResource.go files with one helper function in utils.go that just traverses the test folder and builds everything it needs on the fly. What do you think about this @jinankjain?

Enhance autofix's yaml handling

When running autofix, it reads the yaml file correctly, the resource gets fixed, and the write works as well. Yet there are three things that could be improved upon:

Empty resources get added to the yaml file:

status:
  loadBalancer: {}

Yaml comments disappear

This is a known issue in YAML handling and has been an open issue in go-yaml for more than 2 years, see:

There is an initial version of a patch out there but it was never finished:

Seems like there is one parser in python that can preserve comments

Order is not preserved

Support for yaml.MapSlice was added in go-yaml.v2 see:

The whole discussion about the feature can be found here

Autofix broken for multiple containers

When autofixing

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cababilitiesAdded
  namespace: fakeDeploymentSC
spec:
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        apps: fakeSecurityContext
    spec:
      containers:
      - name: fakeContainerSC1
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
      - name: fakeContainerSC2

The resulting YAML is

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cababilitiesAdded
  namespace: fakeDeploymentSC
spec:
  selector: null
  strategy: {}
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/fakeContainerSC1: runtime/default
        container.apparmor.security.beta.kubernetes.io/fakeContainerSC2: runtime/default
        seccomp.security.alpha.kubernetes.io/pod: runtime/default
      creationTimestamp: null
      labels:
        apps: fakeSecurityContext
    spec:
      automountServiceAccountToken: false
      containers:
      - name: fakeContainerSC1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
      - name: fakeContainerSC2
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
status: {}

which has

privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true

added for the first container but not the second. The example above is actually the test file fixtures/autofix_v1.yml. It tests against the expected output fixtures/autofix-fixed_v1.yml. The expected output has

privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true

for both containers yet the test still passes...

open config/capabilities-drop-list.yml: no such file or directory

running

kubeaudit -l -n test all

I get:

ERRO[0000] This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory KubeType=pod Name=test-775c4c6459-wwjbf Namespace=test

Installed from master just today

3a363010d61aecd9d8c26fe7b26763facb956f97

relevant config part for the given pod:

          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1000
            runAsNonRoot: true
            privileged: false
            capabilities:
              drop:
                - all

unknown command "all" for "kubeaudit"

Running kubeaudit all should perform all available audits, but the command is not recognized.

➜  ./kubeaudit -l all
Error: unknown command "all" for "kubeaudit"

Did you mean this?
    allowpe

Feature Request: Filter dropped caps

In the current situation, kubeaudit only audits whether any capability is dropped or not. It does not take specific capabilities into account.

This feature would introduce a flag through which a user can specify which caps must be dropped. kubeaudit would then error if those caps are not dropped, instead of just giving a warning.

What do you say @jonpulsifer @klautcomputing ?

CI

Do we want it? And if yes what do we want? Travis or Circle?

Json is sad

When logging to JSON the output gets mangled and instead of getting nice info like this:

ERRO[0000] Not all of the recommended capabilities were dropped! Please drop the mentioned capabiliites. CapsNotDropped="[NET_BIND_SERVICE]" KubeType=deployment Name=foo

only the following is shown:

{"CapsNotDropped":{},"KubeType":{},"Name":{},"level":"error","msg":"Not all of the recommended capabilities were dropped! Please drop the mentioned capabiliites.","time":"2017-10-30T13:44:14-04:00"}

2x your capability drops!

Currently, autofix does not detect that caps have already been dropped, so it drops them again.
I haven't looked into why yet, but this is the result:

          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - SETGID
            - SETFCAP
            - SETPCAP
            - SETUID
            - SYS_CHROOT
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
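A minimal sketch of the missing deduplication, with illustrative names rather than kubeaudit's actual API:

```go
package main

import "fmt"

// dropCaps appends only capabilities not already present in the
// existing drop list, avoiding the duplicated entries shown above.
func dropCaps(existing, toDrop []string) []string {
	seen := make(map[string]bool, len(existing))
	for _, c := range existing {
		seen[c] = true
	}
	for _, c := range toDrop {
		if !seen[c] {
			existing = append(existing, c)
			seen[c] = true
		}
	}
	return existing
}

func main() {
	fmt.Println(dropCaps([]string{"CHOWN", "KILL"}, []string{"CHOWN", "NET_RAW"})) // [CHOWN KILL NET_RAW]
}
```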

Authenticate to cluster

I am trying to use kubeaudit with my kubernetes cluster. How do I specify an OIDC token in the header for authentication or is this capability not supported at this time?

kubeaudit_0.2.0_darwin_amd64 shenoyk$ ./kubeaudit -l rootfs
ERRO[0000] No Auth Provider found for name "oidc"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1a71aa7]

goroutine 1 [running]:
github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes.(*Clientset).AppsV1beta1(...)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes/clientset.go:154
github.com/Shopify/kubeaudit/cmd.getDeployments(0x0, 0xc420112c00)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/kubernetes.go:48 +0x37
github.com/Shopify/kubeaudit/cmd.getKubeResources(0x0, 0x1, 0x1, 0x2341320)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/util.go:320 +0x40
github.com/Shopify/kubeaudit/cmd.runAudit.func1(0x23a9620, 0xc420321170, 0x0, 0x1)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/util.go:409 +0x4ce
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).execute(0x23a9620, 0xc420321140, 0x1, 0x1, 0x23a9620, 0xc420321140)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x23a9840, 0x23a9840, 0xc4203bbf18, 0x1)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).Execute(0x23a9840, 0x0, 0x1b2dcc0)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/Shopify/kubeaudit/cmd.Execute()
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/root.go:32 +0x31
main.main()
/Users/leex/go/src/github.com/Shopify/kubeaudit/main.go:6 +0x20
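The stack trace suggests the clientset is used even though client creation failed (the nil dereference happens inside `clientset.go`). A hedged sketch of the missing guard, using simulated types rather than the real client-go API:

```go
package main

import (
	"errors"
	"fmt"
)

type clientset struct{}

// newClient simulates client creation failing when the requested auth
// provider plugin is not available; the caller must check the error
// before using the (nil) clientset, which is the check the panicking
// code path appears to skip.
func newClient(authProvider string) (*clientset, error) {
	if authProvider == "oidc" {
		return nil, errors.New(`no Auth Provider found for name "oidc"`)
	}
	return &clientset{}, nil
}

func main() {
	cs, err := newClient("oidc")
	if err != nil {
		fmt.Println("error:", err) // clean error instead of a SIGSEGV
		return
	}
	_ = cs
}
```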

`-c` is ignored when `-l` is used

When both -c and -l are used, the expected behaviour is to use the config file specified by -c. That switch is currently ignored and -l forces the use of the default $HOME/.kube/config.

./kubeaudit -l version
INFO[0000] Kubeaudit                                     Version=0.1.0
INFO[0000] Kubernetes server version                     Major=1 Minor=10+ Platform=linux/amd64
INFO[0000] Kubernetes client version                     Major= Minor= Platform=darwin/amd64

./kubeaudit -l -c /notarealfile version
INFO[0000] Kubeaudit                                     Version=0.1.0
INFO[0000] Kubernetes server version                     Major=1 Minor=10+ Platform=linux/amd64
INFO[0000] Kubernetes client version                     Major= Minor= Platform=darwin/amd64
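The expected precedence can be sketched as follows (the helper name is hypothetical; only the behaviour matters: an explicit `-c` value should win over the default, even when `-l` is set):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// kubeconfigPath returns the config file to use: the -c flag value if
// given, otherwise the default $HOME/.kube/config.
func kubeconfigPath(cFlag string) string {
	if cFlag != "" {
		return cFlag
	}
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".kube", "config")
}

func main() {
	fmt.Println(kubeconfigPath("/notarealfile")) // /notarealfile
}
```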

labels don't seem to be working?

Relevant config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nfs-server
  labels:
    name: test-nfs-server
    kubeaudit.allow.privilegeEscalation: "true"
    kubeaudit.allow.privileged: "true"
    kubeaudit.allow.capability: "true"
    kubeaudit.allow.runAsRoot: "true"
    kubeaudit.allow.readOnlyRootFilesystemFalse: "true"
spec:
  selector:
    matchLabels:
      name: test-nfs-server
  replicas: 1
  template:
    metadata:
      labels:
        name:  test-nfs-server
        kubeaudit.allow.privilegeEscalation: "true"
        kubeaudit.allow.privileged: "true"
        kubeaudit.allow.capability: "true"
        kubeaudit.allow.runAsRoot: "true"
        kubeaudit.allow.readOnlyRootFilesystemFalse: "true"

running

kubeaudit -l -v ERROR -n test all

Gives output:

time="2018-07-27T13:19:34+03:00" level=error msg="AllowPrivilegeEscalation not set which allows privilege escalation, please set to false" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="ReadOnlyRootFilesystem not set which results in a writable rootFS, please set to true" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="RunAsNonRoot is not set, which results in root user being allowed!" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="Privileged set to true! Please change it to false!" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="AllowPrivilegeEscalation not set which allows privilege escalation, please set to false" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="ReadOnlyRootFilesystem not set which results in a writable rootFS, please set to true" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="RunAsNonRoot is not set, which results in root user being allowed!" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="Privileged set to true! Please change it to false!" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test

am I missing something?

Check that all capabilities are dropped

With #34 we now have a list of all the capabilities that are dropped; now we should establish a way of making sure all possible capabilities are dropped. #33 would give us the functionality to specify that a cap intentionally wasn't dropped.

Bug: auditing yml does not work

There is a bug in auditing YAML: auditSecurityContext is invoked everywhere (in image.go, runAsNonRoot.go, etc.) instead of the function specific to each audit.

Support PodSecurityPolicies

Several things which kubeaudit checks for (such as privileges and capabilities) can also be controlled using PodSecurityPolicies (PSPs). Add support for auditing PSPs which takes into account override order with annotations and security contexts.

Some notes:

  • PSPs are cluster wide and as such will require different logic than is currently used for all of the resource specific settings.
  • As an additional side effect, PSPs may already live in a cluster unbeknownst to someone adding resources to that cluster. Kubeaudit should have the option to account for this even when auditing in "local mode" using kubernetes configuration files (which currently does not connect to a live cluster).

Introduce logging level in kubeaudit

In the current scenario kubeaudit emits logs when there is an error/warning, but there are other use cases.

For example, getting more information about the healthy k8s resources, i.e. the ones that are not violating any of the security policies laid out by kubeaudit.

So for this we need different log levels, for example:

INFO: this would be the most verbose log level
ERROR/WARNING: this would be the default log level

@klautcomputing

Another refactor issue

The code here https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L78-L80
should be refactored to something like this:

	var results []Result
	for _, resource := range resources {
		results = append(results, auditRunAsNonRoot(resource))
	}

Why am I saying "something like that"? Because we want to keep the concurrency (the goroutines), and that might require channels.
We want to do this because the print here https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L36-L38 is totally out of place, and the audit functions get used in other places where printing doesn't make sense.
Obviously, the print then needs to be put back in at the call site, e.g.

	var results []Result
	for _, resource := range resources {
		results = append(results, auditRunAsNonRoot(resource))
	}
	for _, result := range results {
		result.Print()
	}

pardon my pseudo code

False positive when all capabilities dropped

Running kubeaudit caps returns a lot of ERRO[0003] Capability not dropped messages for the pods with the following effective settings:

    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - all
      readOnlyRootFilesystem: true

From inside the container everything is clearly OK:

grep ^Cap /proc/1/status
CapInh:    0000000000000000
CapPrm:    0000000000000000
CapEff:    0000000000000000
CapBnd:    0000000000000000
CapAmb:    0000000000000000
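A sketch of the missing special case, assuming the audit compares against a recommended-drop list and should treat `all` (in any case) as dropping everything; names are illustrative, not kubeaudit's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// capsDropped reports whether the audit should pass: either the special
// value "all" (any case) appears in the drop list, or every recommended
// capability is explicitly dropped.
func capsDropped(dropped, recommended []string) bool {
	set := make(map[string]bool, len(dropped))
	for _, c := range dropped {
		if strings.EqualFold(c, "all") {
			return true
		}
		set[strings.ToUpper(c)] = true
	}
	for _, c := range recommended {
		if !set[strings.ToUpper(c)] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(capsDropped([]string{"all"}, []string{"CHOWN", "KILL"})) // true
}
```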

Autofix skips and drops resources

When kubeaudit is run with autofix -f file.yaml and the file to be autofixed contains resources that kubeaudit doesn't know about, e.g. ingress and service, the following happens:

WARN[0000] Skipping unsupported resource type extensions/v1beta1, Kind=Ingress
WARN[0000] Skipping unsupported resource type /v1, Kind=Service

Kubeaudit skips and drops them. What it should do is skip and keep.
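The desired "skip and keep" behaviour can be sketched like this (types and names are illustrative, not kubeaudit's actual API):

```go
package main

import "fmt"

type doc struct {
	Kind string
	Body string
}

// fixAll copies unsupported kinds through to the output unchanged
// instead of dropping them, and only rewrites kinds it knows how to fix.
func fixAll(docs []doc, supported map[string]bool, fix func(doc) doc) []doc {
	out := make([]doc, 0, len(docs))
	for _, d := range docs {
		if !supported[d.Kind] {
			out = append(out, d) // skip and keep: emit the resource verbatim
			continue
		}
		out = append(out, fix(d))
	}
	return out
}

func main() {
	docs := []doc{{Kind: "Deployment"}, {Kind: "Ingress"}, {Kind: "Service"}}
	fixed := fixAll(docs, map[string]bool{"Deployment": true}, func(d doc) doc { return d })
	fmt.Println(len(fixed)) // 3
}
```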

Which Kubernetes versions are supported?

I want to know which specific Kubernetes versions are supported.

For example, I am using the apps/v1 resource type for Deployments (from Kubernetes v1.9), but I can not check it because the tool doesn't support it...

Will it be supported soon?

Thank you very much for this helpful tool!

Versioning/Releases and Tags

πŸ‘‹ @jinankjain @jonpulsifer

Do we want to version kubeaudit?

  • Create releases and tags in Github
  • add a flag to the kubeaudit binary that reports the current version.

So that you can install a specific version via e.g. glide and check which version is currently installed with e.g. --version.

Audit for (nix) namespaces

  • hostNetwork: the use of the host's network namespace
  • hostPID: the use of the host's PID namespace
  • hostIPC: the use of the host's IPC namespace
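These map directly to pod-spec fields the audit would check; a pod sharing all three host namespaces looks like this:

```yaml
spec:
  hostNetwork: true  # shares the node's network namespace
  hostPID: true      # shares the node's PID namespace
  hostIPC: true      # shares the node's IPC namespace
```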

Allow labels don't support multiple containers

Problem: The current implementation of labels doesn't allow specifying which container a deviation is allowed for. E.g. kubeaudit.allow.capability.chown: "true" carries no information about whether it refers to the first or the second container when a resource has more than one:

      containers:
      - name: first
      - name: second

Solution: change labels so that they reference the container they apply to.
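One possible shape (purely hypothetical, not an implemented syntax) would embed the container name in the label key:

```yaml
kubeaudit.allow.capability.chown.second: "true"  # allows CHOWN only for the container named "second"
```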

Multi-* tests

Learning from #88 and #87, we need to introduce more tests, especially some that test multiple resources per config file and multiple containers per resource.

Build Fails on Alpine Linux

The version sort feature (sort -V) was added to GNU sort relatively recently. BusyBox and older versions don't support it.

sort: unrecognized option: V
BusyBox v1.28.4 (2018-07-17 15:21:40 UTC) multi-call binary.

Usage: sort [-nrugMcszbdfiokt] [-o FILE] [-k start[.offset][opts][,end[.offset][opts]] [-t CHAR] [FILE]...
