
openclarity / kubeclarity


This project forked from openclarity/openclarity


KubeClarity is a tool for detection and management of Software Bill Of Materials (SBOM) and vulnerabilities of container images and filesystems

License: Apache License 2.0

Shell 0.12% JavaScript 18.39% Go 75.68% Makefile 0.77% HTML 0.11% SCSS 4.67% Mustache 0.27%
kubernetes kubernetes-security sbom scanner security supply-chain vulnerabilities

kubeclarity's Introduction

KubeClarity Logo

KubeClarity is a tool for detection and management of Software Bill Of Materials (SBOM) and vulnerabilities of container images and filesystems. It scans both runtime K8s clusters and CI/CD pipelines for enhanced software supply chain security.


Why?

SBOM & Vulnerability Detection Challenges

  • Effective vulnerability scanning requires accurate Software Bill Of Materials (SBOM) detection:
    • Various programming languages and package managers
    • Various OS distributions
    • Package dependency information is usually stripped upon build
  • Which one is the best scanner/SBOM analyzer?
  • What should we scan: Git repos, builds, container images or runtime?
  • Each scanner/analyzer has its own format - how to compare the results?
  • How to manage the discovered SBOM and vulnerabilities?
  • How are my applications affected by a newly discovered vulnerability?

Solution

  • Separate vulnerability scanning into two phases (a CLI sketch follows this list):
    • Content analysis to generate SBOM
    • Scan the SBOM for vulnerabilities
  • Create a pluggable infrastructure to:
    • Run several content analyzers in parallel
    • Run several vulnerability scanners in parallel
  • Scan and merge results between different CI stages using KubeClarity CLI
  • Runtime K8s scan to detect vulnerabilities discovered post-deployment
  • Group scanned resources (images/directories) under defined applications to navigate the object tree dependencies (applications, resources, packages, vulnerabilities)
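
For illustration, the two-phase flow maps directly onto the CLI commands documented later in this README (image name and output file are placeholders):

  # Phase 1: content analysis to generate an SBOM
  kubeclarity-cli analyze nginx:latest --input-type image -o nginx.sbom

  # Phase 2: scan the generated SBOM for vulnerabilities
  kubeclarity-cli scan nginx.sbom --input-type sbom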

Features

  • Dashboard
    • Fixable vulnerabilities per severity
    • Top 5 vulnerable elements (applications, resources, packages)
    • New vulnerabilities trends
    • Package count per license type
    • Package count per programming language
    • General counters
  • Applications
    • Automatic application detection in K8s runtime
    • Create/edit/delete applications
    • Per application, navigation to related:
      • Resources (images/directories)
      • Packages
      • Vulnerabilities
      • Licenses in use by the resources
  • Application Resources (images/directories)
    • Per resource, navigation to related:
      • Applications
      • Packages
      • Vulnerabilities
  • Packages
    • Per package, navigation to related:
      • Applications
      • Linkable list of resources and the detecting SBOM analyzers
      • Vulnerabilities
  • Vulnerabilities
    • Per vulnerability, navigation to related:
      • Applications
      • Resources
      • List of detecting scanners
  • K8s Runtime scan
    • On-demand or scheduled scanning
    • Automatic detection of target namespaces
    • Scan progress and result navigation per affected element (applications, resources, packages, vulnerabilities)
    • CIS Docker benchmark
  • CLI (CI/CD)
    • SBOM generation using multiple integrated content analyzers (Syft, cyclonedx-gomod)
    • SBOM/image/directory vulnerability scanning using multiple integrated scanners (Grype, Dependency-track)
    • Merging of SBOM and vulnerabilities across different CI/CD stages
    • Export results to KubeClarity backend
  • API
    • The API for KubeClarity can be found here

Integrated SBOM generators and vulnerability scanners

KubeClarity content analyzer integrates with the following SBOM generators:

KubeClarity vulnerability scanner integrates with the following scanners:

Architecture

Getting Started

KubeClarity Backend

Install using Helm:

  1. Add Helm repo

    helm repo add kubeclarity https://openclarity.github.io/kubeclarity
  2. Save KubeClarity default chart values

    helm show values kubeclarity/kubeclarity > values.yaml
  3. Check the configuration in values.yaml and update the required values if needed. To enable and configure the supported SBOM generators and vulnerability scanners, please check the "analyzer" and "scanner" config under the "vulnerability-scanner" section in Helm values.

  4. Deploy KubeClarity with Helm

    helm install --values values.yaml --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity

    or for OpenShift Restricted SCC compatible install:

    helm install --values values.yaml --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity --set global.openShiftRestricted=true \
      --set kubeclarity-postgresql.securityContext.enabled=false --set kubeclarity-postgresql.containerSecurityContext.enabled=false \
      --set kubeclarity-postgresql.volumePermissions.enabled=true --set kubeclarity-postgresql.volumePermissions.securityContext.runAsUser="auto" \
      --set kubeclarity-postgresql.shmVolume.chmod.enabled=false
  5. Port forward to KubeClarity UI:

    kubectl port-forward -n kubeclarity svc/kubeclarity-kubeclarity 9999:8080
  6. Open KubeClarity UI in the browser: http://localhost:9999/
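
To verify that all components came up, a quick check (pod names assume the default release name kubeclarity):

    kubectl get pods -n kubeclarity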

NOTE
KubeClarity requires these K8s permissions:

Permission | Reason
Read secrets in CREDS_SECRET_NAMESPACE (default: kubeclarity) | Allows you to configure image pull secrets for scanning private image repositories.
Read config maps in the KubeClarity deployment namespace | Required for getting the configured template of the scanner job.
List pods in cluster scope | Required for calculating the target pods that need to be scanned.
List namespaces | Required for fetching the target namespaces to scan in the K8s runtime scan UI.
Create & delete jobs in cluster scope | Required for managing the jobs that scan the target pods in their namespaces.
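
A minimal sketch of ClusterRole rules covering these permissions (illustrative only; the Helm chart creates its own RBAC objects, and the secret/config-map access is namespace-scoped in practice):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeclarity-runtime-scan-example   # illustrative name, not the chart's actual object
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]      # list pods and namespaces cluster-wide
    verbs: ["list"]
  - apiGroups: ["batch"]
    resources: ["jobs"]                    # create & delete scanner jobs
    verbs: ["create", "delete"]
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]   # read pull secrets and the scanner job template
    verbs: ["get", "list"]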

Uninstall using Helm:

  1. Helm uninstall

    helm uninstall kubeclarity -n kubeclarity
  2. Clean resources

    By default, Helm will not remove the PVCs and PVs for the StatefulSets. Run the following command to delete them all:

    kubectl delete pvc -l app.kubernetes.io/instance=kubeclarity -n kubeclarity

Build and Run Locally with Demo Data

  1. Build UI & backend and start the backend locally (2 options):

    1. Using docker:
      1. Build UI and backend (the image tag is set using VERSION):
        VERSION=test make docker-backend
      2. Run the backend using demo data:
        docker run -p 8080:8080 -e FAKE_RUNTIME_SCANNER=true -e FAKE_DATA=true -e ENABLE_DB_INFO_LOGS=true -e DATABASE_DRIVER=LOCAL ghcr.io/openclarity/kubeclarity:test run
    2. Local build:
      1. Build UI and backend
        make ui && make backend
      2. Copy the built site:
        cp -r ./ui/build ./site
      3. Run the backend locally using demo data:
        FAKE_RUNTIME_SCANNER=true DATABASE_DRIVER=LOCAL FAKE_DATA=true ENABLE_DB_INFO_LOGS=true ./backend/bin/backend run
  2. Open KubeClarity UI in the browser: http://localhost:8080/

CLI

KubeClarity includes a CLI that can be run locally and is especially useful for CI/CD pipelines. It allows you to analyze images and directories to generate an SBOM, and to scan the SBOM for vulnerabilities. The results can be exported to the KubeClarity backend.

Installation

Binary Distribution

Download the release distribution for your OS from the releases page

Unpack the kubeclarity-cli binary, add it to your PATH, and you are good to go!

Docker Image

A Docker image is available at ghcr.io/openclarity/kubeclarity-cli with a list of available tags here.

Local Compilation

make cli

Copy ./cli/bin/cli into your PATH as kubeclarity-cli.
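
For example, assuming /usr/local/bin is on your PATH (the destination directory is an assumption):

make cli
sudo cp ./cli/bin/cli /usr/local/bin/kubeclarity-cli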

SBOM Generation

Usage:

kubeclarity-cli analyze <image/directory name> --input-type <dir|file|image(default)> -o <output file or stdout>

Example:

kubeclarity-cli analyze --input-type image nginx:latest -o nginx.sbom

Optionally, a list of the content analyzers to use can be configured using the ANALYZER_LIST env variable, separated by spaces (e.g. ANALYZER_LIST="<analyzer 1 name> <analyzer 2 name>").

Example:

ANALYZER_LIST="syft gomod" kubeclarity-cli analyze --input-type image nginx:latest -o nginx.sbom

Vulnerability Scanning

Usage:

kubeclarity-cli scan <image/sbom/directory/file name> --input-type <sbom|dir|file|image(default)> -f <output file>

Example:

kubeclarity-cli scan nginx.sbom --input-type sbom

Optionally, a list of the vulnerability scanners to use can be configured using the SCANNERS_LIST env variable, separated by spaces (e.g. SCANNERS_LIST="<scanner 1 name> <scanner 2 name>").

Example:

SCANNERS_LIST="grype trivy" kubeclarity-cli scan nginx.sbom --input-type sbom

Exporting Results to KubeClarity Backend

To export CLI results to the KubeClarity backend, you need to use an application ID as defined by the KubeClarity backend. The application ID can be found on the Applications screen in the UI or via the KubeClarity API.

Exporting SBOM

# The SBOM can be exported to KubeClarity backend by setting the BACKEND_HOST env variable and the -e flag.
# Note: Until TLS is supported, BACKEND_DISABLE_TLS=true should be set.
BACKEND_HOST=<KubeClarity backend address> BACKEND_DISABLE_TLS=true kubeclarity-cli analyze <image> --application-id <application ID> -e -o <SBOM output file>

# For example:
BACKEND_HOST=localhost:9999 BACKEND_DISABLE_TLS=true kubeclarity-cli analyze nginx:latest --application-id 23452f9c-6e31-5845-bf53-6566b81a2906 -e -o nginx.sbom

Exporting Vulnerability Scan Results

# The vulnerability scan result can be exported to KubeClarity backend by setting the BACKEND_HOST env variable and the -e flag.
# Note: Until TLS is supported, BACKEND_DISABLE_TLS=true should be set.

BACKEND_HOST=<KubeClarity backend address> BACKEND_DISABLE_TLS=true kubeclarity-cli scan <image> --application-id <application ID> -e

# For example:
SCANNERS_LIST="grype" BACKEND_HOST=localhost:9999 BACKEND_DISABLE_TLS=true kubeclarity-cli scan nginx.sbom --input-type sbom  --application-id 23452f9c-6e31-5845-bf53-6566b81a2906 -e

Advanced Configuration

SBOM generation using local docker image as input

# Local docker images can be analyzed using the LOCAL_IMAGE_SCAN env variable

# For example:
LOCAL_IMAGE_SCAN=true kubeclarity-cli analyze nginx:latest -o nginx.sbom

Vulnerability scanning using local docker image as input

# Local docker images can be scanned using the LOCAL_IMAGE_SCAN env variable

# For example:
LOCAL_IMAGE_SCAN=true kubeclarity-cli scan nginx.sbom

Private registry support For CLI

The KubeClarity CLI can read a config file that stores credentials for private registries.

Example registry section of the config file:

registry:
  auths:
    - authority: <registry 1>
      username: <username for registry 1>
      password: <password for registry 1>
    - authority: <registry 2>
      token: <token for registry 2>

Example registry config without authority: (in this case these credentials will be used for all registries)

registry:
  auths:
    - username: <username>
      password: <password>

Specify config file for CLI

# The default config path is $HOME/.kubeclarity, or it can be specified with the `--config` command line flag.
# kubeclarity <scan/analyze> <image name> --config <kubeclarity config path>

# For example:
kubeclarity scan registry/nginx:private --config $HOME/own-kubeclarity-config

Private registries support for K8s runtime scan

KubeClarity uses the k8schain of google/go-containerregistry for authenticating to registries. If the necessary service credentials are not discoverable by k8schain, they can be defined via the secrets described below.

In addition, if the service credentials are not located in the "kubeclarity" namespace, please set CREDS_SECRET_NAMESPACE on the KubeClarity Deployment. When using the Helm chart, CREDS_SECRET_NAMESPACE is set to the release namespace in which KubeClarity is installed.
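
For example, to point KubeClarity at a different secrets namespace (the deployment name assumes the default Helm release name kubeclarity and may differ in your install):

kubectl -n kubeclarity set env deployment/kubeclarity-kubeclarity CREDS_SECRET_NAMESPACE=<secrets namespace>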

Amazon ECR

Create an AWS IAM user with AmazonEC2ContainerRegistryFullAccess permissions.

Use the user credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) to create the following secret:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ecr-sa
  namespace: kubeclarity
type: Opaque
data:
  AWS_ACCESS_KEY_ID: $(echo -n 'XXXX'| base64 -w0)
  AWS_SECRET_ACCESS_KEY: $(echo -n 'XXXX'| base64 -w0)
  AWS_DEFAULT_REGION: $(echo -n 'XXXX'| base64 -w0)
EOF

Note:

  1. Secret name must be ecr-sa
  2. Secret data keys must be set to AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION

Google GCR

Create a Google service account with Artifact Registry Reader permissions.

Use the service account json file to create the following secret

kubectl -n kubeclarity create secret generic --from-file=sa.json gcr-sa

Note:

  1. Secret name must be gcr-sa
  2. sa.json must be the name of the service account json file when generating the secret
  3. KubeClarity is using application default credentials. These only work when running KubeClarity from GCP.

Merging of SBOM and vulnerabilities across different CI/CD stages

# Additional SBOMs will be merged into the final results when '--merge-sbom' is provided during analysis. The input SBOM can be in CycloneDX XML or CycloneDX JSON format.
# For example:
ANALYZER_LIST="syft" kubeclarity-cli analyze nginx:latest -o nginx.sbom --merge-sbom inputsbom.xml

Output Different SBOM Formats

The kubeclarity-cli analyze command can format the resulting SBOM into different formats if required to integrate with another system. The supported formats are:

Format                     Configuration Name
CycloneDX JSON (default)   cyclonedx-json
CycloneDX XML              cyclonedx-xml
SPDX JSON                  spdx-json
SPDX Tag Value             spdx-tv
Syft JSON                  syft-json

WARNING
KubeClarity processes CycloneDX internally; the other formats are supported through a conversion. The conversion process can be lossy due to incompatibilities between formats, so not all fields/information are guaranteed to be present in the resulting output.

To configure the kubeclarity-cli to use a format other than the default, the ANALYZER_OUTPUT_FORMAT environment variable can be used with the configuration name from above:

ANALYZER_OUTPUT_FORMAT="spdx-json" kubeclarity-cli analyze nginx:latest -o nginx.sbom

Remote Scanner Servers For CLI

When running the kubeclarity CLI to scan for vulnerabilities, the CLI needs to download the relevant vulnerability DBs to the location where it is running. Running the CLI in a CI/CD pipeline results in downloading the DBs on each run, wasting time and bandwidth. For this reason, several of the supported scanners have a remote mode in which a server is responsible for the DB management and, possibly, for scanning the artifacts.

Note

The examples below show each scanner individually, but they can be combined and run together, just as in non-remote mode.

Trivy

The Trivy scanner supports remote mode using the Trivy server. The Trivy server can be deployed as documented here: trivy client-server mode. Instructions to install the Trivy CLI are available here: trivy install. The Aqua team provides an official container image that can be used to run the server in Kubernetes or Docker, which we'll use in the examples here.

To start the server:

docker run -p 8080:8080 --rm aquasec/trivy:0.41.0 server --listen 0.0.0.0:8080

To run a scan using the server:

SCANNERS_LIST="trivy" SCANNER_TRIVY_SERVER_ADDRESS="http://<trivy server address>:8080" ./kubeclarity_cli scan --input-type sbom nginx.sbom

The Trivy server also provides token-based authentication to prevent unauthorized use of a Trivy server instance. You can enable it by running the server with the extra flag:

docker run -p 8080:8080 --rm aquasec/trivy:0.41.0 server --listen 0.0.0.0:8080 --token mytoken

and passing the token to the scanner:

SCANNERS_LIST="trivy" SCANNER_TRIVY_SERVER_ADDRESS="http://<trivy server address>:8080" SCANNER_TRIVY_SERVER_TOKEN="mytoken" ./kubeclarity_cli scan --input-type sbom nginx.sbom

Grype

Grype supports remote mode using grype-server, a RESTful Grype wrapper that provides an API which receives an SBOM and returns the Grype scan results for that SBOM. Grype-server ships as a container image, so it can be run in Kubernetes or standalone via Docker.

To start the server:

docker run -p 9991:9991 --rm gcr.io/eticloud/k8sec/grype-server:v0.1.5

To run a scan using the server:

SCANNERS_LIST="grype" SCANNER_GRYPE_MODE="remote" SCANNER_REMOTE_GRYPE_SERVER_ADDRESS="<grype server address>:9991" SCANNER_REMOTE_GRYPE_SERVER_SCHEMES="http" ./kubeclarity_cli scan --input-type sbom nginx.sbom

If the Grype server is deployed with TLS, you can override the default URL scheme like this:

SCANNERS_LIST="grype" SCANNER_GRYPE_MODE="remote" SCANNER_REMOTE_GRYPE_SERVER_ADDRESS="<grype server address>:9991" SCANNER_REMOTE_GRYPE_SERVER_SCHEMES="https" ./kubeclarity_cli scan --input-type sbom nginx.sbom

Dependency Track

See example configuration here

Limitations

  1. Supports Docker Image Manifest V2, Schema 2 (https://docs.docker.com/registry/spec/manifest-v2-2/). It will fail to scan earlier versions.
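
To check which manifest format an image uses before scanning, one option is the docker CLI (manifest inspection must be available in your Docker version; the image name is a placeholder):

docker manifest inspect <image> | grep mediaType
# "application/vnd.docker.distribution.manifest.v2+json" indicates Docker Image Manifest V2, Schema 2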

Roadmap

  • Integration with additional content analyzers (SBOM generators)
  • Integration with additional vulnerability scanners
  • CIS Docker benchmark in UI
  • Image signing using Cosign
  • CI/CD metadata signing and attestation using Cosign and in-toto (supply chain security)
  • System settings and user management

Contributing

Pull requests and bug reports are welcome.

For larger changes please create an Issue in GitHub first to discuss your proposed changes and possible implications.

For more details, please see the Contribution guidelines for this project.

License

Apache License, Version 2.0

kubeclarity's People

Contributors

akpsgit, b-abderrahmane, bauerjs1, boris257, chgl, dependabot[bot], erezf-p, fhirscher, fishkerez, frimidan, galiail, j-zimnowoda, jmueller42, justaugustus, lelia, masayaaoyama, mesh33, milvito, mtcolman, nostra, oborys, pbalogh-sa, portshift-admin, rafiportshift, ramizpolic, raoudhalagha, rmedvedo, tehsmash, tgip-work, yossicohn


kubeclarity's Issues

KubeClarity unable to start in VSphere Tanzu Kubernetes Cluster without additional securityContext (pss restricted)

We are hosting our Kubernetes clusters on VMware vSphere with Tanzu and are currently upgrading our infrastructure from v1.24 to v1.26.

This results in a rather harsh change from PSP to PSS and everything that comes with it.

The provided securityContext covers most of the fields required for a successful deployment, but sadly not the seccompProfile type. This results in error events and the deployments cannot be scaled properly.

Involved Object:
  API Version:       apps/v1
  Kind:              ReplicaSet
  Name:              kubeclarity-kubeclarity-74564b8bd6
  Namespace:         kubeclarity
  Resource Version:  13480120
  UID:               116330d6-e76a-4795-ae03-557b5e20ffd2
Kind:                Event
Last Timestamp:      2024-02-22T07:58:35Z
Message:             Error creating: pods "kubeclarity-kubeclarity-74564b8bd6-ln5dz" is forbidden: violates PodSecurity "restricted:latest": seccompProfile (pod or containers "kubeclarity-kubeclarity-wait-for-pg-db", "kubeclarity-kubeclarity-wait-for-sbom-db", "kubeclarity-kubeclarity-wait-for-grype-server", "kubeclarity" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

A possible solution could be adding configurable fields within the global area and applying them accordingly if set. For example:

global:
  securityContext:
    seccompProfile: 
      # options: Undefined / RuntimeDefault / Localhost
      type: 
      # only required when type = localhost
      localhostProfile:

Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-seccomp-profile-for-a-container
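
A minimal sketch of how such a values block could be consumed in the chart's deployment template (value paths and structure are illustrative, not the chart's actual code):

# illustrative Helm template fragment, not taken from the actual chart
securityContext:
  {{- with .Values.global.securityContext.seccompProfile }}
  seccompProfile:
    type: {{ .type }}
    {{- if eq .type "Localhost" }}
    localhostProfile: {{ .localhostProfile }}
    {{- end }}
  {{- end }}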

What happened:

Upgrades on underlying kubernetes cluster and therefore stricter policies requiring more securityContext configuration are blocking successful scale of deployments.

What you expected to happen:

Successfully scaling deployments to configured replica size.

Are there any error messages in KubeClarity logs?

None - Deployment is not scaled

Environment:

  • Kubernetes version (use kubectl version --short): 1.26
  • KubeClarity Helm Chart version (use helm -n kubeclarity list) v2.23.1
  • Cloud provider or hardware configuration: onprem - vsphere with tanzu kubernetes

Add IRSA Support for Pod Access to ECR

Is your feature request related to a problem? Please describe.
Currently the ECR support requires management of separate IAM credentials and kubernetes secrets. Using an IAM Roles for Service Accounts (IRSA) approach would allow the reuse of IAM policies, and remove the need to manage IAM users and Kubernetes secrets.

Describe the solution you'd like
Remove the use of static secrets and use IRSA instead. This approach is supported by Amazon EKS and non-EKS Kubernetes on AWS, with the Amazon EKS Pod Identity Webhook. The approach is described in this blog post.

Describe alternatives you've considered
I used kiam and kube2iam in the past, but both solutions required pod-level access to the host-level instance metadata service (IMDS) in order to use the AWS EC2 host instance profile. Preventing pods from accessing the AWS EC2 IMDS is considered a best practice, and it keeps pods from gaining access to permissions meant for the host.

Additional context
The use of IRSA is considered a best practice when integrating Kubernetes pods to AWS IAM.

Allow to define non-default serviceAccount for the runtime-scan-job

Is your feature request related to a problem? Please describe.
At the moment, we cannot set the serviceAccount for runtime-scan jobs using Helm; they always use the default serviceAccount. This limitation prevents us from using existing serviceAccounts to provide access to Artifact Registry via Workload Identity instead of keeping a Google service account key in a Kubernetes secret (gcr-sa).

Describe the solution you'd like
I would like to have the ability to set the serviceAccount for runtime-scan-jobs using a Helm variable .Values.kubeclarity-runtime-scan.serviceAccount.name.
Here is an example of how it could be implemented in scanner-template-configmap.yaml:

          {{- if index .Values "kubeclarity-runtime-scan" "serviceAccount" "name" }}
          serviceAccount: {{ index .Values "kubeclarity-runtime-scan" "serviceAccount" "name" | quote }}
          {{- end }}
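
The corresponding values.yaml entry would then look something like this (the key path follows the Helm variable proposed above and is hypothetical until implemented):

kubeclarity-runtime-scan:
  serviceAccount:
    name: <existing service account name>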

Describe alternatives you've considered
We can modify the kubeclarity-kubeclarity-scanner-template ConfigMap by hand, but this approach is inconvenient and error-prone. We also can

Additional context
By adding this feature, users would be able to utilize existing serviceAccounts, which is particularly useful for providing necessary permissions, such as access to artifactRegistry in GCP.

certifi dependency false positive.

What happened:

In Cisco Code Exchange, the following vulnerability was found.

certifi | 2023.7.22 | 2023.07.22 | requirements.txt | GHSA-xqr8-7jwr-rhp7 |

Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi prior to version 2023.07.22 recognizes "e-Tugra" root certificates. e-Tugra's root certificates were subject to an investigation prompted by reporting of security issues in their systems. Certifi 2023.07.22 removes root certificates from "e-Tugra" from the root store.

What you expected to happen:

User updated certifi to 2023.07.22. The repo was rescanned and the vulnerability alert still exists.

How to reproduce it (as minimally and precisely as possible):

Scan the GitHub repo for Code Exchange and set the certifi version to 2023.7.22 or 2023.07.22.

Are there any error messages in KubeClarity logs?

(e.g. kubectl logs -n kubeclarity --selector=app=kubeclarity)

Unknown

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version --short):
  • Helm version (use helm version):
  • KubeClarity version (use kubectl -n kubeclarity exec deploy/kubeclarity -- ./backend version)
  • KubeClarity Helm Chart version (use helm -n kubeclarity list)
  • Cloud provider or hardware configuration:
  • Others:

Scanning Limitation with private registries

Is your feature request related to a problem? Please describe.

We are trying to run KubeClarity in a very locked-down network environment. All of our image pulls, except those coming from AWS private registries, are forced to run through a Harbor image proxy that requires authentication. The image rewrite and the attaching of the imagePullSecret are done automatically by Kyverno for all namespaces except kube-system and kyverno. This means that whether I run the scan in the kubeclarity namespace, in a custom namespace, or let the scan happen in the namespace of the pod it wants to scan, there end up being situations where KubeClarity does not use the required imagePullSecret.

Describe the solution you'd like
It would be preferable to be able to tell KubeClarity to scan in either the kubeclarity namespace or a custom namespace, and to require all images to use an imagePullSecret that already exists in that namespace.

Describe alternatives you've considered
We currently do not scan images in kube-system, or in kyverno's namespaces.

Additional context
This is an EKS environment, in a special part of AWS Govcloud, with no outbound internet access except for AWS endpoints, and a few whitelisted proxies like harbor.

FeatureRequest: remote write to another instance

We have a lot of K8s clusters, which means we have a separate KubeClarity instance per cluster that does scheduled scanning.

It would be nice if each of these KubeClarity instances could push its results to a central KubeClarity instance, so that one could review the results of all clusters in one place. This could work like the remote-write feature that Prometheus offers.

Documentation: Kubeclarity Install fails on EKS v1.23 or later because it cannot bind volume if CSI Driver add ons are not installed

Is your feature request related to a problem? Please describe.
Tried installing KubeClarity on EKS 1.24; the kubeclarity-postgresql pod fails to start and gets stuck in a Pending state.

Describe the solution you'd like
Add troubleshooting documentation that covers the step-by-step process to install and get KubeClarity up and running on recent EKS cluster versions.

Describe alternatives you've considered
A troubleshooting section under the readme or a separate documentation guide would be helpful.

Resolving this issue requires installing the Amazon EBS CSI driver as an Amazon EKS add-on and setting up the driver with the relevant IAM service account roles and policies. From EKS 1.23 onwards, the add-on must be installed explicitly. Some useful references (an install sketch follows below):
https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
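
A sketch of installing the add-on with the AWS CLI (cluster name and IAM role ARN are placeholders; the role must carry the EBS CSI policy per the references above):

aws eks create-addon \
  --cluster-name <cluster name> \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::<account id>:role/<ebs-csi-driver-role>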

Not working with kube-image-keeper mutating webhook

What happened:

kubeclarity-runtime-k8s-scanner throws an error when trying to scan a Docker image.

What you expected to happen:

Expect the docker image in the cluster to get scanned successfully.

How to reproduce it (as minimally and precisely as possible):

  1. Install kube-image-keeper
  2. Run KubeClarity Run-Time scan against a Namespace that has images cached by kube-image-keeper
  3. The scan errors out because it is not able to scan the Docker image: the mutating webhook rewrites the image URL to localhost:7439/

Are there any error messages in KubeClarity logs?

kubeclarity-kubeclarity-wait-for-pg-db kubeclarity-kubeclarity-postgresql:5432 - accepting connections
kubeclarity 
kubeclarity 2024/05/28 21:07:14 /build/backend/pkg/database/scheduler.go:58 record not found
kubeclarity [1.032ms] [rows:0] SELECT * FROM "scheduler" ORDER BY "scheduler"."id" LIMIT 1
kubeclarity 2024/05/28 21:07:14 Serving kube clarity runtime scan a p is at http://:8888
kubeclarity 2024/05/28 21:07:14 Serving kube clarity a p is at http://:8080
kubeclarity time="2024-05-28T21:07:46Z" level=warning msg="Vulnerabilities scan of imageID \"localhost:7439/typesense/typesense@sha256:035ccfbc3fd8fb9085ea205fdcb62de63eaefdbebd710e88e57f978a30f2090d\" has failed: &{failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: unable to load image: unable to use OciRegistry source: failed to get image descriptor from registry: Get \"https://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused; Get \"http://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused TBD}" func="github.com/openclarity/kubeclarity/runtime_scan/pkg/scanner.(*Scanner).HandleScanResults" file="/build/runtime_scan/pkg/scanner/scanner.go:415" scanner id=24da9132-749b-4e9d-943d-327af7a67275
kubeclarity time="2024-05-28T21:09:30Z" level=warning msg="Vulnerabilities scan of imageID \"localhost:7439/typesense/typesense@sha256:035ccfbc3fd8fb9085ea205fdcb62de63eaefdbebd710e88e57f978a30f2090d\" has failed: &{failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: unable to load image: unable to use OciRegistry source: failed to get image descriptor from registry: Get \"https://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused; Get \"http://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused TBD}" func="github.com/openclarity/kubeclarity/runtime_scan/pkg/scanner.(*Scanner).HandleScanResults" file="/build/runtime_scan/pkg/scanner/scanner.go:415" scanner id=776eb73a-263a-423d-aa5f-01f25a901dee
kubeclarity 
kubeclarity 2024/05/28 21:09:36 /build/backend/pkg/database/refresh_materialized_views.go:155 SLOW SQL >= 200ms
kubeclarity [1907.530ms] [rows:0] REFRESH MATERIALIZED VIEW CONCURRENTLY packages_view;
kubeclarity 
kubeclarity 2024/05/28 21:09:36 /build/backend/pkg/database/refresh_materialized_views.go:155 SLOW SQL >= 200ms
kubeclarity [1912.167ms] [rows:0] REFRESH MATERIALIZED VIEW CONCURRENTLY vulnerabilities_view;
kubeclarity 
kubeclarity 2024/05/28 21:09:37 /build/backend/pkg/database/application.go:236 record not found
kubeclarity [6.892ms] [rows:0] SELECT * FROM "applications" WHERE applications.id = 'b17e8e84-3330-5f16-93aa-3b425dd46e40' ORDER BY "applications"."id" LIMIT 1
kubeclarity-kubeclarity-wait-for-sbom-db + curl -sw '%{http_code}' http://kubeclarity-kubeclarity-sbom-db:8081/healthz/ready -o /dev/null
kubeclarity-kubeclarity-wait-for-sbom-db + '[' 200 -ne 200 ]
kubeclarity-kubeclarity-wait-for-grype-server + curl -sw '%{http_code}' http://kubeclarity-kubeclarity-grype-server:8080/healthz/ready -o /dev/null
kubeclarity-kubeclarity-wait-for-grype-server + '[' 200 -ne 200 ]
Stream closed EOF for kubeclarity-test/kubeclarity-kubeclarity-6ddcd445b8-pnvdt (kubeclarity-kubeclarity-wait-for-pg-db)
Stream closed EOF for kubeclarity-test/kubeclarity-kubeclarity-6ddcd445b8-pnvdt (kubeclarity-kubeclarity-wait-for-sbom-db)
Stream closed EOF for kubeclarity-test/kubeclarity-kubeclarity-6ddcd445b8-pnvdt (kubeclarity-kubeclarity-wait-for-grype-server)

Anything else we need to know?:

Environment:

  • Kubernetes version: EKS 1.28
  • Helm version (use helm version): v3.14.4
  • KubeClarity version: latest
  • KubeClarity Helm Chart version: latest
  • Cloud provider or hardware configuration: AWS

KubeClarity startup dependencies can be improved for graceful handling to prevent pods from crashing

Is your feature request related to a problem? Please describe.
KubeClarity pods crash during installation due to the PostgreSQL dependency. This can be improved to handle the dependency gracefully and prevent pods from crashing.

Describe the solution you'd like
kubeclarity  kubeclarity-kubeclarity-8994b7966-vr4t6               0/1  0  Init:2/3          10.0.158.68   ip-10-0-151-209.us-east-2.compute.internal  2m47s
kubeclarity  kubeclarity-kubeclarity-grype-server-c8fc5847f-zk8h4  0/1  4  CrashLoopBackOff  10.0.102.137  ip-10-0-73-177.us-east-2.compute.internal   2m47s
kubeclarity  kubeclarity-kubeclarity-postgresql-0                  1/1  0  Running           10.0.148.40   ip-10-0-151-209.us-east-2.compute.internal  2m47s
kubeclarity  kubeclarity-kubeclarity-sbom-db-687d5df5f5-7ltxl      1/1  0  Running           10.0.74.175   ip-10-0-73-177.us-east-2.compute.internal   2m47s

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

[kubeclarity] Fresh Install - Unable to parse image id

Hi,
I have deployed KubeClarity via Helm as a test. Pods are running and healthy and I can access the UI, but whenever I try to scan anything (using the default Grype scanner) I get the following:

logs.txt

This is a fresh install of the latest version (v2.23.1)
Kubernetes installed via kubeadm (v1.29)
CRIO runtime v1.28

DNS and networking are fine on my cluster, and there are no network policies applied to the kubeclarity namespace.
Do you have any pointers or suggestions?

Thanks!

Use existing serviceAccount within Helm deployment

Is your feature request related to a problem? Please describe.
Currently, the Helm chart doesn't allow the use of an existing serviceAccount because a new serviceAccount is unconditionally created during the Helm deployment.

Describe the solution you'd like
We should add a serviceAccount.create boolean variable to give users the option not to create a serviceAccount.
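
A sketch of how such a guard could look in the chart's ServiceAccount template (helper and default naming are assumptions, not the chart's actual code):

{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.serviceAccount.name | default .Release.Name }}  # fallback name is illustrative
  namespace: {{ .Release.Namespace }}
{{- end }}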

Describe alternatives you've considered
None.

Additional context
This feature would be helpful in situations where serviceAccounts are created separately from Helm charts.

Failed to fetch helm charts.

What happened:

Failed to fetch helm charts.

  • Error: failed to fetch https://github.com/openclarity/kubeclarity/releases/download/kubeclarity-v2.15.1-helm/kubeclarity-v2.15.1.tgz : 404 Not Found

What you expected to happen:

Successful fetch.

How to reproduce it (as minimally and precisely as possible):

debug_kubeclarity/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: kubeclarity
  namespace: ops
  includeCRDs: false
  releaseName: kubeclarity
  version: v2.15.1
  repo: https://openclarity.github.io/kubeclarity

kustomize build --enable-helm ./debug_kubeclarity

Are there any error messages in KubeClarity logs?

(e.g. kubectl logs -n kubeclarity --selector=app=kubeclarity)

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version --short):
  • Helm version (use helm version):
    • v3.15.3
  • KubeClarity version (use kubectl -n kubeclarity exec deploy/kubeclarity -- ./backend version)
  • KubeClarity Helm Chart version (use helm -n kubeclarity list)
    • v2.15.1
  • Cloud provider or hardware configuration:
  • Others:
    • kustomize: v5.3.0
