datadog / kubehound

Kubernetes Attack Graph

Home Page: https://kubehound.io

License: Apache License 2.0

Go 83.45% Shell 3.22% Makefile 0.82% Dockerfile 0.73% Groovy 1.64% Smarty 0.03% Batchfile 0.19% Java 2.65% Python 1.26% HCL 1.96% Jupyter Notebook 4.05%
adversary-emulation attack-graph attack-paths cloud-native-security exploit kubernetes kubernetes-security mitre-attack purple-team red-team

kubehound's Introduction

KubeHound

A Kubernetes attack graph tool allowing automated calculation of attack paths between assets in a cluster

Quick Links

Sample Attack Path

Example Path

Contents

Requirements

Application

Test (Development only)

Quick Start

Prebuilt Releases

Release binaries are available for Linux / Windows / macOS via the releases page. These provide access to core KubeHound functionality but lack support for the make commands detailed in subsequent sections. Once the release archive is downloaded and extracted, start the backend via:

./kubehound.sh backend-up

NOTE:

  • If downloading the releases via a browser, you must run e.g. xattr -d com.apple.quarantine KubeHound_Darwin_arm64.tar.gz before running to prevent macOS from blocking execution

Next choose a target Kubernetes cluster, either:

  • Select the targeted cluster via kubectx (needs to be installed separately)
  • Use a specific kubeconfig file by exporting the environment variable: export KUBECONFIG=/your/path/to/.kube/config

Finally run the compiled binary with packaged configuration (config.yaml):

./kubehound.sh run

From Source

Clone this repository via git:

git clone https://github.com/DataDog/KubeHound.git

KubeHound ships with a sensible default configuration designed to get new users up and running quickly. The first step is to prepare the application:

cd KubeHound
make kubehound

This will do the following:

  • Start the backend services via docker compose (wiping any existing data)
  • Compile the kubehound binary from source

Next choose a target Kubernetes cluster, either:

  • Select the targeted cluster via kubectx (needs to be installed separately)
  • Use a specific kubeconfig file by exporting the environment variable: export KUBECONFIG=/your/path/to/.kube/config

Finally run the compiled binary with default configuration:

bin/kubehound

To view the generated graph see the Using KubeHound Data section.

Sample Data

To view a sample graph demonstrating attacks in a very, very vulnerable cluster, you can generate data by running the app against the provided kind cluster:

make sample-graph

To view the generated graph see the Using KubeHound Data section.

Advanced Usage

Infrastructure Setup

First create and populate a .env file with the required variables:

cp deployments/kubehound/.env.tpl deployments/kubehound/.env

Edit the variables (the Datadog-related DD_* variables and KUBEHOUND_ENV):

Note:

  • KUBEHOUND_ENV=dev will build the images locally
  • KUBEHOUND_ENV=release will use prebuilt images from ghcr.io
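
For illustration, a minimal .env along these lines could work (KUBEHOUND_ENV is described above and DD_API_KEY is one of the DD_* variables mentioned in the system-testing section below; values are placeholders and the .env.tpl template remains the authoritative reference):

# deployments/kubehound/.env (illustrative values only)
KUBEHOUND_ENV=dev
DD_API_KEY=<your-datadog-api-key>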

Running Kubehound

To replicate the automated command and run KubeHound step by step, first build the application:

make build

Next, spawn the backend infrastructure:

make backend-up

Next, create a configuration file:

collector:
  type: live-k8s-api-collector
telemetry:
  enabled: true

A tailored sample configuration file can be found here, and a full configuration reference containing all possible parameters here.

Finally run the KubeHound binary, passing in the desired configuration:

bin/kubehound -c <config path>

Remember that the targeted cluster must be set via kubectx or by setting the KUBECONFIG environment variable. Additional functionality for managing the application can be found via:

make help

Using KubeHound Data

Querying the KubeHound graph data requires the Gremlin query language, via an API call or a dedicated graph query UI. A number of fully featured graph query UIs are available (both commercial and open source), but we provide an accompanying Jupyter notebook based on the AWS Graph Notebook to quickly showcase the capabilities of KubeHound. To access the UI:

  • Visit http://localhost:8888/notebooks/KubeHound.ipynb in your browser
  • Use the default password admin to log in (note: this can be changed via the Dockerfile or by setting the NOTEBOOK_PASSWORD environment variable in the .env file)
  • Follow the initial setup instructions in the notebook to connect to the Kubehound graph and configure the rendering
  • Start running the queries and exploring the graph!

Example queries

We have documented a few sample queries to execute on the database in our documentation.
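
As a quick illustration, here are two queries of the kind used elsewhere on this page (both appear verbatim later in this document); the KubeHound DSL exposes the kh traversal source, and this is a sketch rather than an exhaustive reference:

// Count all containers ingested into the graph
kh.containers().count()

// List all assets KubeHound has flagged as critical
kh.V().critical()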

Query data from your scripts

Python

You can query the database data in your Python script by using the following snippet:

#!/usr/bin/env python
from gremlin_python.driver.client import Client

# Connect to the KubeHound Gremlin endpoint exposed by the backend
KH_QUERY = "kh.containers().count()"
c = Client("ws://127.0.0.1:8182/gremlin", "kh")
results = c.submit(KH_QUERY).all().result()
print(results)
c.close()

You'll need to install gremlinpython as a dependency via: pip install gremlinpython

Development

Build

Build the application via:

make build

All binaries will be output to the bin folder

Release build

Build the release packages locally using goreleaser:

make local-release

Unit Testing

The full suite of unit tests can be run locally via:

make test

System Testing

The repository includes a suite of system tests that will do the following:

  • create a local kubernetes cluster
  • collect kubernetes API data from the cluster
  • run KubeHound using the file collector to create a working graph database
  • query the graph database to ensure all expected vertices and edges have been created correctly

The cluster setup and running instances can be found under test/setup

If you need to manually access the system test environment with kubectl and other commands, you'll need to set (assuming you are at the root dir):

cd test/setup/ && export KUBECONFIG=$(pwd)/.kube-config

Environment variable:

  • DD_API_KEY (optional): set to the Datadog API key used to submit metrics and other observability data.

Setup

Set up the test kind cluster (you only need to do this once!) via:

make local-cluster-deploy

Then run the system tests via:

make system-test

To clean up the environment you can destroy the cluster via:

make local-cluster-destroy

To list all the available commands, run:

make help

Note: if you are running on Linux but don't want to run sudo for the kind and docker commands, you can override this behavior by editing the following variables in test/setup/.config:

  • DOCKER_CMD="docker" for the docker command
  • KIND_CMD="kind" for the kind command

CI Testing

System tests will be run in CI via the system-test GitHub Action.

Acknowledgements

KubeHound was created by the Adversary Simulation Engineering (ASE) team at Datadog:

With additional support from:

We would also like to acknowledge the BloodHound team for pioneering the use of graph theory in offensive security and inspiring us to create this project.

kubehound's People

Contributors

christophetd, d0g0x01, dependabot[bot], edznux-dd, jaybeale, jt-dd, larsbingbong, minosity-vr, raesene, zestysoft


kubehound's Issues

Azure Kubernetes not supported?

Hello,

Thanks for the amazing tool. Unfortunately, I cannot use it in my current assessment. My kubeconfig works fine and I can get all the information I want from kubectl, but when I run './kubehound.sh run', I am getting this error:

INFO[0000] Starting KubeHound (run_id: 6d31917e-4ebe-4797-8169-2b50b18e35a8)  component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Initializing launch options                   component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading application configuration from file config.yaml  component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Initializing application telemetry            component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading cache provider                        component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loaded MemCacheProvider cache provider        component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading store database provider               component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loaded MongoProvider store provider           component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading graph database provider               component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loaded JanusGraphProvider graph provider      component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0001] Starting Kubernetes raw data ingest           component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0001] Loading Kubernetes data collector client      component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
Error: raw data ingest: collector client creation: getting kubernetes config: no Auth Provider found for name "azure"
Usage:
  kubehound-local [flags]

Flags:
  -c, --config string   application config file
  -h, --help            help for kubehound-local

FATA[0001] raw data ingest: collector client creation: getting kubernetes config: no Auth Provider found for name "azure"  component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound


Any suggestion, please?

thanks

Kubehound on AWS EKS

Hi!

Thank you for a great tool! I've tested this out on my local Kubernetes cluster (minikube) with Kubernetes Goat set up on it and it worked fine. However, for my current assessment I need to test a Kubernetes cluster which is set up on AWS EKS, where the applications are managed via ArgoCD. Basically the target organization uses a GitOps approach to manage their applications and infrastructure.

Given this context I have a few questions. I've made several attempts to deploy and use KubeHound from within a Linux image in a Kubernetes Pod, but I've faced multiple Docker issues within the Pod. This definitely felt like the wrong approach to deploying and running KubeHound.

Is it possible to run KubeHound against an AWS EKS cluster? If yes, how should it be deployed? If we were to deploy it using ArgoCD with Helm packages, do you have any examples of that?

Can't start KubeHound 2.0.0

Describe the bug
Can't start KubeHound by following the recommended steps from README.md.

To Reproduce
Steps to reproduce the behavior:

  1. Clone the repo (requirements are installed)
  2. Run make kubehound
  3. I fixed the mongodb healthcheck... although it doesn't matter, since it doesn't work without fixing it either
  4. Run bin/kubehound
  5. See error

Expected behavior
KubeHound starts and runs a scan without any manual intervention needed.

Output
No need for screenshots, so here's my output:

$ make kubehound
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
DOCKER_HOSTNAME=NB-PF3E0CZE docker compose -f deployments/kubehound/docker-compose.yaml -f deployments/kubehound/docker-compose.dev.yaml -f deployments/kubehound/docker-compose.ui.yaml --profile infra up --force-recreate --build -d
WARN[0000] /home/wisefrog/git/KubeHound/deployments/kubehound/docker-compose.yaml: `version` is obsolete
WARN[0000] /home/wisefrog/git/KubeHound/deployments/kubehound/docker-compose.dev.yaml: `version` is obsolete
WARN[0000] /home/wisefrog/git/KubeHound/deployments/kubehound/docker-compose.ui.yaml: `version` is obsolete
[+] Building 1.4s (31/31) FINISHED                                                                                                                                                                docker:default
 => [notebook internal] load build definition from Dockerfile                                                                                                                                               0.0s
 => => transferring dockerfile: 2.84kB                                                                                                                                                                      0.0s
 => [notebook internal] load metadata for docker.io/library/amazonlinux:2                                                                                                                                   1.0s
 => [kubegraph internal] load build definition from Dockerfile                                                                                                                                              0.0s
 => => transferring dockerfile: 3.20kB                                                                                                                                                                      0.0s
 => [kubegraph internal] load metadata for docker.io/janusgraph/janusgraph:1.0.0                                                                                                                            1.0s
 => [kubegraph internal] load metadata for docker.io/library/maven:3-openjdk-11-slim                                                                                                                        0.9s
 => [notebook internal] load .dockerignore                                                                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                                                                             0.0s
 => [notebook 1/8] FROM docker.io/library/amazonlinux:2@sha256:85825c659f9d0d51218492aab1f71a1d5adae074e95019b5518c071249a9ec95                                                                             0.0s
 => [notebook internal] load build context                                                                                                                                                                  0.0s
 => => transferring context: 175B                                                                                                                                                                           0.0s
 => CACHED [notebook 2/8] RUN mkdir -p "/root" &&     mkdir -p "/root/notebooks" &&     yum update -y &&     yum install tar gzip git amazon-linux-extras which -y &&     curl --silent --location https:/  0.0s
 => CACHED [notebook 3/8] ADD KubeHound.ipynb /root/notebooks/KubeHound.ipynb                                                                                                                               0.0s
 => CACHED [notebook 4/8] ADD RedTeam.ipynb /root/notebooks/RedTeam.ipynb                                                                                                                                   0.0s
 => CACHED [notebook 5/8] ADD BlueTeam.ipynb /root/notebooks/BlueTeam.ipynb                                                                                                                                 0.0s
 => CACHED [notebook 6/8] ADD SecurityPosture.ipynb /root/notebooks/SecurityPosture.ipynb                                                                                                                   0.0s
 => CACHED [notebook 7/8] ADD ./service.sh /usr/bin/service.sh                                                                                                                                              0.0s
 => CACHED [notebook 8/8] RUN chmod +x /usr/bin/service.sh                                                                                                                                                  0.0s
 => [notebook] exporting to image                                                                                                                                                                           0.0s
 => => exporting layers                                                                                                                                                                                     0.0s
 => => writing image sha256:ac062a6fd18e4c0caaf52ecf09d56cffe4fa62ba7a421eccd2e125ae90a1aa3d                                                                                                                0.0s
 => => naming to docker.io/library/kubehound-dev-notebook                                                                                                                                                   0.0s
 => [kubegraph internal] load .dockerignore                                                                                                                                                                 0.0s
 => => transferring context: 2B                                                                                                                                                                             0.0s
 => [kubegraph internal] load build context                                                                                                                                                                 0.0s
 => => transferring context: 1.55kB                                                                                                                                                                         0.0s
 => [kubegraph stage-1 1/8] FROM docker.io/janusgraph/janusgraph:1.0.0@sha256:164893be6d2bb20d07729413fbae7e844d26ddbb2ebdad4cd3bad9187f464faa                                                              0.0s
 => [kubegraph build 1/4] FROM docker.io/library/maven:3-openjdk-11-slim@sha256:2cb7c73ba2fd0f7ae64cfabd99180030ec85841a1197b4ae821d21836cb0aa3b                                                            0.0s
 => CACHED [kubegraph stage-1 2/8] COPY --chown=janusgraph:janusgraph kubehound-db-init.groovy /docker-entrypoint-initdb.d/                                                                                 0.0s
 => CACHED [kubegraph stage-1 3/8] COPY --chown=janusgraph:janusgraph lib/jmx_prometheus_javaagent-0.18.0.jar /opt/janusgraph/lib/jmx_prometheus_javaagent-0.18.0.jar                                       0.0s
 => CACHED [kubegraph stage-1 4/8] COPY --chown=janusgraph:janusgraph lib/exporter-config.yaml /opt/janusgraph/lib/exporter-config.yaml                                                                     0.0s
 => CACHED [kubegraph stage-1 5/8] COPY --chown=janusgraph:janusgraph conf/jvm.options /opt/janusgraph/conf/jvm.options                                                                                     0.0s
 => CACHED [kubegraph build 2/4] COPY dsl/kubehound/src /home/app/src                                                                                                                                       0.0s
 => CACHED [kubegraph build 3/4] COPY dsl/kubehound/pom.xml /home/app                                                                                                                                       0.0s
 => CACHED [kubegraph build 4/4] RUN mvn -f /home/app/pom.xml clean install                                                                                                                                 0.0s
 => CACHED [kubegraph stage-1 6/8] COPY --from=build --chown=janusgraph:janusgraph /home/app/target/kubehound-1.0.0.jar /opt/janusgraph/lib/kubehound-1.0.0.jar                                             0.0s
 => CACHED [kubegraph stage-1 7/8] COPY --chown=janusgraph:janusgraph scripts/health-check.groovy /opt/janusgraph/scripts/                                                                                  0.0s
 => CACHED [kubegraph stage-1 8/8] COPY --chown=janusgraph:janusgraph kubehound-dsl-init.groovy /opt/janusgraph/scripts/                                                                                    0.0s
 => [kubegraph] exporting to image                                                                                                                                                                          0.0s
 => => exporting layers                                                                                                                                                                                     0.0s
 => => writing image sha256:dc2ea5bc822eb845d833934320ddc55644a76a99834860c190ec905b93d6b578                                                                                                                0.0s
 => => naming to docker.io/library/kubehound-dev-kubegraph                                                                                                                                                  0.0s
[+] Running 3/3
 ✔ Container kubehound-dev-notebook  Started   10.8s
 ✔ Container kubehound-dev-graphdb   Healthy     3.7s
 ✔ Container kubehound-dev-storedb   Started     0.7s
cd cmd && go build -ldflags="-X github.com/DataDog/KubeHound/pkg/config.BuildVersion=59ba228-" -o ../bin/kubehound kubehound/*.go

Then trying to execute KubeHound:

$ bin/kubehound
INFO[0000] Initializing application telemetry            component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound
WARN[0000] Telemetry disabled via configuration          component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound
INFO[0000] Starting KubeHound (run_id: 01hv1eq4hr4bfewxcf5v7g10yj)  component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound
INFO[0000] Initializing providers (graph, cache, store)  component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound
INFO[0000] Loading cache provider                        component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound
INFO[0000] Loaded memcache cache provider                component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound
INFO[0000] Loading store database provider               component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound
Error: factory config creation: store database client creation: error parsing uri: scheme must be "mongodb" or "mongodb+srv"
Usage:
  kubehound-local [flags]
  kubehound-local [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  dump        Collect Kubernetes resources of a targeted cluster
  help        Help about any command

Flags:
  -c, --config string   application config file
  -h, --help            help for kubehound-local

Use "kubehound-local [command] --help" for more information about a command.

FATA[0000] factory config creation: store database client creation: error parsing uri: scheme must be "mongodb" or "mongodb+srv"  component=kubehound run_id=01hv1eq4hr4bfewxcf5v7g10yj service=kubehound

Desktop:

  • OS: Ubuntu 22.04.4 LTS
  • Browser N/A
  • Version 22.04.4

Additional context
Docker version:

$ docker version
Client: Docker Engine - Community
 Version:           20.10.23
 API version:       1.41
 Go version:        go1.18.10
 Git commit:        7155243
 Built:             Thu Jan 19 17:45:08 2023
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          26.0.0
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.8
  Git commit:       8b79278
  Built:            Wed Mar 20 15:17:48 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.28
  GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Docker compose & go installations:

$ docker compose version
Docker Compose version v2.26.1
$ go version
go version go1.22.1 linux/amd64

Invalid APIVersion

When running kubehound, I am getting the error below saying that the apiVersion is invalid. I am able to run kubectl commands without any problem.

./kubehound run 
INFO[0000] Starting KubeHound (run_id: 3da28dd5-2050-46a2-bdb9-e34e90f54ca4)  component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Initializing launch options                   component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loading application configuration from default embedded  component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Initializing application telemetry            component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
WARN[0000] Telemetry disabled via configuration          component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loading cache provider                        component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loaded MemCacheProvider cache provider        component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loading store database provider               component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loaded MongoProvider store provider           component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loading graph database provider               component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loaded JanusGraphProvider graph provider      component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Starting Kubernetes raw data ingest           component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
INFO[0000] Loading Kubernetes data collector client      component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound
Error: raw data ingest: collector client creation: getting kubernetes config: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
Usage:
  kubehound-local [flags]

Flags:
  -c, --config string   application config file
  -h, --help            help for kubehound-local

FATA[0000] raw data ingest: collector client creation: getting kubernetes config: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"  component=kubehound run_id=3da28dd5-2050-46a2-bdb9-e34e90f54ca4 service=kubehound

$ kubectl get pods
No resources found in default namespace.

Can someone please help in resolving this?

Add TTPs directly on Edge details

Hello!
Thanks for the great tool!
When I tested the tool, I saw that the KubeHound attacks (TTPs) were not linked directly in the JanusGraph database.

I suggest adding TTPs directly to the Edge details.

This would be useful if we want to browse the JanusGraph data, or link it with data from another security tool to trace or automate attacks or propose mitigations. In these cases it would be interesting to add the TTP reference directly to the edges.

JanusGraph server doesn't start

Describe the bug
When I'm preparing KubeHound to run, I execute the ./kubehound.sh backend-up command, but it fails to get the JanusGraph Server container running; it keeps restarting with the message:
waiting for JanusGraph Server...
/etc/opt/janusgraph/janusgraph-server.yaml will be used to start JanusGraph Server in foreground

To Reproduce
Steps to reproduce the behavior:

  1. Download the binary
  2. Execute "./kubehound.sh backend-up"

Expected behavior
Get the backend services running.

Screenshots
Three screenshots attached (2023-10-09, 15:32:20 / 15:32:32 / 15:32:43).

Desktop (please complete the following information):

  • OS: Sonoma 14.0
  • Browser Arc
  • Version 1.11

Additional context

Downloaded release:
https://github.com/DataDog/KubeHound/releases/latest/download/KubeHound_Darwin_arm64.tar.gz
Result of docker inspect command:
[
{
"Id": "fd520a0c2ad222dceacc58b830dc61458ce23d3820575184c3efd851ef0da712",
"Created": "2023-10-09T18:22:05.43352592Z",
"Path": "docker-entrypoint.sh",
"Args": [
"janusgraph"
],
"State": {
"Status": "restarting",
"Running": true,
"Paused": false,
"Restarting": true,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 137,
"Error": "",
"StartedAt": "2023-10-09T18:35:54.363988398Z",
"FinishedAt": "2023-10-09T18:36:00.551024818Z",
"Health": {
"Status": "unhealthy",
"FailingStreak": 0,
"Log": []
}
},
"Image": "sha256:f3c1795355a9da8c31065d8ed42be3bc82f0956987b6b8c2f96863ed63e658d5",
"ResolvConfPath": "/var/lib/docker/containers/fd520a0c2ad222dceacc58b830dc61458ce23d3820575184c3efd851ef0da712/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/fd520a0c2ad222dceacc58b830dc61458ce23d3820575184c3efd851ef0da712/hostname",
"HostsPath": "/var/lib/docker/containers/fd520a0c2ad222dceacc58b830dc61458ce23d3820575184c3efd851ef0da712/hosts",
"LogPath": "/var/lib/docker/containers/fd520a0c2ad222dceacc58b830dc61458ce23d3820575184c3efd851ef0da712/fd520a0c2ad222dceacc58b830dc61458ce23d3820575184c3efd851ef0da712-json.log",
"Name": "/kubehound-release-graphdb",
"RestartCount": 21,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "kubehound-release_kubenet",
"PortBindings": {
"8099/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "8099"
}
],
"8182/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "8182"
}
]
},
"RestartPolicy": {
"Name": "unless-stopped",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": [],
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"Mounts": [
{
"Type": "volume",
"Source": "kubehound-release_kubegraph_data",
"Target": "/var/lib/janusgraph",
"VolumeOptions": {}
}
],
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/a4d0d3507b25b3eaca82f7ec2b0f2da04ffdb8ded599efc8a19c5feda1cdbb93-init/diff:/var/lib/docker/overlay2/a8f1f28540588e07eb04508c4ee81e06cbf63a434bddc4575f6d0f78811aa206/diff:/var/lib/docker/overlay2/d8b8f0cf1c7d20ac1c9642a8240d31655941896d09a41109504b65dea143e960/diff:/var/lib/docker/overlay2/56e71f8b61ab95b5a97f40455eecaf077b9acac07ef6641803f3c40b8796b36b/diff:/var/lib/docker/overlay2/2cd3fbfd7d130c5acf7230da098c5ac60e87399341827c319339119c8fc442d9/diff:/var/lib/docker/overlay2/6a624f092c6fd385d4a2a8626355c0b4537d18e956898b2b82c56a68152b978a/diff:/var/lib/docker/overlay2/d8099fe4a558b3705888a2b12a5ae05a343fc20f2a75c44e42654736a5a951b3/diff:/var/lib/docker/overlay2/9dff0fb5759d9d75b6ddd06553afe49478bed680a9564e95c9b243683d3233e1/diff:/var/lib/docker/overlay2/5410c92351bf7f10543d765b0db4030b53b1ec44c0229484ad2d6eed4bf8fdad/diff:/var/lib/docker/overlay2/a189c64337fae26d5ab6dbbd1b4e5e3ff728c640a161f26c23ec1a18259a2647/diff:/var/lib/docker/overlay2/c055c39615279e85958e5e8ecbacff557777b4bc35d6d709d56c62d6cbdcdbc7/diff:/var/lib/docker/overlay2/d05018347d73099dc3876478199684e128ef8ed1b873d8d9826710bbc070f061/diff:/var/lib/docker/overlay2/4222835c026936d9a752fd7304ee326d0d7242d244020418801fe981fdb4b587/diff:/var/lib/docker/overlay2/d683f794fcc53c911b941a37ddd2ccabd4110cf6ff15d5588ef40a7d4447e7c8/diff:/var/lib/docker/overlay2/6b054a1c0575c7ee608a87c114fbbc36695a3f38e07f22b2a274a13b359b7efb/diff:/var/lib/docker/overlay2/b905ff1b12468ba208dd0fb71eedf33535b147fa771d6ce10379cc38e68a3842/diff:/var/lib/docker/overlay2/ef48b8ee0977d8d177a87afb8d0a7c5032138bc8380ecc0e20b4fa61d0586e6e/diff:/var/lib/docker/overlay2/0433d70d3b548974a90e1e3321a1c4c1751da7adddb163a39cb586d94a8510e3/diff:/var/lib/docker/overlay2/aca1f2a6d43ee32b1755a64315fe81e420bb77aae380baa207dc4d39fc8016be/diff:/var/lib/docker/overlay2/b565d6cdaf625d36b016a7892397e194538b98ff3403c87118acd4d55d7e0e3f/diff",
"MergedDir": "/var/lib/docker/overlay2/a4d0d3507b25b3eaca82f7ec2b0f2da04ffdb8ded599efc8a19c5feda1cdbb93/merged",
"UpperDir": "/var/lib/docker/overlay2/a4d0d3507b25b3eaca82f7ec2b0f2da04ffdb8ded599efc8a19c5feda1cdbb93/diff",
"WorkDir": "/var/lib/docker/overlay2/a4d0d3507b25b3eaca82f7ec2b0f2da04ffdb8ded599efc8a19c5feda1cdbb93/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "kubehound-release_kubegraph_data",
"Source": "/var/lib/docker/volumes/kubehound-release_kubegraph_data/_data",
"Destination": "/var/lib/janusgraph",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "fd520a0c2ad2",
"Domainname": "",
"User": "janusgraph",
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"ExposedPorts": {
"8099/tcp": {},
"8182/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"JAVA_HOME=/opt/java/openjdk",
"LANG=en_US.UTF-8",
"LANGUAGE=en_US:en",
"LC_ALL=en_US.UTF-8",
"JAVA_VERSION=jdk-11.0.20.1+1",
"JANUS_VERSION=1.0.0-rc2",
"JANUS_HOME=/opt/janusgraph",
"JANUS_CONFIG_DIR=/etc/opt/janusgraph",
"JANUS_DATA_DIR=/var/lib/janusgraph",
"JANUS_SERVER_TIMEOUT=30",
"JANUS_STORAGE_TIMEOUT=60",
"JANUS_PROPS_TEMPLATE=berkeleyje-lucene",
"JANUS_INITDB_DIR=/docker-entrypoint-initdb.d",
"gremlinserver.graphs.graph=/etc/opt/janusgraph/janusgraph.properties",
"gremlinserver.threadPoolWorker=8",
"gremlinserver.gremlinPool=0",
"JAVA_OPTIONS_FILE=/opt/janusgraph/conf/jvm.options",
"janusgraph.ids.block-size=3000000",
"janusgraph.schema.constraints=true",
"janusgraph.schema.default=none",
"gremlinserver.maxContentLength=2097152",
"gremlinserver.evaluationTimeout=240000",
"gremlinserver.metrics.jmxReporter.enabled=true",
"gremlinserver.metrics.consoleReporter.enabled=false",
"gremlinserver.metrics.slf4jReporter.enabled=false",
"gremlinserver.metrics.graphiteReporter.enabled=false",
"gremlinserver.metrics.csvReporter.enabled=false",
"gremlinserver.scriptEngines.gremlin-groovy.plugins[org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin].classImports[+]=com.datadog.ase.kubehound.EndpointExposure",
"gremlinserver.scriptEngines.gremlin-groovy.plugins[org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin].files[+]=scripts/kubehound-dsl-init.groovy"
],
"Cmd": [
"janusgraph"
],
"Healthcheck": {
"Test": [
"CMD",
"bin/gremlin.sh",
"-e",
"scripts/remote-connect.groovy"
],
"Interval": 30000000000,
"Timeout": 30000000000,
"Retries": 3
},
"Image": "ghcr.io/datadog/kubehound-graph:latest",
"Volumes": null,
"WorkingDir": "/opt/janusgraph",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"com.datadoghq.ad.logs": "[{"app": "kubegraph", "service": "kubehound"}]",
"com.docker.compose.config-hash": "b9d8951c4c51d1f6e87eccf848003100083486f7d69a578abc59bc7db7cd14c1",
"com.docker.compose.container-number": "1",
"com.docker.compose.depends_on": "",
"com.docker.compose.image": "sha256:f3c1795355a9da8c31065d8ed42be3bc82f0956987b6b8c2f96863ed63e658d5",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "kubehound-release",
"com.docker.compose.project.config_files": "xxxxxx/kubehound/deployments/kubehound/docker-compose.yaml,/xxxxx/kubehound/deployments/kubehound/docker-compose.release.yaml",
"com.docker.compose.project.working_dir": "/xxxxx/kubehound/deployments/kubehound",
"com.docker.compose.service": "kubegraph",
"com.docker.compose.version": "2.19.0",
"org.opencontainers.image.created": "โ€2023-09-06T06:40:31Zโ€",
"org.opencontainers.image.description": "Official JanusGraph Docker image",
"org.opencontainers.image.documentation": "https://docs.janusgraph.org/v1.0/",
"org.opencontainers.image.license": "Apache-2.0",
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.revision": "30b9415",
"org.opencontainers.image.source": "https://github.com/DataDog/kubehound/",
"org.opencontainers.image.title": "JanusGraph Docker Image",
"org.opencontainers.image.url": "https://janusgraph.org/",
"org.opencontainers.image.vendor": "JanusGraph",
"org.opencontainers.image.version": "1.0.0-rc2"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "34b830f2f44a76437e74d97000d8d856147a79cdebcff0e5fd65616f41dd9858",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/34b830f2f44a",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"kubehound-release_kubenet": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"kubehound-release-graphdb",
"kubegraph",
"fd520a0c2ad2"
],
"NetworkID": "7a91697d9e723b2273592ad7051ee85f9406e04e654116c6ba30676599324450",
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
}
]

Query sample

We are at the step where we want to write a sample query to get the graphical image. Can you please give us some sample queries for the database that KubeHound has created underneath?
Screenshot 2023-12-08 at 1:02:00 PM (attached)
Also, we need help with the setup of the G.V tool. Can you please show us an example and the fields that need to be completed? Screenshot attached, thanks.

Graph websocket available over network

Hello.

Currently, the Docker setup exposes the websocket port on localhost only:

tcp 0 0 127.0.0.1:8182 0.0.0.0:* LISTEN 3691545/docker-prox
Could you add an option to make it available over network (0.0.0.0:8182), please?

thanks
Rafal
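
Until such an option exists, a possible workaround (a sketch only; note that it exposes an unauthenticated graph endpoint, so only do this on a trusted network) is to change the kubegraph port binding in the compose file from the loopback interface to all interfaces:

# in the kubegraph service of the compose file, change the binding from
#   - "127.0.0.1:8182:8182"
# to:
ports:
  - "0.0.0.0:8182:8182"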

storedb cannot pass healthcheck

Describe the bug

The docker-compose healthcheck is trying to execute the docker-healthcheck command:
OCI runtime exec failed: exec failed: unable to start container process: exec: \"docker-healthcheck\": executable file not found in $PATH: unknown


To Reproduce
./kubehound backend-up

G.V is no longer free

Is your feature request related to a problem? Please describe.
Your documentation recommends the use of G.V to visualise the results of your tool and gives no assistance for any other method. G.V has introduced a license scheme and it is no longer possible to use your tool without a license or a trial key.

Describe the solution you'd like
Provide documentation for using your tool with another visualisation interface.

Describe alternatives you've considered
I suppose you could write one yourself, but that seems somewhat overkill.

Additional context
I've had an email conversation with G.V and they said that KubeHound was launched when they still had a free tier but they don't any longer.

Python query data not working perfectly

Describe the bug
Hi, I'm trying to make a simple query with the gremlin_python package and it's not working well.
I think the DataType.custom implementation is not there yet in the gremlin_python package; take a look here: https://github.com/../gremlin-python/../graphbinaryV1.py. And you would have to add its definition here: https://github.com/../gremlin-python/../traversal.py. I tried to create it on my own, but without success.
Is there another way in Python to extract the data for this type of query, e.g. as JSON, CSV, YAML or any other format?

To Reproduce
Steps to reproduce the behavior:

  1. I run this code:

KH_QUERY = "kh.V().critical()"
c = Client("ws://127.0.0.1:8182/gremlin", "kh")
results = c.submit(KH_QUERY).all().result()
print(results)

  2. And I encounter this issue:

KeyError: <DataType.custom: 0>

Expected behavior
Result in list form or dict.

Desktop (please complete the following information):

  • OS: Ubuntu 22.04.4 LTS
  • Python : 3.10
  • Kubehound : 1.2.0

Conclusion

Maybe I need to create an issue in https://github.com/apache/tinkerpop/tree/master. But I think you're implementing things for Kubehound there, so maybe you'll answer my issue.
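
One possible workaround (a sketch only, not verified against this exact setup): ask the server to return plain property maps instead of raw graph elements, so gremlin_python only has to deserialize standard types. For example:

#!/usr/bin/env python
from gremlin_python.driver.client import Client

# valueMap() returns plain dicts of properties rather than raw vertices,
# which should avoid custom GraphBinary types on the client side.
KH_QUERY = "kh.V().critical().valueMap()"
c = Client("ws://127.0.0.1:8182/gremlin", "kh")
results = c.submit(KH_QUERY).all().result()
print(results)  # list of dicts, one per critical asset
c.close()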

[help needed] using gremlin server for data output in kubehound

After configuring a Gremlin server running on a different host in KubeHound, I am getting the errors below:
Error occurred during operation gremlinServerWSProtocol.readLoop(): 'E0703: expected string Key for map, got type='0x%!x(MISSING)''

2024/03/01 11:05:27 Read loop error 'E0703: expected string Key for map, got type='0x%!x(MISSING)'', closing read loop.
2024/03/01 11:05:27 Connection error callback invoked, closing protocol.
2024/03/01 11:05:27 Error occurred during operation gremlinServerWSProtocol.readLoop(): 'E0703: expected string Key for map, got type='0x%!x(MISSING)''
2024/03/01 11:05:27 Read loop error 'E0703: expected string Key for map, got type='0x%!x(MISSING)'', closing read loop.
2024/03/01 11:05:27 Connection error callback invoked, closing protocol.

Can anyone help me configure KubeHound and Gremlin to work together properly? It's a bit urgent. Thanks in advance!

Links are broken in the Readme

Describe the bug
The links below are broken:

https://github.com/DataDog/KubeHound/blob/main/docs/application/Architecture.excalidraw
https://github.com/DataDog/KubeHound/blob/main/edges
https://github.com/DataDog/KubeHound/tree/main/docs/edges

To Reproduce
Steps to reproduce the behavior - just one example:

  1. Go to README.md and under Quick Links section
  2. Click on 'design canvas'

Expected behavior
Should be an overview of the application architecture

Screenshots

Additional context
Not critical, but considering you're switching it from inner-source to open-source, potentially fewer people could be interested in it.

Additional property name is not allowed

 ./kubehound.sh backend-up
WARN[0000] The "COMPOSE_PROJECT_NAME" variable is not set. Defaulting to a blank string.
WARN[0000] The "COMPOSE_PROJECT_NAME" variable is not set. Defaulting to a blank string.
(root) Additional property name is not allowed

Package: KubeHound_Darwin_arm64
Device: 23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:43 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6000 arm64

Default docker setup does not work on Linux Docker

Describe the bug
Using the default docker configuration, I was unable to examine the KubeHound output using the example Jupyter notebook deployment. I had to make config changes to the Dockerfiles to get it to work. Overall, my changes would break on some docker setups I think, but I was wondering if there's a way to make the example dev deployment more robust...

The first problem is that the notebook docker container refers to "host": "host.docker.internal" as the kubegraph host to connect to. However, by default, this hostname is not going to be added on Linux docker. It only works after adding the following to the notebook docker-compose (see the extra_hosts attribute being added, docker/for-linux#264 (comment)):

version: "3.8"
services:
  notebook:
    build: ./notebook/
    restart: unless-stopped
    container_name: ${COMPOSE_PROJECT_NAME}-notebook
    ports:
      - "127.0.0.1:8888:8888"
    networks:
      - kubenet
    volumes:
      - ./notebook/shared:/root/notebooks/shared
    extra_hosts:
      - "host.docker.internal:host-gateway"

networks:
  kubenet:

When I tried to run the notebook and connect to the kubegraph using the host.docker.internal hostname (without these changes), it would fail with a "Name or service unknown" type of issue when trying to connect.

After adding this extra_hosts attribute, it was still not working correctly for me (although it may be some firewall issue; I wasn't able to determine the root cause here): the kubegraph docker image did not expose the port for me, as the port is (understandably) only exposed on localhost. My assumption was that this should be OK, since you are accessing the same host as your localhost by using host.docker.internal. However, it denied connection to that port, I think because it's explicitly configured in the docker-compose to expose the port only on the local interface (127.0.0.1).

I had to add the following ports to the docker-compose.dev.yaml file to make the port accessible through host.docker.internal from the notebook (172.17.0.1 is the IP of my host machine on the docker network interface).

kubegraph:
     build: ./kubegraph/
     ports:
       - "127.0.0.1:8182:8182"
       - "127.0.0.1:8099:8099"
       - "172.17.0.1:8182:8182"
       - "172.17.0.1:8099:8099"

After these changes, I was finally able to run the example queries in the jupyter notebook and started seeing the output of KubeHound.

It also does not help if I put localhost or 127.0.0.1 in the notebook, for obvious reasons (in the context of the notebook, localhost is not the host machine).

To Reproduce
Steps to reproduce the behavior:

  1. Clone repo
  2. make kubehound
  3. Check notebook, run queries (no need to actually run bin/kubehound, since it only populates the database)
  4. See connection errors when getting to running first query

Expected behavior
That the demo, example setup works without extra manual steps needed.

Desktop:

OS: Ubuntu 22.04.4 LTS
Browser N/A
Version 22.04.4

Additional context
Docker version:

$ docker version
Client: Docker Engine - Community
 Version:           20.10.23
 API version:       1.41
 Go version:        go1.18.10
 Git commit:        7155243
 Built:             Thu Jan 19 17:45:08 2023
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          26.0.0
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.8
  Git commit:       8b79278
  Built:            Wed Mar 20 15:17:48 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.28
  GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Docker compose & go installations:

$ docker compose version
Docker Compose version v2.26.1
$ go version
go version go1.22.1 linux/amd64

Feature Request: Attack for the ESCALATE verb.

Is your feature request related to a problem? Please describe.

At the moment there are attacks covering the bind and impersonate verbs in RBAC and their capabilities for privilege escalation. Another one which might be interesting to cover is the escalate verb, as it allows literal privilege escalation and is used as part of Kubernetes system accounts. There are some more details on it here.
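
For illustration only (not taken from KubeHound), here is a minimal RBAC Role granting the escalate verb; combined with update or patch rights on Roles, it lets a subject grant a Role permissions the subject does not itself hold, which is what makes the verb interesting as an attack primitive:

# hypothetical example manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: escalate-example
  namespace: default
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles"]
    verbs: ["escalate"]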

Unclear issue while building graph

Describe the bug

We're suffering from the following output, but can't give much more info due to the lack of a --verbose flag or equivalent env variable behavior.

It seems that the collectors work well, but the graph-build stage dies for "some reason".
It would also be nice to have a --version flag so we can output the info in issues and benefit from an easier reproduction setup. My test was done with v1.2.0/KubeHound_Linux_x86_64 and a default configuration file from the same tgz file.

/opt/KubeHound » bash -x kubehound.sh run
+ KUBEHOUND_ENV=release
+ DOCKER_COMPOSE_FILE_PATH='-f deployments/kubehound/docker-compose.yaml'
+ DOCKER_COMPOSE_FILE_PATH+=' -f deployments/kubehound/docker-compose.release.yaml'
+ '[' -n '' ']'
+ DOCKER_COMPOSE_PROFILE='--profile infra'
++ hostname
+ DOCKER_HOSTNAME=XXXXXXXXXXX
+ export DOCKER_HOSTNAME
+ case "$1" in
+ run
+ ./kubehound -c config.yaml
INFO[0000] Starting KubeHound (run_id: 01hf961v7hj5etn4w28hpt0kez)  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Initializing launch options                   component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Loading application configuration from file config.yaml  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Initializing application telemetry            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
WARN[0000] Telemetry disabled via configuration          component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Loading cache provider                        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Loaded memcache cache provider                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Loading store database provider               component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Loaded mongodb store provider                 component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Loading graph database provider               component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0000] Loaded janusgraph graph provider              component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Loading Kubernetes data collector client      component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Loaded k8s-api-collector collector client     component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Starting Kubernetes raw data ingest           component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Loading data ingestor                         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Running dependency health checks              component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Running data ingest and normalization         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Starting ingest sequences                     component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Waiting for ingest sequences to complete      component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Running ingestor sequence core-pipeline       component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Starting ingest sequence core-pipeline        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Running ingest group k8s-role-group           component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Starting k8s-role-group ingests               component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Waiting for k8s-role-group ingests to complete  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Running ingest k8s-cluster-role-ingest        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0001] Running ingest k8s-role-ingest                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0015] Completed k8s-role-group ingest               component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0015] Finished running ingest group k8s-role-group  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0015] Running ingest group k8s-binding-group        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0015] Starting k8s-binding-group ingests            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0015] Running ingest k8s-role-binding-ingest        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0015] Waiting for k8s-binding-group ingests to complete  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0015] Running ingest k8s-cluster-role-binding-ingest  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0026] Batch writer 208 Identity written             component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0026] Batch writer 254 PermissionSet written        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Batch writer 306 Identity written             component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Batch writer 337 PermissionSet written        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Completed k8s-binding-group ingest            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Finished running ingest group k8s-binding-group  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Running ingest group k8s-core-group           component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Starting k8s-core-group ingests               component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Waiting for k8s-core-group ingests to complete  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Running ingest k8s-node-ingest                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0030] Running ingest k8s-endpoint-ingest            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0031] Batch writer 32 Node written                  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0050] Batch writer 1842 Endpoint written            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0050] Completed k8s-core-group ingest               component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0050] Finished running ingest group k8s-core-group  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0050] Running ingest group k8s-pod-group            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0050] Starting k8s-pod-group ingests                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0050] Waiting for k8s-pod-group ingests to complete  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0050] Running ingest k8s-pod-ingest                 component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Batch writer 915 Pod written                  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Batch writer 2139 Container written           component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Batch writer 2725 Volume written              component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Batch writer 1398 Endpoint written            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Completed k8s-pod-group ingest                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Finished running ingest group k8s-pod-group   component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Completed ingest sequence core-pipeline       component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Completed pipeline ingest                     component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Completed data ingest and normalization in 1m8.958262579s  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building attack graph                         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Loading graph edge definitions                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Loading graph builder                         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Running dependency health checks              component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Constructing graph                            component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
WARN[0070] Using large cluster optimizations in graph construction  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Starting mutating edge construction           component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge PodCreate                       component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 106 PodCreate::POD_CREATE written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge PodExec                         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 9 PodExec::POD_EXEC written       component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge PodPatch                        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 23 PodPatch::POD_PATCH written    component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge TokenBruteforceCluster          component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 34 TokenBruteforceCluster::TOKEN_BRUTEFORCE written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge TokenListCluster                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 32 TokenListCluster::TOKEN_LIST written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Starting simple edge construction             component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Creating edge builder worker pool             component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge ContainerEscapeNsenter          component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge ExploitHostRead                 component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 0 ContainerEscapeNsenter::CE_NSENTER written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge IdentityAssumeNode              component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 32 IdentityAssumeNode::IDENTITY_ASSUME written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge RoleBindClusteRoleBindingbClusterRoleClusterRole  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 90 ExploitHostRead::EXPLOIT_HOST_READ written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge RoleBindRoleBindingbRoleBindingRole  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 282 RoleBindRoleBindingbRoleBindingRole::ROLE_BIND written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge TokenSteal                      component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 10 RoleBindClusteRoleBindingbClusterRoleClusterRole::ROLE_BIND written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge VolumeDiscover                  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Edge writer 1583 TokenSteal::TOKEN_STEAL written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0070] Building edge ContainerAttach                 component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 2725 VolumeDiscover::VOLUME_DISCOVER written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge EndpointExploitInternal         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 2139 ContainerAttach::CONTAINER_ATTACH written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge ContainerEscapePrivilegedMount  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 61 ContainerEscapePrivilegedMount::CE_PRIV_MOUNT written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge ContainerEscapeSysPtrace        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 18 ContainerEscapeSysPtrace::CE_SYS_PTRACE written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge PodAttach                       component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 1398 EndpointExploitInternal::ENDPOINT_EXPLOIT written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge TokenListNamespace              component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 915 PodAttach::POD_ATTACH written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge EndpointExploitExternal         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 1682 EndpointExploitExternal::ENDPOINT_EXPLOIT written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge ExploitHostTraverseToken        component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 782 ExploitHostTraverseToken::EXPLOIT_HOST_TRAVERSE written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge ExploitHostWrite                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 475 ExploitHostWrite::EXPLOIT_HOST_WRITE written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge IdentityAssumeContainer         component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 5567 TokenListNamespace::TOKEN_LIST written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge PodExecNamespace                component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 1115 PodExecNamespace::POD_EXEC written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge ContainerEscapeModuleLoad       component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Edge writer 61 ContainerEscapeModuleLoad::CE_MODULE_LOAD written  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Building edge PermissionDiscover              component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
ERRO[0071] building simple edge PermissionDiscover: PERMISSION_DISCOVER edge OUT id convert: graph id cache fetch (storeID=655493c1846c9e2644854855): no matching cache entry  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
INFO[0071] Closed background janusgraph worker on context cancel  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
Error: building attack graph: graph builder edge calculation: PERMISSION_DISCOVER edge OUT id convert: graph id cache fetch (storeID=655493c1846c9e2644854855): no matching cache entry
ERRO[0071] building simple edge IdentityAssumeContainer: IDENTITY_ASSUME edge IN id convert: graph id cache fetch (storeID=655493cc846c9e2644855172): no matching cache entry  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound
Usage:
  kubehound-local [flags]

Flags:
  -c, --config string   application config file
  -h, --help            help for kubehound-local

FATA[0071] building attack graph: graph builder edge calculation: PERMISSION_DISCOVER edge OUT id convert: graph id cache fetch (storeID=655493c1846c9e2644854855): no matching cache entry  component=kubehound run_id=01hf961v7hj5etn4w28hpt0kez service=kubehound

Expected behavior
A working ingestion & graph-build process 😇

Desktop:

  • OS: Ubuntu 20.04.6 LTS // Yup, upgrading soon...
  • Browser: N/A
  • Version: v1.2.0/KubeHound_Linux_x86_64

Additional context
Thanks for this awesome tool! A PoC on minikube worked nicely, but on our bigger infra it fails as shown above. Let me know if we can provide anonymized versions of specific files to help reproduce the bug!

Have a nice day! 🌹
