Comments (8)
Yep, fair enough. A summary of the errors encountered, with recommendations on how to fix them, is in our backlog, but it needs some more engineering effort.
Thanks!
from kubehound.
Thanks for the report. We are debugging it and will create a PR with a potential fix, plus another PR with more verbose logging to simplify debugging on your side as well.
We believe you are missing permission to read some resources on the targeted cluster, which makes the graph building fail ungracefully because one side of an edge (the IN or OUT vertex) is missing.
Our potential fix is going to log the error and continue instead of failing the complete graph building.
It will still create an incomplete graph, so we should make it very explicit to the user that the graph may not be fully representative of the reality of the cluster.
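The log-and-continue approach can be sketched in Go roughly as follows (buildEdge, buildGraph, and the hard-coded failure are illustrative stand-ins, not KubeHound's actual API):

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// buildEdge is a stand-in for the real edge builder; here it fails for one
// edge type to simulate a missing cache entry (hypothetical behaviour).
func buildEdge(name string) error {
	if name == "PermissionDiscover" {
		return errors.New("edge OUT id convert: no matching cache entry")
	}
	return nil
}

// buildGraph logs edge errors and continues, instead of aborting the whole
// graph build on the first failure. It returns the edge types that failed.
func buildGraph(edges []string) (failed []string) {
	for _, e := range edges {
		if err := buildEdge(e); err != nil {
			log.Printf("building edge %s failed (continuing): %v", e, err)
			failed = append(failed, e)
		}
	}
	return failed
}

func main() {
	failed := buildGraph([]string{"PodExec", "PermissionDiscover", "TokenSteal"})
	if len(failed) > 0 {
		fmt.Printf("graph is INCOMPLETE: %d edge type(s) failed\n", len(failed))
	}
}
```

The key design point is that the failures are collected and surfaced at the end, so the user still gets a loud "incomplete graph" signal rather than a silent partial result.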
It would also be nice to have a --version flag so users can include that info in issues, which would make for an easier reproduction setup.
We are aware and have a card tracking this.
Thanks!
Thanks a lot for the quick answer!
We'll dig into the permissions on our side, but indeed a way to get a verbose stack trace, or a LOG_LEVEL setting that displays the network exchanges, would help adoption for more users and definitely ease the debugging/PR game! ^.^
Also, consider a --version or --info flag that produces an automated system/version/arch dump to paste into issues! 🌹
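A --version flag of that kind is usually tiny to add in Go; a minimal sketch (the variable names and ldflags wiring are assumptions, not how KubeHound actually does it):

```go
package main

import (
	"flag"
	"fmt"
	"runtime"
)

// version and commit are placeholder defaults; a real build would inject
// them at link time, e.g.:
//   go build -ldflags "-X main.version=v1.2.3 -X main.commit=abc1234"
var (
	version = "dev"
	commit  = "unknown"
)

// versionString bundles the version with the OS/arch/Go runtime info that
// is useful to paste into bug reports.
func versionString() string {
	return fmt.Sprintf("kubehound %s (commit %s) %s/%s go%s",
		version, commit, runtime.GOOS, runtime.GOARCH, runtime.Version())
}

func main() {
	showVersion := flag.Bool("version", false, "print version and platform info, then exit")
	flag.Parse()
	if *showVersion {
		fmt.Println(versionString())
		return
	}
	// ... normal startup would continue here
}
```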
Here are my current perms; could you paste yours to compare? 🙏
kubectl auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
pods.*/exec [] [] [*]
pods.*/portforward [] [] [*]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
bindings [] [] [get list watch]
configmaps [] [] [get list watch]
endpoints [] [] [get list watch]
events [] [] [get list watch]
limitranges [] [] [get list watch]
namespaces/status [] [] [get list watch]
namespaces [] [] [get list watch]
persistentvolumeclaims/status [] [] [get list watch]
persistentvolumeclaims [] [] [get list watch]
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
pods [] [] [get list watch]
replicationcontrollers/scale [] [] [get list watch]
replicationcontrollers/status [] [] [get list watch]
replicationcontrollers [] [] [get list watch]
resourcequotas/status [] [] [get list watch]
resourcequotas [] [] [get list watch]
serviceaccounts [] [] [get list watch]
services/status [] [] [get list watch]
services [] [] [get list watch]
controllerrevisions.apps [] [] [get list watch]
daemonsets.apps/status [] [] [get list watch]
daemonsets.apps [] [] [get list watch]
deployments.apps/scale [] [] [get list watch]
deployments.apps/status [] [] [get list watch]
deployments.apps [] [] [get list watch]
replicasets.apps/scale [] [] [get list watch]
replicasets.apps/status [] [] [get list watch]
replicasets.apps [] [] [get list watch]
statefulsets.apps/scale [] [] [get list watch]
statefulsets.apps/status [] [] [get list watch]
statefulsets.apps [] [] [get list watch]
*.autoscaling.internal.knative.dev [] [] [get list watch]
horizontalpodautoscalers.autoscaling/status [] [] [get list watch]
horizontalpodautoscalers.autoscaling [] [] [get list watch]
cronjobs.batch/status [] [] [get list watch]
cronjobs.batch [] [] [get list watch]
jobs.batch/status [] [] [get list watch]
jobs.batch [] [] [get list watch]
*.bindings.knative.dev [] [] [get list watch]
*.caching.internal.knative.dev [] [] [get list watch]
endpointslices.discovery.k8s.io [] [] [get list watch]
*.eventing.knative.dev [] [] [get list watch]
daemonsets.extensions/status [] [] [get list watch]
daemonsets.extensions [] [] [get list watch]
deployments.extensions/scale [] [] [get list watch]
deployments.extensions/status [] [] [get list watch]
deployments.extensions [] [] [get list watch]
ingresses.extensions/status [] [] [get list watch]
ingresses.extensions [] [] [get list watch]
networkpolicies.extensions [] [] [get list watch]
replicasets.extensions/scale [] [] [get list watch]
replicasets.extensions/status [] [] [get list watch]
replicasets.extensions [] [] [get list watch]
replicationcontrollers.extensions/scale [] [] [get list watch]
*.flow.triggermesh.io [] [] [get list watch]
*.flows.knative.dev [] [] [get list watch]
*.messaging.knative.dev [] [] [get list watch]
nodes.metrics.k8s.io [] [] [get list watch]
pods.metrics.k8s.io [] [] [get list watch]
*.networking.internal.knative.dev [] [] [get list watch]
ingresses.networking.k8s.io/status [] [] [get list watch]
ingresses.networking.k8s.io [] [] [get list watch]
networkpolicies.networking.k8s.io [] [] [get list watch]
poddisruptionbudgets.policy/status [] [] [get list watch]
poddisruptionbudgets.policy [] [] [get list watch]
*.routing.triggermesh.io [] [] [get list watch]
*.serving.knative.dev [] [] [get list watch]
*.sources.knative.dev [] [] [get list watch]
*.sources.triggermesh.io [] [] [get list watch]
*.targets.triggermesh.io [] [] [get list watch]
*.* [] [] [get list]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
👋
I started PR #146 here to allow continuing on error.
I don't have an easy setup to reproduce, but I believe it happens because some resources aren't correctly pulled on your side, either because you don't have enough permissions or because of a race between two linked resource pulls.
If you could run kubehound from that PR's branch and let us know if that helps, that would be great.
A longer-term fix with better error reporting is tracked but may need some more engineering time.
Works way better now!
+ case "$1" in
+ run
+ ./kubehound -c config.yaml
INFO[0000] Starting KubeHound (run_id: 01hfpmntvhwwrs3ncdm1z5enan) component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Initializing launch options component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Loading application configuration from file config.yaml component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Initializing application telemetry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0000] Telemetry disabled via configuration component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Loading cache provider component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Loaded memcache cache provider component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Loading store database provider component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Loaded mongodb store provider component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Loading graph database provider component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0000] Loaded janusgraph graph provider component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0001] Loading Kubernetes data collector client component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0001] Loaded k8s-api-collector collector client component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0001] Starting Kubernetes raw data ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0001] Loading data ingestor component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0001] Running dependency health checks component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
Opening in existing browser session.
INFO[0002] Running data ingest and normalization component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Starting ingest sequences component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Waiting for ingest sequences to complete component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Running ingestor sequence core-pipeline component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Starting ingest sequence core-pipeline component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Running ingest group k8s-role-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Starting k8s-role-group ingests component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Waiting for k8s-role-group ingests to complete component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Running ingest k8s-cluster-role-ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0002] Running ingest k8s-role-ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0016] Completed k8s-role-group ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0016] Finished running ingest group k8s-role-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0016] Running ingest group k8s-binding-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0016] Starting k8s-binding-group ingests component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0016] Running ingest k8s-role-binding-ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0016] Waiting for k8s-binding-group ingests to complete component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0016] Running ingest k8s-cluster-role-binding-ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0027] Batch writer 207 Identity written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0027] Batch writer 253 PermissionSet written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Batch writer 298 Identity written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Batch writer 329 PermissionSet written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Completed k8s-binding-group ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Finished running ingest group k8s-binding-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Running ingest group k8s-core-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Starting k8s-core-group ingests component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Running ingest k8s-node-ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Waiting for k8s-core-group ingests to complete component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0031] Running ingest k8s-endpoint-ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0032] Batch writer 35 Node written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0051] Batch writer 1830 Endpoint written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0051] Completed k8s-core-group ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0051] Finished running ingest group k8s-core-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0051] Running ingest group k8s-pod-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0051] Starting k8s-pod-group ingests component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0051] Waiting for k8s-pod-group ingests to complete component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0051] Running ingest k8s-pod-ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0052] process pod steadybit-outpost-extension-container-j5pv4 error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0053] process pod aws-node-fp5wb error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0056] process pod fluentbit-4qctz error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0059] process pod consul-client-mwzvh error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0062] process pod datadog-8n8zv error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0062] process pod ebs-csi-node-instance-storage-tchjr error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0068] process pod infra-falco-controller-lpb5m error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0071] process pod kube-proxy-8lpkq error (continuing): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Batch writer 918 Pod written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Batch writer 2044 Container written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Batch writer 2743 Volume written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Batch writer 1329 Endpoint written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Completed k8s-pod-group ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Finished running ingest group k8s-pod-group component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Completed ingest sequence core-pipeline component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Completed pipeline ingest component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Completed data ingest and normalization in 1m11.002111284s component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building attack graph component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Loading graph edge definitions component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Loading graph builder component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Running dependency health checks component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Constructing graph component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
WARN[0072] Using large cluster optimizations in graph construction component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Starting mutating edge construction component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge TokenListCluster component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Edge writer 32 TokenListCluster::TOKEN_LIST written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge PodCreate component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Edge writer 105 PodCreate::POD_CREATE written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge PodExec component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Edge writer 9 PodExec::POD_EXEC written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge PodPatch component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Edge writer 23 PodPatch::POD_PATCH written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge TokenBruteforceCluster component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Edge writer 34 TokenBruteforceCluster::TOKEN_BRUTEFORCE written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Starting simple edge construction component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Creating edge builder worker pool component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge PermissionDiscover component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge PodExecNamespace component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
ERRO[0072] building simple edge PermissionDiscover: PERMISSION_DISCOVER edge OUT id convert: graph id cache fetch (storeID=655b7799b24ed798c9a794f7): no matching cache entry component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
ERRO[0072] Failed to create a simple edge (type: PermissionDiscover). The created graph will be INCOMPLETE (change `builder.stop_on_error` to abort or error instead) component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge RoleBindClusteRoleBindingbClusterRoleRole component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Edge writer 10 RoleBindClusteRoleBindingbClusterRoleRole::ROLE_BIND written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0072] Building edge TokenBruteforceNamespace component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Edge writer 1134 PodExecNamespace::POD_EXEC written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Building edge ContainerAttach component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Edge writer 2044 ContainerAttach::CONTAINER_ATTACH written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Building edge ExploitHostTraverseToken component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Edge writer 605 ExploitHostTraverseToken::EXPLOIT_HOST_TRAVERSE written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Building edge IdentityAssumeContainer component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Edge writer 927 IdentityAssumeContainer::IDENTITY_ASSUME written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Building edge IdentityAssumeNode component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Edge writer 35 IdentityAssumeNode::IDENTITY_ASSUME written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Building edge TokenListNamespace component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Edge writer 5148 TokenBruteforceNamespace::TOKEN_BRUTEFORCE written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Building edge SharePSNamespace component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Edge writer 0 SharePSNamespace::SHARE_PS_NAMESPACE written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0073] Building edge VolumeDiscover component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 2743 VolumeDiscover::VOLUME_DISCOVER written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge EndpointExploitInternal component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 1329 EndpointExploitInternal::ENDPOINT_EXPLOIT written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge ContainerEscapeNsenter component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 0 ContainerEscapeNsenter::CE_NSENTER written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge ContainerEscapePrivilegedMount component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 67 ContainerEscapePrivilegedMount::CE_PRIV_MOUNT written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge ExploitHostWrite component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 504 ExploitHostWrite::EXPLOIT_HOST_WRITE written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge RoleBindRoleBindingbRoleBindingRole component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 275 RoleBindRoleBindingbRoleBindingRole::ROLE_BIND written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge TokenSteal component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 5225 TokenListNamespace::TOKEN_LIST written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge ContainerEscapeModuleLoad component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 67 ContainerEscapeModuleLoad::CE_MODULE_LOAD written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge ExploitHostRead component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 99 ExploitHostRead::EXPLOIT_HOST_READ written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge PodAttach component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 1493 TokenSteal::TOKEN_STEAL written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge PodPatchNamespace component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 918 PodAttach::POD_ATTACH written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge EndpointExploitExternal component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 1369 PodPatchNamespace::POD_PATCH written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge ContainerEscapeSysPtrace component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 19 ContainerEscapeSysPtrace::CE_SYS_PTRACE written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge RoleBindClusteRoleBindingbClusterRoleClusterRole component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 10 RoleBindClusteRoleBindingbClusterRoleClusterRole::ROLE_BIND written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge VolumeAccess component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 1542 EndpointExploitExternal::ENDPOINT_EXPLOIT written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 2743 VolumeAccess::VOLUME_ACCESS written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Starting dependent edge construction component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Building edge ContainerEscapeVarLogSymlink component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Edge writer 374 ContainerEscapeVarLogSymlink::CE_VAR_LOG_SYMLINK written component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Completed edge construction component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] Completed graph construction in 2.458533895s component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
INFO[0074] KubeHound run (id=01hfpmntvhwwrs3ncdm1z5enan) complete in 1m14.97266663s component=kubehound run_id=01hfpmntvhwwrs3ncdm1z5enan service=kubehound
So our issue was at:
ERRO[0073] building simple edge PermissionDiscover: PERMISSION_DISCOVER edge OUT id convert: graph id cache fetch (storeID=655b79e2026ec9ca9bd05664): no matching cache entry component=kubehound run_id=01hfpn7qpnykwf7sqjnm36mvpc service=kubehound
ERRO[0073] Failed to create a simple edge (type: PermissionDiscover). The created graph will be INCOMPLETE (change `builder.stop_on_error` to abort or error instead) component=kubehound run_id=01hfpn7qpnykwf7sqjnm36mvpc service=kubehound
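For reference, the knob mentioned in that error message would be set in config.yaml along these lines (the key path `builder.stop_on_error` comes straight from the log; the value semantics shown in the comments are an assumption):

```yaml
builder:
  # Assumed semantics: when false, edge errors are logged and graph building
  # continues, producing a possibly incomplete graph; when true, the build
  # aborts on the first edge error instead.
  stop_on_error: false
```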
... I somehow completely missed that notification; I'll make sure that never happens again.
So, is it fine for us to merge that fix then?
Thanks for testing!
Is it OK for me to close this issue?
It's still not very explicit about the error, the missing permission, or the way to fix it, but at least the program can run to completion without stopping! ;)