
Popeye: Kubernetes Live Cluster Linter

Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. As Kubernetes landscapes grow, it becomes a challenge for a human to track the slew of manifests and policies that orchestrate a cluster. Popeye scans your cluster based on what's deployed, not what's sitting on disk. By linting your cluster, it detects misconfigurations and stale resources and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metrics-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!






Screenshots

Console

JSON

HTML

You can dump the scan report to HTML.

Grafana Dashboard

Popeye publishes Prometheus metrics. We provide a sample Popeye dashboard in this repo to get you started.


Installation

Popeye is available on Linux, OSX and Windows platforms.

  • Binaries for Linux, Windows and Mac are available as tarballs on the releases page.

  • For OSX/Linux using Homebrew/LinuxBrew

    brew install derailed/popeye/popeye
  • Using go install

    go install github.com/derailed/popeye@latest
  • Building from source. Popeye is built using Go 1.21+. In order to build Popeye from source you must:

    1. Clone the repo

    2. Add the following directive to your go.mod file:

      replace (
        github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
      )
      
    3. Build and run the executable

      go run main.go

    Quick recipe for the impatient:

    # Clone outside of GOPATH
    git clone https://github.com/derailed/popeye
    cd popeye
    # Build and install
    make build
    # Run
    popeye

PreFlight Checks

  • Popeye uses 256-color terminal mode. On *Nix systems make sure TERM is set accordingly.

    export TERM=xterm-256color

The Command Line

You can run Popeye wide open or use a spinach YAML config to tune the linters. Details about the Popeye configuration file are below.

# Dump version info and logs location
popeye version
# Popeye a cluster using your current kubeconfig environment.
# NOTE! This will run Popeye in the context namespace if set or, like kubectl, fall back to the default namespace
popeye
# Run Popeye in the `fred` namespace
popeye -n fred
# Run Popeye in all namespaces
popeye -A
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yaml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help

Linters

Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at a given set of curated Kubernetes resources. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.

The aim of the linters is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...

Popeye is not another static analysis tool. It runs against live clusters and lints Kubernetes resources as they are in the wild!

Here is a list of some of the available linters:

Resource (aliases) and the checks each linter performs:

πŸ›€ Node (no)
  Β· Conditions ie not ready, out of mem/disk, network, pids, etc
  Β· Pod tolerations referencing node taints
  Β· CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM)
πŸ›€ Namespace (ns)
  Β· Inactive
  Β· Dead namespaces
πŸ›€ Pod (po)
  Β· Pod status
  Β· Containers statuses
  Β· ServiceAccount presence
  Β· CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM)
  Β· Container image with no tags
  Β· Container image using latest tag
  Β· Resources request/limits presence
  Β· Probes liveness/readiness presence
  Β· Named ports and their references
πŸ›€ Service (svc)
  Β· Endpoints presence
  Β· Matching pods labels
  Β· Named ports and their references
πŸ›€ ServiceAccount (sa)
  Β· Unused, detects potentially unused SAs
πŸ›€ Secrets (sec)
  Β· Unused, detects potentially unused secrets or associated keys
πŸ›€ ConfigMap (cm)
  Β· Unused, detects potentially unused cm or associated keys
πŸ›€ Deployment (dp, deploy)
  Β· Unused, pod template validation, resource utilization
πŸ›€ StatefulSet (sts)
  Β· Unused, pod template validation, resource utilization
πŸ›€ DaemonSet (ds)
  Β· Unused, pod template validation, resource utilization
πŸ›€ PersistentVolume (pv)
  Β· Unused, check volume bound or volume error
πŸ›€ PersistentVolumeClaim (pvc)
  Β· Unused, check bounded or volume mount error
πŸ›€ HorizontalPodAutoscaler (hpa)
  Β· Unused, Utilization, Max burst checks
πŸ›€ PodDisruptionBudget (pdb)
  Β· Unused, Check minAvailable configuration
πŸ›€ ClusterRole (cr)
  Β· Unused
πŸ›€ ClusterRoleBinding (crb)
  Β· Unused
πŸ›€ Role (ro)
  Β· Unused
πŸ›€ RoleBinding (rb)
  Β· Unused
πŸ›€ Ingress (ing)
  Β· Valid
πŸ›€ NetworkPolicy (np)
  Β· Valid, Stale, Guarded
πŸ›€ PodSecurityPolicy (psp)
  Β· Valid
πŸ›€ Cronjob (cj)
  Β· Valid, Suspended, Runs
πŸ›€ Job (job)
  Β· Pod checks
πŸ›€ GatewayClass (gwc)
  Β· Valid, Unused
πŸ›€ Gateway (gw)
  Β· Valid, Unused
πŸ›€ HTTPRoute (gwr)
  Β· Valid, Unused

You can also see the full list of codes


Saving Scans

To save the Popeye report to a file, pass the --save flag to the command. By default it will create a tmp directory and store your scan report there; the path of the tmp directory is printed out on STDOUT. If you need to specify the output directory for the report, use the POPEYE_REPORT_DIR environment variable. By default, the name of the output file follows the format lint_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "lint-mycluster-1594019782530851873.html"). If you also want to specify the output file name for the report, pass the --output-file flag with the desired filename.

Example to save report in working directory:

POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save the report in the working directory in HTML format under the name "report.html":

POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Save To S3

Alternatively, you can push the generated reports to an AWS S3 bucket (or other S3-compatible object storage) by providing the --s3-bucket flag. As its parameter, provide the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.

The AWS Go SDK is used, which handles your credentials. For more information check out the official documentation.
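Credentials are resolved through the SDK's default chain (shared config files, environment variables, instance roles). As a minimal sketch, you can export the standard environment variables before the scan; the values below are placeholders:

export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_REGION=<region>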

Example to save report to S3:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can point to another S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc...) using s3-endpoint and s3-region like so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Docker Support

You can also run Popeye in a container by running it directly from the official Docker repo on DockerHub. The default command when you run the container is popeye, so you can customize the scan using the supported CLI flags. To access your clusters, map your local kubeconfig directory into the container with -v:

docker run --rm -it -v $HOME/.kube:/root/.kube derailed/popeye --context foo -n bar

Running the above docker command with --rm means that the container gets deleted when Popeye exits. When you use --save, the report is written to /tmp inside the container, which is then deleted when Popeye exits, so you lose the output ;( To get around this, map your local /tmp to the container's /tmp.

NOTE: You can override the default output directory location by setting POPEYE_REPORT_DIR env variable.

docker run --rm -it \
  -v $HOME/.kube:/root/.kube \
  -e POPEYE_REPORT_DIR=/tmp/popeye \
  -v /tmp:/tmp \
  derailed/popeye --context foo -n bar --save --output-file my_report.txt

# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
cat /tmp/popeye/my_report.txt
<snip>

Output Formats

Popeye can generate linter reports in a variety of formats. You can use the -o cli option and pick your poison from there.

Format      Description                                           Default  Credits
standard    The full monty output, iconized and colorized         yes
jurassic    No icons or color, like it's 1979
yaml        As YAML
html        As HTML
json        As JSON
junit       For the Java melancholic
prometheus  Dumps the report as Prometheus metrics                         dardanel
score       Returns a single cluster linter score value (0-100)            kabute
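
For CI use, the score format can drive a simple pass/fail gate. A minimal sketch, assuming -o score prints only the numeric score on STDOUT and 80 is your chosen threshold:

# '|| true' keeps the script alive since Popeye exits non-zero when lint errors are found.
score=$(popeye -o score || true)
if [ "${score:-0}" -lt 80 ]; then
  echo "Cluster score ${score} is below threshold" >&2
  exit 1
fi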

The Prom Queen!

Popeye can publish Prometheus metrics directly from a scan. You will need to have access to a prometheus pushgateway and credentials.

NOTE! These are subject to change based on user feedback and usage!!

In order to publish metrics, additional cli args must be present.

# Run popeye using console output and push prom metrics.
popeye --push-gtwy-url http://localhost:9091

# Run popeye using a saved html output and push prom metrics.
# NOTE! When scans are dumped to disk, the popeye_cluster_score metric below includes
# an additional label to track the persisted artifact so you can aggregate it with the scan.
# Not sure this is the correct approach as it changes the metric cardinality on every push.
# Hence, open for suggestions here!
popeye -o html --save --push-gtwy-url http://localhost:9091

PopProm metrics

The following Popeye prometheus metrics are published:

  • popeye_severity_total [gauge] tracks various counts based on severity.
  • popeye_code_total [gauge] tracks counts by Popeye's linter codes.
  • popeye_linter_tally_total [gauge] tracks counts per linter.
  • popeye_report_errors_total [gauge] tracks scan errors totals.
  • popeye_cluster_score [gauge] tracks scan report scores.
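
Once pushed, these metrics can back simple alerts. Below is a minimal Prometheus alerting rule sketch based on popeye_cluster_score; the threshold and severity label are assumptions, and any extra labels depend on your pushgateway setup:

groups:
  - name: popeye
    rules:
      - alert: PopeyeClusterScoreLow
        # popeye_cluster_score is published by Popeye (see the list above).
        expr: popeye_cluster_score < 80
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Popeye cluster lint score dropped below 80"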

PopGraf

A sample Grafana dashboard can be found in this repo to get you started.

NOTE! Work in progress, please feel free to contribute if you have UX/grafana/promql chops.


SpinachYAML

A spinach YAML configuration file can be specified via the -f option to further configure the linters. This file may specify the container utilization threshold and specific linter configurations as well as resources and codes that will be excluded from the linter.

NOTE! This file will change as Popeye matures!

Under the excludes key you can configure Popeye to skip certain resources or linter codes. Popeye's linters are named after the Kubernetes resource names. For example, the PodDisruptionBudget linter is named poddisruptionbudgets and scans policy/v1/poddisruptionbudgets.

NOTE! The linter uses the plural resource kind form and everything is spelled in lowercase.

A resource fully qualified name aka FQN is used in the spinach file to identify a resource, i.e. namespace/resource_name.

For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can be either a straight string match or a regular expression. In the latter case the regular expression must be specified via the rx: prefix.

NOTE! Please be careful with your regexes, as a loose rule may exclude more resources than expected from the report. When your cluster resources change, this could lead to sub-optimal scans. Thus we recommend running Popeye wide open once in a while to make sure you pick up on any new issues that may have arisen in your clusters…

Here is an example spinach file as it stands in this release. There are fuller eks and aks based spinach files in this repo under spinach. (BTW: for newcomers to the project, adding cluster-specific spinach file PRs might be a great way to contribute...)

# spinach.yaml

# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a linter warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes excludes certain resources from Popeye scans
  excludes:
    # [NEW!] Global exclude resources and codes globally of any linters.
    global:
      fqns: [rx:^kube-] # => excludes all resources in kube-system, kube-public, etc..
      # [NEW!] Exclude resources for all linters matching these labels
      labels:
        app: [bozo, bono] #=> exclude any resources with labels matching either app=bozo or app=bono
      # [NEW!] Exclude resources for all linters matching these annotations
      annotations:
        fred: [blee, duh] # => exclude any resources with annotations matching either fred=blee or fred=duh
      # [NEW!] Exclude scan codes globally via straight codes or regex!
      codes: ["300", "206", "rx:^41"] # => exclude issue codes 300, 206, 410, 415 (Note: regex match!)

    # [NEW!] Configure individual resource linters
    linters:
      # Configure the namespaces linter for v1/namespaces
      namespaces:
        # [NEW!] Exclude these codes for all namespace resources straight up or via regex.
        codes: ["100", "rx:^22"] # => exclude codes 100, 220, 225, ...
        # [NEW!] Excludes specific namespaces from the scan
        instances:
          - fqns: [kube-public, kube-system] # => skip ns kube-public and kube-system
          - fqns: [blee-ns]
            codes: [106] # => skip code 106 for namespace blee-ns

      # Skip secrets in namespace bozo.
      secrets:
        instances:
          - fqns: [rx:^bozo]

      # Configure the pods linter for v1/pods.
      pods:
        instances:
          # [NEW!] exclude all pods matching these labels.
          - labels:
              app: [fred,blee] # Exclude codes 102, 105 for any pods with labels app=fred or app=blee
            codes: [102, 105]

  resources:
    # Configure node resources.
    node:
      # Limits set a cpu/mem threshold in % ie if cpu|mem > limit a lint warning is triggered.
      limits:
        # CPU checks if current CPU utilization on a node is greater than 90%.
        cpu:    90
        # Memory checks if current Memory utilization on a node is greater than 80%.
        memory: 80

    # Configure pod resources
    pod:
      # Restarts check the restarts count and triggers a lint warning if above threshold.
      restarts: 3
      # Check container resource utilization in percent.
      # Issues a lint warning if above these thresholds.
      limits:
        cpu:    80
        memory: 75


  # [New!] overrides code severity
  overrides:
    # Code specifies a custom severity level ie critical=3, warn=2, info=1
    - code: 206
      severity: 1

  # Configure a list of allowed registries to pull images from.
  # Any resources not using the following registries will be flagged!
  registries:
    - quay.io
    - docker.io

In Cluster

Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.

Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.

kubectl apply -f k8s/popeye
---
apiVersion: v1
kind: Namespace
metadata:
  name:      popeye
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name:      popeye
  namespace: popeye
spec:
  schedule: "0 * * * *" # Fire off Popeye once an hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye:vX.Y.Z
              imagePullPolicy: IfNotPresent
              args:
                - -o
                - yaml
                - --force-exit-zero
              resources:
                limits:
                  cpu:    500m
                  memory: 100Mi

The --force-exit-zero flag should be set. Otherwise, the pods will end up in an error state.

NOTE! Popeye exits with a non-zero error code if any lint errors are detected.
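
Since the CronJob writes its report to standard output (-o yaml above), a recent scan can be read back with kubectl. A rough sketch; the job name below is a placeholder generated by the CronJob:

# List the jobs spawned by the CronJob, then read the report from the latest one.
kubectl get jobs -n popeye
kubectl logs -n popeye job/<latest-popeye-job-name>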

Popeye Got Your RBAC!

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC Rules (please note that those are subject to change.)

NOTE! Please review and tune per your cluster policies.

---
# Popeye ServiceAccount.
apiVersion: v1
kind:       ServiceAccount
metadata:
  name:      popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind:       ClusterRole
metadata:
  name: popeye
rules:
- apiGroups: [""]
  resources:
   - configmaps
   - endpoints
   - namespaces
   - nodes
   - persistentvolumes
   - persistentvolumeclaims
   - pods
   - secrets
   - serviceaccounts
   - services
  verbs:     ["get", "list"]
- apiGroups: ["apps"]
  resources:
  - daemonsets
  - deployments
  - statefulsets
  - replicasets
  verbs:     ["get", "list"]
- apiGroups: ["networking.k8s.io"]
  resources:
  - ingresses
  - networkpolicies
  verbs:     ["get", "list"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs:     ["get", "list"]
- apiGroups: ["gateway.networking.k8s.io"]
  resources:
  - gatewayclasses
  - gateways
  - httproutes
  verbs:     ["get", "list"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs:     ["get", "list"]
- apiGroups: ["policy"]
  resources:
  - poddisruptionbudgets
  - podsecuritypolicies
  verbs:     ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources:
  - clusterroles
  - clusterrolebindings
  - roles
  - rolebindings
  verbs:     ["get", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources:
  - pods
  - nodes
  verbs:     ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind:       ClusterRoleBinding
metadata:
  name: popeye
subjects:
- kind:     ServiceAccount
  name:     popeye
  namespace: popeye
roleRef:
  kind:     ClusterRole
  name:     popeye
  apiGroup: rbac.authorization.k8s.io
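
To spot-check the binding before scheduling scans, kubectl auth can-i with service account impersonation can be used (assuming your own user is allowed to impersonate); a couple of examples:

kubectl auth can-i list pods --as=system:serviceaccount:popeye:popeye
kubectl auth can-i list clusterroles --as=system:serviceaccount:popeye:popeye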

Report Morphology

The lint report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of linter severity levels:

Level  Icon  Jurassic  Color      Description
Ok     βœ…    OK        Green      Happy!
Info   πŸ”Š    I         BlueGreen  FYI
Warn   😱    W         Yellow     Potential Issue
Error  πŸ’₯    E         Red        Action required

The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.

The Summary section provides a Popeye Score based on the linter pass on the given cluster.


Known Issues

This initial drop is brittle. Popeye will most likely blow up when…

  • You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.25.X.
  • You don't have enough RBAC oomph to manage your cluster (see RBAC section)

Disclaimer

This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!


ATTA Girls/Boys!

Popeye sits on top of many open source projects and libraries. Our sincere appreciation to all the OSS contributors who work nights and weekends to make this project a reality!

Contact Info

  1. Email: [email protected]
  2. Twitter: @kitesurfer

Β Β© 2024 Imhotep Software LLC. All materials licensed under Apache v2.0

popeye's People

Contributors

alexey-anufriev, aracki, atheiman, bpfoster, connorbrinton, danibaeyens, dependabot[bot], derailed, derekperkins, djablonski, djablonski-moia, eminugurkenar, fvbommel, gandalfmagic, gkze, guusvw, hyooookyung, lareeth, lechat, magus031, marians, matheusfm, nnordrum, orange-hbenmabrouk, qw1mb0, renan, scraly, taintedkernel, yalctay93, yogeek


popeye's Issues

Running out of memory for PersistentVolumeClaim




Describe the bug
Popeye runs out of memory on one of my clusters (not all of them) if pvc section is included. Running report with all the other sections works fine.
The cluster runs in GKE.
Please let me know what additional info I can provide to help you debug this πŸ˜‰

To Reproduce
Steps to reproduce the behavior:

  1. Run popeye -s pvc
  2. See error

Expected behavior
Seeing the report πŸ˜€

Versions (please complete the following information):

  • OS: Ubuntu 19.04, 16GB of RAM
  • Popeye 0.3.11
  • K8s 1.12.7-gke.10

Full stack trace

fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x14d528d, 0x16)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/panic.go:617 +0x72
runtime.sysMap(0xcaa0000000, 0x2b0000000, 0x2294d78)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/mem_linux.go:170 +0xc7
runtime.(*mheap).sysAlloc(0x227c520, 0x2acf5c000, 0x227c530, 0x1567ae)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/malloc.go:633 +0x1cd
runtime.(*mheap).grow(0x227c520, 0x1567ae, 0x0)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/mheap.go:1222 +0x42
runtime.(*mheap).allocSpanLocked(0x227c520, 0x1567ae, 0x2294d88, 0x7f6b9f45d760)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/mheap.go:1150 +0x37f
runtime.(*mheap).alloc_m(0x227c520, 0x1567ae, 0x7f6b3f3e0100, 0x7f6b3f3e6940)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/mheap.go:977 +0xc2
runtime.(*mheap).alloc.func1()
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/mheap.go:1048 +0x4c
runtime.(*mheap).alloc(0x227c520, 0x1567ae, 0xc000010100, 0x7f6b3f3e68b0)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/mheap.go:1047 +0x8a
runtime.largeAlloc(0x2acf5c000, 0x450001, 0x7f6b3f3e68b0)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/malloc.go:1055 +0x99
runtime.mallocgc.func1()
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/malloc.go:950 +0x46
runtime.systemstack(0xc00004acc0)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/asm_amd64.s:351 +0x66
runtime.mstart()
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/proc.go:1153

goroutine 1 [running]:
runtime.systemstack_switch()
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/asm_amd64.s:311 fp=0xc000483748 sp=0xc000483740 pc=0x4561c0
runtime.mallocgc(0x2acf5c000, 0x12c5a80, 0x1, 0x1)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/malloc.go:949 +0x872 fp=0xc0004837e8 sp=0xc000483748 pc=0x40bcb2
runtime.growslice(0x12c5a80, 0xc87c000000, 0x223f7c00, 0x223f7c00, 0x223f7c01, 0xc87c000000, 0x1b65fc00, 0x223f7c00)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/slice.go:181 +0x1e6 fp=0xc000483850 sp=0xc0004837e8 pc=0x441516
github.com/derailed/popeye/internal/report.formatLine(0xc00049c070, 0x63, 0x1, 0x60, 0xc0001ac768, 0xc0001fa9c0)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/report/writer.go:197 +0x2b7 fp=0xc000483930 sp=0xc000483850 pc=0x109a4b7
github.com/derailed/popeye/internal/report.(*Sanitizer).write(0xc0003e0380, 0x2, 0x1, 0xc00049c070, 0x63)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/report/writer.go:144 +0x188 fp=0xc000483a38 sp=0xc000483930 pc=0x1099ab8
github.com/derailed/popeye/internal/report.(*Sanitizer).Print(...)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/report/writer.go:128
github.com/derailed/popeye/internal/report.(*Builder).PrintReport(0xc0002313e0, 0x0, 0xc0003e0380)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/report/builder.go:188 +0x616 fp=0xc000483bc8 sp=0xc000483a38 pc=0x10974c6
github.com/derailed/popeye/pkg.(*Popeye).dump(0xc00022bdc0, 0xc000231301)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:92 +0x2ba fp=0xc000483ce0 sp=0xc000483bc8 pc=0x10b814a
github.com/derailed/popeye/pkg.(*Popeye).Sanitize(0xc00022bdc0)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:100 +0x3e fp=0xc000483d00 sp=0xc000483ce0 pc=0x10b861e
github.com/derailed/popeye/cmd.doIt(0x2267740, 0xc00037f1f0, 0x0, 0x7)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:62 +0x11b fp=0xc000483d78 sp=0xc000483d00 pc=0x10b953b
github.com/spf13/cobra.(*Command).execute(0x2267740, 0xc00003a090, 0x7, 0x7, 0x2267740, 0xc00003a090)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae fp=0xc000483e60 sp=0xc000483d78 pc=0xfdca9e
github.com/spf13/cobra.(*Command).ExecuteC(0x2267740, 0x0, 0x0, 0xc00037df88)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec fp=0xc000483f30 sp=0xc000483e60 pc=0xfdd6ec
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/derailed/popeye/cmd.Execute()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:38 +0x32 fp=0xc000483f88 sp=0xc000483f30 pc=0x10b9392
main.main()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/main.go:29 +0x20 fp=0xc000483f98 sp=0xc000483f88 pc=0x11f3020
runtime.main()
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/proc.go:200 +0x20c fp=0xc000483fe0 sp=0xc000483f98 pc=0x42d82c
runtime.goexit()
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc000483fe8 sp=0xc000483fe0 pc=0x458111

goroutine 5 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x2275aa0)
	/Users/fernand/go_wk/derailed/pkg/mod/k8s.io/[email protected]/klog.go:941 +0x8b
created by k8s.io/klog.init.0
	/Users/fernand/go_wk/derailed/pkg/mod/k8s.io/[email protected]/klog.go:403 +0x6c

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00022b840)
	/Users/fernand/go_wk/derailed/pkg/mod/[email protected]/stats/view/worker.go:147 +0xdd
created by go.opencensus.io/stats/view.init.0
	/Users/fernand/go_wk/derailed/pkg/mod/[email protected]/stats/view/worker.go:29 +0x57

goroutine 36 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f6b9d1e3f08, 0x72, 0xffffffffffffffff)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/netpoll.go:182 +0x56
internal/poll.(*pollDesc).wait(0xc000421b98, 0x72, 0x14600, 0x146bc, 0xffffffffffffffff)
	/usr/local/Cellar/go/1.12.5/libexec/src/internal/poll/fd_poll_runtime.go:87 +0x9b
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/Cellar/go/1.12.5/libexec/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000421b80, 0xc000382000, 0x146bc, 0x146bc, 0x0, 0x0, 0x0)
	/usr/local/Cellar/go/1.12.5/libexec/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc000421b80, 0xc000382000, 0x146bc, 0x146bc, 0x203000, 0x0, 0x135b2)
	/usr/local/Cellar/go/1.12.5/libexec/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc000454000, 0xc000382000, 0x146bc, 0x146bc, 0x0, 0x0, 0x0)
	/usr/local/Cellar/go/1.12.5/libexec/src/net/net.go:177 +0x69
crypto/tls.(*atLeastReader).Read(0xc00040c040, 0xc000382000, 0x146bc, 0x146bc, 0x100, 0x7f6b9d43efff, 0xc00047f9e0)
	/usr/local/Cellar/go/1.12.5/libexec/src/crypto/tls/conn.go:761 +0x60
bytes.(*Buffer).ReadFrom(0xc00045c258, 0x16a1320, 0xc00040c040, 0x409835, 0x134a000, 0x146c860)
	/usr/local/Cellar/go/1.12.5/libexec/src/bytes/buffer.go:207 +0xbd
crypto/tls.(*Conn).readFromUntil(0xc00045c000, 0x16a3320, 0xc000454000, 0x5, 0xc000454000, 0x9)
	/usr/local/Cellar/go/1.12.5/libexec/src/crypto/tls/conn.go:783 +0xf8
crypto/tls.(*Conn).readRecordOrCCS(0xc00045c000, 0x1559500, 0xc00045c138, 0xc00047fd58)
	/usr/local/Cellar/go/1.12.5/libexec/src/crypto/tls/conn.go:590 +0x125
crypto/tls.(*Conn).readRecord(...)
	/usr/local/Cellar/go/1.12.5/libexec/src/crypto/tls/conn.go:558
crypto/tls.(*Conn).Read(0xc00045c000, 0xc0004b9000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/Cellar/go/1.12.5/libexec/src/crypto/tls/conn.go:1236 +0x137
bufio.(*Reader).Read(0xc000450b40, 0xc0004ba038, 0x9, 0x9, 0x406234, 0xc000230300, 0xc00047fd58)
	/usr/local/Cellar/go/1.12.5/libexec/src/bufio/bufio.go:223 +0x23e
io.ReadAtLeast(0x16a11a0, 0xc000450b40, 0xc0004ba038, 0x9, 0x9, 0x9, 0x16a14c0, 0xc000252040, 0xc00008a060)
	/usr/local/Cellar/go/1.12.5/libexec/src/io/io.go:310 +0x88
io.ReadFull(...)
	/usr/local/Cellar/go/1.12.5/libexec/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0xc0004ba038, 0x9, 0x9, 0x16a11a0, 0xc000450b40, 0x0, 0x0, 0xc0000b2390, 0x0)
	/Users/fernand/go_wk/derailed/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x88
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0004ba000, 0xc0000b2390, 0x0, 0x0, 0x0)
	/Users/fernand/go_wk/derailed/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0xa1
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00047ffb8, 0x15582d8, 0xc0004687b8)
	/Users/fernand/go_wk/derailed/pkg/mod/golang.org/x/[email protected]/http2/transport.go:1679 +0x8d
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00045a300)
	/Users/fernand/go_wk/derailed/pkg/mod/golang.org/x/[email protected]/http2/transport.go:1607 +0x76
created by golang.org/x/net/http2.(*Transport).newClientConn
	/Users/fernand/go_wk/derailed/pkg/mod/golang.org/x/[email protected]/http2/transport.go:670 +0x637

Support globbing in `exclude`

I'm running a cluster with pods that are spawned on the fly by an external application. These pods use images tagged latest and I'd like to ignore them in Popeye's reports.

All those pods share the same prefix (say foobar-) so it'd be nice to have globbing support in the exclude lists. That way one could write the following:

popeye:
  pod:
    exclude:
      - foobar-*

Info: Image pull secret might not be used




Describe the bug
Having secrets used only as imagePullSecret causes the info "secret might not be in use".

To Reproduce
Steps to reproduce the behavior:

  1. Create a secret referenced only as imagePullSecret
  2. run popeye
  3. Check section "Secrets", it will be marked with "Used?" (Info)

Expected behavior
Popeye detects that the secret is in use as imagePullSecret


Versions (please complete the following information):

  • OS: macOS
  • Popeye 0.3.6
  • K8s 1.12.6


StatefulSet incorrectly determines apiVersion




Describe the bug
The StatefulSet sanitizer (and probably others also) determine the apiVersion incorrectly when the StatefulSet is created by an operator.

To Reproduce
Steps to reproduce the behavior:

  1. Install the prometheus-operator.
  2. Look at the alertmanager-main StatefulSet
  3. Run popeye -l warn -s sts
  4. See error

Expected behavior
The correct apiVersion is found, and violation is only reported when it's not the current one.


Versions (please complete the following information):

  • OS: OSX
  • Popeye 0.4.3
  • K8s 1.14.8

Additional context
The problem is due to the fact that the operator adds the CRD source as kubectl.kubernetes.io/last-applied-configuration annotation. This includes, in the case of the prometheus-operator, its own apiVersion monitoring.coreos.com/v1 for the kind Alertmanager.

In sanitize/sts.go, line 56, Popeye first tries to determine the apiVersion from that annotation, and only in case of errors falls back to deriving it from the selfLink. If the fallback were used as the default, the error would not appear.
I don't know the background of the approach of getting the apiVersion from the annotation, so I cannot estimate whether this change would cause problems somewhere else...

popeye via Docker - Set endpoint apiserver

Hello,

How to set my api endpoint on docker popeye?

I am using
docker build .

Sending build context to Docker daemon 4.068MB
Step 1/11 : FROM golang:1.12.3-alpine AS build
 ---> 821acdc20eb8
Step 2/11 : ENV VERSION=v0.3.0 GO111MODULE=on PACKAGE=github.com/derailed/popeye
 ---> Using cache
 ---> 0a401431d7d7
Step 3/11 : WORKDIR /go/src/$PACKAGE
 ---> Using cache
 ---> 965320b59c77
Step 4/11 : COPY go.mod go.sum main.go ./
 ---> Using cache
 ---> 2d628072a157
Step 5/11 : COPY internal internal
 ---> Using cache
 ---> cd745293c32c
Step 6/11 : COPY pkg pkg
 ---> Using cache
 ---> e885f8bddbb9
Step 7/11 : COPY cmd cmd
 ---> Using cache
 ---> bba94286a838
Step 8/11 : RUN apk --no-cache add git ; CGO_ENABLED=0 GOOS=linux go build -o /go/bin/popeye -ldflags="-w -s -X $PACKAGE/cmd.version=$VERSION" *.go
 ---> Using cache
 ---> 7860fc5c32a9
Step 9/11 : FROM alpine:3.9.3
 ---> cdf98d1859c1
Step 10/11 : COPY --from=build /go/bin/popeye /bin/popeye
 ---> Using cache
 ---> 73aa47a20000
Step 11/11 : ENTRYPOINT [ "/bin/popeye" ]
 ---> Using cache
 ---> 00da1ca5cf57
Successfully built 00da1ca5cf57

after that, i start my docker using:

docker run 00da1ca5cf57

I receiving:
PERSISTENTVOLUMES (2 SCANNED)                        πŸ’₯ 2 😱 0 πŸ”Š 0 βœ… 0 0Ωͺ
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
Β· pods...........................................................................................πŸ’₯
  πŸ’₯ Get http://localhost:8080/api/v1/pods: dial tcp 127.0.0.1:8080: connect: connection refused.
Β· services.......................................................................................πŸ’₯
  πŸ’₯ Get http://localhost:8080/api/v1/persistentvolumes: dial tcp 127.0.0.1:8080: connect: connection refused.

How to i set my .kube/config to use in my docker popeye?

Ingresses can not be found on clusters running K8s 1.13




Describe the bug
When running popeye against a 1.13 cluster, in which the ingresses still need to have `apiVersion: extensions/v1beta1`, it fails with the following message:

INGS (1 SCANNED)                                                               πŸ’₯ 1 😱 0 πŸ”Š 0 βœ… 0 0Ωͺ
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
  Β· ingresses......................................................................................πŸ’₯
    πŸ’₯ the server could not find the requested resource.

To Reproduce
Steps to reproduce the behavior:

  1. Running a cluster with K8s 1.13, and existing ingresses, just run popeye -s ing
  2. See error

Expected behavior
Ingresses are being found & sanitized.

Versions (please complete the following information):

  • OS: Mac OSX Mojave (10.14.6)
  • Popeye [e.g. 0.4.3]
  • K8s [e.g. 1.13.0]

Additional context
I was able to fix this locally by using Ingress from k8s.io/api/extensions/v1beta1 instead of k8s.io/api/networking/v1beta1, but this breaks compatibility with newer K8s versions. I don't know how to fix this properly... ☹️

Integration with kube-score

Again a nice initiative. Thank you for the effort :)

How is this different from kube-score. They both seem to do some kind of static analysis on k8s configurations.
https://github.com/zegl/kube-score

It would be nice to integrate both and have a nice experience from k9s.

I guess this will finally lead to a rules-based engine, something like OPA, which has some more active elements like admission control rather than just recommendations.

What is the meaning of the icons in the output?




Describe the bug
The output shows icons with various status counts next to them. What is the meaning of these within each section of the output generated by Popeye?

To Reproduce
Steps to reproduce the behavior:
Merely run Popeye.

Expected behavior
The README.md or some docs should describe the various output's meaning.

Versions (please complete the following information):

  • OS: OSX
  • Popeye 0.1.4
  • openshift v3.11.43
  • kubernetes v1.11.0+d4cacc0

Pod-level messages are shown five times per pod




Describe the bug
When pod-level messages are shown in the console report, they appear five times instead of only once.

To Reproduce
Steps to reproduce the behavior:

  1. Run popeye against a cluster which has pod-level issues, e.g. POP-110
  2. See report

Expected behavior
See the message only once per pod.

Screenshots

  · namespace/pod................................................................................................................😱
      😱 [POP-110] Memory Current/Request (2866Mi/2861Mi) reached user 80% threshold (100%).
      😱 [POP-110] Memory Current/Request (2866Mi/2861Mi) reached user 80% threshold (100%).
      😱 [POP-110] Memory Current/Request (2866Mi/2861Mi) reached user 80% threshold (100%).
      😱 [POP-110] Memory Current/Request (2866Mi/2861Mi) reached user 80% threshold (100%).
      😱 [POP-110] Memory Current/Request (2866Mi/2861Mi) reached user 80% threshold (100%).

Versions (please complete the following information):

  • OS: OSX
  • Popeye 0.6.1
  • K8s 1.15.0


Popeye uses wrong field (limits instead of requests) in resource utilisation checking




Describe the bug
Popeye uses limits instead of requests to compare actual usage to stated usage, so it results in warnings about over-utilisation i.e.

kubectl reports requests/limits

kubectl get statefulsets.apps mongodb-primary -o json  | jq '.spec.template.spec.containers[].resources.requests'
{
  "cpu": "10m",
  "memory": "128Mi"
}

kubectl get statefulsets.apps mongodb-primary -o json  | jq '.spec.template.spec.containers[].resources.limits'
{
  "cpu": "600m",
  "memory": "612Mi"
}

and Popeye stated actual requests are

default/mongodb-primary........................................................................😱
    😱 CPU over allocated. Requested:600m - Current:14m (4286%).
    😱 Memory over allocated. Requested:612Mi - Current:149Mi (411%).

To Reproduce
Steps to reproduce the behavior:

  1. Compare the Popeye report and the kubectl report

Expected behavior
Popeye doesn't report warning and uses correct fields

Versions (please complete the following information):

  • OS: OSX
  • Popeye 0.3.6
  • K8s Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.7-gke.17"


Add outcome issue detail to the prometheus report.

As a prometheus user I need to have a label listing the outcome issues found in order to build a detailed alert via alertmanager (alert component of the prometheus stack).

I have an initial proposal for review:
#68

Output as Yaml or JSON?




Is your feature request related to a problem? Please describe.
I wanted to use this tool as part of a regular set of scans that run against our cluster. We could then output the logs of this scan to an elasticsearch instance and visualise the current (and previous) score of our cluster.

Describe the solution you'd like
The kubectl CLI provides a -o flag that allows yaml, json, wide etc. Outputting the contents of the report into a parseable format like yaml or json would be perfect.

Describe alternatives you've considered
I played around with some sed scripts but it felt a little hacky.


Boom! runtime error: invalid memory address or nil pointer dereference

Hi,

Great tool, looks very promising. Unfortunately not working for me. Not sure if I am doing anything wrong or it's a bug. The spinach file is the same one you have shared as a sample.

I am using Osx and popeye version is 0.3.10.

➜ popeye popeye -f spinach.yaml
[Popeye ASCII banner]

Boom! runtime error: invalid memory address or nil pointer dereference

➜ popeye popeye --context k8s-cluster.ci.xooa.io
[Popeye ASCII banner]

Boom! runtime error: invalid memory address or nil pointer dereference

Work with less permissions (namespace only)




Is your feature request related to a problem? Please describe.
When having restricted access to the cluster, popeye doesn't seem to work.
If you only have access to resources in a namespace you will get messages like:

message: 'configmaps is forbidden: User "system:serviceaccount:xxx:xxx"
          cannot list resource "configmaps" in API group "" at the cluster scope'

although a request like this works (same kubeconfig):
kubectl get pvc -n "namespace"

Describe the solution you'd like
Maybe i'm using popeye wrong or it is not supported atm?

Describe alternatives you've considered
As we are consumers of namespace we don't have access to serviceaccounts with more permissions on cluster level.

Additional context
These are the permissions we are missing due to restricted access:

- apiGroups: ["rbac.authorization.k8s.io"]
  resources:
  - clusterroles
  - clusterrolebindings
  - roles
  - rolebindings
  verbs:     ["get", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources:
  - pods
  - nodes
  verbs:     ["get", "list"]

Flag -n seems not to work for pods




Describe the bug
Running popeye -n <my_namespace> gives results for pods in all namespaces.
Results for services are filtered correctly.

Expected behavior
Running popeye -n <my_namespace> should give me results only for the desired namespace.

Versions (please complete the following information):

  • OS: macOS
  • Popeye 0.1.4
  • K8s 1.12.6

Additional context
REALLY useful tool and again (-> k9s) a great job.
Thank you!

Inaccurate testing of service account tokens

We ran popeye with cluster-admin permissions in our 1.13 EKS cluster and it claims all service accounts in our cluster reference a secret that does not exist. Snippet of the output:

  Β· kube-system/external-dns.......................................................................πŸ’₯
    πŸ’₯ [POP-304] References a secret "external-dns-token-tb78n" which does not exists.
  Β· kube-system/fluentd-elasticsearch..............................................................πŸ’₯
    πŸ’₯ [POP-304] References a secret "fluentd-elasticsearch-token-hq72l" which does not exists.
  Β· kube-system/generic-garbage-collector..........................................................πŸ’₯
    πŸ’₯ [POP-304] References a secret "generic-garbage-collector-token-cnlpn" which does not exists.

But looking in the kube-system namespace, the secrets are there:

$ kubectl get serviceaccount -n kube-system \
>   external-dns fluentd-elasticsearch generic-garbage-collector \
>   -o=custom-columns=NAME:.metadata.name,Secrets:.secrets[*].name
NAME                        Secrets
external-dns                external-dns-token-tb78n
fluentd-elasticsearch       fluentd-elasticsearch-token-hq72l
generic-garbage-collector   generic-garbage-collector-token-cnlpn
$ kubectl get secret -n kube-system |
>   grep -e external-dns -e fluentd-elasticsearch -e generic-garbage-collector
external-dns-token-tb78n                         kubernetes.io/service-account-token   3      25d
fluentd-elasticsearch-token-hq72l                kubernetes.io/service-account-token   3      20d
generic-garbage-collector-token-cnlpn            kubernetes.io/service-account-token   3      34d

Not sure if it's useful, but we invoke popeye (using the cluster-admin ClusterRole) like this:

popeye --all-namespaces --over-allocs

Popeye version:

Version:   0.4.2
Commit:    ca409ed9c1a9da98986990242f7033563abe2a1c

Kubernetes cluster version:

Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.8-eks-a977ba", GitCommit:"a977bab148535ec195f12edc8720913c7b943f9c", GitTreeState:"clean", BuildDate:"2019-07-29T20:47:04Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

I think the issue is somewhere in here, but I don't know enough Go to figure it out :(

func (s *ServiceAccount) checkSecretRefs(fqn string, refs []v1.ObjectReference) {
	for _, ref := range refs {
		sfqn := cache.FQN(ref.Namespace, ref.Name)
		if _, ok := s.ListSecrets()[sfqn]; !ok {
			s.AddCode(304, fqn, sfqn)
		}
	}
}

Scan all namespaces




Is your feature request related to a problem? Please describe.
With the -n flag you can choose the namespace you'd like to scan; if omitted, the one set in the context is used, or default(?)

Describe the solution you'd like
adding all to the flag scans all namespaces

Describe alternatives you've considered
maybe an all: true under namespaces in the spinach config file

Node test does not detect statefulsets when analyzing taints




Describe the bug
I have GPU nodes with a taint on them. On those nodes, I do not have any deployments, but I do have a statefulset that tolerates the taint. Popeye reports, "Found taint `nvidia.com/gpu but no pod can tolerate." It appears that Popeye only analyzed deployments, not pods, and definitely not statefulsets.

Also a couple of English grammar bits. The open quote is not closed properly in your error message. It should read:

 Found taint `nvidia.com/gpu' but no pod can tolerate.

not: (notice the missing end quote)

 Found taint `nvidia.com/gpu but no pod can tolerate.

Also, it's a bit clunky wording. "that" would be a better word choice than "but":

 Found taint `nvidia.com/gpu' that no pod can tolerate.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy a node pool with taints.
  2. Deploy a statefulset to the node pool that tolerates the taint.
  3. Run popeye. Note that it reports the nodes have no pods that can tolerate the taint.

Expected behavior
It should analyze pods, not deployments. Pods can come from jobs, daemonsets, and statefulsets, as well as deployments.

Versions (please complete the following information):

  • OS: [OSX]
  • Popeye [0.1.0]
  • K8s [1.12.6]

Containers are running as root




Describe the bug
I get a warning even though the containers don't run as the root user.

My Dockerfile

FROM microsoft/dotnet:2.1-sdk
WORKDIR /build_dir
COPY . .

RUN dotnet restore /build_dir/src/Banners/Banners.csproj
RUN dotnet restore /build_dir/tests/Banners.IntegrationTests/Banners.IntegrationTests.csproj
RUN dotnet restore /build_dir/tests/Banners.UnitTests/Banners.UnitTests.csproj
RUN dotnet build /build_dir/src/Banners/Banners.csproj
RUN dotnet test /build_dir/tests/Banners.UnitTests/Banners.UnitTests.csproj
RUN dotnet publish /build_dir/src/Banners/Banners.csproj -o /publish

WORKDIR /publish

RUN groupadd -r storefront && useradd -r -g storefront storefront
RUN chown -R storefront:storefront /build_dir
USER storefront

ENTRYPOINT ["dotnet", "Banners.dll"]

Deployment.yaml

And I added this block on my deployment.yaml

securityContext:
  runAsUser: 999
  runAsGroup: 999
  runAsNonRoot: true
  allowPrivilegeEscalation: false

Versions:

  • Popeye 0.4.3
  • K8s 1.12.2

What am I doing wrong? Thanks for your help.

HPA sanitizer calculates cluster CPU & mem capacity incorrectly




Describe the bug
The HPA sanitizer is miscalculating the available CPU and memory amounts in ListClusterMetrics (internal/cache/no_mx.go, line 25).
Instead of calculating the total amount available, it sums up the usage, resulting in a completely incorrect cluster capacity.

To Reproduce
Steps to reproduce the behavior:

  1. Create a HPA for a deployment with minimal resource usage
  2. Run popeye -l warn -s hpa -A
  3. Compare your node capacity with the reported cluster capacity.

Expected behavior
I would expect the cluster capacity being calculated based on resources available, in the best case the allocatable values.

Versions (please complete the following information):

  • OS: OSX
  • Popeye 0.4.3
  • K8s 1.14.8

v1.4 dies with "runtime error: integer divide by zero" on linux




Describe the bug
When running popeye, the "Connectivity" and "Metrics" steps seem to be ok (have a checkmark), but then i get (removed some newlines)

SUMMARY
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
SERVICEACCOUNTS
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
πŸ’₯ Scan failed!: the server could not find the requested resource (get clusterrolebindings.rbac.authorization.k8s.io)
SERVICES
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
πŸ’₯ Scan failed!: the server could not find the requested resource (get services)
PODS
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
πŸ’₯ Scan failed!: the server could not find the requested resource (get pods)
NAMESPACES
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
πŸ’₯ Scan failed!: the server could not find the requested resource (get namespaces)
NODES
β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…β”…
πŸ’₯ Scan failed!: the server could not find the requested resource (get nodes)

panic: runtime error: integer divide by zero

goroutine 1 [running]:
github.com/derailed/popeye/pkg.(*Popeye).printSummary(0xc00054bd40)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:95 +0x4e8
github.com/derailed/popeye/pkg.(*Popeye).Sanitize(0xc00054bd40)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:76 +0x4fa
github.com/derailed/popeye/cmd.doIt(0x21b02e0, 0x21dbac8, 0x0, 0x0)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:56 +0xdf
github.com/spf13/cobra.(*Command).execute(0x21b02e0, 0xc0000ae170, 0x0, 0x0, 0x21b02e0, 0xc0000ae170)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0x21b02e0, 0x0, 0x0, 0xc0003b7f88)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/derailed/popeye/cmd.Execute()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:43 +0x32
main.main()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/main.go:19 +0x20

To Reproduce
Steps to reproduce the behavior:

  1. install popeye using snap on a linux machine (im using ubuntu 18.04)
  2. fire popeye
  3. See error

Expected behavior
Should run through :) I'm also surprised about the

Scan failed!: the server could not find the requested resource (get clusterrolebindings.rbac.authorization.k8s.io)

with 0.1.3 everything works fine, so i doubt that it is a permissions issue.

Screenshots

Versions (please complete the following information):

  • OS: Linux, Ubuntu 18.04, 4.15.0-46-generic
  • Popeye 0.1.4
  • K8s:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.5-gke.5", GitCommit:"2c44750044d8aeeb6b51386ddb9c274ff0beb50b", GitTreeState:"clean", BuildDate:"2019-02-01T23:53:25Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}

Additional context
Also wondering (not that familiar with golang): is it normal to have the path /Users/fernand in the exception message? That user doesn't exist here, but I guess this is some kind of Go package effect?

Output to Slack?




Is your feature request related to a problem? Please describe.
The output from this tool renders nicely in terminals, but when sent to Slack it has lots of unicode and color encoding issues.

Describe the solution you'd like
An optional output mode that's friendlier for sending as a payload to Slack, showing the same information but with fewer RGB colors?

Describe alternatives you've considered
none


--lint level does not affect junit (or json) output

Version:

v0.6.1

Steps to reproduce:

popeye -l error -o junit

Expected:

Only issues with "error" severity are reported

Observed:

All issues are reported as test failures, e.g.
<failure message="[POP-301] Connects to API Server? ServiceAccount token is mounted" type="warn"></failure>

Impact:

Running of popeye as part of a CI pipeline (e.g. Jenkins) is difficult without post-processing the output.

Emoji don't always render: please prefer ASCII or limited Unicode.




Describe the bug
What is the emoji? I just see boxes on my terminal.

To Reproduce

Steps to reproduce the behavior:
Run the tool on Konsole on Linux

Expected behavior
Colorized output or non-emoji textual output.


Versions (please complete the following information):

  • OS: Linux, Terminal is Konsole on KDE5
  • Popeye 0.1.4
  • K8s: 1.9.6 for this cluster

Additional context

This is both a cross-platform and an accessibility issue.

Index out of bounds on lint nodes




Describe the bug
After linting nodes panic: index out of range is being thrown.

To Reproduce
Steps to reproduce the behavior:
This error occurs consistently on multiple clusters in our environment.

Expected behavior
lint should complete

Screenshots

(etcd cluster is also master role)

[...]
NODES                                                     💥 0 😱 5 🔊 0 ✅ 6 55٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · ip-10-46-33-196.ec2.internal...............................................✅
  · ip-10-46-33-63.ec2.internal................................................😱
    😱 Found taint `dedicated but no pod can tolerate.
  · ip-10-46-34-24.ec2.internal................................................😱
    😱 Found taint `dedicated but no pod can tolerate.
  · ip-10-46-35-165.ec2.internal...............................................✅
  · ip-10-46-38-218.ec2.internal...............................................✅
  · ip-10-46-38-51.ec2.internal................................................😱
    😱 Found taint `dedicated but no pod can tolerate.
  · ip-10-46-39-186.ec2.internal...............................................✅
  · ip-10-46-39-50.ec2.internal................................................😱
    😱 Found taint `dedicated but no pod can tolerate.
  · ip-10-46-40-86.ec2.internal................................................😱
    😱 Found taint `dedicated but no pod can tolerate.
  · ip-10-46-41-33.ec2.internal................................................✅
  · ip-10-46-43-8.ec2.internal.................................................✅

panic: runtime error: index out of range

goroutine 1 [running]:
github.com/derailed/popeye/internal/k8s.(*Client).InUseNamespaces(0xc0000ae940, 0xc000573300, 0x1, 0x1)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/k8s/client.go:116 +0x29f
github.com/derailed/popeye/internal/linter.(*Namespace).Lint(0xc0000ba6d0, 0x2276c40, 0xc00027f780, 0x206bc9b, 0x2)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/linter/ns.go:30 +0xcf
github.com/derailed/popeye/pkg.(*Popeye).Sanitize(0xc000709d40)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:62 +0x217
github.com/derailed/popeye/cmd.doIt(0x2db09a0, 0x2ddbcd8, 0x0, 0x0)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:56 +0xdf
github.com/spf13/cobra.(*Command).execute(0x2db09a0, 0xc00003a020, 0x0, 0x0, 0x2db09a0, 0xc00003a020)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0x2db09a0, 0x0, 0x0, 0xc0003fff88)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/derailed/popeye/cmd.Execute()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:43 +0x32
main.main()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/main.go:19 +0x20
$ kubectl get nodes
NAME                           STATUS   ROLES    AGE     VERSION
ip-10-46-33-196.ec2.internal   Ready    worker   6h20m   v1.12.3
ip-10-46-33-63.ec2.internal    Ready    master   6h30m   v1.12.3
ip-10-46-34-24.ec2.internal    Ready    master   6h30m   v1.12.3
ip-10-46-35-165.ec2.internal   Ready    master   6h20m   v1.12.3
ip-10-46-38-218.ec2.internal   Ready    worker   6h20m   v1.12.3
ip-10-46-38-51.ec2.internal    Ready    master   6h29m   v1.12.3
ip-10-46-39-186.ec2.internal   Ready    master   6h20m   v1.12.3
ip-10-46-39-50.ec2.internal    Ready    master   6h28m   v1.12.3
ip-10-46-40-86.ec2.internal    Ready    master   6h29m   v1.12.3
ip-10-46-41-33.ec2.internal    Ready    worker   6h20m   v1.12.3
ip-10-46-43-8.ec2.internal     Ready    master   6h28m   v1.12.3

Versions (please complete the following information):

  • OS: macOS 10.14.3
  • Popeye 0.1.2
  • K8s 1.12.3

Additional context
Add any other context about the problem here.

Send report to slack




I'd like to send the report to the appropriate team. If I can send the report to a specific channel in Slack, that would be great.

You could reuse the output parameter and add the webhook URL of the Slack API... maybe:

-o slack --slack-address

Disable over allocation




Is your feature request related to a problem? Please describe.
The problem is that the over-allocation check is incredibly subjective, since it seems to just grab the current metric for each service. If you want to sanity-test a cluster that may have absolutely no load on it, you will always get piles of warnings that are for the most part meaningless. In a sanity test of a cluster's configuration, in some cases you only care about under-allocation, as your requests/limits are set for real production values.

Describe the solution you'd like
A way to disable the over-allocation test. I tried playing with the values and couldn't get it to just go away.
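For context, the relevant knobs live under allocations in the spinach file. A minimal sketch of the documented keys (the values below are illustrative, and as noted above tuning them may not fully silence the check on an idle cluster):

popeye:
  allocations:
    cpu:
      # Flag CPU over-allocation only beyond this percentage at current load.
      overPercUtilization: 1000
      # Flag CPU under-allocation beyond this percentage at current load.
      underPercUtilization: 200
    memory:
      overPercUtilization: 1000
      underPercUtilization: 200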

Describe alternatives you've considered
None

Additional context
Add any other context or screenshots about the feature request here.

EKS resources are considered unused

When deploying on AWS, EKS creates some default namespaces, and Popeye complains about them:

Describe the bug

  · kube-node-lease................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · kube-public....................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.

To Reproduce
Steps to reproduce the behavior:

  1. Create cluster with EKS
  2. Run popeye

Expected behavior

Default setup on EKS has no warnings
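In the meantime, these EKS-managed namespaces could be excluded via the spinach file. A sketch, assuming the namespace linter key follows the same exclude syntax used elsewhere in this document:

popeye:
  excludes:
    namespace:
      - name: kube-node-lease
      - name: kube-public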

Warning about helm releases

Describe the bug

Popeye complains about Helm releases, which Helm stores in Secrets since 3.0.0:

    🔊 [POP-400] Used? Unable to locate resource reference.
  · default/sh.helm.release.v1.external-dns.v4.....................................................🔊

To Reproduce
Steps to reproduce the behavior:

  1. Install something with helm 3
  2. Run popeye

Expected behavior
Helm secrets are considered used
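Until Helm release Secrets are recognized as in use, a name-regex exclusion is a possible stopgap. A sketch, assuming the secret linter key and the rx: matching shown elsewhere in this document:

popeye:
  excludes:
    secret:
      - name: 'rx:sh\.helm\.release'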

Service sanitizer "No target ports match service port `%s" has no code and cannot be filtered




Describe the bug
This service sanitizer report does not have any associated code, and cannot be filtered out by spinach yaml.

To Reproduce
Steps to reproduce the behavior:

  1. Create a Service with a selector matching pods in a deployment
  2. Configure the Pod so that it does not have containers with ports matching all ports in the Service
  3. Run popeye -s svc -f spinach.yaml

Sample bad service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"pilot","chart":"pilot","heritage":"Tiller","istio":"pilot","release":"istio"},"name":"istio-pilot","namespace":"istio-system"},"spec":{"ports":[{"name":"grpc-xds","port":15010},{"name":"https-xds","port":15011},{"name":"http-legacy-discovery","port":8080},{"name":"http-monitoring","port":15014}],"selector":{"istio":"pilot"}}}
  creationTimestamp: "2020-01-22T19:49:37Z"
  labels:
    app: pilot
    chart: pilot
    heritage: Tiller
    istio: pilot
    release: istio
  name: istio-pilot
  namespace: istio-system
  resourceVersion: "826"
  selfLink: /api/v1/namespaces/istio-system/services/istio-pilot
  uid: 52781a08-3d50-11ea-aab9-005056a6315d
spec:
  clusterIP: 10.43.30.154
  ports:
  - name: grpc-xds
    port: 15010
    protocol: TCP
    targetPort: 15010
  - name: https-xds
    port: 15011
    protocol: TCP
    targetPort: 15011
  - name: http-legacy-discovery
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: http-monitoring
    port: 15014
    protocol: TCP
    targetPort: 15014
  selector:
    istio: pilot
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Sample spinach.yaml

popeye:
  excludes:
    service:
    - name: "rx:istio-system"

Observed behaviour

SERVICES (57 SCANNED)                                                        💥 1 😱 0 🔊 0 ✅ 56 98٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · istio-system/istio-pilot.......................................................................💥
    💥 No target ports match service port `TCP:http-monitoring:15014.

Expected behavior
The problem service is filtered out by the exclusion list

SERVICES (51 SCANNED)                                                       💥 0 😱 0 🔊 0 ✅ 51 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.
SERVICES (57 SCANNED)                                                        💥 1 😱 0 🔊 0 ✅ 56 98٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · istio-system/istio-pilot.......................................................................💥
    💥 [POP-1106] No target ports match service port TCP:http-monitoring:15014.

Screenshots
If applicable, add screenshots to help explain your problem.

Versions (please complete the following information):

  • OS: Linux
  • Popeye 0.6.2
  • K8s 1.14.1-k3s.4

Additional context
The bad service comes from an istio helm chart.
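For reference, once the finding carries a code (POP-1106 in the output above), an exclusion along these lines should become possible, assuming your Popeye version supports per-entry codes filters in the spinach file:

popeye:
  excludes:
    service:
      - name: "rx:istio-system"
        codes:
          - 1106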

xUnit compatible output

Is your feature request related to a problem? Please describe.
Would be nice if popeye had xUnit (JUnit XML) compatible output.

This is quite a common format for Jenkins artifacts.
https://jenkins.io/blog/2016/10/31/xunit-reporting/
https://llg.cubic.org/docs/junit/

Describe the solution you'd like
Running

popeye -o junit-xml

would produce XML output.

Describe alternatives you've considered
JSON and YAML output already exist, but they would require additional parsing with other tools.

Additional context
This would generally improve Popeye's integration with some CI/CD tools, where the jurassic output is too ancient while the default output is too fancy :)

HTML friendly output




Is your feature request related to a problem? Please describe.
We use Jenkins to run Popeye and want to present the result as an HTML report.

Describe the solution you'd like
Either provide direct HTML output, or the jurassic output should use HTML-friendly characters (see screenshot).

Describe alternatives you've considered
We also use the JUnit format and let Jenkins read that, but the normal text output is much clearer for a developer.

Additional context
[screenshot]

Bug when running on k8s 1.11.9

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:
Run application

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
See code output below

Versions (please complete the following information):

  • Linux
  • Popeye 0.1.3
  • K8s 1.11.9

Additional context
Add any other context about the problem here.

panic: strings: negative Repeat count

goroutine 1 [running]:
strings.Repeat(0x146b985, 0x1, 0xfffffffffffffffd, 0xc00036a76c, 0x3)
	/usr/local/Cellar/go/1.12.1/libexec/src/strings/strings.go:533 +0x5ca
github.com/derailed/popeye/internal/report.Write(0x1647f20, 0xc00061fd80, 0x2, 0x1, 0xc000288460, 0x4e)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/report/writer.go:81 +0x19c
github.com/derailed/popeye/pkg.(*Popeye).printReport(0xc000769d40, 0x7fb77196e648, 0xc0000f8240, 0xc00035ad74, 0x4)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:130 +0x7a5
github.com/derailed/popeye/pkg.(*Popeye).Sanitize(0xc000769d40)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:74 +0x4e4
github.com/derailed/popeye/cmd.doIt(0x21aa800, 0xc000203540, 0x0, 0x2)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:56 +0xdf
github.com/spf13/cobra.(*Command).execute(0x21aa800, 0xc00003a0a0, 0x2, 0x2, 0x21aa800, 0xc00003a0a0)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0x21aa800, 0x0, 0x0, 0xc000381f88)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/derailed/popeye/cmd.Execute()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:43 +0x32
main.main()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/main.go:19 +0x20

panic analyzing old cluster




Describe the bug
When analyzing my Kubernetes cluster, I get a panic. If you can provide some pointers, I'm happy to try and fix this bug so that it works for us.

panic: strings: negative Repeat count

goroutine 1 [running]:
strings.Repeat(0x206ba85, 0x1, 0xfffffffffffffffc, 0xc0006e3d60, 0x3)
        /usr/local/Cellar/go/1.12.1/libexec/src/strings/strings.go:533 +0x5ca
github.com/derailed/popeye/internal/report.Write(0x2248c60, 0xc00081e440, 0x1, 0x1, 0xc000f968c0, 0x4f)
        /Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/report/writer.go:81 +0x19c
github.com/derailed/popeye/pkg.(*Popeye).printReport(0xc0008bdd40, 0x681c518, 0xc00000e160, 0xc0000db99c, 0x4)
        /Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:130 +0x7a5
github.com/derailed/popeye/pkg.(*Popeye).Sanitize(0xc0008bdd40)
        /Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:74 +0x4e4
github.com/derailed/popeye/cmd.doIt(0x2db09a0, 0x2ddbcd8, 0x0, 0x0)
        /Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:56 +0xdf
github.com/spf13/cobra.(*Command).execute(0x2db09a0, 0xc0000b2000, 0x0, 0x0, 0x2db09a0, 0xc0000b2000)
        /Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0x2db09a0, 0x0, 0x0, 0xc0003e5f88)
        /Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
        /Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/derailed/popeye/cmd.Execute()
        /Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:43 +0x32
main.main()
        /Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/main.go:19 +0x20

To Reproduce
Steps to reproduce the behavior:

  1. connect to my old cluster
  2. run popeye
  3. panic

Expected behavior
I get a full report.

Versions (please complete the following information):

  • OS: OSX
  • Popeye 0.1.2
  • K8s 1.8.15

Allow suppressing specific checks / reports for specific instances




Is your feature request related to a problem? Please describe.
Some sanitizers produce warnings that make sense most of the time, but on some objects I would like to be able to accept those "issues".
For example, the "container runs as root" check is very helpful 98% of the time, but some very infrastructure-related pods require this.

Describe the solution you'd like
It would be nice if there were a way to skip specific aspects of a sanitizer by giving names or regexes for objects on which they should not be run, or at least on which they should not be reported (and maybe even not counted toward the overall score).

Describe alternatives you've considered
Just excluding the objects (e.g. pods) is not enough, since this will ignore all checks on them.

Additional context
No additional context.

Junit shows all tests as failed in jenkins

It is super exciting that there is now JUnit support! Our Jenkins job is showing all of the tests as failed in the results. I don't have any output to paste right now. If you are unable to see what's going on, I'll be happy to try to recreate it in my private profile.

  • run tests and pipe the output to a results file (the --save flag was storing the file in a location in the Jenkins workspace that our job does not have access to)
  • use junit <path to results file> to chart the results

Helm revision configmaps which are unused always

Right now the ConfigMap linter flags as unused the ConfigMaps created by Helm to keep track of revisions. It would be great to ignore these unused ConfigMaps, since they are managed by Helm and kept for historical reasons.
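A possible interim exclusion, sketched under the assumption that the configmap linter accepts the same rx: name regexes as the other linters (Helm 2 names its revision ConfigMaps <release>.vN, so the pattern below is only illustrative):

popeye:
  excludes:
    configmap:
      - name: 'rx:\.v\d+'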

Add command line option to report only above a specific level




Is your feature request related to a problem? Please describe.
To get a quick overview, it would be nice to reduce the output to just those results at or above a specified level, e.g. WARN.

Describe the solution you'd like
If I provided a command line option, e.g. --min-level warn, I would get a report that contains only the results with a level of WARN or ERROR.

Describe alternatives you've considered
None.

Additional context
No additional context.

The k8s manifests use a version of popeye that is not found on Docker Hub.




Describe the bug
If you try to install Popeye as described in https://github.com/derailed/popeye#popeye-in-cluster, you get an image pull error:

Failed to pull image "derailed/popeye:v0.3.3": rpc error: code = Unknown desc = Error response from daemon: manifest for derailed/popeye:v0.3.3 not found

To Reproduce

  1. git clone repo
  2. Run: kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye

Expected behavior
On Docker Hub, image builds need to be set up per release tag. And do not forget to pin the image version in the manifest to a published tag.
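A minimal sketch of the manifest-side fix, assuming the Pod spec under k8s/popeye references the image directly; the key point is to use a tag that is actually published on Docker Hub (the tag below is an assumption):

spec:
  containers:
    - name: popeye
      # Assumption: use a tag that exists on Docker Hub instead of the
      # unpublished v0.3.3 referenced by the manifest.
      image: derailed/popeye:latest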

Does it also warn about resource request/limit ratio?





What about a Popeye Operator




Is your feature request related to a problem? Please describe.
It's not a problem, but it would be nice to have Popeye running on a cluster as a Kubernetes Operator, so users would get reports directly in the cluster (we could have static content served by NGINX, or events and mail notifications). Plus, the Operator would have "real time" data to work with.

Describe the solution you'd like
Creating a Popeye Operator (scope TBD) to include all its features into a cluster as a Kubernetes native application.

Describe alternatives you've considered
None

Additional context
I'm the maintainer of the Nexus and Kogito Operators. If you consider this project, we can sit down and discuss it further. I'm willing to help.

:-)

integrating reports into elasticsearch and kibana




Is your feature request related to a problem? Please describe.
Provide the CronJob report to Elasticsearch and show it on a Kibana dashboard.

Describe the solution you'd like
Consider integrating reports into Elasticsearch; this would need a service responsible for delivering report information to Elasticsearch.

Describe alternatives you've considered
K8s cluster can be provided for analysis.

v0.3.0 crashes at start: runtime error: invalid memory address or nil pointer dereference




Describe the bug
Running popeye by just typing "popeye" on the shell leads to the following error:
runtime error: invalid memory address or nil pointer dereference.

The logfile is empty.

To Reproduce
Steps to reproduce the behavior:

  1. Open shell
  2. type popeye
  3. press enter
  4. See error

Expected behavior
See popeye work as desired

Screenshots
[screenshot]

[screenshot]

Versions (please complete the following information):

  • OS: macOS
  • Popeye 0.3.0
  • K8s 1.13.5 and 1.12.6

Additional context
... my sincere condolences! ...

Log

8:01AM ERR runtime error: invalid memory address or nil pointer dereference
8:01AM ERR goroutine 1 [running]:
runtime/debug.Stack(0x2e9aae0, 0x20caa03, 0x0)
	/usr/local/Cellar/go/1.12.3/libexec/src/runtime/debug/stack.go:24 +0x9d
github.com/derailed/popeye/cmd.doIt.func1()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:50 +0x15b
panic(0x1f3e620, 0x2e83250)
	/usr/local/Cellar/go/1.12.3/libexec/src/runtime/panic.go:522 +0x1b5
github.com/derailed/popeye/internal/linter.(*Secret).checkContainerRefs(0xc00043a0b0, 0xc0001270e0, 0x25, 0xc000566140, 0x1, 0x1, 0xc000616ea0)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/linter/secret.go:178 +0x26a
github.com/derailed/popeye/internal/linter.(*Secret).lint(0xc00043a0b0, 0xc0004b6ba0, 0xc00039f410, 0xc0003dc480)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/linter/secret.go:85 +0x2ce
github.com/derailed/popeye/internal/linter.(*Secret).Lint(0xc00043a0b0, 0x22e15e0, 0xc0000f8540, 0x20cb19c, 0x3)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/internal/linter/secret.go:51 +0xec
github.com/derailed/popeye/pkg.(*Popeye).sanitize(0xc0000f8500)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:140 +0x1ef
github.com/derailed/popeye/pkg.(*Popeye).Sanitize(0xc0000f8500)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/pkg/popeye.go:106 +0x2b
github.com/derailed/popeye/cmd.doIt(0x2e8d460, 0x2eb87c0, 0x0, 0x0)
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:60 +0x11b
github.com/spf13/cobra.(*Command).execute(0x2e8d460, 0xc00003a1b0, 0x0, 0x0, 0x2e8d460, 0xc00003a1b0)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0x2e8d460, 0x0, 0x0, 0xc0003d9f88)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/derailed/popeye/cmd.Execute()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/cmd/root.go:37 +0x32
main.main()
	/Users/fernand/go_wk/derailed/src/github.com/derailed/popeye/main.go:25 +0x20

Parametrize what is considered a warning or error by the SpinachYAML file

Is your feature request related to a problem? Please describe.
In my scenario I would like the check for the presence of resource requests/limits to be an error.
In the current scenario it is a warning.

Describe the solution you'd like
Parametrize what is considered a warning or error by the SpinachYAML file.
This could be used for any sanitizer.

Port name missing considered an error

Hi, just a brainstorming here.

Why do you consider an unnamed port in a deployment/pod as an error?
Do you have a source in the documentation that supports this decision?

Thanks
