kube-notary's Introduction

kube-notary

A Kubernetes watchdog for verifying image trust with Codenotary Cloud.

Codenotary Cloud for Kubernetes

How it works

kube-notary is a monitoring tool for Continuous Verification (CV) via Codenotary Cloud. The idea behind CV is to continuously monitor your cluster at runtime and be notified when unknown or untrusted container images are running.

Once kube-notary is installed within your cluster, all pods are checked every minute (the interval and other settings can be configured). For each running container in each pod, kube-notary resolves the container's ImageID to the actual image hash and then looks up that hash's signature in Codenotary Cloud.
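
For illustration only, the ImageID that kube-notary resolves for each container is the one reported in the pod's container statuses, which you can inspect yourself:

# list the imageID (registry digest) of every running container in the cluster
kubectl get pods --all-namespaces -o jsonpath="{.items[*].status.containerStatuses[*].imageID}" | tr ' ' '\n'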

Furthermore, kube-notary provides a built-in exporter for sending verification metrics to Prometheus, which can then be easily visualized with the provided Grafana dashboard.

Images you trust can be signed by using the Codenotary vcn CLI tool.

https://infograph.venngage.com/ps/ex4ECrROPCQ/codenotary-for-kubernetes

Install

kube-notary is installed using a Helm chart.

Kubernetes 1.9 or above and Helm 2.8 or above must be installed in your cluster.

First, make sure your local config is pointing to the context you want to use (i.e., check kubectl config current-context). Then, to install kube-notary, the following steps are required:

  • Be sure to have an API key.

Note: You can obtain an API key from CodeNotary Cloud.

  • Install the Helm chart with the following parameters (a filled-in example follows below):
helm install \
    -n kube-notary ../../helm/kube-notary \
    --set image.repository=$KUBE_NOTARY_IMAGE --set image.tag=$KUBE_NOTARY_TAG \
    --set image.pullPolicy=Always \
    --set cnc.host={CNC ip address, default nil} \
    --set cnc.port={CNC port address, default 3324} \
    --set cnc.cert={CNC certificate, default nil} \
    --set cnc.noTls={CNC enable insecure connections, default true} \
    --set cnc.skipTlsVerify={CNC skip tls verification, default false} \
    --set cnc.signerID={CNC parameter used to filter results on a specific signer ID, default nil} \
    --set cnc.ledgerName={CNC ledger name, used when a cross-ledger key is provided to specify the ledger on which future operations will be directed, default nil} \
    --set cnc.apiKey={API Key from CNC} \
    --wait
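
For example, assuming the chart is installed from the repository root and all values below are hypothetical placeholders to be adjusted for your environment:

# hypothetical values - adjust to your environment
export KUBE_NOTARY_IMAGE=codenotary/kube-notary
export KUBE_NOTARY_TAG=latest
export CNC_HOST=cnc.example.com
export CNC_API_KEY=<your api-key from CodeNotary Cloud>

helm install \
    -n kube-notary helm/kube-notary \
    --set image.repository=$KUBE_NOTARY_IMAGE --set image.tag=$KUBE_NOTARY_TAG \
    --set cnc.host=$CNC_HOST \
    --set cnc.port=3324 \
    --set cnc.apiKey=$CNC_API_KEY \
    --wait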

To sign an image, use the vcn CLI. Please contact CodeNotary support for more information.

See the Configuration paragraph for detailed instructions.

Namespaced

If you do not have cluster-wide access, you can still install kube-notary within a single namespace, using:

helm install -n kube-notary helm/kube-notary --set watch.namespace="default"

When so configured, a namespaced Role will be created instead of the default ClusterRole to accommodate Kubernetes RBAC for a single namespace. kube-notary will get permission for, and will watch, the configured namespace only.

Manual installation (without Helm)

Alternatively, it is possible to manually install kube-notary without using Helm. Instructions and templates for manual installation are within the kubernetes folder.

Uninstall

You can uninstall kube-notary at any time using:

helm delete --purge kube-notary

Usage

kube-notary provides both detailed log output and a Prometheus metrics endpoint to monitor the verification status of your running containers. After the installation, you will find instructions on how to access them.

Examples:

  # Metrics endpoint
  export SERVICE_NAME=service/$(kubectl get service --namespace default -l "app.kubernetes.io/name=kube-notary,app.kubernetes.io/instance=kube-notary" -o jsonpath="{.items[0].metadata.name}")
  echo "Check the metrics endpoint at http://127.0.0.1:9581/metrics"
  kubectl port-forward --namespace default $SERVICE_NAME 9581

  # Results endpoint
  export SERVICE_NAME=service/$(kubectl get service --namespace default -l "app.kubernetes.io/name=kube-notary,app.kubernetes.io/instance=kube-notary" -o jsonpath="{.items[0].metadata.name}")
  echo "Check the verification results endpoint at http://127.0.0.1:9581/results"
  kubectl port-forward --namespace default $SERVICE_NAME 9581

  # Stream logs
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=kube-notary,app.kubernetes.io/instance=kube-notary" -o jsonpath="{.items[0].metadata.name}")
  kubectl logs --namespace default -f $POD_NAME

  # Bulk sign all running images
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=kube-notary,app.kubernetes.io/instance=kube-notary" -o jsonpath="{.items[0].metadata.name}")
  kubectl exec --namespace default -t $POD_NAME sh /bin/bulk_sign > vcn_bulk_sign.sh
  chmod +x vcn_bulk_sign.sh && ./vcn_bulk_sign.sh

Metrics

If a Prometheus installation is running within your cluster, metrics provided by kube-notary will be automatically discovered. Furthermore, you can find an example of a preconfigured Grafana dashboard here.
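
If you prefer to inspect the raw metrics by hand, you can also query the endpoint directly once the port-forward from the Usage section is running:

# with the port-forward to the kube-notary service active (see Usage above)
curl -s http://127.0.0.1:9581/metrics | head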

Configuration

By default, kube-notary is installed into the current namespace (you can change this by using helm install --namespace), but it will watch pods in all namespaces.

At install time you can change any value from helm/kube-notary/values.yaml by using Helm's --set option. For example, to instruct kube-notary to check only the kube-system namespace, just use:

helm install -n kube-notary helm/kube-notary --set watch.namespace="kube-system"

Runtime configuration

The following options within helm/kube-notary/values.yaml affect kube-notary's runtime behavior.

# Runtime config
log:
  level: info # verbosity level, one of: trace, debug, info, warn, error, fatal, panic
watch:
  namespace: "" # the namespace name to watch
  interval: 60s # duration of the watching interval
  # ...

During the installation, they are stored in a configmap. Configuration hot-reloading is supported, so you can modify and apply the configmap while kube-notary is running.

For example, to change the watching interval from default to 30s:

kubectl patch configmaps/kube-notary \
    --type merge \
    -p '{"data":{"config.yaml":"log:\n  level: debug\nwatch: \n  namespace: \n  interval: 30s"}}'

FAQ

Why Continuous Verification?

Things change over time. Suppose you signed an image because you trust it. Later, you find a security issue within the image or you just want to deprecate that version. When that happens you can simply use vcn to untrust or unsupport that image version. Once the image is not trusted anymore, thanks to kube-notary you can easily discover if the image is still running somewhere in your cluster.
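
For example (the image reference below is purely illustrative):

# mark a previously trusted image version as untrusted
vcn untrust docker://registry.example.com/myapp:1.2.3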

In general, verifying an image just before its execution is not enough, because the image's status or the image used by a container can change over time. Continuous Verification ensures that you will always be notified if unwanted behavior occurs.

How can I sign my image?

You can easily sign your container images by using the vcn CLI we provide separately.

vcn supports local Docker installations out of the box using the docker:// prefix followed by the image name or image reference.
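
A minimal example (the image name is illustrative; depending on your vcn version the command is notarize or its short alias n):

# notarize (sign) a local Docker image
vcn notarize docker://myapp:1.2.3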

Furthermore, if you want to bulk sign all images running inside your cluster, you will find instructions below to generate a script that automates the process.

Export POD_NAME, setting it to kube-notary's pod name, then run:

kubectl exec --namespace default -t $POD_NAME sh /bin/bulk_sign > vcn_bulk_sign.sh
chmod +x vcn_bulk_sign.sh && ./vcn_bulk_sign.sh

Note that a CodeNotary account and a local installation of vcn are needed. Also, make sure your kubectl is pointing to the context you want to use.

How can I be notified when untrusted images are running?

First, Prometheus and Grafana need to be installed in your cluster.

Then it's easy to create alerts using the provided Grafana dashboard.

Why can't my image be signed? (manifest v2 schema 1)

The image manifest v2 schema 1 format is deprecated in favor of the v2 schema 2 format.

Please move to v2 schema 2 as soon as possible. Usually, you can fix this simply by pushing your image to the registry again.
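
For example, re-pushing with a reasonably recent Docker client is usually enough to regenerate the manifest in the v2 schema 2 format (the image name is illustrative):

docker pull registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0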

Cannot create resource "clusterrolebindings"

Recent versions of Kubernetes employ a role-based access control (or RBAC) system to drive authorization decisions. It might be possible that your account does not have enough privileges to create the ClusterRole needed to get cluster-wide access.

Please use a highly privileged account to install kube-notary. Alternatively, if you don't have cluster-wide access, you can still install kube-notary to work in a single namespace that you can access. See the namespaced installation paragraph for further details.
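
For instance, on clusters where you are allowed to grant yourself admin rights, something like the following (the binding name and user are hypothetical) provides the privileges needed for a cluster-wide installation:

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=<your-user>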

Helm error: release kube-watch failed: namespaces "..." is forbidden

It might be that tiller (Helm's server-side component) does not have permission to install kube-notary.

When working within a role-based access control enabled Kubernetes installation, you may need to add a service account with cluster-admin role for tiller.

The easiest way to do that is to create an rbac-config.yaml by copying and pasting the example provided in the Helm documentation (sketched below), then:
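
For reference, the example from the Helm v2 documentation looks roughly like this (a sketch; double-check it against the Helm documentation for the version you use):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system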

$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller --history-max 200

ServiceMonitor for Prometheus Operator

See #11.

Testing

make test/e2e

Developing

To launch a debug environment with Kubernetes, you can use the make utilities:

make image/debug
make kubernetes/debug

This launches a Kubernetes cluster with kind. A dlv debug server is started inside the pod, making remote debugging possible.
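
A minimal sketch of attaching to the remote debugger, assuming dlv listens on its default port 2345 inside the pod (the pod name and port are assumptions; check the Makefile for the actual values):

# forward the dlv port from the kube-notary pod, then attach
kubectl port-forward $POD_NAME 2345:2345
dlv connect 127.0.0.1:2345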

License

This software is released under the GNU General Public License v3.0.

kube-notary's People

Contributors

leogr, marcosquesada, mmeloni, peet86, sanderson462, simonelazzaris, vchain-us-mgmt, vchaindz


kube-notary's Issues

Handling of updated secrets and configuration

In the CNLC branch, if a secret is updated, kube-notary does not realize it, as it is exposed as an environment variable.

The behavior of kube-notary should be examined and a decision made on how to handle (or document) this case.

As Kubernetes does not appear to provide a solution unless you use, e.g., mounted secrets or Helm tricks, the solution may very well be to ignore the problem and document it properly (a sketch of the mounted-secret approach follows below).
If the route of making kube-notary aware of the changes is chosen, it needs to be something in line with how Kubernetes workloads usually deal with this.
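
A minimal sketch of the mounted-secret approach mentioned above (all names and paths are hypothetical):

# fragment of the kube-notary deployment spec
volumes:
  - name: cnc-api-key
    secret:
      secretName: kube-notary-api-key
containers:
  - name: kube-notary
    volumeMounts:
      - name: cnc-api-key
        mountPath: /etc/kube-notary/secrets
        readOnly: true

Kubernetes refreshes mounted secret files automatically (after a short delay), so kube-notary could re-read the key from disk instead of relying on an environment variable.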

tags and metadata

Add Project and stage tags and notarization metadata.

from:
vcn n --silent --attr GitLab="$CI_COMMIT_SHA" docker://$CONTAINER_IMAGE:$CI_COMMIT_SHA

to

vcn n --silent --attr GitLab="$CI_COMMIT_SHA" --attr project="p1233" --attr stage="test" docker://$CONTAINER_IMAGE:$CI_COMMIT_SHA


Verification of images from private repository with private CA

Hello! Is there any way I could provide my CA cert to the kube-notary instance?
Currently it doesn't work and logs the following errors:

time="2020-06-24T10:40:51Z" level=debug msg="Verification started" log.level=debug podCount=1 trust.keys="[(my_signer_id)]" watch.interval=1m0s watch.namespace=ns
time="2020-06-24T10:40:51Z" level=error msg="Cannot verify \"docker-pullable://repo:image\" in pod \"pod_name\": reading image repo:image@sha256:sha: Get https://repo/v2/: x509: certificate signed by unknown authority"
time="2020-06-24T10:40:51Z" level=debug msg="Sleeping for 1m0s"

I'm using version:

Version:        v0.8.3
Git Rev:        8e8d0e3564e52f8e9a423504d405aafbb1991b44 (master)
UserAgent:      vcn/v0.8.3 (linux; amd64)

Status page broken when results are errored

The embedded status page is not handling the empty hash case (when an error is present).

This is an example of /results output that produces this behavior.

[
  {
    "hash": "",
    "containers": [
      {
        "namespace": "default",
        "pod": "hello-world-deployment-7cb8bf4c69-ch6nl",
        "container": "hello-world",
        "containerID": "docker://beb5c59ce229d19f0931f32af1c8f1ed9a7bb3c3e5d40fc77c90c9d79ae5cd54",
        "image": "cnioimages2609.azurecr.io/hello-world:13",
        "imageID": "docker-pullable://cnioimages2609.azurecr.io/hello-world@sha256:0016fef6ce1d57cb5d25bdbb5f6762165a7d4e73b6f07bc5179d22d094f35f83"
      },
      {
        "namespace": "default",
        "pod": "hello-world-deployment-7cb8bf4c69-v6kt7",
        "container": "hello-world",
        "containerID": "docker://17749bbc9ec13e60efae04ef68b9aa4b36f7fc276c43bf3a31fd71636b7ef224",
        "image": "cnioimages2609.azurecr.io/hello-world:13",
        "imageID": "docker-pullable://cnioimages2609.azurecr.io/hello-world@sha256:0016fef6ce1d57cb5d25bdbb5f6762165a7d4e73b6f07bc5179d22d094f35f83"
      },
      {
        "namespace": "monitoring",
        "pod": "alertmanager-kube-prometheus-0",
        "container": "alertmanager",
        "containerID": "docker://12b812c899f3bef58646e19e8b73dd9a9fc434754759e181394854742c6ccbe5",
        "image": "quay.io/prometheus/alertmanager:v0.15.1",
        "imageID": "docker-pullable://quay.io/prometheus/alertmanager@sha256:89ff5752f4a4b38d8ea8f4ee46ed21625ff5ffa58660500045a6c08f02b54c36"
      },
      {
        "namespace": "monitoring",
        "pod": "alertmanager-kube-prometheus-0",
        "container": "config-reloader",
        "containerID": "docker://76d5a02d9f86b655f305bafc51bec9cd0a932fba777b432c2eb56ac2d402f5ac",
        "image": "quay.io/coreos/configmap-reload:v0.0.1",
        "imageID": "docker-pullable://quay.io/coreos/configmap-reload@sha256:e2fd60ff0ae4500a75b80ebaa30e0e7deba9ad107833e8ca53f0047c42c5a057"
      },
      {
        "namespace": "monitoring",
        "pod": "kube-prometheus-exporter-node-tj7zl",
        "container": "node-exporter",
        "containerID": "docker://cf5501de413e21141c91187b351094ebd6b2c902971605e791ca5710c05a4bc3",
        "image": "quay.io/prometheus/node-exporter:v0.15.2",
        "imageID": "docker-pullable://quay.io/prometheus/node-exporter@sha256:0c7dd2350bed76fce17dff8bd2a2ac599bc989c7328eb77b0751b8024cf0457d"
      },
      {
        "namespace": "monitoring",
        "pod": "kube-prometheus-grafana-6c4dffd84d-q5g2d",
        "container": "grafana",
        "containerID": "docker://87867ccbee43eba688b15eb1da6d43f4bc7bf0fe7e3cb60e04f07a58ed9cfd6e",
        "image": "grafana/grafana:5.0.0",
        "imageID": "docker-pullable://grafana/grafana@sha256:9c86e0950726eb2d38dba6a0fa77e8757b76782a9a3cf56b65fcb689fcfd3b9e"
      },
      {
        "namespace": "monitoring",
        "pod": "kube-prometheus-grafana-6c4dffd84d-q5g2d",
        "container": "grafana-watcher",
        "containerID": "docker://cdb8b903df33eef49f94623182b6c7f5aafdf238a1e96c334e9be78e86c131bc",
        "image": "quay.io/coreos/grafana-watcher:v0.0.8",
        "imageID": "docker-pullable://quay.io/coreos/grafana-watcher@sha256:9180ce0aea9f804bc07a2c6a9f35cb4214ff336fd6253aa428f9e16acbc68b53"
      },
      {
        "namespace": "monitoring",
        "pod": "prometheus-kube-prometheus-0",
        "container": "alerting-rule-files-configmap-reloader",
        "containerID": "docker://da4add402beea4366310f4503b3d6c4afd9fd7a097222c88181b25061edeba6b",
        "image": "quay.io/coreos/configmap-reload:v0.0.1",
        "imageID": "docker-pullable://quay.io/coreos/configmap-reload@sha256:e2fd60ff0ae4500a75b80ebaa30e0e7deba9ad107833e8ca53f0047c42c5a057"
      },
      {
        "namespace": "monitoring",
        "pod": "prometheus-kube-prometheus-0",
        "container": "prometheus",
        "containerID": "docker://86d7a02809e87acbd37110eaa0e88aa0bb8d89bb67f047f8a50c2028ef8cbda4",
        "image": "quay.io/prometheus/prometheus:v2.2.1",
        "imageID": "docker-pullable://quay.io/prometheus/prometheus@sha256:0e90814ff93acf8092d682f898de602b0bbe0fb5224cd99e54b8ff8508c72605"
      },
      {
        "namespace": "monitoring",
        "pod": "prometheus-kube-prometheus-0",
        "container": "prometheus-config-reloader",
        "containerID": "docker://92b92ab685b64978f58b20959d626d24fe7f66302c5762840848dad9f7cca26b",
        "image": "quay.io/coreos/prometheus-config-reloader:v0.20.0",
        "imageID": "docker-pullable://quay.io/coreos/prometheus-config-reloader@sha256:9861d4e2f634740b17e922dc26cc1f9c8673dfa3a20e5cf2a7dfd849d671158e"
      },
      {
        "namespace": "monitoring",
        "pod": "prometheus-operator-545b59ffc9-rl7sm",
        "container": "prometheus-operator",
        "containerID": "docker://4d294430dffc2a516384ccd6f3394c19593354cf205cf1f431fab879df856f57",
        "image": "quay.io/coreos/prometheus-operator:v0.20.0",
        "imageID": "docker-pullable://quay.io/coreos/prometheus-operator@sha256:88cd66e273db8f96cfcce2eec03c04b04f0821f3f8d440396af2b5510667472d"
      }
    ],
    "errors": [
      "image manifest v2 schema 1 is deprecated: quay.io/coreos/prometheus-operator@sha256:88cd66e273db8f96cfcce2eec03c04b04f0821f3f8d440396af2b5510667472d"
    ]
  },
  {
    "hash": "eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c",
    "containers": [
      {
        "namespace": "kube-system",
        "pod": "coredns-69b5b66fd8-jbk48",
        "container": "coredns",
        "containerID": "docker://195f3e736c1a8a16addb47f8b1f2fd796c3f7a46edfcb02e1e33680833d041d2",
        "image": "aksrepos.azurecr.io/mirror/coredns:1.3.1",
        "imageID": "docker-pullable://aksrepos.azurecr.io/mirror/coredns@sha256:638adb0319813f2479ba3642bbe37136db8cf363b48fb3eb7dc8db634d8d5a5b"
      },
      {
        "namespace": "kube-system",
        "pod": "coredns-69b5b66fd8-tptjx",
        "container": "coredns",
        "containerID": "docker://fd9e28e5f0ef74b974185b280ee76a9251712e0156ca3fe984a48ae28b10e4bc",
        "image": "aksrepos.azurecr.io/mirror/coredns:1.3.1",
        "imageID": "docker-pullable://aksrepos.azurecr.io/mirror/coredns@sha256:638adb0319813f2479ba3642bbe37136db8cf363b48fb3eb7dc8db634d8d5a5b"
      }
    ],
    "verification": {
      "level": 1,
      "owner": "0x81576cf4b8c2b9dcaa677d169569bf9c2a7239e9",
      "status": 0,
      "timestamp": "2019-06-06T12:51:55Z"
    }
  },
  {
    "hash": "33813c948942e633d447f370a9003ce2ba7ace3838c16cc0e81cf836ee537552",
    "containers": [
      {
        "namespace": "kube-system",
        "pod": "coredns-autoscaler-65d7986c6b-xrtz4",
        "container": "autoscaler",
        "containerID": "docker://28403e1cf224888bc181e56053d5ac42b94b3fac4a55bcabe2f8826a7b36faba",
        "image": "aksrepos.azurecr.io/mirror/cluster-proportional-autoscaler-amd64:1.3.0",
        "imageID": "docker-pullable://aksrepos.azurecr.io/mirror/cluster-proportional-autoscaler-amd64@sha256:4fd37c5b29a38b02c408c56254bd1a3a76f3e236610bc7a8382500bbf9ecfc76"
      }
    ],
    "verification": {
      "level": 0,
      "owner": "",
      "status": 2,
      "timestamp": ""
    }
  },
  {
    "hash": "d1138d7febb80574b55ab1c66d5878ef29017b10a820280d8ed626f9699e9475",
    "containers": [
      {
        "namespace": "kube-system",
        "pod": "kube-proxy-pflnq",
        "container": "kube-proxy",
        "containerID": "docker://b626137cb3c295bcde3fd0e915493abd443fd5e4426e5ba9a45e1e7636387d89",
        "image": "aksrepos.azurecr.io/mirror/hyperkube-amd64:v1.14.6",
        "imageID": "docker-pullable://aksrepos.azurecr.io/mirror/hyperkube-amd64@sha256:db2120fdf4f18c4a8b7a7e7cb14dd29c376c328f9cd1cc1d9375daa0ce37bf20"
      }
    ],
    "verification": {
      "level": 0,
      "owner": "",
      "status": 2,
      "timestamp": ""
    }
  },
  {
    "hash": "f9aed6605b814b69e92dece6a50ed1e4e730144eb1cc971389dde9cb3820d124",
    "containers": [
      {
        "namespace": "kube-system",
        "pod": "kubernetes-dashboard-6fbc7f598b-d9m6g",
        "container": "main",
        "containerID": "docker://c57860958a7796f8744ebd2b2e3aa28afab9d19dd410344346005e5c83dfe9a0",
        "image": "aksrepos.azurecr.io/mirror/kubernetes-dashboard-amd64:v1.10.1",
        "imageID": "docker-pullable://aksrepos.azurecr.io/mirror/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747"
      }
    ],
    "verification": {
      "level": 0,
      "owner": "",
      "status": 2,
      "timestamp": ""
    }
  },
  {
    "hash": "9801395070f386b1bc07bfe1db04639acf7539f86ef29dc19d191d87d42d6364",
    "containers": [
      {
        "namespace": "kube-system",
        "pod": "metrics-server-66dbbb67db-tpk88",
        "container": "metrics-server",
        "containerID": "docker://d9359b4db42db9c5a2487c1bdd014788556e5368722bec52abfacaad6f06a181",
        "image": "aksrepos.azurecr.io/mirror/metrics-server-amd64:v0.2.1",
        "imageID": "docker-pullable://aksrepos.azurecr.io/mirror/metrics-server-amd64@sha256:220c0ed3451cb95e4b2f72dd5dc8d9d39d9f529722e5b29d8286373ce27b117e"
      }
    ],
    "verification": {
      "level": 0,
      "owner": "",
      "status": 2,
      "timestamp": ""
    }
  },
  {
    "hash": "2d0a693df3ba463723d93444a0ce80550d17fb1e27db59d3d45ad275ffb8e71b",
    "containers": [
      {
        "namespace": "kube-system",
        "pod": "tiller-deploy-675cbc8478-xkc9f",
        "container": "tiller",
        "containerID": "docker://082fd3fb27a9b33891056861c968081a890d76babd2cde249b28e07158fdc218",
        "image": "gcr.io/kubernetes-helm/tiller:v2.14.3",
        "imageID": "docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:c09393087c4be55023f86be979a9dfe486c728703eba996344adc9783f261baa"
      }
    ],
    "verification": {
      "level": 3,
      "owner": "0x068e10d036175b874017320db5a9b852620679c4",
      "status": 0,
      "timestamp": "2019-09-12T13:29:00Z"
    }
  },
  {
    "hash": "5e736ccec00bf578b843ec3fbb37915a10c818e57a8a14c8ce56c5b46c8ab50f",
    "containers": [
      {
        "namespace": "kube-system",
        "pod": "tunnelfront-6df86d6df5-294j2",
        "container": "tunnel-front",
        "containerID": "docker://8290f5f39393a7042ca128c9b482e2d5415d20e6d5ddbfda576ef450792a76a5",
        "image": "aksrepos.azurecr.io/prod/hcp-tunnel-front:v1.9.2-v3.0.7",
        "imageID": "docker-pullable://aksrepos.azurecr.io/prod/hcp-tunnel-front@sha256:31e56b470ad11c9699ead847b652ec65dfac8dc6f89cfe7d2f9005080933e388"
      }
    ],
    "verification": {
      "level": 0,
      "owner": "",
      "status": 2,
      "timestamp": ""
    }
  },
  {
    "hash": "a8e478d80b80eb0f9d653b6b70ae53b80f7f9d9232a338cba3a99ba57e38a0ae",
    "containers": [
      {
        "namespace": "monitoring",
        "pod": "kube-notary-5d8b9cfcdc-vpjjc",
        "container": "kube-notary",
        "containerID": "docker://7b565752aba3e099f06a37cdc8fdb0f04c30a1aa66c6944cf8d68b5b3c9a62b8",
        "image": "codenotary/kube-notary:latest",
        "imageID": "docker-pullable://codenotary/kube-notary@sha256:23bc2bd638d72bf6424e6c697f0db6b9b5a15b00f0049ea069ab44dcb15e6798"
      }
    ],
    "verification": {
      "level": 3,
      "owner": "0x068e10d036175b874017320db5a9b852620679c4",
      "status": 0,
      "timestamp": "2019-09-13T07:51:50Z"
    }
  },
  {
    "hash": "6fc90bfda61575afdc33d4dcd3911ebd08ff85423fcd3f0b614a00573c766483",
    "containers": [
      {
        "namespace": "monitoring",
        "pod": "kube-prometheus-exporter-kube-state-7fdbfbf866-rxl5d",
        "container": "exporter-kube-state",
        "containerID": "docker://2652b5320311d414997c180d177d9bfb9ad8589691b73e6fb4a45c55b90a25f4",
        "image": "gcr.io/google_containers/kube-state-metrics:v1.2.0",
        "imageID": "docker-pullable://gcr.io/google_containers/kube-state-metrics@sha256:953a3b6bf0046333c656fcfa2fc3a08f4055dc3fbd5b1dcdcdf865a2534db526"
      }
    ],
    "verification": {
      "level": 0,
      "owner": "",
      "status": 2,
      "timestamp": ""
    }
  },
  {
    "hash": "5a3d976e568c7cb1c5df5d0803cbe48ba49ca1f0790368e8fca42fe9aa7793b5",
    "containers": [
      {
        "namespace": "monitoring",
        "pod": "kube-prometheus-exporter-kube-state-7fdbfbf866-rxl5d",
        "container": "exporter-kube-state-addon-resizer",
        "containerID": "docker://f9efaecabf63115c438dc48e0fb2d947fb8e9eaaf635bf4115c875be1eb77230",
        "image": "gcr.io/google-containers/addon-resizer-amd64:2.1",
        "imageID": "docker-pullable://gcr.io/google-containers/addon-resizer-amd64@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb"
      }
    ],
    "verification": {
      "level": 0,
      "owner": "",
      "status": 2,
      "timestamp": ""
    }
  }
]

link not working

I am not sure about the state of this project, but the link to Notary Cloud in the README is dead.

Embedded status web page

Add another endpoint that prints out a simple status page with metrics.
Intended for users who do not use Prometheus.

OpenShift environment - Kube-Notary pod runs into permission error

I deployed Kube-Notary into our OpenShift environment using the deployment scripts in this repo (kubernetes/kube-notary/templates):

Step by step I ran:

oc login -u system:admin
kubectl apply -f serviceaccount.yaml
oc adm policy add-scc-to-user privileged -z kube-notary
kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml
kubectl apply -f service.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml

When checking the logs of the kube-notary pod:

kubectl get  pods | grep kube-notary
kubectl logs kube-notary-...

I see the following errors:

Error getting pods: pods is forbidden: User \"system:serviceaccount:test:kube-notary\" cannot list pods at the cluster scope: no RBAC policy matched"

No data is being collected.

image verification fails: Authentication is required

Kube-Notary is manually deployed into my K8s environment, but I noticed some errors in my pod log:

time="2020-04-10T06:27:19Z" level=error msg="Cannot verify \"docker-pullable://...: Authentication is required"

It's a private image registry - how can I enable kube-notary to check these images as well?

401 when reading private image from azurecr.io

Error returned:

time="2019-10-11T14:34:15Z" level=error msg="Cannot verify \"docker-pullable://cnioimages2609.azurecr.io/hello-world@sha256:0016fef6ce1d57cb5d25bdbb5f6762165a7d4e73b6f07bc5179d22d094f35f83\" in pod \"hello-world-deployment-7cb8bf4c69-v6kt7\": reading image cnioimages2609.azurecr.io/hello-world@sha256:0016fef6ce1d57cb5d25bdbb5f6762165a7d4e73b6f07bc5179d22d094f35f83: unsupported status code 401"

Maybe related to this https://github.com/vchain-us/kube-notary/blob/95b8da57040b4db32e78c55a57ad8c58758cc15d/cmd/kubewatch/root.go#L25

ServiceMonitor (for prometheus operator)

In order to make kube-notary discoverable by the Prometheus Operator:

  • the endpoint resource needs the following label:
  labels:
    k8s-app: kube-notary
  • the ServiceMonitor resource has to be created, as follows:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-notary
spec:
  endpoints:
  - interval: 15s
    port: metrics-port
  selector:
    matchLabels:
      k8s-app: kube-notary


TODOs:

  • the required label could be set in the default setup
  • this configuration should be added to the repo

OpenShift - no kube-notary metrics show up in Prometheus

I deployed Kube-Notary using the kubernetes templates as described in issue 12 but couldn't find any image status metrics. Prometheus finds no targets to scrape Kube-Notary.

The Prometheus configuration is default without any changes.
