Comments (10)

BillDett avatar BillDett commented on July 25, 2024

The operator should detect images from either quay.io or an on-premise Quay installation; it was designed to require no user configuration. Can you share your Pod manifest here so we can see what's going on?

from container-security-operator.

aetomala avatar aetomala commented on July 25, 2024

I took the image you used in high.pod.yaml and pushed it into my private registry. Please find baddocker (pulls the image from quay.io) and baddocker2 (pulls from my private registry) below.
baddocker manifest
kind: Pod
apiVersion: v1
metadata:
  name: baddocker
  namespace: default
  selfLink: /api/v1/namespaces/default/pods/baddocker
  uid: 9901c9d9-6556-4a4e-8d2a-fdfc5ff99f94
  resourceVersion: '364279'
  creationTimestamp: '2020-01-29T16:10:02Z'
  labels:
    app: baddocker
  annotations:
    cni.projectcalico.org/podIP: 172.30.235.185/32
spec:
  restartPolicy: Always
  serviceAccountName: default
  imagePullSecrets:
    - name: aetomala-pull-secret
  priority: 0
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  nodeName: 10.188.129.238
  securityContext: {}
  containers:
    - name: hello-openshift
      image: openshift/hello-openshift
      ports:
        - containerPort: 8080
          protocol: TCP
      resources: {}
      volumeMounts:
        - name: default-token-h886n
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: Always
  serviceAccount: default
  volumes:
    - name: default-token-h886n
      secret:
        secretName: default-token-h886n
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:10:02Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:10:04Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:10:04Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:10:02Z'
  hostIP: 10.188.129.238
  podIP: 172.30.235.185
  podIPs:
    - ip: 172.30.235.185
  startTime: '2020-01-29T16:10:02Z'
  containerStatuses:
    - restartCount: 0
      started: true
      ready: true
      name: hello-openshift
      state:
        running:
          startedAt: '2020-01-29T16:10:03Z'
      imageID: >-
        docker.io/openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e
      image: 'docker.io/openshift/hello-openshift:latest'
      lastState: {}
      containerID: 'cri-o://32fbfdf271862f7d0367d0ffb23bdc61f7960bf0a96252dacbe30219263b8bbb'
  qosClass: BestEffort

baddocker2 manifest
kind: Pod
apiVersion: v1
metadata:
  name: baddocker2
  namespace: default
  selfLink: /api/v1/namespaces/default/pods/baddocker2
  uid: 2dda317f-2a28-4b7a-84a5-262921f66457
  resourceVersion: '374314'
  creationTimestamp: '2020-01-29T16:42:13Z'
  labels:
    app: baddocker2
  annotations:
    cni.projectcalico.org/podIP: 172.30.188.161/32
spec:
  restartPolicy: Always
  serviceAccountName: default
  imagePullSecrets:
    - name: aetomala-pull-secret
  priority: 0
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  nodeName: 10.188.129.245
  securityContext: {}
  containers:
    - name: hello-openshift
      image: >-
        quay-enterprise-quay-enterprise.quay-enterprise-stg-7d4bdc08e7ddc90fa89b373d95c240eb-0001.us-east.containers.appdomain.cloud/hybridcloud/badhttpd:latest
      resources: {}
      volumeMounts:
        - name: default-token-h886n
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: Always
  serviceAccount: default
  volumes:
    - name: default-token-h886n
      secret:
        secretName: default-token-h886n
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:42:14Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:42:17Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:42:17Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:42:14Z'
  hostIP: 10.188.129.245
  podIP: 172.30.188.161
  podIPs:
    - ip: 172.30.188.161
  startTime: '2020-01-29T16:42:14Z'
  containerStatuses:
    - restartCount: 0
      started: true
      ready: true
      name: hello-openshift
      state:
        running:
          startedAt: '2020-01-29T16:42:17Z'
      imageID: >-
        quay-enterprise-quay-enterprise.quay-enterprise-stg-7d4bdc08e7ddc90fa89b373d95c240eb-0001.us-east.containers.appdomain.cloud/hybridcloud/badhttpd@sha256:ca908f415a15fdba408f82537d295350772afa985112ee62db6709fea994a682
      image: >-
        quay-enterprise-quay-enterprise.quay-enterprise-stg-7d4bdc08e7ddc90fa89b373d95c240eb-0001.us-east.containers.appdomain.cloud/hybridcloud/badhttpd:latest
      lastState: {}
      containerID: 'cri-o://c6c6954ef825568ad0eedeaa40a4dcd7c788eaefdd7da666ed4c32197d26cfbc'
  qosClass: BestEffort

aetomala avatar aetomala commented on July 25, 2024

Also, here is the CSO log at the point where I deploy an image from our private repo. Notice the "No manifest security capabilities" error.

level=debug msg="Pod added" key=default/baddocker
E0129 18:01:06.195474       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
E0129 18:01:06.204532       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
level=debug msg="Pod updated" key=default/baddocker
E0129 18:01:06.208097       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
E0129 18:01:06.215373       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
level=debug msg="Pod updated" key=default/baddocker
E0129 18:01:06.242979       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
E0129 18:01:06.255677       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
E0129 18:01:06.416010       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
E0129 18:01:06.736529       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
level=debug msg="Pod updated" key=default/baddocker
E0129 18:01:07.142644       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
E0129 18:01:07.379413       1 labeller.go:191] default/baddocker failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/baddocker
level=debug msg="Pod updated" key=default/baddocker
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/baddocker
level=error msg="Failed to sync layer data" key=default/baddocker err="No manifest security capabilities"
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/baddocker
level=error msg="Failed to sync layer data" key=default/baddocker err="No manifest security capabilities"

kleesc avatar kleesc commented on July 25, 2024

@aetomala can you make sure that the pod in which the CSO is running has access to your Quay instance? Specifically, the CSO will use Quay's /.well-known/app-capabilities endpoint to do discovery.

aetomala avatar aetomala commented on July 25, 2024

@kleesc The pod was deployed on OpenShift 4.3 using the OperatorHub catalog. When I hit install, there is no option to change anything about the container that is about to be deployed, other than which namespaces I want to monitor. When I look at the YAML for the deployed operator, I don't see how the configuration you are suggesting can be injected. I am not sure whether this is the desired behavior from Red Hat, but would you take a look at the YAML for the pod and suggest what changes I need to make?
The operator yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  annotations:
    deployment.kubernetes.io/revision: '1'
  selfLink: >-
    /apis/apps/v1/namespaces/openshift-operators/deployments/container-security-operator
  resourceVersion: '365892'
  name: container-security-operator
  uid: e46dea8f-cf9d-4c7c-b484-a25ca4c35865
  creationTimestamp: '2020-01-29T16:14:42Z'
  generation: 1
  namespace: openshift-operators
  ownerReferences:
    - apiVersion: operators.coreos.com/v1alpha1
      kind: ClusterServiceVersion
      name: container-security-operator.v1.0.1
      uid: 6c1c2e9c-7168-48b4-9d33-407c245a4009
      controller: false
      blockOwnerDeletion: false
  labels:
    olm.owner: container-security-operator.v1.0.1
    olm.owner.kind: ClusterServiceVersion
    olm.owner.namespace: openshift-operators
spec:
  replicas: 1
  selector:
    matchLabels:
      name: container-security-operator-alm-owned
  template:
    metadata:
      name: container-security-operator-alm-owned
      creationTimestamp: null
      labels:
        name: container-security-operator-alm-owned
      annotations:
        tectonic-visibility: ocs
        olm.targetNamespaces: ''
        repository: 'https://github.com/quay/container-security-operator'
        alm-examples: |-
          [
            {
              "apiVersion": "secscan.quay.redhat.com/v1alpha1",
              "kind": "ImageManifestVuln",
              "metadata": {
                "name": "example"
              },
              "spec": {}
            }
          ]
        capabilities: Full Lifecycle
        olm.operatorNamespace: openshift-operators
        containerImage: >-
          quay.io/quay/container-security-operator@sha256:15a4b50d847512b5f404ec1cf72c30c98e073a7f26f1588213bd2e8b6331f016
        createdAt: '2019-11-16 01:03:00'
        categories: Security
        description: Identify image vulnerabilities in Kubernetes pods
        olm.operatorGroup: global-operators
    spec:
      containers:
        - name: container-security-operator
          image: >-
            quay.io/quay/container-security-operator@sha256:15a4b50d847512b5f404ec1cf72c30c98e073a7f26f1588213bd2e8b6331f016
          command:
            - /bin/security-labeller
            - '--namespaces=$(WATCH_NAMESPACE)'
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: 'metadata.annotations[''olm.targetNamespaces'']'
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: container-security-operator
      serviceAccount: container-security-operator
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 1
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2020-01-29T16:14:46Z'
      lastTransitionTime: '2020-01-29T16:14:46Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-01-29T16:14:46Z'
      lastTransitionTime: '2020-01-29T16:14:42Z'
      reason: NewReplicaSetAvailable
      message: >-
        ReplicaSet "container-security-operator-7b8fbb6f6" has successfully
        progressed.

the pod yaml

kind: Pod
apiVersion: v1
metadata:
  generateName: container-security-operator-7b8fbb6f6-
  annotations:
    cni.projectcalico.org/podIP: 172.30.235.186/32
    tectonic-visibility: ocs
    olm.targetNamespaces: ''
    repository: 'https://github.com/quay/container-security-operator'
    alm-examples: |-
      [
        {
          "apiVersion": "secscan.quay.redhat.com/v1alpha1",
          "kind": "ImageManifestVuln",
          "metadata": {
            "name": "example"
          },
          "spec": {}
        }
      ]
    capabilities: Full Lifecycle
    olm.operatorNamespace: openshift-operators
    containerImage: >-
      quay.io/quay/container-security-operator@sha256:15a4b50d847512b5f404ec1cf72c30c98e073a7f26f1588213bd2e8b6331f016
    createdAt: '2019-11-16 01:03:00'
    categories: Security
    description: Identify image vulnerabilities in Kubernetes pods
    olm.operatorGroup: global-operators
  selfLink: >-
    /api/v1/namespaces/openshift-operators/pods/container-security-operator-7b8fbb6f6-lvs5w
  resourceVersion: '365889'
  name: container-security-operator-7b8fbb6f6-lvs5w
  uid: ffd5e9ca-1108-455f-a7b7-3b53f1c51733
  creationTimestamp: '2020-01-29T16:14:42Z'
  namespace: openshift-operators
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: container-security-operator-7b8fbb6f6
      uid: e78279c5-4925-4270-a247-6c94797e4695
      controller: true
      blockOwnerDeletion: true
  labels:
    name: container-security-operator-alm-owned
    pod-template-hash: 7b8fbb6f6
spec:
  restartPolicy: Always
  serviceAccountName: container-security-operator
  imagePullSecrets:
    - name: container-security-operator-dockercfg-7bh4t
  priority: 0
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  nodeName: 10.188.129.238
  securityContext: {}
  containers:
    - resources: {}
      terminationMessagePath: /dev/termination-log
      name: container-security-operator
      command:
        - /bin/security-labeller
        - '--namespaces=$(WATCH_NAMESPACE)'
      env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: 'metadata.annotations[''olm.targetNamespaces'']'
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: container-security-operator-token-6jr9p
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: >-
        quay.io/quay/container-security-operator@sha256:15a4b50d847512b5f404ec1cf72c30c98e073a7f26f1588213bd2e8b6331f016
  serviceAccount: container-security-operator
  volumes:
    - name: container-security-operator-token-6jr9p
      secret:
        secretName: container-security-operator-token-6jr9p
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:14:42Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:14:46Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:14:46Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-01-29T16:14:42Z'
  hostIP: 10.188.129.238
  podIP: 172.30.235.186
  podIPs:
    - ip: 172.30.235.186
  startTime: '2020-01-29T16:14:42Z'
  containerStatuses:
    - restartCount: 0
      started: true
      ready: true
      name: container-security-operator
      state:
        running:
          startedAt: '2020-01-29T16:14:46Z'
      imageID: >-
        quay.io/quay/container-security-operator@sha256:15a4b50d847512b5f404ec1cf72c30c98e073a7f26f1588213bd2e8b6331f016
      image: >-
        quay.io/quay/container-security-operator@sha256:15a4b50d847512b5f404ec1cf72c30c98e073a7f26f1588213bd2e8b6331f016
      lastState: {}
      containerID: 'cri-o://ceeda131092cd7a01bb983c0d5b93a9f61edf0433e1eac54193bc69258984a6c'
  qosClass: BestEffort

aetomala avatar aetomala commented on July 25, 2024

@kleesc If I read the code correctly (1.0.1 release), /.well-known/app-capabilities is already the default value for wellknownEndpoint. What is not clear from the documentation is what that file /.well-known/app-capabilities is supposed to look like, or whether that value can be set to the actual hostname. When I ssh into the pod I cannot locate /.well-known/app-capabilities, which led me to think that the file needs to be mounted into the pod. Would you clarify these two points?

  • what does /.well-known/app-capabilities look like?
  • can wellknownEndpoint be set to the fully qualified name for my registry? ex. host.name.com

Lastly, I noticed that in the master branch of this project, the security-labeller now makes use of another attribute, the scanner host, which makes more sense. https://github.com/jjmengze/container-security-operator/blob/master/cmd/security-labeller/main.go

kleesc avatar kleesc commented on July 25, 2024

@aetomala /.well-known/app-capabilities is the discovery endpoint on Quay itself, not a file, e.g. https://some-quay-host/.well-known/app-capabilities. The CSO infers the registry an image was pulled from using the ImageID of the pods it's trying to scan.
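
A minimal sketch of the inference described above (an assumption based on this comment, not taken from the operator's source: the registry host is treated as everything before the first "/" of the pod's status.containerStatuses[].imageID):

```python
def registry_from_image_id(image_id: str) -> str:
    """Return the registry host portion of a fully qualified imageID
    (assumption: the host is the segment before the first '/')."""
    return image_id.split("/", 1)[0]

# The imageIDs from the two manifests earlier in this thread resolve to
# different registries, which is why the CSO treats them differently:
print(registry_from_image_id(
    "docker.io/openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e"
    "7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e"))  # docker.io
```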

The pod in which the CSO is running needs access to the endpoint mentioned above. One way to check would be to SSH into the CSO's pod and try curling that endpoint from there. My guess is that since it works with quay.io images and not with your private Quay instance, it has to be something to do with the CSO being unable to reach your private registry.
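
As a rough illustration of what that check could look for: the sketch below validates an app-capabilities response for a manifest-security capability. Both the response shape and the capability key `io.quay.manifest-security` are assumptions inferred from the "No manifest security capabilities" error in this thread, not confirmed from the operator's source; the hostname is a placeholder.

```python
def has_manifest_security(capabilities_doc: dict) -> bool:
    """Check whether an app-capabilities response advertises manifest
    security scanning (assumed capability key)."""
    return "io.quay.manifest-security" in capabilities_doc.get("capabilities", {})

# Assumed shape of the JSON served at
# https://<quay-host>/.well-known/app-capabilities (hypothetical example):
sample = {
    "appName": "io.quay",
    "capabilities": {
        "io.quay.manifest-security": {
            "rest-api-template": "https://quay-host.example.com/api/v1/"
            "repository/{namespace}/{reponame}/manifest/{digest}/security"
        }
    },
}
print(has_manifest_security(sample))  # True
```

If the same check against your instance's actual response came back False, that would line up with the "No manifest security capabilities" error above.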

gorantornqvist avatar gorantornqvist commented on July 25, 2024

Sorry for hijacking, but I'm having the same issue. I looked at the "Example config" and it's a bit unclear how and where this should be configured.

I would like to get results from both quay.io and my local Quay registry, if possible ...

aetomala avatar aetomala commented on July 25, 2024

@kleesc I ssh'ed into the CSO pod and was able to ping and wget the host https://quay-enterprise-quay-enterprise.quay-enterprise-stg-7d4bdc08e7ddc90fa89b373d95c240eb-0001.us-east.containers.appdomain.cloud. I cleaned up all pods and added a bad pod from my private registry, and no ImageManifestVuln is created or reported; see the errors below, and notice that it says my registry does not support that capability. However, if I delete all of the pods and then deploy your pod example, the CSO creates an ImageManifestVuln (remember, this is the exact same image as the one in your example, only hosted in a different registry). If I then deploy the image from my registry, after the one from your example has been deployed, the CSO correctly matches the SHA and annotates the vulnerability record with my newly created pod (from my repo).
This is the logs when I only deploy image from private repo

level=debug msg="Pod added" key=default/badpod
E0131 16:49:58.835722       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 16:49:58.841706       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
E0131 16:49:58.842855       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 16:49:58.851941       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
E0131 16:49:58.872767       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 16:49:58.892308       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 16:49:59.052634       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 16:49:59.373750       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
E0131 16:49:59.723908       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 16:50:00.014145       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/badpod
>>>>>level=error msg="Failed to sync layer data" key=default/badpod err="No manifest security capabilities"
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/badpod
>>>>>level=error msg="Failed to sync layer data" key=default/badpod err="No manifest security capabilities"
level=info msg="Removing deleted pod from ImageManifestVulns" key=default/badpodtest
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/badpodtest

these are the logs when I deploy the images in the sequence I described above

level=debug msg="Pod added" key=default/high
E0131 19:24:12.111854       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
E0131 19:24:12.117057       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
level=debug msg="Pod updated" key=default/high
E0131 19:24:12.121607       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
E0131 19:24:12.127339       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
level=debug msg="Pod updated" key=default/high
E0131 19:24:12.147678       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
E0131 19:24:12.167683       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
E0131 19:24:12.328715       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
E0131 19:24:12.648973       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
level=debug msg="Pod updated" key=default/high
E0131 19:24:12.985389       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
E0131 19:24:13.289254       1 labeller.go:191] default/high failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/high
level=debug msg="Pod updated" key=default/high
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/high
level=info msg="Created ImageManifestVuln" manifestKey=default/sha256.ca908f415a15fdba408f82537d295350772afa985112ee62db6709fea994a682 key=default/high
level=debug msg="ImageManifestVuln added" key=default/sha256.ca908f415a15fdba408f82537d295350772afa985112ee62db6709fea994a682
level=debug msg="ImageManifestVuln updated" key=default/sha256.ca908f415a15fdba408f82537d295350772afa985112ee62db6709fea994a682
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/high
level=debug msg="Pod added" key=default/badpod
E0131 19:26:31.150294       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 19:26:31.155528       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
E0131 19:26:31.162084       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 19:26:31.165837       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
E0131 19:26:31.189909       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 19:26:31.206051       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 19:26:31.366321       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 19:26:31.686625       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
E0131 19:26:32.070128       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
E0131 19:26:32.330925       1 labeller.go:191] default/badpod failed with : &{%!w(string=Pod phase not running: Pending)}
level=info msg="Requeued item" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/badpod
level=debug msg="ImageManifestVuln updated" key=default/sha256.ca908f415a15fdba408f82537d295350772afa985112ee62db6709fea994a682
level=debug msg="ImageManifestVuln updated" key=default/sha256.ca908f415a15fdba408f82537d295350772afa985112ee62db6709fea994a682
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/badpod
level=debug msg="Pod updated" key=default/badpod
level=debug msg="Pod updated" key=openshift-console/downloads-75b97dcb56-h8sr4
level=info msg="Garbage collecting unreferenced ImageManifestVulns" key=default/badpod

mransonwang avatar mransonwang commented on July 25, 2024

After researching the source code: quay.io is hardcoded, so it is not possible to point the operator at another private registry. Looking forward to an enhancement that supports on-premise Quay registries.
