
velero-plugin-for-openstack's People

Contributors

jonher937, kayrus, lirt, the-so6


velero-plugin-for-openstack's Issues

Restic backups fail

Hi there,

I'm not even sure the problem comes from your plugin, but I'm trying to use restic backups (as there is no Block Store for now) with no success.

Do you happen to have any issues with your stack?

Configuration:

  • Velero 1.5 with restic enabled:
    velero install --use-restic
  • Velero plugin Swift v0.1.1
  • OS authentication config in the velero deployment (Keystone v3):
    OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_PROJECT_ID, OS_PROJECT_NAME, OS_REGION_NAME, OS_DOMAIN_NAME
  • A bucket in OpenStack:
    velero_backups
  • A default BackupStorageLocation for Velero:
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  config:
    resticRepoPrefix: public_volumes
  objectStorage:
    bucket: velero_backups
    prefix: public_resources
  provider: swift
  • An annotated pod for restic backups:
$ k -n test-velero get pods ubuntu-75d9d656-29mbr -o yaml | grep -A 1 "annotations:\| volumes:"
  annotations:
    backup.velero.io/backup-volumes: test-vol
--
  volumes:
  - name: test-vol

Symptoms:

  • No repo argument in the restic unlock command made by Velero
$ k -n velero logs  -f velero-bf8848f55-8x6jl
[...]
time="2020-10-15T12:42:44Z" level=error msg="Error checking repository for stale locks" controller=restic-repository error="error running command=restic unlock --repo= --password-file=/tmp/velero-restic-credentials-158994766 --cache-dir=/scratch/.cache/restic, stdout=, stderr=Fatal: Please specify repository location (-r)\n: exit status 1"
  • Backup partially fails (all namespace resources are backed-up, volumes aren't)
$ velero backup create test-velero-backup --include-namespaces=test-velero
Backup request "test-velero-backup" submitted successfully.
Run `velero backup describe test-velero-backup` or `velero backup logs test-velero-backup` for more details.

$ velero backup describe test-velero-backup
[...]
Phase:  PartiallyFailed (run `velero backup logs test-velero-backup` for more information)
Errors:    1
Warnings:  0
[...]
Started:    2020-10-15 15:41:49 +0200 CEST
Completed:  2020-10-15 15:41:55 +0200 CEST
[...]
Total items to be backed up:  10
Items backed up:              10

$ velero backup logs test-velero-backup
An error occurred: request failed: 401 Unauthorized: Temp URL invalid
  • No podvolumebackup or resticrepository is created.

It'd be awesome if you could share your experiences on the matter.

Compatibility with OpenStack Keystone auth v3

Hello,
Is this plugin compatible with OpenStack Keystone auth v3?
Gophercloud library seems to be ready for v3 (http://gophercloud.io/docs/identity/v3/)

Velero backup pod keeps crashing with error message:

time="2020-11-05T07:28:49Z" level=info msg="ObjectStore.Init called" cmd=/plugins/velero-plugin-swift logSource="/go/src/github.com/Lirt/velero-plugin-swift/src/object_store.go:61" pluginName=velero-plugin-swift
An error occurred: some backup storage locations are invalid: error getting backup store for location "default": rpc error: code = Unknown desc = Failed to authenticate against Swift: You must provide exactly one of DomainID or DomainName to authenticate by Username
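The error above comes from gophercloud's Keystone v3 validation: for username/password auth, exactly one of domain ID or domain name must be provided. A pre-flight check equivalent to that rule could look like the sketch below (hypothetical helper, not part of the plugin; the assumption that the relevant variables are OS_DOMAIN_ID/OS_DOMAIN_NAME matches gophercloud's env reader, but is worth verifying for your version):

```go
package main

import (
	"fmt"
	"os"
)

// checkDomainEnv mimics gophercloud's requirement that exactly one of
// domain ID or domain name is set for username-based Keystone v3 auth.
// Hypothetical helper for illustration only.
func checkDomainEnv(getenv func(string) string) error {
	id := getenv("OS_DOMAIN_ID")
	name := getenv("OS_DOMAIN_NAME")
	// Error when both are set or both are empty.
	if (id == "") == (name == "") {
		return fmt.Errorf("provide exactly one of OS_DOMAIN_ID or OS_DOMAIN_NAME")
	}
	return nil
}

func main() {
	if err := checkDomainEnv(os.Getenv); err != nil {
		fmt.Println("auth env invalid:", err)
	} else {
		fmt.Println("auth env ok")
	}
}
```

Note that in the deployment below only OS_PROJECT_DOMAIN_ID and OS_USER_DOMAIN_ID are set, which would leave both OS_DOMAIN_ID and OS_DOMAIN_NAME empty under this rule.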

For testing purposes, I manually set all the OS variables (taken from the OpenStack RC v3 file) in the velero deployment like:

    spec:
      containers:
      - args:
        - server
        - --features=
        command:
        - /velero
        env:
        - name: VELERO_SCRATCH_DIR
          value: /scratch
        - name: VELERO_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: LD_LIBRARY_PATH
          value: /plugins
        - name: OS_AUTH_URL
          value: https://xxx.zzz.com:5000/v3 # Keystone auth v3
        - name: OS_PASSWORD
          value: passwordxxx
        - name: OS_REGION_NAME
          value: regionxxx
        - name: OS_USERNAME
          value: usernamexxx
        - name: OS_PROJECT_DOMAIN_ID
          value: domainidxxx
        - name: OS_USER_DOMAIN_ID
          value: domainidxxx

Do you need any additional info?

Anyway, thanks for your work! I would be very happy if I could use it in our environment.

[FEAT] Restore a snapshot to a different zone

Is your feature request related to a problem? Please describe.

Our team uses an undocumented velero feature to restore a snapshot to a different AZ (spoofing a new AZ in the velero backup manifest in object storage). We use this hack to restore snapshots in AWS. It would be nice if this plugin could support this.

Describe the solution you'd like

For Cinder, it's possible to create a snapshot, back it up, and then restore the backup into a new AZ.

For Manila shares, it's possible to use share replicas to move a share from one AZ to another.

The only question is how to toggle this new logic. Using env variables? Or with a new snapshot location?

See also vmware-tanzu/velero#103

[FEAT] Encrypt swift data at rest

Is your feature request related to a problem? Please describe.

Kubernetes resource backups are stored in plain format in Swift, which is not secure if you back up Secrets.

Describe the solution you'd like

Velero doesn't support encryption itself, but it does support restic for file system backup (FSB).
I propose a feature to encrypt the data before it's stored in Swift. The encryption method should correspond to the one used in restic.

Restic's license is BSD 2-Clause, therefore it should be safe (this repo has a compatible MIT license) to use its source code for the encryption.
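For reference, restic's repository format uses AES-256-CTR with a Poly1305-AES MAC. A minimal encrypt-before-upload sketch using the Go standard library's AES-256-GCM instead (an illustrative stand-in with hypothetical function names, not restic's actual scheme):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with AES-256-GCM and prepends the random nonce.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open splits off the nonce and decrypts, verifying the auth tag.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // 32 bytes selects AES-256
	rand.Read(key)
	ct, _ := seal(key, []byte("k8s secret manifest"))
	pt, _ := open(key, ct)
	fmt.Println(string(pt))
}
```

The object store would call seal before PutObject and open after GetObject; key management (deriving the key from a secret, as restic does with scrypt) is the harder part and is out of scope here.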

"Temp URL invalid" error when uploading backups to swift

Hi @Lirt
Currently, I'm doing some testing of backup & restore and facing an error when uploading backups to Swift. I'm trying to investigate but still have no clues, so I'd like to ask for help if you have time. Thanks in advance.

Environment
velero: 1.4.0
velero-plugin-swift: latest code
OS variable I set:

  • OS_AUTH_URL
  • OS_USERNAME
  • OS_PASSWORD
  • OS_REGION_NAME
  • OS_PROJECT_NAME
  • OS_PROJECT_ID
  • OS_DOMAIN_ID

Symptom
Basically, there are three possible outcomes after executing velero backup create xxx:

  • Failed
  • PartiallyFailed
  • Completed

The weird thing is that if you execute velero backup logs to check the log messages, it shows identical content, even for the Completed backup.

Reproduce procedure

  1. Set OS variables aforementioned.
  2. Execute velero backup create xxx.
  3. Wait for a while and execute velero backup logs xxx.

Velero with restic doesn't create a volume in Openstack during restore

Hello,

To test out velero's backup and restore features with restic, I am trying to restore an EFK stack from Azure AKS. I have an EFK stack set up and running in Azure. I installed velero with restic in Azure using Helm charts and the Azure plugin, and verified that backup and restore work there.

Now, we have to deal with multiple cloud providers, and we decided to take backups in Azure and restore them to OpenStack (in this case) instead of taking backups in each cloud provider, for obvious reasons. I made modifications to my Helm installation for the OpenStack cloud as per the values.yaml file:

credentials:
  extraSecretRef: "velero-credentials"
configuration:
  provider: azure
#backupstoragelocation of azure
  backupStorageLocation:
    name: default
    bucket: velero
    #bucket: test
    provider: velero.io/azure
    config:
      resourceGroup: resourcegroup
      storageAccount: storageaccount
      subscriptionId: SID
  volumeSnapshotLocation:
    name: azure-default
    provider: velero.io/azure
initContainers:
  - name: velero-plugin-openstack
    image: lirt/velero-plugin-for-openstack:v0.3.1
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
  - name: velero-plugin-for-microsoft-azure
    image: velero/velero-plugin-for-microsoft-azure:v1.5.0
    volumeMounts:
      - mountPath: /target
        name: plugins
snapshotsEnabled: true
backupsEnabled: true
deployRestic: true

The velero namespace and credentials were created before running the velero installation command in OpenStack. The credentials file has all the OpenStack variables with base64-encoded values. After installation, running

velero backup get

in my OpenStack tenant gives me the backups I created in Azure, which is fine.

But when I restore, the PV that was created in Azure is restored to OpenStack, yet a new volume is not created in OpenStack.

The parameters of the restored PV in OS are:

csi:
    driver: disk.csi.azure.com
    volumeHandle: >-
      /subscriptions/SID/resourceGroups/ResourceGroup/providers/Microsoft.Compute/disks/restore-xxxxx
    volumeAttributes:
      csi.storage.k8s.io/pv/name: pvc-xxxx-yyyy
      csi.storage.k8s.io/pvc/name: elastic-data-elasticsearch-0

instead of something like

csi:
    driver: cinder.csi.openstack.org
    volumeHandle: VH
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: xxxx-yyy-cinder.csi.openstack.org

I have also added the ConfigMap that changes the storage classes from backup to restore, as mentioned in the Velero docs, before running the restore.

I am pretty sure I've overlooked something in getting this idea to work, but if not, can you suggest a workaround?

Add an ability to override swift endpoint URL

Swift supports ACLs (https://docs.openstack.org/swift/latest/overview_acl.html), and it is possible to grant access to http://swift/v1/AUTH_project1/container1 to a user holding a project2-scoped token, e.g.

.r:*,.rlistings,user2:project2

By default gophercloud extracts the endpoint URL using the token catalog:

curl -s http://identity/v3/auth/tokens -H "Content-Type: application/json" -d'AUTH_PARAMS'| jq -r '.token.catalog[]|select(.type=="object-store")|.endpoints[]|select(.interface=="public")'
{
  "id": "project2",
  "interface": "public",
  "region_id": "region1",
  "url": "http://swift/v1/AUTH_project2",
  "region": "region1"
}

In order to override the default catalog URL, gophercloud supports specifying the custom Endpoint for the ServiceClient.

It'd be great to have a custom URL or custom project ID option for the OpenStack velero plugin.
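As a sketch: gophercloud's ServiceClient has an Endpoint field that can be set to a custom URL after authentication, so the plugin could expose either a full URL override or just a project ID to splice into the catalog URL. The helper below only illustrates the latter (hypothetical helper name, not plugin code):

```go
package main

import (
	"fmt"
	"strings"
)

// overrideProject rewrites the project scope of a Swift endpoint URL taken
// from the token catalog, e.g. http://swift/v1/AUTH_project2 becomes
// http://swift/v1/AUTH_project1. Hypothetical helper for illustration.
func overrideProject(endpoint, projectID string) string {
	i := strings.LastIndex(endpoint, "/AUTH_")
	if i < 0 {
		return endpoint // no recognizable project suffix; leave untouched
	}
	return endpoint[:i] + "/AUTH_" + projectID
}

func main() {
	fmt.Println(overrideProject("http://swift/v1/AUTH_project2", "project1"))
}
```

With a full-URL option the rewrite step disappears entirely: the configured value would simply be assigned to ServiceClient.Endpoint.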

Improve unit tests

There are some unit tests for the object store, but no tests for block storage and utils.
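Table-driven tests are the idiomatic Go shape such tests could take. The function ensureTrailingSlash below is a hypothetical stand-in for the kind of small helper that lives in the utils package; the loop shows the pattern:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureTrailingSlash is a hypothetical utils-style helper: it appends "/"
// to a non-empty prefix that doesn't already end with one.
func ensureTrailingSlash(s string) string {
	if s == "" || strings.HasSuffix(s, "/") {
		return s
	}
	return s + "/"
}

func main() {
	// Table-driven test pattern, as used with the testing package.
	cases := []struct{ in, want string }{
		{"backups", "backups/"},
		{"backups/", "backups/"},
		{"", ""},
	}
	for _, c := range cases {
		if got := ensureTrailingSlash(c.in); got != c.want {
			fmt.Printf("FAIL %q: got %q, want %q\n", c.in, got, c.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```

In a real test file the loop body would call t.Errorf instead of printing, and block storage calls would be exercised against a fake gophercloud server.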

[FEAT] Implement debug flag

Is your feature request related to a problem? Please describe.

It's hard to identify the root cause of a problem when you have one.

Describe the solution you'd like

There should be a way to enable API debug logging, like in other gophercloud-based apps.

Error listing backups in backup store

Hi Lirt,

Thanks for your quick response on our previous issue.
By the way, we are facing another problem: we cannot restore from a completed backup.
We get this error all the time in the logs:

time="2021-11-19T14:41:46Z" level=info msg="ObjectStore.Init called" cmd=/plugins/velero-plugin-swift controller=backup-sync logSource="/go/src/github.com/Lirt/velero-plugin-swift/src/swift/object_store.go:31" pluginName=velero-plugin-swift
time="2021-11-19T14:41:46Z" level=info msg="Trying to authenticate against Openstack using environment variables (including application credentials) or using files ~/.config/openstack/clouds.yaml, /etc/openstack/clouds.yaml and ./clouds.yaml" cmd=/plugins/velero-plugin-swift controller=backup-sync logSource="/go/src/github.com/Lirt/velero-plugin-swift/src/utils/auth.go:50" pluginName=velero-plugin-swift
time="2021-11-19T14:41:46Z" level=info msg="Authentication successful" cmd=/plugins/velero-plugin-swift controller=backup-sync logSource="/go/src/github.com/Lirt/velero-plugin-swift/src/utils/auth.go:70" pluginName=velero-plugin-swift
time="2021-11-19T14:41:46Z" level=info msg=ListCommonPrefixes bucket=velero-test cmd=/plugins/velero-plugin-swift controller=backup-sync delimiter=/ logSource="/go/src/github.com/Lirt/velero-plugin-swift/src/swift/object_store.go:117" pluginName=velero-plugin-swift prefix=backups/
time="2021-11-19T14:41:46Z" level=error msg="Error listing backups in backup store" backupLocation=default controller=backup-sync error="rpc error: code = Unknown desc = failed to list objects in bucket velero-test: invalid character 'b' looking for beginning of value" logSource="pkg/controller/backup_sync_controller.go:182"

The 'b' character is the first letter of the prefix (backups); we verified that by changing the prefix.
This error appears constantly, but it does not block backups.

We are using velero 1.7.0 and also tried 1.6.3, without restic. The error does seem to block restores, and the restore process is very long (more than 10 minutes for one PV).

Do not hesitate to ask if you need further information.

Thanks in advance

Auth issues when deploying with helm chart

I have trouble understanding how auth should be configured when deploying with the Helm chart, especially what the proper format is for the velero-credentials secret.

So far I created a clouds.yaml that is deployed as a velero-credentials secret:

clouds:
  ovh:
    region_name: GRA7
    auth:
      auth_url: https://auth.cloud.ovh.net/v3
      tenant_id: XXX
      tenant_name: 'XXX'
      username: 'user-XXX'
      password: 'XXX'
      allow_reauth: true

So my secret looks like this:

data:
  clouds.yaml: >-
    Y2xvdWRzOgoXXXXX

But I'm encountering: failed to authenticate against Openstack: unable to load clouds.yaml: no clouds.yml file found: file does not exist.
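For reference, the plugin's auth log (quoted elsewhere on this page) says it looks for clouds.yaml at ~/.config/openstack/clouds.yaml, /etc/openstack/clouds.yaml, or ./clouds.yaml, so creating the secret is not enough; it must also be mounted at one of those paths and the cloud selected via OS_CLOUD. A hedged sketch of Helm values (key names may vary by chart version):

```yaml
# Assumed chart keys; mirror of the configMap approach shown in a later
# issue on this page, using the secret instead.
extraVolumes:
  - name: openstack-credentials
    secret:
      secretName: velero-credentials
extraVolumeMounts:
  - name: openstack-credentials
    mountPath: /etc/openstack
configuration:
  extraEnvVars:
    OS_CLOUD: ovh   # must match the cloud name inside clouds.yaml
```

With this mount, /etc/openstack/clouds.yaml exists inside the pod and the "no clouds.yml file found" error should no longer apply.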

Thanks!

Incremental pv backup

Hi @Lirt
Have to bug you again. :)
I'm just wondering whether the plugin supports incremental backups for snapshots. I know restic might be an alternative, but that's kind of a big change for us. Would you share some insights? Thanks ahead.

[BUG] don't require access rules for manila shares

Describe the bug

Currently velero returns an error when creating a share/snapshot from a volume that doesn't have access rules.

Steps to reproduce the behavior

Create a manila snapshot from a share with no access rules.

Expected behavior

velero shouldn't fail when the source share doesn't have access rules.

Used versions

  • Velero version(velero version): -
  • Plugin version(kubectl describe pod velero-...): master
  • Kubernetes version(kubectl version): -
  • Openstack version: -

[BUG] Manila snapshots/shares sometimes stuck in "error_deleting" status

Describe the bug

We faced an issue in Manila: https://bugs.launchpad.net/manila/+bug/2025641 https://bugs.launchpad.net/manila/+bug/1960239

A Manila share or snapshot can get stuck in "error_deleting" status, consuming quota resources.

This is not caught, due to the async delete action.

Steps to reproduce the behavior

Just create a snapshot or clone and, once done, try to delete the resource; there will be leftovers in the "error_deleting" status.

Expected behavior

There should be an option to reset the resource status back to "available", then try to delete it once again.

[FEAT] Add alternative snapshot methods for cinder

Is your feature request related to a problem? Please describe.

Cinder supports alternative snapshot methods: backups (compressed and stored in Swift) and Glance images (also stored in Swift).
These snapshot methods may save costs with public OpenStack cloud providers.

Describe the solution you'd like

Add extra snapshot methods support into config and process the snapshots accordingly.

[FEAT] Add user agent

Is your feature request related to a problem? Please describe.

It's not possible to identify which application performed an action in OpenStack server logs. Currently gophercloud sends the default gophercloud/v1.3.0 user agent.

Describe the solution you'd like

The velero plugin should identify itself with a proper user agent, including the plugin version.

[FEAT] Retry snapshots deletion

Is your feature request related to a problem? Please describe.

Snapshots can be in "creating" or "migrating" status while they're being removed using the velero backup delete command. In this case OpenStack returns a 409 response code and velero fails.

Describe the solution you'd like

Delete actions should retry on a 409 response code until a timeout is reached.

Open source license

Hi there,

I'm wondering if there is an open source license for this project? Thanks.
BTW, I tried to use https://github.com/cisco-sso/velero-plugin-openstack in my project, but it no longer seems to be maintained, and it isn't supported by the latest version of velero either. Therefore, I'd like to know if this plugin is compatible with the latest version of velero.
Besides, would you mind telling me when cinder block storage will be supported? I noticed you've already created a PR for it. I really appreciate your patience.

[FEAT] Change go package name

Is your feature request related to a problem? Please describe.

No problems, just confusing package name.

Describe the solution you'd like

Change package name:

diff --git a/go.mod b/go.mod
index 28c5e99..e0a50a8 100644
--- a/go.mod
+++ b/go.mod
@@ -1,4 +1,4 @@
-module github.com/Lirt/velero-plugin-swift
+module github.com/Lirt/velero-plugin-for-openstack
 
 go 1.19
 

This fix is required for a proper user agent version in #75

Apply openstack credentials for authentication

Hi @Lirt
Currently I'm doing some development with OpenStack credentials and, considering the environment, I wonder: have you ever considered using application credentials to authenticate with Keystone? Could you provide me with your suggestions or opinions? Thank you.


Cannot create backup with cinder backend

Hi,
I'm trying to use velero-plugin-for-openstack with velero. I installed Kubernetes in VMs on OpenStack Rocky. My OpenStack uses a Ceph backend. I deployed velero, including the plugin, from the CLI:

velero install --provider "community.openstack.org/openstack" --plugins lirt/velero-plugin-for-openstack:v0.3.0 --bucket velero-bucket  --no-secret

Authentication succeeds, but then there is an error when creating the backup store. It seems the plugin defaults to Swift, but I only want to use the Cinder driver. How can I do that?

time="2021-10-25T06:58:30Z" level=info msg="Trying to authenticate against Openstack using environment variables (including application credentials) or using files ~/.config/openstack/clouds.yaml, /etc/openstack/clouds.yaml and ./clouds.yaml" cmd=/plugins/velero-plugin-swift controller=backup-sync logSource="/go/src/github.com/Lirt/velero-plugin-swift/src/utils/auth.go:50" pluginName=velero-plugin-swift
time="2021-10-25T06:58:31Z" level=info msg="Authentication successful" cmd=/plugins/velero-plugin-swift controller=backup-sync logSource="/go/src/github.com/Lirt/velero-plugin-swift/src/utils/auth.go:70" pluginName=velero-plugin-swift
time="2021-10-25T06:58:31Z" level=error msg="Error getting backup store for this location" backupLocation=default controller=backup-sync error="rpc error: code = Unknown desc = failed to create swift storage object: No suitable endpoint could be found in the service catalog." logSource="pkg/controller/backup_sync_controller.go:175"

Hoping for your reply soon, @Lirt

support everest CSI from Huawei

Hi Lirt,
Thanks for your plugin for Openstack.
We are working on Open Telekom Cloud, which is built by Huawei on top of OpenStack.
When we tried to make a PV snapshot, the volume ID was null because of the CSI driver used.

We found a workaround by adding this:

} else if pv.Spec.CSI.Driver == "cinder.csi.openstack.org" || pv.Spec.CSI.Driver == "disk.csi.everest.io" {
		volumeID = pv.Spec.CSI.VolumeHandle

in src/cinder/block_store.go at line 191.

Would you like me to propose a PR, or do you think it's not needed?

Leverage secret file for authentication

Hi @Lirt

I notice that "--no-secret" is used when installing the plugin, which means users have to declare all the variables in the deployment or daemonset or something similar. That's fine locally, but somewhat inappropriate in production, I suppose. For example, I have to declare the OS_* variables in deployment.yaml to ensure authentication passes.

  • Have you ever considered supporting a secret file for authentication?
  • Besides, "openstack" is not yet present in the supported-providers list of the velero official docs. Would you like to add openstack to the doc for the convenience of others as well?
  • Would you give me any suggestions on how to deal with those OS variables if no secret file is used?

I suppose there are two parts to the modification if using a secret file on the velero side:

  • Add an openstack credentials file in pkg/install/daemonset.go and pkg/install/deployment.go. The value should be /credentials/cloud. The aws credentials file is described as below:
    • The cloud-credentials secret exists in the Velero server's namespace.
    • The cloud-credentials secret has a single key, cloud, whose value is the contents of the credentials-velero file.
    • The credentials-velero file is formatted properly and has the correct values:

[default]
aws_access_key_id=<your AWS access key ID>
aws_secret_access_key=<your AWS secret access key>

    • The cloud-credentials secret is defined as a volume for the Velero deployment.
    • The cloud-credentials secret is mounted into the Velero server pod at /credentials.
  • Add openstack-related information into site/docs/master/supported-providers.md.

Nevertheless, these are just my thoughts, and I'm obviously not 100% sure because I'm not an expert on this part. Could you please share your opinion or recommendation? Thanks.

Update all dependencies

There are 2 security vulnerabilities inherited from dependencies and old versions of libraries used.

[BUG] region_name is not correctly pulled from clouds.yaml

Describe the bug
When authenticating to OpenStack using a clouds.yaml file, the region_name is not correctly read, which results in the following error when backing up cinder-backed persistent volumes: failed to create cinder storage client: No suitable endpoint could be found in the service catalog. We currently use the AWS provider plugin to connect to object storage, so I cannot speak to how this affects backups of other resources.

Setting the variable OS_REGION_NAME in the velero pod to the correct region solves the problem.

Steps to reproduce the behavior

  1. Deploy configmap with clouds.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: openstack-credentials
  namespace: velero
data:
  clouds.yaml: |
    clouds:
      openstack:
        region_name: myRegion
        auth:
          auth_url: "myOpenstackUrl:PORT/v3"
          application_credential_name: <credentialName>
          application_credential_id: <credentialID>
          application_credential_secret: <credentialSecret>
        insecure: true
  2. Deploy velero with clouds.yaml mounted as a volume using the following snippet:
...

extraVolumes:
  - name: openstack-config
    configMap:
      name: openstack-credentials

extraVolumeMounts:
  - name: openstack-config
    mountPath: /etc/openstack

configuration:
  volumeSnapshotLocation:
  - name: default
    provider: community.openstack.org/openstack-cinder
    config:
      method: backup
      volumeTimeout: 5m
      snapshotTimeout: 5m
      cloneTimeout: 5m
      backupTimeout: 5m
      imageTimeout: 5m
      ensureDeleted: "true"
      ensureDeletedDelay: 10s
      cascadeDelete: "true"
      containerName: "myBackend"
  extraEnvVars:
    OS_CLOUD: openstack
    
...
    
  3. Create a PV with label backup-test: true
  4. Create a backup including persistent volumes backed by an openstack cinder volume.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: demo-backup-3
  namespace: velero
spec:
  labelSelector:
    matchLabels:
      backup-test: "true"
  includedNamespaces:
    - '*'
  includedResources:
    - persistentvolumes

Expected behavior
Velero would create a backup in openstack of persistent volumes backed by cinder in the cluster, using the region declared in the clouds.yaml file, just as it does if I add the env var OS_REGION_NAME: myRegion.

Used versions

  • Velero version(velero version): v1.13.0
  • Plugin version(kubectl describe pod velero-...): lirt/velero-plugin-for-openstack:v0.6.1
  • Kubernetes version(kubectl version): Server Version: v1.25.3+rke2r1
  • Openstack version(openstack --version): openstack 6.3.0

Link to velero or backup log
velero backup logs demo-backup-3

Log snippet around error:

time="2024-03-28T10:22:21Z" level=info msg="Trying to authenticate against OpenStack using environment variables (including application credentials) or using files ~/.config/openstack/clouds.yaml, /etc/openstack/clouds.yaml and ./clouds.yaml" backup=velero/demo-backup-3 cmd=/plugins/velero-plugin-for-openstack logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:68" pluginName=velero-plugin-for-openstack
time="2024-03-28T10:22:22Z" level=info msg="Authentication against identity endpoint https://myOpenstackUrl:PORT/ was successful" backup=velero/demo-backup-3 cmd=/plugins/velero-plugin-for-openstack logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:113" pluginName=velero-plugin-for-openstack
time="2024-03-28T10:22:22Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/demo-backup-3 error="rpc error: code = Unknown desc = failed to create cinder storage client: No suitable endpoint could be found in the service catalog." logSource="pkg/backup/item_backupper.go:591" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= persistentVolume=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a resource=persistentvolumes volumeSnapshotLocation=default
time="2024-03-28T10:22:22Z" level=info msg="Persistent volume is not a supported volume type for Velero-native volumeSnapshotter snapshot, skipping." backup=velero/demo-backup-3 logSource="pkg/backup/item_backupper.go:612" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= persistentVolume=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a resource=persistentvolumes

Full log from backup:

time="2024-03-28T10:22:21Z" level=info msg="Setting up backup temp file" backup=velero/demo-backup-3 logSource="pkg/controller/backup_controller.go:620"
time="2024-03-28T10:22:21Z" level=info msg="Setting up plugin manager" backup=velero/demo-backup-3 logSource="pkg/controller/backup_controller.go:627"
time="2024-03-28T10:22:21Z" level=info msg="Getting backup item actions" backup=velero/demo-backup-3 logSource="pkg/controller/backup_controller.go:631"
time="2024-03-28T10:22:21Z" level=info msg="Setting up backup store to check for backup existence" backup=velero/demo-backup-3 logSource="pkg/controller/backup_controller.go:636"
time="2024-03-28T10:22:21Z" level=info msg="Writing backup version file" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:197"
time="2024-03-28T10:22:21Z" level=info msg="Including namespaces: *" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:203"
time="2024-03-28T10:22:21Z" level=info msg="Excluding namespaces: <none>" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:204"
time="2024-03-28T10:22:21Z" level=info msg="Including resources: persistentvolumes" backup=velero/demo-backup-3 logSource="pkg/util/collections/includes_excludes.go:506"
time="2024-03-28T10:22:21Z" level=info msg="Excluding resources: <none>" backup=velero/demo-backup-3 logSource="pkg/util/collections/includes_excludes.go:507"
time="2024-03-28T10:22:21Z" level=info msg="Backing up all volumes using pod volume backup: false" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:222"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=pods
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=pods
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=persistentvolumeclaims
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=persistentvolumeclaims
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="Listing items" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:323" namespace= resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="list for groupResource persistentvolumes was not paginated" backup=velero/demo-backup-3 logSource="pkg/backup/item_collector.go:496"
time="2024-03-28T10:22:21Z" level=info msg="Retrieved 1 items" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:354" namespace= resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=resourcequotas
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=resourcequotas
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=podtemplates
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=podtemplates
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=limitranges
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=limitranges
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=secrets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=secrets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=services
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=services
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=nodes
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=nodes
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=serviceaccounts
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=serviceaccounts
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=replicationcontrollers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=replicationcontrollers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=endpoints
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=endpoints
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=configmaps
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=configmaps
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=events
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=events
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:196" resource=namespaces
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=v1 logSource="pkg/backup/item_collector.go:255" resource=namespaces
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=apiregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=apiregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=apiservices
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=apiregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=apiservices
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:196" resource=deployments
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:255" resource=deployments
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:196" resource=controllerrevisions
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:255" resource=controllerrevisions
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:196" resource=replicasets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:255" resource=replicasets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:196" resource=statefulsets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:255" resource=statefulsets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:196" resource=daemonsets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=apps/v1 logSource="pkg/backup/item_collector.go:255" resource=daemonsets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=events.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=events.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=events
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=events.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=events
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=autoscaling/v2 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=autoscaling/v2 logSource="pkg/backup/item_collector.go:196" resource=horizontalpodautoscalers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=autoscaling/v2 logSource="pkg/backup/item_collector.go:255" resource=horizontalpodautoscalers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=batch/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=batch/v1 logSource="pkg/backup/item_collector.go:196" resource=cronjobs
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=batch/v1 logSource="pkg/backup/item_collector.go:255" resource=cronjobs
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=batch/v1 logSource="pkg/backup/item_collector.go:196" resource=jobs
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=batch/v1 logSource="pkg/backup/item_collector.go:255" resource=jobs
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=certificates.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=certificates.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=certificatesigningrequests
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=certificates.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=certificatesigningrequests
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=networking.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=networking.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=ingressclasses
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=networking.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=ingressclasses
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=networking.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=networkpolicies
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=networking.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=networkpolicies
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=networking.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=ingresses
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=networking.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=ingresses
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=policy/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=policy/v1 logSource="pkg/backup/item_collector.go:196" resource=poddisruptionbudgets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=policy/v1 logSource="pkg/backup/item_collector.go:255" resource=poddisruptionbudgets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=clusterroles
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=clusterroles
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=clusterrolebindings
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=clusterrolebindings
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=roles
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=roles
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=rolebindings
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=rbac.authorization.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=rolebindings
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=storageclasses
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=storageclasses
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=csistoragecapacities
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=csistoragecapacities
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=csidrivers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=csidrivers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=csinodes
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=csinodes
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=volumeattachments
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=storage.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=volumeattachments
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=admissionregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=admissionregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=mutatingwebhookconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=admissionregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=mutatingwebhookconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=admissionregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=validatingwebhookconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=admissionregistration.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=validatingwebhookconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=apiextensions.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=apiextensions.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=customresourcedefinitions
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=apiextensions.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=customresourcedefinitions
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=scheduling.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=scheduling.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=priorityclasses
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=scheduling.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=priorityclasses
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=coordination.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=coordination.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=leases
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=coordination.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=leases
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=node.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=node.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=runtimeclasses
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=node.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=runtimeclasses
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=discovery.k8s.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=discovery.k8s.io/v1 logSource="pkg/backup/item_collector.go:196" resource=endpointslices
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=discovery.k8s.io/v1 logSource="pkg/backup/item_collector.go:255" resource=endpointslices
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=flowcontrol.apiserver.k8s.io/v1beta2 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=flowcontrol.apiserver.k8s.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=flowschemas
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=flowcontrol.apiserver.k8s.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=flowschemas
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=flowcontrol.apiserver.k8s.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=prioritylevelconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=flowcontrol.apiserver.k8s.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=prioritylevelconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=acme.cert-manager.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=acme.cert-manager.io/v1 logSource="pkg/backup/item_collector.go:196" resource=challenges
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=acme.cert-manager.io/v1 logSource="pkg/backup/item_collector.go:255" resource=challenges
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=acme.cert-manager.io/v1 logSource="pkg/backup/item_collector.go:196" resource=orders
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=acme.cert-manager.io/v1 logSource="pkg/backup/item_collector.go:255" resource=orders
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:196" resource=clusterissuers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:255" resource=clusterissuers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:196" resource=certificaterequests
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:255" resource=certificaterequests
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:196" resource=issuers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:255" resource=issuers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:196" resource=certificates
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=cert-manager.io/v1 logSource="pkg/backup/item_collector.go:255" resource=certificates
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=globalnetworksets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=globalnetworksets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=bgppeers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=bgppeers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=ippools
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=ippools
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=bgpconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=bgpconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=caliconodestatuses
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=caliconodestatuses
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=blockaffinities
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=blockaffinities
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=felixconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=felixconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=ipreservations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=ipreservations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=ipamhandles
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=ipamhandles
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=ipamconfigs
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=ipamconfigs
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=networksets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=networksets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=networkpolicies
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=networkpolicies
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=globalnetworkpolicies
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=globalnetworkpolicies
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=kubecontrollersconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=kubecontrollersconfigurations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=ipamblocks
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=ipamblocks
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=hostendpoints
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=hostendpoints
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:196" resource=clusterinformations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=crd.projectcalico.org/v1 logSource="pkg/backup/item_collector.go:255" resource=clusterinformations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=helm.cattle.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=helm.cattle.io/v1 logSource="pkg/backup/item_collector.go:196" resource=helmchartconfigs
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=helm.cattle.io/v1 logSource="pkg/backup/item_collector.go:255" resource=helmchartconfigs
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=helm.cattle.io/v1 logSource="pkg/backup/item_collector.go:196" resource=helmcharts
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=helm.cattle.io/v1 logSource="pkg/backup/item_collector.go:255" resource=helmcharts
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=k3s.cattle.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=k3s.cattle.io/v1 logSource="pkg/backup/item_collector.go:196" resource=addons
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=k3s.cattle.io/v1 logSource="pkg/backup/item_collector.go:255" resource=addons
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=kustomize.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=kustomize.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:196" resource=kustomizations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=kustomize.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:255" resource=kustomizations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:196" resource=receivers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:255" resource=receivers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=providers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=providers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=alerts
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=notification.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=alerts
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:196" resource=poolers
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:255" resource=poolers
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:196" resource=scheduledbackups
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:255" resource=scheduledbackups
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:196" resource=backups
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:255" resource=backups
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:196" resource=clusters
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=postgresql.cnpg.io/v1 logSource="pkg/backup/item_collector.go:255" resource=clusters
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:196" resource=gitrepositories
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1 logSource="pkg/backup/item_collector.go:255" resource=gitrepositories
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=ocirepositories
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=ocirepositories
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=helmrepositories
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=helmrepositories
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=buckets
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=buckets
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:196" resource=helmcharts
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=source.toolkit.fluxcd.io/v1beta2 logSource="pkg/backup/item_collector.go:255" resource=helmcharts
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=downloadrequests
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=downloadrequests
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=volumesnapshotlocations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=volumesnapshotlocations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=deletebackuprequests
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=deletebackuprequests
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=backups
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=backups
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=schedules
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=schedules
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=podvolumerestores
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=podvolumerestores
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=serverstatusrequests
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=serverstatusrequests
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=backuprepositories
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=backuprepositories
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=restores
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=restores
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=backupstoragelocations
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=backupstoragelocations
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:196" resource=podvolumebackups
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v1 logSource="pkg/backup/item_collector.go:255" resource=podvolumebackups
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=velero.io/v2alpha1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v2alpha1 logSource="pkg/backup/item_collector.go:196" resource=datauploads
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v2alpha1 logSource="pkg/backup/item_collector.go:255" resource=datauploads
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=velero.io/v2alpha1 logSource="pkg/backup/item_collector.go:196" resource=datadownloads
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=velero.io/v2alpha1 logSource="pkg/backup/item_collector.go:255" resource=datadownloads
time="2024-03-28T10:22:21Z" level=info msg="Getting items for group" backup=velero/demo-backup-3 group=helm.toolkit.fluxcd.io/v2beta1 logSource="pkg/backup/item_collector.go:105"
time="2024-03-28T10:22:21Z" level=info msg="Getting items for resource" backup=velero/demo-backup-3 group=helm.toolkit.fluxcd.io/v2beta1 logSource="pkg/backup/item_collector.go:196" resource=helmreleases
time="2024-03-28T10:22:21Z" level=info msg="Skipping resource because it's excluded" backup=velero/demo-backup-3 group=helm.toolkit.fluxcd.io/v2beta1 logSource="pkg/backup/item_collector.go:255" resource=helmreleases
time="2024-03-28T10:22:21Z" level=info msg="Collected 1 items matching the backup spec from the Kubernetes API (actual number of items backed up may be more or less depending on velero.io/exclude-from-backup annotation, plugins returning additional related items to back up, etc.)" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:280" progress=
time="2024-03-28T10:22:21Z" level=info msg="Processing item" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:365" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= progress= resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="Backing up item" backup=velero/demo-backup-3 logSource="pkg/backup/item_backupper.go:179" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="Executing takePVSnapshot" backup=velero/demo-backup-3 logSource="pkg/backup/item_backupper.go:509" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="label \"topology.kubernetes.io/zone\" is not present on PersistentVolume, checking deprecated label..." backup=velero/demo-backup-3 logSource="pkg/backup/item_backupper.go:567" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= persistentVolume=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="label \"failure-domain.beta.kubernetes.io/zone\" is not present on PersistentVolume" backup=velero/demo-backup-3 logSource="pkg/backup/item_backupper.go:571" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= persistentVolume=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="zone info from nodeAffinity requirements: az2, key: topology.cinder.csi.openstack.org/zone" backup=velero/demo-backup-3 logSource="pkg/backup/item_backupper.go:574" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= persistentVolume=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a resource=persistentvolumes
time="2024-03-28T10:22:21Z" level=info msg="BlockStore.Init called" backup=velero/demo-backup-3 cmd=/plugins/velero-plugin-for-openstack config="map[backupTimeout:5m cascadeDelete:true cloneTimeout:5m containerName:backups ensureDeleted:true ensureDeletedDelay:10s imageTimeout:5m method:backup snapshotTimeout:5m volumeTimeout:5m]" logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/cinder/block_store.go:117" pluginName=velero-plugin-for-openstack
time="2024-03-28T10:22:21Z" level=info msg="Trying to authenticate against OpenStack using environment variables (including application credentials) or using files ~/.config/openstack/clouds.yaml, /etc/openstack/clouds.yaml and ./clouds.yaml" backup=velero/demo-backup-3 cmd=/plugins/velero-plugin-for-openstack logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:68" pluginName=velero-plugin-for-openstack
time="2024-03-28T10:22:22Z" level=info msg="Authentication against identity endpoint https://myOpenstackUrl:PORT/ was successful" backup=velero/demo-backup-3 cmd=/plugins/velero-plugin-for-openstack logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:113" pluginName=velero-plugin-for-openstack
time="2024-03-28T10:22:22Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/demo-backup-3 error="rpc error: code = Unknown desc = failed to create cinder storage client: No suitable endpoint could be found in the service catalog." logSource="pkg/backup/item_backupper.go:591" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= persistentVolume=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a resource=persistentvolumes volumeSnapshotLocation=default
time="2024-03-28T10:22:22Z" level=info msg="Persistent volume is not a supported volume type for Velero-native volumeSnapshotter snapshot, skipping." backup=velero/demo-backup-3 logSource="pkg/backup/item_backupper.go:612" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= persistentVolume=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a resource=persistentvolumes
time="2024-03-28T10:22:22Z" level=info msg="Backed up 1 items out of an estimated total of 1 (estimate will change throughout the backup)" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:405" name=pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a namespace= progress= resource=persistentvolumes
time="2024-03-28T10:22:22Z" level=info msg="hookTracker: map[], hookAttempted: 0, hookFailed: 0" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:436"
time="2024-03-28T10:22:22Z" level=info msg="Summary for skipped PVs: [{\"name\":\"pvc-d20dfb58-a6a7-45bb-b65b-c7cfac3f557a\",\"reasons\":[{\"approach\":\"volumeSnapshot\",\"reason\":\"no applicable volumesnapshotter found\"}]}]" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:445"
time="2024-03-28T10:22:22Z" level=info msg="Backed up a total of 1 items" backup=velero/demo-backup-3 logSource="pkg/backup/backup.go:449" progress=
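The "No suitable endpoint could be found in the service catalog" error above usually means the token's service catalog contains no block-storage (volumev3) endpoint for the configured region. This can be checked from any machine with the same OpenStack credentials (a diagnostic sketch; the service name may differ per cloud):

```
# List the service catalog for the authenticated project;
# the cinder block-storage service usually appears as "volumev3"/"cinderv3".
openstack catalog list

# Show the endpoints registered for the block-storage service.
openstack endpoint list --service volumev3 --interface public
```

If no endpoint is listed, the cinder block store cannot be initialized no matter how the plugin is configured.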

Restore partially failed due to 'failed to allocate requested HealthCheck NodePort'

Hi @Lirt

I met an error when restoring and I'm not sure whether it comes from the plugin side. I'm posting it here and would appreciate your help when you have time. Thanks in advance.

Environment

  • velero: 1.4.0
  • velero-plugin-for-openstack: 0.2.0

How to reproduce

  1. Create a backup via velero backup create test.
  2. Delete pvc, pv manually to produce a so-called disaster.
  3. Restore with the backup via velero restore create restore-test --from-backup test.

Symptom

The PV and PVC are restored successfully, but there are errors in the restore log. The restore status is "partially failed".

time="2021-03-31T07:43:03Z" level=info msg="error restoring test-elb: Internal error occurred: failed to allocate requested HealthCheck NodePort 32034: provided port is already allocated" logSource="pkg/restore/restore.go:1152" restore=default/restore-3-31-1
time="2021-03-31T07:43:06Z" level=info msg="error restoring haproxy-nlb: Internal error occurred: failed to allocate requested HealthCheck NodePort 31568: provided port is already allocated" logSource="pkg/restore/restore.go:1152" restore=default/restore-3-31-1
time="2021-03-31T07:43:07Z" level=info msg="error restoring kong-proxy: Internal error occurred: failed to allocate requested HealthCheck NodePort 30983: provided port is already allocated" logSource="pkg/restore/restore.go:1152" restore=default/restore-3-31-1

Questions

  • Why does the load balancer need to be restored at all? What I did was delete the PV and PVC, and what I expected was that the load balancer restoration would be skipped, just like any other unaffected resource.
  • It looks like Velero is trying to create a new load balancer, which fails because the old one is still running and, of course, its port is already allocated. Could you please share more insight into this?
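If the existing LoadBalancer Services should not be touched at all, one workaround (a sketch using standard velero CLI flags, nothing specific to this plugin) is to exclude Service objects from the restore:

```
# Restore everything from the backup except Service objects, so existing
# LoadBalancer services (and their already-allocated NodePorts) are left alone.
velero restore create restore-test \
  --from-backup test \
  --exclude-resources services
```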

BTW, I put haproxy-nlb service yaml here for reference.

apiVersion: v1
kind: Service
metadata:
  annotations:
    dns.gardener.cloud/class: garden
    dns.gardener.cloud/dnsnames: '*.com'
    dns.gardener.cloud/ttl: "500"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"dns.gardener.cloud/class":"garden","dns.gardener.cloud/dnsnames":"*.com","dns.gardener.cloud/ttl":"500","service.beta.kubernetes.io/aws-load-balancer-type":"nlb"},"labels":{"run":"haproxy-ingress-nlb"},"name":"haproxy-nlb","namespace":"default"},"spec":{"externalTrafficPolicy":"Local","ports":[{"name":"https-external-lb","port":443,"protocol":"TCP","targetPort":443}],"selector":{"run":"haproxy-ingress-nlb"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  creationTimestamp: "2021-01-14T05:41:36Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  - garden.dns.gardener.cloud/service-dns
  labels:
    run: haproxy-ingress-nlb
  name: haproxy-nlb
  namespace: default
  resourceVersion: "33162"
  selfLink: /api/v1/namespaces/default/services/haproxy-nlb
  uid: a17094c4-0f32-4475-80a5-3d34a1cefa72
spec:
  clusterIP: *.*.*.*
  externalTrafficPolicy: Local
  healthCheckNodePort: 31568
  ports:
  - name: https-external-lb
    nodePort: 30210
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    run: haproxy-ingress-nlb
  sessionAffinity: None
  type: LoadBalancer

Unable to create a backup in OpenStack - temporary URL / volume in-use issue

Describe the bug
I'm deploying Velero using the Helm chart, and I'm not able to successfully create a backup. Backups always end up in the PartiallyFailed state.

Steps to reproduce the behavior

  1. Deploy using helm chart
  2. Create a backup using:
velero backup create test4  --include-namespaces=default --default-volumes-to-fs-backup --snapshot-volumes --ttl 30m

Expected behavior
A successful backup is created and uploaded to openstack Shared Object Storage.

Used versions

  • Velero version(velero version): velero/velero:v1.13.0
  • Plugin version(kubectl describe pod velero-...): lirt/velero-plugin-for-openstack:v0.6.0, velero/velero-plugin-for-csi:v0.7.0
  • Kubernetes version(kubectl version): 1.27.8
  • Openstack version:

Link to velero or backup log

time="2024-02-18T03:28:42Z" level=info msg="Authentication will be done for cloud main" cmd=/plugins/velero-plugin-for-openstack controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:33" pluginName=velero-plugin-for-openstack
time="2024-02-18T03:28:42Z" level=info msg="Trying to authenticate against OpenStack using environment variables (including application credentials) or using files ~/.config/openstack/clouds.yaml, /etc/openstack/clouds.yaml and ./clouds.yaml" cmd=/plugins/velero-plugin-for-openstack controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:68" pluginName=velero-plugin-for-openstack
time="2024-02-18T03:28:42Z" level=info msg="Authentication against identity endpoint https://identity-3.eu-nl-1.cloud.sap/v3/ was successful" cmd=/plugins/velero-plugin-for-openstack controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:113" pluginName=velero-plugin-for-openstack
time="2024-02-18T03:28:42Z" level=info msg="Successfully created object storage service client" cmd=/plugins/velero-plugin-for-openstack controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/swift/object_store.go:66" pluginName=velero-plugin-for-openstack region=eu-nl-1
time="2024-02-18T03:28:42Z" level=info msg="ObjectStore.CreateSignedURL called" cmd=/plugins/velero-plugin-for-openstack container=velero-backups-can controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/swift/object_store.go:252" object=backups/test4/test4-results.gz pluginName=velero-plugin-for-openstack ttl=6e+11
time="2024-02-18T03:28:42Z" level=warning msg="fail to get Backup metadata file's download URL {BackupResults test4}, retry later: rpc error: code = Unknown desc = failed to create temporary URL for \"backups/test4/test4-results.gz\" object in \"velero-backups-can\" container: Unable to obtain the Temp URL key." controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="pkg/controller/download_request_controller.go:206"
time="2024-02-18T03:28:43Z" level=error msg="Reconciler error" controller=downloadrequest controllerGroup=velero.io controllerKind=DownloadRequest downloadRequest="{\"name\":\"test4-4036bc95-faac-472e-ac17-f16b167c98cc\",\"namespace\":\"velero\"}" error="rpc error: code = Unknown desc = failed to create temporary URL for \"backups/test4/test4-results.gz\" object in \"velero-backups-can\" container: Unable to obtain the Temp URL key." error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/download_request_controller.go:207" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*downloadRequestReconciler).Reconcile" logSource="/go/pkg/mod/github.com/bombsimon/logrusr/[email protected]/logrusr.go:123" name=test4-4036bc95-faac-472e-ac17-f16b167c98cc namespace=velero reconcileID="\"1041cd96-1e2c-4ad0-823d-2d0c1438b387\""
time="2024-02-18T03:28:43Z" level=info msg="ObjectStore.Init called" cmd=/plugins/velero-plugin-for-openstack config="map[bucket:velero-backups-can cloud:main prefix: region:eu-nl-1]" controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/swift/object_store.go:38" pluginName=velero-plugin-for-openstack
time="2024-02-18T03:28:43Z" level=info msg="Authentication will be done for cloud main" cmd=/plugins/velero-plugin-for-openstack controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:33" pluginName=velero-plugin-for-openstack
time="2024-02-18T03:28:43Z" level=info msg="Trying to authenticate against OpenStack using environment variables (including application credentials) or using files ~/.config/openstack/clouds.yaml, /etc/openstack/clouds.yaml and ./clouds.yaml" cmd=/plugins/velero-plugin-for-openstack controller=download-request downloadRequest=velero/test4-4036bc95-faac-472e-ac17-f16b167c98cc logSource="/go/src/github.com/Lirt/velero-plugin-for-openstack/src/utils/auth.go:68" pluginName=velero-plugin-for-openstack

My guess is that it's related to that step, but I'm not able to exec into the container to run those commands.
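The "Unable to obtain the Temp URL key" error means the Swift account has no Temp-URL-Key metadata set. It can be set once from any machine with the same OpenStack credentials, so there is no need to exec into the container (a sketch; choose your own random secret value):

```
# Set the account-level temporary URL signing key.
swift post -m "Temp-URL-Key:<random-secret>"

# Or with the unified OpenStack client:
openstack object store account set --property Temp-URL-Key=<random-secret>
```

After the key is set, the plugin should be able to sign temporary download URLs for backup objects.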

[FEAT] "clone" as a "snapshot"

Is your feature request related to a problem? Please describe.

Both Cinder volume and Manila share snapshots prevent the source resource from being deleted, causing the corresponding PV/PVC Kubernetes resources to get stuck on deletion.

Describe the solution you'd like

I propose an option (or a different plugin key, e.g. a new snapshot driver name with a -clone suffix) that performs a full volume/share clone. This would avoid locking the source volume/share and allow it to be deleted.
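For reference, a `method` key and a `cloneTimeout` appear in the BlockStore.Init config map logged near the top of this page, which suggests this landed as a config option. A sketch of what the VolumeSnapshotLocation might look like (the provider name and the `clone` value are assumptions, check the plugin README for your version):

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: community.openstack.org/openstack   # assumption: provider name per plugin README
  config:
    method: clone        # assumption: full volume/share clone instead of a snapshot
    cloneTimeout: 5m
```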

[FEAT] Add configurable timeouts

Is your feature request related to a problem? Please describe.

The current logic has hardcoded 5-minute timeouts when waiting for volumes/snapshots. Snapshot/volume creation can sometimes take longer, which causes backups/restores to fail.

Describe the solution you'd like

Add configurable timeouts into a backupStorageLocation provider config.
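Judging by the BlockStore.Init config map logged near the top of this page (`backupTimeout`, `cloneTimeout`, `imageTimeout`, `snapshotTimeout`, `volumeTimeout`, `ensureDeletedDelay`), this appears to have been implemented as snapshot-location config keys. A sketch under that assumption (provider name may differ per plugin version):

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: community.openstack.org/openstack   # assumption: provider name per plugin README
  config:
    snapshotTimeout: 10m   # wait longer for snapshot creation
    volumeTimeout: 10m
    cloneTimeout: 10m
    backupTimeout: 10m
    imageTimeout: 10m
```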

"Invalid input for field/attribute metadata" when creating a snapshot

Hi @Lirt

Sorry to trouble you again but I meet an error when I try to create a volume backup. Here are the partial logs:

time="2021-03-22T06:01:00Z" level=info msg="Trying to create snapshot%!(EXTRA string=e7ec5394-114b-4181-9fa0-a81f3c1c6d76.snap.11963748953446345529)" backup=default/velero-backup-pvs-20210322060028 cmd=/plugins/velero-plugin-openstack logSource="/go/src/github.com/Lirt/velero-plugin-openstack/src/cinder/block_store.go:144" pluginName=velero-plugin-openstack
time="2021-03-22T06:01:00Z" level=info msg="1 errors encountered backup up item" backup=default/velero-backup-pvs-20210322060028 logSource="pkg/backup/backup.go:451" name=storage-grafana-loki
time="2021-03-22T06:01:00Z" level=error msg="Error backing up item" backup=default/velero-backup-pvs-20210322060028 error="error taking snapshot of volume: rpc error: code = Unknown desc = Bad request with: [POST https://volume-3.test.com:443/v3/x/snapshots], error message: {\"badRequest\": {\"message\": \"Invalid input for field/attribute metadata. Value: {u'velero.io/pv': u'pvc-6ae4002e-f715-4887-a6b4-41377d984f3c', u'velero.io/storage-location': u'default', u'helm.sh/chart': u'velero-test', u'app.kubernetes.io/name': u'velero', u'velero.io/backup': u'velero-backup-pvs-20210322060028', u'test/is-provisioned': u'True', u'velero.io/schedule-name': u'velero-backup-pvs'}. u'app.kubernetes.io/name', u'test/is-provisioned', u'helm.sh/chart', u'velero.io/backup', u'velero.io/pv', u'velero.io/schedule-name', u'velero.io/storage-location' do not match any of the regexes: '^[a-zA-Z0-9-_:. ]{1,255}$'\", \"code\": 400}}" logSource="pkg/backup/backup.go:455" name=storage-grafana-loki

I did some investigation, but found no explicit solution. I guess it is likely due to the disallowed '/' character, or something else. Moreover, I found this https://bugs.launchpad.net/cinder/+bug/1798798 bug reported for cinder and I'm not sure whether it is relevant. Could you please check it when you have time? Thanks in advance.
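For what it's worth, the regex in the error message (`^[a-zA-Z0-9-_:. ]{1,255}$`) disallows '/', which every label key like `velero.io/pv` contains. A plugin-side fix would have to sanitize keys before sending them as Cinder snapshot metadata; a minimal sketch of that idea (hypothetical, not the plugin's actual code):

```shell
# Replace '/' (disallowed by cinder's metadata-key regex) with '.'
key='velero.io/pv'
sanitized=$(printf '%s' "$key" | tr '/' '.')
echo "$sanitized"   # -> velero.io.pv
```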

Velero snapshots not deleted after expired Velero backups are deleted

What is happening?
Velero snapshots are not deleted after expired Velero backups are deleted.

What is expected?
After the TTL for a Velero backup has expired, the backup gets deleted and all associated Velero snapshots get deleted with it.

Details:
We have a Velero backup schedule that creates a backup every 15 minutes. Each of these backups has a TTL of only 2 hours. After a backup has expired, it goes into a "Deleting" state and is eventually deleted. The problem we are seeing is that the associated Velero snapshots are not deleted. This creates a large number of orphaned snapshots associated with Velero backups that no longer exist.
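Until the root cause is found, orphaned snapshots can at least be spotted from the OpenStack side, since the plugin tags snapshots with metadata such as velero.io/backup (a diagnostic sketch; compare the two listings manually or with your own scripting):

```
# List all Cinder snapshots with their properties and look for
# velero.io/backup values that no longer match an existing backup.
openstack volume snapshot list --long

# List the backups Velero still knows about.
velero backup get
```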

No `arm64` build available

Hi,

It's not possible to use this plugin on a Raspberry Pi 4 cluster, as it would require an arm64 build.
Could one be built (for example with a custom tag like :v1.3.0-arm64) and published?

Hope you can find some time for this small issue :)
Thanks for the plugin!
Clément
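Until official multi-arch images exist, a custom arm64 image can be cross-built from the repository with Docker Buildx (a sketch; the registry, image name, and tag are placeholders):

```
# Cross-build the plugin image for arm64 and push it to your own registry.
docker buildx build \
  --platform linux/arm64 \
  -t myregistry/velero-plugin-for-openstack:v1.3.0-arm64 \
  --push .
```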

failed to create cinder storage client

Hi.

I'm currently on OVH cloud and want to create snapshots for my PVCs.

I'm facing the following errors

time="2021-05-06T15:15:02Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/platform-backup-3 error="rpc error: code = Unknown desc = failed to create cinder storage client: No suitable endpoint could be found in the service catalog." logSource="pkg/backup/item_backupper.go:449" name=ovh-managed-kubernetes-u359ha-pvc-14db48b5-590d-4119-a063-c64832cc7231 namespace= persistentVolume=ovh-managed-kubernetes-u359ha-pvc-14db48b5-590d-4119-a063-c64832cc7231 resource=persistentvolumes volumeSnapshotLocation=default
time="2021-05-06T15:25:03Z" level=error msg="Timed out awaiting reconciliation of volumesnapshot platform-production/velero-mysql-claim-lcck5" backup=velero/platform-backup-3 cmd=/plugins/velero-plugin-for-csi logSource="/go/src/velero-plugin-for-csi/internal/util/util.go:188" pluginName=velero-plugin-for-csi
time="2021-05-06T15:25:03Z" level=info msg="1 errors encountered backup up item" backup=velero/platform-backup-3 logSource="pkg/backup/backup.go:427" name=db-544987c5f7-nv777
time="2021-05-06T15:25:03Z" level=error msg="Error backing up item" backup=velero/platform-backup-3 error="error executing custom action (groupResource=volumesnapshots.snapshot.storage.k8s.io, namespace=platform-production, name=velero-mysql-claim-lcck5): rpc error: code = Unknown desc = timed out waiting for the condition" logSource="pkg/backup/backup.go:431" name=db-544987c5f7-nv777

I have the following volume snapshotclass:

apiVersion: snapshot.storage.k8s.io/v1beta1
deletionPolicy: Retain
driver: cinder.csi.openstack.org
kind: VolumeSnapshotClass
metadata:
  labels:
    velero.io/csi-volumesnapshot-class: "true"
  name: csi-cinder-snapclass

and the following StorageClass:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: csi-cinder-high-speed
parameters:
  availability: nova
  type: high-speed
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate

Issues accessing the Swift object store

Hi,

we would like to use Velero on Kubernetes clusters built on our OpenStack cloud platform, where the object storage is based on Ceph radosgw.

We have done the following:

  • install velero cli
  • install velero server:
$ velero install --provider "community.openstack.org/openstack" --plugins lirt/velero-plugin-for-openstack:v0.2.1 --bucket velero --no-secret
  • create public container "velero" on the project hosting the k8s cluster.
  • create "velero" user with Member role on the project.
  • create secret "openstack-cloud-credentials" with velero user credentials:
kubectl -n velero create secret generic openstack-cloud-credentials \
  --from-literal OS_REGION_NAME=$OS_REGION_NAME \
  --from-literal OS_USER_DOMAIN_NAME=$OS_USER_DOMAIN_NAME \
  --from-literal OS_PASSWORD=$OS_PASSWORD \
  --from-literal OS_AUTH_URL=$OS_AUTH_URL \
  --from-literal OS_USERNAME=$OS_USERNAME \
  --from-literal OS_INTERFACE=$OS_INTERFACE \
  --from-literal OS_PROJECT_NAME=$OS_PROJECT_NAME \
  --from-literal OS_PROJECT_ID=$OS_PROJECT_ID \
  --from-literal OS_DOMAIN_NAME=$OS_DOMAIN_NAME \
  -o yaml

We have checked that with this set of credentials we manage to authenticate to the project using both the openstack and swift clients.

  • modify the velero deployment to inject the OpenStack credentials into the velero container:
kubectl edit deployment velero -n velero
...
    env:
        - name: OS_AUTH_URL
          valueFrom:
            secretKeyRef:
              key: OS_AUTH_URL
              name: openstack-cloud-credentials
       ...

The setup looks OK; however, Velero backups fail:

$ velero backup create wordpress-backup --include-namespaces wordpress
Backup request "wordpress-backup" submitted successfully.
Run `velero backup describe wordpress-backup` or `velero backup logs wordpress-backup` for more details.

$ velero get backup
NAME               STATUS   ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
wordpress-backup   Failed   0        0          2021-07-28 14:48:26 +0000 UTC   29d       default            <none>

$ velero describe backup wordpress-backup
Name:         wordpress-backup
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.16.15
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=16
Phase:  Failed (run `velero backup logs wordpress-backup` for more information)
Errors:    0
Warnings:  0
Namespaces:
  Included:  wordpress
  Excluded:  <none>
Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto
Label selector:  <none>
Storage Location:  default
Velero-Native Snapshot PVs:  auto
TTL:  720h0m0s
Hooks:  <none>
Backup Format Version:  1.1.0
Started:    2021-07-28 14:48:26 +0000 UTC
Completed:  2021-07-28 14:48:30 +0000 UTC
Expiration:  2021-08-27 14:48:26 +0000 UTC
Total items to be backed up:  19
Items backed up:              19
Velero-Native Snapshots:  2 of 2 snapshots completed successfully (specify --details for more information)

In particular, Velero manages to take snapshots of the persistent volumes, but cannot write into the object store.

Among the velero logs we found these messages:

time="2021-07-28T14:48:30Z" level=error msg="Error uploading log file" backup=wordpress-backup bucket=velero error="rpc error: code = Unknown desc = failed to create new object in bucket velero with key backups/wordpress-backup/wordpress-backup-logs.gz: Resource not found" logSource="pkg/persistence/object_store.go:231" prefix=

time="2021-07-28T14:48:30Z" level=error msg="backup failed" controller=backup error="rpc error: code = Unknown desc = failed to create new object in bucket velero with key backups/wordpress-backup/velero-backup.json: Resource not found" key=velero/wordpress-backup logSource="pkg/controller/backup_controller.go:281"

It looks like Velero can read the container: we uploaded a test object to the container and found that Velero complains about it:

time="2021-07-28T12:35:06Z" level=error msg="Current backup storage locations available/unavailable/unknown: 0/1/0, Backup storage location \"default\" is unavailable: Backup store contains invalid top-level directories: [test.png])" controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:164"
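For context on that last log line: Velero validates the top-level directories in its bucket and marks the storage location unavailable if it finds anything unexpected there, which is why the stray test.png triggers the complaint. A minimal sketch of that check (the directory list here is an illustrative subset, not Velero's exact implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// validTopLevelDirs lists prefixes Velero expects at the root of its
// bucket (illustrative subset; any other top-level entry makes the
// backup storage location "unavailable", as in the log above).
var validTopLevelDirs = map[string]bool{
	"backups":  true,
	"restores": true,
	"restic":   true,
	"metadata": true,
}

// isValidKey reports whether an object key lives under an expected
// top-level directory.
func isValidKey(key string) bool {
	top := strings.SplitN(key, "/", 2)[0]
	return validTopLevelDirs[top]
}

func main() {
	fmt.Println(isValidKey("backups/wordpress-backup/velero-backup.json")) // true
	fmt.Println(isValidKey("test.png"))                                    // false
}
```

So reads against the container clearly work; the failure is only on object creation under keys like backups/wordpress-backup/....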

Can you help us understand why Velero cannot use the object store to complete the backups?

Thank you in advance,

Alberto

[BUG] Random suffix for snapshot/backup names is not random

Describe the bug

Go versions prior to 1.20 use a fixed default seed for math/rand's global source (math/rand.globalRand), so backups/snapshots get the same suffix in their names across process restarts.

Try running https://go.dev/play/p/qKmDuwawK4H?v=goprev (Go 1.19) several times; every time it will print:

5577006791947779410
8674665223082153551
15352856648520921629
13260572831089785859
3916589616287113937

Compare to https://go.dev/play/p/qKmDuwawK4H (Go 1.20), where the output changes on every run.

Steps to reproduce the behavior

Restore a snapshot, then restart Velero and restore another snapshot; the resulting snapshot names will have the same suffix:

2d9835a7-65e1-4728-97a0-9010a0e2a418.backup.5577006791947779410
2d9835a7-65e1-4728-97a0-9010a0e2a418.backup.5577006791947779410

Expected behavior

Velero plugin must generate unpredictable suffixes for volume/snapshot names.

This can be done by upgrading to Go 1.20, or by using a private rand source with its own seed derived from the current timestamp.

See also golang/go#54880
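The second option can be sketched as follows (a hypothetical fix, not the plugin's actual code; randomSuffix and seededRand are illustrative names):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// seededRand is a private source seeded once at startup, so suffixes
// differ across process restarts even on Go < 1.20, where the global
// math/rand source has a fixed default seed.
var seededRand = rand.New(rand.NewSource(time.Now().UnixNano()))

// randomSuffix returns a suffix suitable for snapshot/backup names.
func randomSuffix() string {
	return fmt.Sprintf("%d", seededRand.Uint64())
}

func main() {
	// Two suffixes from the same seeded source; values differ between
	// calls and between process restarts.
	fmt.Println(randomSuffix(), randomSuffix())
}
```

On Go 1.20+ the global source is auto-seeded, so simply upgrading achieves the same effect without code changes.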

Metadata from Volume is not included in Volume Snapshot

Hi guys,

as part of an internal proof of concept, we're evaluating Velero to backup cluster resources and volumes. The volume snapshots are successfully created in OpenStack and visible. Restoring them also works properly. We really appreciate the work you put into this plugin.

We have noticed that the metadata from volumes is not included in the backup.
See this example of volume metadata: (screenshot omitted)

In the restored volume, the metadata is 'none'. Is there any way to extend the plugin's capabilities to include it? It's more of a nice-to-have, as it makes mapping volumes to cluster resources easier.
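One way this could work is to copy the source volume's metadata into the snapshot create options when the plugin takes a snapshot. The sketch below uses a local stand-in struct so it is self-contained; the field names mirror the shape of gophercloud's blockstorage v3 snapshots.CreateOpts, and withVolumeMetadata is a hypothetical helper, not existing plugin code:

```go
package main

import "fmt"

// SnapshotCreateOpts is a local stand-in mirroring the shape of
// gophercloud's blockstorage v3 snapshots.CreateOpts; the real plugin
// would use the gophercloud type directly.
type SnapshotCreateOpts struct {
	VolumeID string
	Name     string
	Metadata map[string]string
}

// withVolumeMetadata copies the source volume's metadata onto the
// snapshot create options, so a volume restored from the snapshot
// can keep that metadata.
func withVolumeMetadata(opts SnapshotCreateOpts, volMeta map[string]string) SnapshotCreateOpts {
	if opts.Metadata == nil {
		opts.Metadata = map[string]string{}
	}
	for k, v := range volMeta {
		opts.Metadata[k] = v
	}
	return opts
}

func main() {
	opts := withVolumeMetadata(
		SnapshotCreateOpts{VolumeID: "2d9835a7-65e1-4728-97a0-9010a0e2a418", Name: "backup"},
		map[string]string{"cinder.csi.openstack.org/cluster": "kubernetes"},
	)
	fmt.Println(opts.Metadata["cinder.csi.openstack.org/cluster"])
}
```

Whether Cinder carries snapshot metadata back onto volumes created from the snapshot depends on the OpenStack deployment, so the restore path would likely also need to reapply the metadata explicitly.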

Best regards,
Jacob
