Comments (12)
Yeah, we may be able to have the Ceph driver do that; we'd just need to validate Ceph's behavior when a name gets reused, essentially deciding how to name the trash entries so conflicts can be avoided.
from lxd.
I've filed #745 for the other part of this issue.
The last thing mentioned here, the issue of deleting snapshots from Incus when the underlying storage-level snapshot is already gone, should be filed separately if it still occurs today.
Hi! I'm a UT student working on a group project and would love to take on this issue if it's still available.
Hey @benginow,
This issue is currently marked as a maybe as it's not clear whether what's requested here can actually be made to fit Incus' design.
So I wouldn't recommend starting with this particular issue :)
I'd be happy to discuss this in more detail if there's anything about this request that could/should be tweaked to make it align better with Incus' design.
So in general, I'm definitely not in favor of adding support for creating metadata-only snapshots. Without something immediately filling them, it effectively leads to a broken database state, which may prevent Incus from performing needed data transitions on update, or even routine background tasks that span all instances and their snapshots.
If you absolutely want to go down that path, your best bet is to basically do:
```
incus snapshot create INSTANCE SNAPSHOTNAME
zfs destroy POOL/containers/INSTANCE@SNAPSHOTNAME
```
That should result in the same thing but without us having to actually support it ;)
Snapshot retention is also a bit tricky because our fundamental design is based around each snapshot having an expiry date. That's a bit different from backup systems, where that's often not true and which instead rely on an overall snapshot retention policy that considers the current snapshot set and trims it accordingly.
It'd be interesting to figure out if given say:
- snapshot.retention.day=4 (keep max 4 snapshots for the past 24h period)
- snapshot.retention.week=3 (keep max 3 snapshots for the past 7 days period, excluding current day)
- snapshot.retention.month=4 (keep max 4 snapshots for the past month, excluding current week)
- snapshot.retention.year=10 (keep max 10 snapshots for the past year, excluding current month)
It would be possible to assign an expiry date to the snapshot already at the time it's created, by looking at the snapshots available at that point to determine how long the new one should be kept overall.
If that's possible to do and leads to a useful result, then I think we could implement that.
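To make that concrete, here's a rough sketch of how the expiry could be derived at snapshot creation time from retention settings like the ones above. This is not Incus code; the bucket boundaries, data layout, and function name are invented for illustration. The idea is to walk the buckets from longest horizon to shortest and give the new snapshot the expiry of the first bucket that still has a free slot:

```python
from datetime import datetime, timedelta

# Hypothetical buckets mirroring the proposed config keys, ordered
# longest horizon first: (horizon, max snapshots kept in that bucket).
RETENTION = [
    (timedelta(days=365), 10),  # snapshot.retention.year=10
    (timedelta(days=30), 4),    # snapshot.retention.month=4
    (timedelta(days=7), 3),     # snapshot.retention.week=3
    (timedelta(days=1), 4),     # snapshot.retention.day=4
]
# Lower boundary of each bucket (the next-shorter horizon).
_LOWER = [timedelta(days=30), timedelta(days=7), timedelta(days=1), timedelta(0)]

def pick_expiry(now, existing_expiries):
    """Return the expiry to stamp on a snapshot created at `now`.

    `existing_expiries` are the expiry datetimes already assigned to
    the instance's other snapshots. A snapshot assigned to a bucket
    with horizon H has an expiry in (now + lower, now + H].
    """
    for (horizon, limit), lower in zip(RETENTION, _LOWER):
        in_bucket = sum(1 for e in existing_expiries
                        if now + lower < e <= now + horizon)
        if in_bucket < limit:
            return now + horizon
    return None  # every bucket is full: nothing retains this snapshot
```

With no pre-existing snapshots, the first snapshot lands in the year bucket and is kept for a year; once ten snapshots occupy that bucket, the next one falls through to the month bucket, and so on.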
If not, then the best option would be for 3rd party backup tools that want to manage snapshots and backups on Incus to trigger expiry-less snapshots themselves and then delete them based on their own policy.
> It'd be interesting to figure out if given say:
> - snapshot.retention.day=4 (keep max 4 snapshots for the past 24h period)
> - snapshot.retention.week=3 (keep max 3 snapshots for the past 7 days period, excluding current day)
> - snapshot.retention.month=4 (keep max 4 snapshots for the past month, excluding current week)
> - snapshot.retention.year=10 (keep max 10 snapshots for the past year, excluding current month)
> It would be possible to already assign an expiry date on the snapshot at the time it's created by basically looking at the available snapshots at the time a new snapshot is created to determine how long it should be kept overall.
I think this would work. The only case this doesn't cover is differing retention policies across servers. For example, if you host a container with a busy database, you might not want to keep any snapshots from more than a couple of days ago on the running Incus server itself due to the size of the snapshots, but the backup server (which may have more space) could keep additional snapshots. Would these snapshot.retention.<period> attributes be set per instance (and be replicated with incus copy --refresh to other Incus servers), or globally per Incus server?
> If not, then the best option would be for 3rd party backup tools that want to manage snapshots and backups on Incus to trigger expiry-less snapshots themselves and then delete them based on their own policy.
I think the problem with this route, at least historically, is that if you delete the ZFS snapshot out from under Incus (e.g. sanoid cleans up a snapshot and then tries to run incus snapshot delete as a post-prune hook to clean up the metadata), Incus fails with an error (understandably, since it can't find the snapshot it expects to find in the zpool). I suppose an alternative way to handle this would be some type of incus snapshot clean command that would check all snapshots and delete from the database any that no longer have a corresponding snapshot in ZFS (e.g. because something else modified or deleted it). That would allow a 3rd-party tool to modify the snapshots and then let Incus "catch up" with those changes.
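As a sketch of what such a hypothetical incus snapshot clean could do (the command doesn't exist today; the helper names here are invented), the core of it is a set difference between the database's snapshot list and what `zfs list -H -o name -t snapshot` reports for the instance's dataset:

```python
def zfs_snapshot_names(zfs_list_output, dataset):
    """Extract snapshot names for `dataset` from the output of
    `zfs list -H -o name -t snapshot` (one `dataset@snap` per line)."""
    names = set()
    for line in zfs_list_output.splitlines():
        ds, _, snap = line.partition("@")
        if ds == dataset and snap:
            names.add(snap)
    return names

def stale_db_snapshots(db_snapshots, zfs_names):
    """Database snapshot entries with no backing ZFS snapshot left,
    i.e. candidates for metadata-only deletion."""
    return sorted(set(db_snapshots) - zfs_names)
```

The cleanup command would then remove only the database records for the stale entries, without touching storage.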
Those proposed config keys would be instance settings; you could make them apply to multiple instances through profiles.
For the --refresh case with a remote server, differing retention settings based on what I proposed above will not really work, since --refresh will overwrite the instance config to match the source, though using a profile would avoid that issue. But more importantly, because those config keys will just be used to calculate the correct snapshot expiry, a different policy on the target won't have any effect.
One thing that I think we could do is incus copy --refresh --new-snapshots-only or something along those lines, so only the newer snapshots get transferred and nothing the target may have deleted gets backfilled. Again, that's of limited use since the expiry of the snapshots is pre-calculated, so you'd still need something to trim snapshots after the transfer, but at least you'd only need to trim the new ones and not constantly battle the old ones being copied again.
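The selection logic for that hypothetical flag could be as simple as the following sketch (the flag and function name are invented for illustration): only source snapshots created after the newest snapshot the target still has would be transferred, so anything the target deliberately trimmed isn't backfilled:

```python
from datetime import datetime  # snapshot creation times as datetimes

def snapshots_to_transfer(source, target):
    """Sketch of what a hypothetical --new-snapshots-only would send.

    `source` and `target` map snapshot name -> creation time. Anything
    older than the target's newest remaining snapshot is assumed to
    have been trimmed on purpose and is skipped.
    """
    if not target:
        return sorted(source)  # empty target: send everything
    newest_kept = max(target.values())
    return sorted(name for name, created in source.items()
                  if created > newest_kept)
```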
I'd consider incus snapshot delete INSTANCE SNAP failing because the snapshot is already gone to be a bit of a bug and something we could fix (assuming we can accurately differentiate an error about a missing snapshot from another, more important error).
> I'd consider incus snapshot delete INSTANCE SNAP failing due to the snapshot already being gone as a bit of a bug and something we could fix (assuming we can accurately differentiate an error about a missing snapshot from another more important error).
This would be great if possible (and useful even irrespective of this feature request).
The proposed instance config keys, along with incus copy --refresh --new-snapshots-only, implement most of this desired capability (except for the ability to have a different retention policy on a backup server), so I think they're sufficient to meet this need.
Updated the issue to focus on those proposed additional config keys for both instances and storage volumes:
- snapshot.retention.day=4 (keep max 4 snapshots for the past 24h period)
- snapshot.retention.week=3 (keep max 3 snapshots for the past 7 days period, excluding current day)
- snapshot.retention.month=4 (keep max 4 snapshots for the past month, excluding current week)
- snapshot.retention.year=10 (keep max 10 snapshots for the past year, excluding current month)
As mentioned above, those will effectively have to conflict with snapshot.expiry and will be used to determine an ultimate snapshot expiry date at the time of snapshot creation, based on pre-existing snapshots on the instance or storage volume.
Thank you!
> The last thing mentioned here, the issue of deleting snapshots from Incus when the underlying storage-level snapshot is already gone, should be filed separately if it still occurs today.
I just tested and am unable to recreate this issue on Incus 6.0 LTS, so no further action is needed for this.
Nice feature! I'm doing this via a cron script every day. One nice-to-have for this retention policy: after account deletion, a customer has a legal window to ask for their data, so we must keep that data for several months. In Ceph, deleting a volume on a mirrored system also deletes it on the destination mirror (and stopping mirroring deletes the image too), and incus deletes the image by default. It might be nice to have a switch to use a move-to-trash policy on delete instead. Ceph's trash has a policy to purge entries after XXX days, so it's perfect! The difference is just "rbd image rm ..." becoming "rbd trash mv ...".