zfs-auto-snapshot's People

Contributors

aimileus, aphor, araknid, attie, bernhardschmidt, chungy, dajhorn, diablodale, fransurbo, hawkowl, highvoltage, jbnance, jsoref, lindhe, mailinglists35, mbaynton, mergwyn, mmalecki, riyad, rkarlsba, schors, stuehmer, tisoft, virtualguy

zfs-auto-snapshot's Issues

'//' only takes a snapshot of the first dataset

Hi.

I have been using zfs-auto-snapshot on a server for quite a while, and it's been working great.
Now I converted to ZFS on another Linux box and downloaded zfs-auto-snapshot from this Git repository again.

This newer version only took a snapshot of the first dataset.

My layout is like this, on Ubuntu 16.04 x64:
HDRAID/partimag 13,2G 3,86T 13,2G /home/partimag
SSD 99,5G 125G 96K /
SSD/ROOT 45,1G 125G 43,9G /
SSD/SWAP 4,25G 130G 64K -
SSD/home 49,5G 125G 49,5G /home

But only "SSD/ROOT" was taken snapshot of, when run from cron ( zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4 // )

So I copied over the earlier version from my other server, and everything got snapshots again.

I can't find any version string in the script, so I'm unsure how I can help to solve this.

Disable/Remove zfs-auto-snapshot

Hello,

Was not sure where to post this question.

I was looking for a way to disable "zfs-auto-snapshot" on one of my systems (Ubuntu).

I have set the following ZFS property, but it does not seem to work:
"zfs set com.sun:auto-snapshot=false pool/dataset"

I also commented out lines in the cron directories:

exec zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 //

exec zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 //

exec zfs-auto-snapshot --quiet --syslog --label=monthly --keep=12 //

exec zfs-auto-snapshot --quiet --syslog --label=weekly --keep=8 //

*/15 * * * * root zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4 //

Am I missing some other parameters, or is there a better way to turn these off?

Thanks!
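
For what it's worth, a minimal sketch of one way to turn it off completely, assuming the stock install locations on Ubuntu (the property alone only excludes datasets; the cron jobs still run and exit quickly):

# Make every dataset inherit the exclusion by setting it at the pool root:
zfs set com.sun:auto-snapshot=false pool
# And/or remove (or comment out) the cron entries so the script is never invoked
# (paths assume the default packaging layout):
rm /etc/cron.d/zfs-auto-snapshot
rm /etc/cron.hourly/zfs-auto-snapshot /etc/cron.daily/zfs-auto-snapshot \
   /etc/cron.weekly/zfs-auto-snapshot /etc/cron.monthly/zfs-auto-snapshot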

Need selective exclusion variable option

Need a way to define selected ZFS pools/datasets/zvols to be excluded from automated snapshots. I'm using Ubuntu 12.04, so perhaps a variable read in from /etc/default/zfs-auto-snapshot? zfs-auto-snapshot seems like an all-or-nothing option right now.

exclude/include for different runs of zfs-auto-snapshot

I have been trying to use --default-exclude and com.sun:auto-snapshot=false/true in different ways to get the configuration I want, but I can't find a way.

Datasets:
tank
tank/datasets
tank/datasets/dataset1
tank/datasets/dataset2
tank/datasets/dataset3

I have the following scenarios that I would like to do but with different intervals:

  1. Run zfs-auto-snapshot for tank only and not any of its sub-datasets
  2. Run zfs-auto-snapshot for tank/datasets and all of its children

I can get either 1 or 2 to run, but I don't see how to configure it so that I can run them both as different commands without flipping the com.sun:auto-snapshot property back and forth.
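
One possible approach (an untested sketch; note that other issues in this tracker report that an explicitly named dataset can still be promoted to a recursive snapshot): run the two scenarios as separate cron entries with different labels, naming the datasets explicitly instead of using //.

# 1. tank only, non-recursive:
zfs-auto-snapshot --quiet --syslog --label=tank-hourly --keep=24 tank
# 2. tank/datasets and all of its children:
zfs-auto-snapshot --quiet --syslog --label=data-frequent --keep=4 -r tank/datasets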

Pull requests ignored

Pull requests on this repo all seem to be ignored, which is quite annoying. Who is responsible for this thing? If no one wants it, I can take over.

In do_snapshots() $opt_pre_snapshot only works for 1st dataset

The routine do_snapshots() uses a local variable RUNSNAP to indicate whether to run a snapshot or not. It defaults to RUNSNAP=1 and can then be modified by the execution of $opt_pre_snapshot if that is defined.

The problem is that if the hook sets RUNSNAP=0, it stays at 0 for all subsequent datasets.
The solution is to set RUNSNAP=1 at the top of the loop over target datasets:

for ii in $TARGETS
do
    RUNSNAP=1
    if [ -n "$opt_do_snapshots" ]
    then
        if [ "$opt_pre_snapshot" != "" ]
        then
            do_run "$opt_pre_snapshot $ii $NAME" || RUNSNAP=0
        fi  

That forces each target to default to RUNSNAP=1 to do the snapshot.

I use a threshold check for $opt_pre_snapshot so it doesn't create lots of empty, unnecessary snapshots. The cron job looks like this:

zfs-auto-snapshot -q -g --prefix=auto-snap --pre-snapshot=/usr/local/bin/zfs-threshold-check.sh --event=Frequent --label=:05: --keep=4 //

And the script contains:

#!/bin/bash

# Checks the data-written threshold of a ZFS dataset for use with zfs-auto-snapshot.
# Returns 0 if the amount written has reached the threshold (take the snapshot).
# Returns 1 if the threshold has not been reached (skip the snapshot).
# If no threshold is set, it defaults to 2 MB (arbitrary).

# Set the threshold in bytes like this:
# zfs set com.sun:snapshot-threshold=6000000 pool/dataset

# Enable auto-snapshots with:
# zfs set com.sun:auto-snapshot=true pool/dataset

NAME=$1
WRITTEN=$(zfs get -Hpo value written "${NAME}")
THRESH=$(zfs get -Hpo value com.sun:snapshot-threshold "${NAME}")

echo "${NAME} Threshold = ${THRESH}, Written = ${WRITTEN}"

# If no threshold is set, default to 2 MB
if [ "${THRESH}" = "-" ]; then
    echo "No threshold, setting to 2M"
    THRESH=2000000
fi

if [ "${WRITTEN}" -gt "${THRESH}" ]; then
    echo "Need to snapshot ${NAME}"
    RC=0
else
    echo "Not reached ${NAME} threshold yet"
    RC=1
fi

exit $RC

Easier way to exclude only a small number of datasets from the total number of datasets

I need to be able to exclude only certain datasets, but without setting "com.sun:auto-snapshot" on all the other datasets. Right now it works the other way around: it excludes all but those I specify.
The current model of "--default-exclude" requires setting the property on every dataset in order to include it. I wish this were simpler, so that with a "--default-exclude"-like parameter there would be no need to set com.sun:auto-snapshot on all datasets, only on the ones I wish to exclude.
Thanks, and sorry for my non-native English; I hope you understand what I meant.
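
A sketch of what this might look like, assuming the script's default behaviour when --default-exclude is NOT given (everything included unless explicitly marked false); the dataset name tank/scratch below is just a placeholder:

# Exclude only the unwanted dataset; everything else is snapshotted by default:
zfs set com.sun:auto-snapshot=false tank/scratch
zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 //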

Make the zfs list invocation in zfs-auto-snapshot more efficient

I see zfs-auto-snapshot (and the zfs list it runs as a subprocess) spending a lot(!) of time walking through the list of ALL snapshots when I'm only auto-snapshotting a few sub-filesystems.

I have 4000+ snapshots on my system, while each zfspool/backup/$HOSTNAME sub-filesystem typically has only about 40.

Apparently, each invocation of "zfs list" from zfs-auto-snapshot needs to walk through 100 times more snapshots than necessary, causing a long delay.

I see there is a "--fast" switch already, but wouldn't it be a much better performance enhancement to generally limit the "zfs list" invocation in the zfs-auto-snapshot script to only the appropriate (sub-)filesystem's snapshots when auto-snapshotting an explicit (sub-)filesystem?

regards

roland
systems administrator

root 32610 21882 0 09:03 ? 00:00:00 /bin/sh /sbin/zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 zfspool/backup/adminstation.local

root 32618 32610 1 09:03 ? 00:00:11 zfs list -H -t snapshot -S creation -o name

Example of limiting to the sub-filesystem (note the "-r"): as I'm only auto-snapshotting the adminstation.local filesystem, why walk through ALL snapshots?

zfs list -H -r -t snapshot -S creation -o name zfspool/backup/adminstation.local

optionally ignore datasets which are readonly=on

In some use cases it would be convenient if zfs-auto-snapshot ignored datasets that are readonly=on, regardless of the value of com.sun:auto-snapshot.

I am considering a setup using both zfs-auto-snapshot and zrep. zrep facilitates two-way ZFS replication with failover. The status of the paired filesystems is kept in ZFS properties: the 'master' has zrep:master=yes and readonly=off, whereas the 'slave' has zrep:master=no and readonly=on.

For the replication to work, snapshots cannot be added to the 'slave'. This could be avoided if
zfs-auto-snapshot, perhaps via an optional flag '--skip-readonly', did not create snapshots for datasets that are readonly=on, regardless of their value of com.sun:auto-snapshot. #76

With such a modification of zfs-auto-snapshot's behavior, it would be possible to run 'zrep sync all' as a cron job together with zfs-auto-snapshot on both servers, and auto-snapshotting plus replication failover would "just work", with failover or takeover issued on each dataset individually.

Yes, in general, snapshotting datasets that are readonly=on costs very little, but for the above-mentioned reason, and potentially other reasons too, usability can in some instances improve if it can be avoided.
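
Until such a flag exists, a hypothetical workaround sketch using the existing --pre-snapshot hook (which, per the do_snapshots() code quoted in another issue here, receives the dataset name as its first argument) might look like this; note the RUNSNAP bug described elsewhere in this tracker, where the first non-zero return can abort all following datasets:

#!/bin/sh
# skip-readonly.sh (hypothetical helper, not part of zfs-auto-snapshot):
# abort the snapshot for datasets that are readonly=on.
dataset=$1
if [ "$(zfs get -Hpo value readonly "$dataset")" = "on" ]; then
    exit 1   # non-zero tells zfs-auto-snapshot to skip this dataset
fi
exit 0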

Add a switch for the `zfs list` invocation.

Pursuant to openzfs/zfs#450, the slow return of some zfs list invocations is arguably a bug. The merge in #15 kludges the zfs list invocation to improve performance.

Provide a switch to allow the user to choose between correctness and performance. If corner cases are discovered in the kludge, then add a --fast switch. Otherwise, add a --slow switch to preserve the original code until performance improves.

Implement the Hanoi algorithm for automated ZFS snapshots

From http://lists.debian.org/debian-bsd/2011/12/msg00045.html

# This algorithm implements a variation of the Towers of Hanoi rotation method
# (see http://en.wikipedia.org/wiki/Backup_rotation_scheme#Towers_of_Hanoi).
#
# Unlike traditional ToH rotation, which uses a finite set of physical tapes,
# we operate on a set of snapshots whose size doesn't necessarily have to be
# bounded. Note that the number of snapshots only grows logarithmically with
# time, which makes it very hard to fill your hard disk (even when running
# unbounded).
#
# The result is that once we've run this for long enough, we'll find that for
# recent dates (e.g. last few days) almost all snapshots are available, and the
# older the date we're searching the more spread available snapshots will be.
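
A rough sketch of the core idea (a hypothetical helper, not part of zfs-auto-snapshot): the retention "level" of run number N is the number of trailing zero bits in N, so level k recurs every 2^(k+1) runs and only the newest snapshot of each level needs to be kept, which is what makes the snapshot count grow logarithmically.

# toh_level: print the Towers-of-Hanoi level for run number $1
toh_level() {
    n=$1
    level=0
    while [ "$n" -gt 0 ] && [ $(( n % 2 )) -eq 0 ]; do
        n=$(( n / 2 ))
        level=$(( level + 1 ))
    done
    echo "$level"
}
# runs 1..8 map to levels: 0 1 0 2 0 1 0 3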

snapshots cleaned up globally

When specific volumes are specified instead of using //, the 'old' snapshots are cleaned up from the entire system.

This is problematic since I have 2 systems both running zfs-auto-snapshot which do a periodic zfs send/receive to each other, and zfs-auto-snapshot destroys parts of the synced content.

Example: the following command is executed in cron:

zfs-auto-snapshot --quiet --syslog --default-exclude --label=frequent --keep=4  -r tank/share

I do a zfs recv to tank/backup/... - which means zfs-auto-snap_hourly-... snapshots exist there too. These are however also cleaned up - screwing up the send/receive.

Currently I work around this by changing the prefix, but I still think this is a serious issue, since it destroys data unexpectedly.

"getopt: illegal option" on FreeBSD

I've used this on a FreeBSD system before, but I'm wondering if something has changed in the last few years.

When running the following:
/usr/local/sbin/zfs-auto-snapshot daily 7

The following is returned:

getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- h
getopt: illegal option -- q
getopt: illegal option -- h
getopt: illegal option -- :
getopt: illegal option -- h
getopt: illegal option -- :
getopt: illegal option -- h
getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- :
getopt: illegal option -- q

Snapshot keep values per filesystem/volume

Hi zfs-auto-snapshot-Team!

Are there plans to implement something like reading com.sun:auto-snapshot:${label}:keep= in this script? This would make it possible to set the number of snapshots to keep for each filesystem/volume individually.

Would be nice!
Greetings,
Lars
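
The proposed per-dataset override might then be set like this (hypothetical, not implemented; tank/important is a placeholder dataset name):

zfs set com.sun:auto-snapshot:hourly:keep=48 tank/important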

Auto-snapshot keep applies to ALL labels; a change to restrict it to the specified label is provided

The script always keeps only the number of snapshots specified on the command line, counting snapshots across all labels rather than only the label being run.

I have labels such that;

Monthly = zfs-auto-snap_01
Daily = zfs-auto-snap_02

etc.

This is so Windows can view ALL snapshots, because it decodes the date/time stamp via the SMB config of

vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes

The new line is below:

SNAPSHOTS_OLD=$(env LC_ALL=C zfs list -H -t snapshot -o name -s name|grep $opt_prefix${opt_label:+$opt_sep$opt_label} |awk '{ print substr( $0, length($0) - 14, length($0) ) " " $0}' |sort -r -k1,1 -k2,2|awk '{ print substr( $0, 17, length($0) )}') \

With this mod, I can set it up to keep 6 monthlies, 4 weeklies, 7 dailies, and then 4 hourlies for 2 days, etc.

Error in --prefix and --sep validation regex

The regexes used to validate the --prefix and --sep options have an error:
the - (dash) character is used in the middle of the bracket expression, which (incorrectly) forms a character range (in this case from _ to .). The dash has to be the last character in the square brackets to avoid this.

I'll attach a pull request with a fix
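
A quick illustration of the point (hypothetical bracket expressions, not the script's exact regex):

# Dash in the middle is parsed as the range _ through . (a reversed range), so grep
# will typically reject the pattern as invalid instead of matching a literal '-':
echo 'my-prefix' | grep -E '^[[:alnum:]_-.]+$'
# With the dash last it is a literal character and the prefix validates:
echo 'my-prefix' | grep -E '^[[:alnum:]_.-]+$'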

ZFS-auto-Snapshot and Samba shadow: snapprefix interaction

First off, my apologies if this is not the right place to post this. I have tried the Samba forums.

Looking for a little assistance on this, as I have been unsuccessful in getting the shadow:snapprefix, shadow:delimiter and shadow:format parameters to work as I expect.

I have no issue getting Windows to show me hourly, daily or weekly snapshots as previous versions, but I would like to be able to expose all of them.

The shadow:snapprefix option seems to be the way to do it, but I am unable to get it to work.
When I set up the prefix, I get no previous versions at all.

My setup in smb.conf is below, and my snapshot directories are named as follows:

shadow:snapprefix = ^(zfs-auto-snap_dai){0,1}(zfs-auto-snap_hour){0,1}$
shadow:delimiter = ly-
shadow:format = ly-%Y-%m-%d-%H%M

Directories :

zfs-auto-snap_daily-2018-01-01-1235
zfs-auto-snap_daily-2018-03-15-1135
zfs-auto-snap_daily-2018-03-29-1135
zfs-auto-snap_hourly-2018-04-03-1817
zfs-auto-snap_hourly-2018-04-04-0817
zfs-auto-snap_weekly-2018-01-17-1240

and a lot more than that, etc.

When I do something like shadow:format = zfs-auto-snap_hourly-%Y-%m-%d-%H%M

in the smb.conf, without the prefix, there are no issues; I see the hourlies appear in my previous versions.
I do understand that this might be outside the scope of this forum, but I thought maybe there is something in the zfs-auto-snapshot script that I could do to make it work, or others here have experience using the Samba shadow:snapprefix.

I am using the latest version of Samba, and Ubuntu 16.04, and your ZFS-auto-Snapshot.

Great script.

Any help would be appreciated.
Regards

Add SuSE integration notes

From: https://groups.google.com/a/zfsonlinux.org/d/msg/zfs-discuss/nW4vODLQszs/v7h5w7rYtbQJ

On Wed, Jul 16, 2014 at 2:40 AM, [email protected] wrote:

So I installed zfs-auto-snapshot on my openSuSE 13.1 system, and it didn't
work -- I got the 'frequent' snapshots, but not the 'hourly' etc.

It took a bit of fiddling to track down the reason. SuSE's version of
'run-crons', which is what invokes the scripts in '/etc/cron.hourly' etc.,
calls something called 'checkproc' on each script before invoking it.
'checkproc' tries to figure out if the process that the script starts is
already running; if so, 'run-crons' skips over the script. (I guess the
point is to let people start long-running daemons using 'cron', which sounds
just as strange to me as it does to you, but anyway...) What was happening
was that the script that makes the 'frequent' snapshots was being started at
about the same moment as 'run-crons', and since they both invoke
'zfs-auto-snapshot', which was running already, 'run-crons' would skip the
hourly script.

Possible solutions include:

  1. offset the times at which the 'frequent' snapshots are taken, so it doesn't happen right on the hour
  2. edit 'run-crons' so it doesn't call 'checkproc' at all
  3. edit 'run-crons', adding 'sleep 10' near the top, to let the previous 'zfs-auto-snapshot' have plenty of time to complete

I went with (3), as it seemed safer than (2).

I'm not sure you should do anything about this in the code of
'zfs-auto-snapshot', but it might be worth a mention in the README, or a
separate README.SuSE.

Oh, there was one other problem. I have 'zfs' installed in
'/usr/local/sbin'. This necessitated setting the PATH explicitly in the
'hourly' etc. scripts so that 'zfs-auto-snapshot' could find it. (Maybe I
should have just added '/usr/local/sbin' to the PATH line in
'/etc/crontab'.) Again, this seems like it might deserve a mention in the
README.

-- Scott

Security Permission Folder Share Access Denied

The problem is with the security permissions on a shared folder (Access Denied).
ZFS, Shadow Copy, Samba and the Active Directory join all installed successfully without errors. I can use the shared folder with domain-admin write or read access. But I get a problem when I open the folder's Properties > Security permissions dialog in Windows and apply a change (Access Denied).

My smb.conf :
[share]
path = /fileserver/share
read only = No
comment = ZFS dataset with Previous Versions enabled
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_hourly-%F-%H%M

I used these permissions:
chmod g=rwx fileserver/share
chgrp "Domain Admins" fileserver/share

How can I solve this issue? :(

PATH in cron.d doesn't contain $PREFIX

The PATH spec in zfs-auto-snapshot.cron.frequent should contain '/usr/local/sbin', as that is the default installation target in the Makefile.

With the current definition, cron (on Ubuntu Server 15.10) just logs:
/bin/sh: 1: zfs-auto-snapshot: not found
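
A one-line sketch of the suggested fix in the cron file, assuming the default PREFIX of /usr/local:

PATH="/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin"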

Store Label in Custom Property instead of Snapshot Name

I ran across an issue integrating the Samba shadow_copy2 module with ZFS using the zfs-auto-snapshot scripts, because the shadow_copy2 module requires snapshots to be named identically, and the default of the zfs-auto-snapshot script is to store the "label" of the snapshot (frequently/hourly/daily/weekly/monthly) as part of the snapshot name.

I tweaked the script some and was able to, instead, store the label in a custom property - com.sun:auto-snapshot-label - and have the script key off that for the purposes of managing snapshot aging rather than off the title of the snapshot. This allows you to make all of your snapshots available via Samba's shadow_copy2 module since they all have the same prefix. Not sure if there are any adverse impacts or incompatibilities to using another custom property, or if there are others who want this functionality. I'm happy to share the code, but thought I'd at least see if anyone cares, or it's just me.

Recursive ZFS snapshots are not properly destroyed

This seems to happen only when a child dataset contains a zvol.
ZoL seems unable to recursively delete zvol snapshots, giving "Invalid Argument", error code 134.

I prepared a quick hack to fix this in the interim, but IMO recursion should be moved from FLAGS to a variable of its own, or handled some other, nicer way that I'm not seeing ;)
(explained further in the diff)

I'm not quite sure how to attach a commit to an issue, but here's the link:
akatrevorjay@e6cd6d6

A bit off-topic: If anyone gets 'Device is busy' errors while destroying snapshots on zvols and you use multipathd:

  • I found that even though I had set multipathd to a wwid whitelist-only (blacklisted wwid "*") configuration, it was still locking my zvol device nodes, as it does with managed root devices (giving access only to its partitions).
  • I had to blacklist devnode "zd[0-9]*" as well, for some reason not known to me yet. Perhaps someone more multipath-savvy could elaborate as to why this is?

recursive snapshot without base tank

Assuming I have the following datasets:

tank
tank/dataset1
tank/dataset2
tank/datasetX

I would like to be able to recursively do a snapshot of tank/dataset[1,2..X] but not tank itself. How can I do that?
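
One possible workaround (an untested sketch, assuming zfs-auto-snapshot accepts several dataset names on one command line): list the first-level children of tank and pass them explicitly with -r, never naming tank itself:

# -d 1 lists tank and its direct children; tail -n +2 drops the first line (tank itself)
zfs list -H -o name -d 1 tank | tail -n +2 | \
    xargs zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4 -r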

snapshot timestamp wrong (UTC)

The time in the snapshot name uses UTC although we are in UTC+1 and our server has been configured correctly. Is this normal or an issue? For us it would be nice if it could use the correct time zone and time of our storage server.

Recursion doesn't work where a sibling / child has com.sun:auto-snapshot=false

When a child or sibling has com.sun:auto-snapshot=false, none of its cohort receives a snapshot when requested with --recursive; only the root that was provided does.

Setup involves a filesystem that does not want auto-snapshot, while its neighbour does:

$ truncate -s 64M pool
$ zpool create test $(pwd)/pool
$ zfs create test/child1
$ zfs create test/child2
$ zfs set com.sun:auto-snapshot=true test
$ zfs set com.sun:auto-snapshot=false test/child2
$ zfs get -r com.sun:auto-snapshot test
NAME         PROPERTY               VALUE                  SOURCE
test         com.sun:auto-snapshot  true                   local
test/child1  com.sun:auto-snapshot  true                   inherited from test
test/child2  com.sun:auto-snapshot  false                  local

child2 could be at any depth (my use-case is much deeper), and prevents all other filesystems under the given root from receiving a snapshot.

Now, try to make a recursive snapshot - expecting test and test/child1 to be covered, with test/child2 excluded:

$ zfs-auto-snapshot -vnd --recursive test
Debug: Including test for regular snapshot.
zfs snapshot -o com.sun:auto-snapshot-desc='-'  'test@zfs-auto-snap-2017-12-29-1834'
@zfs-auto-snap-2017-12-29-1834, 1 created, 0 destroyed, 0 warnings.

As can be seen, we only get a snapshot for test, while test/child1 (silently) does not receive one.
Note: "Including test for regular snapshot" and zfs snapshot without -r

If you remove child2's com.sun:auto-snapshot=false, and try again:

$ zfs inherit com.sun:auto-snapshot test/child2
$ zfs-auto-snapshot -vnd --recursive test
Debug: Including test for recursive snapshot.
zfs snapshot -o com.sun:auto-snapshot-desc='-' -r 'test@zfs-auto-snap-2017-12-29-1834'
@zfs-auto-snap-2017-12-29-1834, 1 created, 0 destroyed, 0 warnings.

Success; however, test/child2 is now included in the snapshot.
Note: "Including test for recursive snapshot" and zfs snapshot -r.

This is due to the test on zfs-auto-snapshot:498.


I understand that this is due to the use of zfs snapshot -r, which would create a snapshot on the unwanted filesystem... and I'm not sure what to suggest aside from (at the very least) a WARNING / NOTE printed at runtime to make this point clear.

Are snapshots cheap enough to create one on test/child2 as a side effect of zfs snapshot -r, to then go back and tidy up the unwanted snapshot?

It would be a shame to lose the atomic nature of zfs snapshot -r by splitting into multiple calls... but we could potentially unroll the recursion and architect the following (though command line length could be an issue for large systems):

zfs snapshot test@${SNAPNAME} test/child1@${SNAPNAME}

--default-exclude requires the // argument

From https://groups.google.com/a/zfsonlinux.org/d/msg/zfs-discuss/Yb3a7VFA9Fo/06CXVpvMZKMJ

From: Will Rouesnel <[email protected]>
To: [email protected]
Subject: zfs-auto-snap and default-exclude

Just something I learned about zfs-auto-snap: if you're using the
--default-exclude parameter with it, then you have to specify "//" on
the command-line for the data-sets to snapshot, or else nothing will be
snapshotted.

Ran for about 6 months like this without realizing.

BAD:
zfs-auto-snapshot --quiet --syslog --default-exclude --label=frequent --keep=4 --recursive storage

GOOD:
zfs-auto-snapshot --quiet --syslog --default-exclude --label=frequent --keep=4 --recursive //

This isn't documented at the top of the script, so handy information for
all.

Default install puts zfs-auto-snapshot.sh in an unfindable directory.

The script is by default placed in /usr/local/sbin, which is not included in the path specified in the included crontab files.

From /etc/cron.d/zfs-auto-snapshot:

PATH="/usr/bin:/bin:/usr/sbin:/sbin"

Either the default install location needs to be changed, or /usr/local/sbin needs to be included in that path...

Free predefined amount of space by removing snapshots

I'd rather have a way to tell zfs-auto-snapshot to use as much space for snapshots as possible while leaving a predefined amount of free space. Let's say I tell zfs-auto-snapshot to keep 10 GB or 10% of space free, so it will delete the oldest snapshots until enough space is freed and then stop. This is a somewhat opportunistic approach to snapshots that will not delete more snapshots than needed just because they are old. It's similar to how NILFS2 garbage collection works. It's also vital to combine this with a TTL, so snapshots are protected for some time period even when the disk gets full.

[Feature request] Option to auto-remove older snapshots when free space is low

Snapshots and zfs-auto-snapshot work very well.

But when the disk is low on free space there is no easy way to free it up: if I delete some large files, they remain referenced by the last snapshot.

A solution for this problem could be adding a feature to the zfs-auto-snapshot tool for removing older snapshots when free space is less than a value selected in a config file.

So, I would set "free space not less than 5 GB" in the config file, and on each cron run zfs-auto-snapshot would check the current free space and remove older snapshots while free space is less than needed.

What do you think about this feature?

z-a-s -r fails to run recursively when a child has com.sun:auto-snapshot=false

justinp@backup:~$ time sudo ./zfs-auto-snapshot.real -d -s -r -k 32 -l x rpool
Debug: Including rpool for regular snapshot.
Doing regular snapshots of rpool
Debug: zfs snapshot -o com.sun:auto-snapshot-desc='-' 'rpool@zfs-auto-snap_x-2014-02-18-1513'
@zfs-auto-snap_x-2014-02-18-1513, 1 created, 0 destroyed, 0 warnings.

justinp@backup:~$ zfs get -H com.sun:auto-snapshot rpool/dump rpool/swap
rpool/dump com.sun:auto-snapshot false local
rpool/swap com.sun:auto-snapshot false local

The relevant logic that leads to this behavior is commented as:

Exclude datasets that are not named on the command line.

AND,

Check whether the candidate name is a prefix of any excluded dataset name.

It seems that if [ -n "$opt_recursive" ], then every CANDIDATE that is a descendant of an argument on the command line should be included? [ "${ii#$jj/}" != "$ii" ]

Redundant logs from zfs-auto-snapshot.cron.frequent compared to the others

The other cron scripts seem to take a whole different form than that particular script, and I think that creates redundant logs from CRON compared to the other ones.

My issue is really that the frequent snapshots seem to create syslog entries like

Jul  9 06:30:01 server CRON[13941]: (root) CMD (which zfs-auto-snapshot > /dev/null && zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4  //)
Jul  9 06:30:06 server zfs-auto-snap: @zfs-auto-snap_frequent-2017-07-09-0430, 1 created, 1 destroyed, 0 warnings.

while the other scripts simply cause

Jul  9 06:25:40 server zfs-auto-snap: @zfs-auto-snap_daily-2017-07-09-0425, 1 created, 0 destroyed, 0 warnings.

I may be wrong (the log entries are overwhelmingly the frequent ones in any case, which makes the log slightly hard to read exhaustively), but I think this is the case.

So... Could we restructure the scripts somehow to make the logging smoother...?

(I'm looking at the output of grep -i "zfs" /var/log/syslog)

Document com.sun:auto-snapshot:$label

According to my understanding of the source code (and my short tests), you can prevent specific snapshots from being generated by setting com.sun:auto-snapshot:$label to false (like com.sun:auto-snapshot does for all labels). This is not documented anywhere.
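
For example, based on the behaviour described above, skipping only the weekly snapshots for one dataset would look like:

zfs set com.sun:auto-snapshot:weekly=false pool/dataset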

Snapshot not recursive when label==hourly

I am using zfs-auto-snapshot on Ubuntu to perform regular snapshots, however I have an interesting problem with the hourly script.

I have configured the script /etc/cron.hourly/zfs-auto-snapshot to back up one of the three zpools (tank) on an hourly basis, essentially using the default options but replacing '//' with 'tank'.

The default script for the hourly job did not include the -r or --recursive attribute, and originally I forgot to add this. Having noticed that the snapshot wasn't recursive, I then added the attribute and let things run. However, the recursive attribute is being ignored and the snapshot isn't backing up recursively.

If I run the following:

sudo zfs-auto-snapshot --recursive --debug --label=test tank

I get the output:

Debug: Including tank for recursive snapshot.
Doing recursive snapshots of tank
Debug: zfs snapshot -o com.sun:auto-snapshot-desc='-' -r 'tank@zfs-auto-snap_test-2014-12-04-2058'
@zfs-auto-snap_test-2014-12-04-2058, 1 created, 0 destroyed, 0 warnings.

And using the label hourly:

$ sudo zfs-auto-snapshot --recursive --debug --label=hourly tank
Debug: Including tank for regular snapshot.
Doing regular snapshots of tank
Debug: zfs snapshot -o com.sun:auto-snapshot-desc='-' 'tank@zfs-auto-snap_hourly-2014-12-04-2100'
@zfs-auto-snap_hourly-2014-12-04-2100, 1 created, 0 destroyed, 0 warnings.

I've tried deleting all previous hourly snapshots, but it appears that this is now somehow fixed for the label hourly. Obviously I can work around this by using a different label, but it would be interesting to know why and whether I can ever get the label hourly back.

Oh, I tried changing the prefix as well, same problem ...

$ sudo zfs-auto-snapshot --recursive --debug --prefix=zfs-auto --label=hourly tank
Debug: Including tank for regular snapshot.
Doing regular snapshots of tank
Debug: zfs snapshot -o com.sun:auto-snapshot-desc='-' 'tank@zfs-auto_hourly-2014-12-04-2105'
@zfs-auto_hourly-2014-12-04-2105, 1 created, 0 destroyed, 0 warnings.

Happy to provide more information.

date in snapshot

I'm assuming it's recording the date as UTC? Is there any way of switching it to reflect the local time?

storage/mysqllogs@zfs-auto-snap_frequent-2017-10-20-1600 0 - 40.7K -
root@db1:~# date
Fri Oct 20 12:14:47 EDT 2017

Possibility to use fanotify to trigger snapshots

NILFS2 is a filesystem that by design creates a snapshot after any data change, so it can easily be rolled back to any point in time without specifically making snapshots at that time using cron... Is it possible to mimic such a feature using inotify to watch file changes and make snapshots accordingly? Maybe not for every inotify event, but at least for some.
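
A hypothetical sketch of what an event-driven trigger could look like today, using inotifywait from inotify-tools around the existing script (this is not a built-in feature; the dataset and mountpoint names are placeholders):

# Snapshot tank/data whenever something under its mountpoint changes, at most once a minute.
while inotifywait -r -qq -e modify,create,delete,move /tank/data; do
    zfs-auto-snapshot --quiet --label=event --keep=20 tank/data
    sleep 60
done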

Provide systemd service and timer files

Hi,

I just started using zfs-auto-snapshot on my Debian 9 server and want to suggest adding native
systemd service files for the script, and also systemd timers to replace cron.

I run my server very infrequently, and cron jobs only run when the server happens to be online at the scheduled time, so it would probably miss most of the weekly snapshots I intended to make. With systemd there is the Persistent option on timers, which lets the service run when the server is up again, so snapshots are not missed.

Here are the service files and timers I made, based on an Arch Linux AUR repo which provided systemd files, with some tweaks (using Type=oneshot instead of simple).

These really should be provided upstream so every distribution gets them.

# /etc/systemd/system/zfs-auto-snapshot-frequent.service
[Unit]
Description=ZFS frequent snapshot service

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs-auto-snapshot --skip-scrub --prefix=znap --label=frequent --keep=4 //

# /etc/systemd/system/zfs-auto-snapshot-frequent.timer
[Unit]
Description=ZFS frequent snapshot timer

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/zfs-auto-snapshot-hourly.service
[Unit]
Description=ZFS hourly snapshot service

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs-auto-snapshot --skip-scrub --prefix=znap --label=hourly --keep=24 //

# /etc/systemd/system/zfs-auto-snapshot-hourly.timer
[Unit]
Description=ZFS hourly snapshot timer

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/zfs-auto-snapshot-daily.service
[Unit]
Description=ZFS daily snapshot service

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs-auto-snapshot --skip-scrub --prefix=znap --label=daily --keep=31 //

# /etc/systemd/system/zfs-auto-snapshot-daily.timer
[Unit]
Description=ZFS daily snapshot timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/zfs-auto-snapshot-weekly.service
[Unit]
Description=ZFS weekly snapshot service

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs-auto-snapshot --skip-scrub --prefix=znap --label=weekly --keep=8 //

# /etc/systemd/system/zfs-auto-snapshot-weekly.timer
[Unit]
Description=ZFS weekly snapshot timer

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/zfs-auto-snapshot-monthly.service
[Unit]
Description=ZFS monthly snapshot service

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs-auto-snapshot --skip-scrub --prefix=znap --label=monthly --keep=12 //

# /etc/systemd/system/zfs-auto-snapshot-monthly.timer
[Unit]
Description=ZFS monthly snapshot timer

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
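
After dropping the unit files into /etc/systemd/system, the timers would be enabled with something along the lines of:

systemctl daemon-reload
systemctl enable --now zfs-auto-snapshot-frequent.timer zfs-auto-snapshot-hourly.timer \
    zfs-auto-snapshot-daily.timer zfs-auto-snapshot-weekly.timer zfs-auto-snapshot-monthly.timer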

--destroy-only doesn't work as expected

Hi!
The --destroy-only flag does not seem to work as expected. There are two issues:

  • When using // as a filesystem name, nothing happens. This is what the log says:

gpothier@heracles:~/zfs-auto-snapshot-master$ sudo zfs-auto-snapshot --destroy-only --label=min15 --keep=4 -r -d //
Debug: Including tambor for recursive snapshot.
Debug: Excluding tambor/caligrafix because tambor includes it recursively.
Debug: Excluding tambor/luki because tambor includes it recursively.
Debug: Excluding tambor/pub because tambor includes it recursively.
Debug: Excluding tambor/ssdcaligrafix because tambor includes it recursively.
Recursively destroying all but the newest 4 snapshots of tambor
@zfs-auto-snap_min15-2017-03-15-0015, 0 created, 0 destroyed, 0 warnings.

  • When using explicit filesystem names instead of //, the program destroys one more snapshot than it should. E.g., if I pass --keep=4, I end up with 3 snapshots.

Problematic binary lookups when running in cron

I am currently converting some backup strategies to ZFS with snapshots in a Debian environment. In my scripts I call the zfs-auto-snapshot script after the backup, and there was a problem: when PATH is not properly exported to the cron environment the backup tasks run in, the script reports:

Error: zpool status 127: env: zpool: No such file or directory

I heard of other people running into that problem when a binary resides in /sbin, as the default PATH may not include this.

I would suggest changing the executions of the zfs and zpool commands to full paths. You can test the current path via which, or, if that returns nothing, fall back to the default path of /sbin/, and write either one to a variable.

IMHO this would make the script more resilient to environment changes, but I am not sure whether this would break something on other systems/distributions.
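
A small sketch of that suggestion (hypothetical, not the script's actual code):

# Resolve zfs/zpool via PATH, falling back to /sbin when cron's PATH is minimal.
ZFS=$(command -v zfs || echo /sbin/zfs)
ZPOOL=$(command -v zpool || echo /sbin/zpool)
"$ZPOOL" status > /dev/null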

Snapshots always recursive?

Hello,
Am I doing something wrong or does "zfs-auto-snapshot" always make recursive snapshots?

I happen to have a use case for non-recursive snapshots and would love to use zfs-auto-snapshot for it.

Initially found on Debian Stretch with the zfs-auto-snapshot package from apt (v1.2.1-1).
Also tested on Debian Stretch with zfs-auto-snapshot installed from Git.

root@BF-GW:~# zfs-auto-snapshot --debug -v --keep=3 --label=test storage1/test
Debug: Including storage1/test for recursive snapshot.
Doing recursive snapshots of storage1/test
Destroying all but the newest 3 snapshots of each dataset.
Debug: zfs snapshot -o com.sun:auto-snapshot-desc='-' -r 'storage1/test@zfs-auto-snap_test-2018-03-25-1822'
@zfs-auto-snap_test-2018-03-25-1822, 1 created, 0 destroyed, 0 warnings.

Thank you in advance

'//': not a ZFS Filesystem

Installed on Ubuntu 16.04 from the Git repo. The following command:

sudo zfs-auto-snapshot --label=frequent --keep=4 '//'

errors with:

'//': not a ZFS filesystem
Error: zfs list 1: 

This happens when running directly as above, and it also causes the default cron.d jobs to fail, as they use the same syntax.

All further snapshots are aborted after the --pre-snapshot= command fails once.

Summary
When taking snapshots of multiple datasets using a command given via --pre-snapshot= to decide whether to proceed or not, all snapshots are aborted after the first time the command returns non-zero. If the command returns zero for a following dataset, that snapshot is still aborted. According to the man page, the snapshot should only be aborted for -this- dataset.

How to reproduce
Abort any dataset that isn't the last in sequence by returning non-zero from a command specified with --pre-snapshot=.

Expected behavior
The snapshot of that dataset is aborted. Snapshots of datasets following the aborted one are only aborted when the command returns non-zero for them too.

Observed behavior
All further snapshots are aborted irrespective of the command's return value.

P.S.: RUNSNAP is declared and set to 1 on line 155. RUNSNAP is set to 0 when do_run fails on line 168 and is never set again for subsequent iterations of the loop. My personal fix was to change line 168 from:
do_run "$opt_pre_snapshot $ii $NAME" || RUNSNAP=0
to:
do_run "$opt_pre_snapshot $ii $NAME" && RUNSNAP=1 || RUNSNAP=0
