
restic-backup-docker's Introduction

Restic Backup Docker Container

A docker container to automate restic backups

This container runs restic backups in regular intervals.

  • Easy setup and maintenance
  • Support for different targets (tested with: Local, NFS, SFTP, AWS)
  • Support for restic mount inside the container to browse the backup files

Container:

Latest master (experimental):

docker pull ghcr.io/lobaro/restic-backup-docker:master

Latest release:

docker pull ghcr.io/lobaro/restic-backup-docker:latest

Contributing

Pull Requests to improve the image are always welcome. Please create an issue about the PR first.

When the behaviour of the image changes (features, bug fixes, changes to the API), please update the "Unreleased" section of CHANGELOG.md.

Hooks

If you need to execute a script before or after each backup or check, add your hook scripts to the container folder /hooks:

-v ~/home/user/hooks:/hooks

Name your pre-backup script pre-backup.sh and your post-backup script post-backup.sh. You can also provide separate scripts for the data verification checks: pre-check.sh and post-check.sh.
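
For example, a minimal post-backup.sh could notify an external monitoring service after every backup. This is only a sketch: the URL is a placeholder, and the use of curl assumes it is available in the image (the Dockerfile installs it).

#!/bin/sh
# hypothetical /hooks/post-backup.sh: ping a monitoring endpoint after every backup
# https://monitoring.example.com/... is a placeholder URL
curl -fsS "https://monitoring.example.com/ping/restic-backup" || echo "monitoring ping failed"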

Please don't hesitate to report any issues you find. Thanks.

Test the container

Clone this repository:

git clone https://github.com/Lobaro/restic-backup-docker.git
cd restic-backup-docker

Build the image (the resulting image is tagged backup-test):

./build.sh

Run the container:

./run.sh

This runs the image backup-test as a container named backup-test. Any existing container with that name is removed automatically first.

The container will back up ~/test-data to a repository with password test at ~/test-repo every minute. The repository is initialized automatically by the container. If you'd like to change the arguments passed to restic init, you can do so using the RESTIC_INIT_ARGS env variable.

To enter your container execute:

docker exec -ti backup-test /bin/sh

Now you can use restic as documented, e.g. try to run restic snapshots to list all your snapshots.

Logfiles

Logfiles are inside the container. If needed, you can create volumes for them.
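
For example, to keep the logs on the host, you can mount a host directory over /var/log (the host path below is just a placeholder):

-v /path/on/host/restic-logs:/var/log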

docker logs

Shows /var/log/cron.log.

Additionally you can see the full log, including restic output, of the last execution in /var/log/backup-last.log. When the backup fails, the log is copied to /var/log/restic-error-last.log. If configured, you can find the full output of the mail notification in /var/log/mail-last.log.

Use the running container

Assuming the container name is restic-backup-var, you can execute restic with:

docker exec -ti restic-backup-var restic

Backup

To execute a backup manually, independent of the CRON, run:

docker exec -ti restic-backup-var /bin/backup

Back up a single file or directory:

docker exec -ti restic-backup-var restic backup /data/path/to/dir --tag my-tag

Data verification check

To verify backup integrity and consistency manually, independent of the CRON, run:

docker exec -ti restic-backup-var /bin/check

Restore

You might want to mount a separate host volume at e.g. /restore to avoid overwriting existing data while restoring.

Get your snapshot ID with:

docker exec -ti restic-backup-var restic snapshots

e.g. abcdef12

 docker exec -ti restic-backup-var restic restore --include /data/path/to/files --target / abcdef12

The target is / since all data backed up should be inside the host mounted /data dir. If you mount /restore you should set --target /restore and the data will end up in /restore/data/path/to/files.
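
Putting this together, a hedged example of restoring into a separately mounted /restore volume (the host path and the snapshot ID abcdef12 are placeholders): start the container with an additional volume such as -v /tmp/restore:/restore, then run:

docker exec -ti restic-backup-var restic restore --include /data/path/to/files --target /restore abcdef12

The restored files then appear under /tmp/restore/data/path/to/files on the host.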

Customize the Container

The container is set up by setting environment variables and volumes.

Environment variables

  • RESTIC_REPOSITORY - the location of the restic repository. Default /mnt/restic. For S3: s3:https://s3.amazonaws.com/BUCKET_NAME
  • RESTIC_PASSWORD - the password for the restic repository. Will also be used for restic init during first start when the repository is not initialized.
  • RESTIC_TAG - Optional. To tag the images created by the container.
  • NFS_TARGET - Optional. If set, the given NFS share is mounted, i.e. mount -o nolock -v ${NFS_TARGET} /mnt/restic. RESTIC_REPOSITORY must remain at its default value!
  • BACKUP_CRON - A cron expression to run the backup. Note: The cron daemon uses UTC time zone. Default: 0 */6 * * * aka every 6 hours.
  • CHECK_CRON - Optional. A cron expression to run the data integrity check (restic check). If left unset, data will not be checked. Note: The cron daemon uses UTC time zone. Example: 0 23 * * 3 to run at 11 PM every Wednesday.
  • RESTIC_FORGET_ARGS - Optional. Only if specified, restic forget is run with the given arguments after each backup. Example value: -e "RESTIC_FORGET_ARGS=--prune --keep-last 10 --keep-hourly 24 --keep-daily 7 --keep-weekly 52 --keep-monthly 120 --keep-yearly 100"
  • RESTIC_INIT_ARGS - Optional. Allows specifying extra arguments to restic init such as a password file with --password-file.
  • RESTIC_JOB_ARGS - Optional. Allows specifying extra arguments to the backup job such as limiting bandwidth with --limit-upload or excluding file masks with --exclude.
  • RESTIC_DATA_SUBSET - Optional. You can pass a value to --read-data-subset when a repository check is run. If left unset, only the structure of the repository is verified. Note: CHECK_CRON must be set for check to be run automatically.
  • AWS_ACCESS_KEY_ID - Optional. When using restic with AWS S3 storage.
  • AWS_SECRET_ACCESS_KEY - Optional. When using restic with AWS S3 storage.
  • TEAMS_WEBHOOK_URL - Optional. If specified, the content of /var/log/backup-last.log and /var/log/check-last.log is sent to your Microsoft Teams channel after each backup and data integrity check.
  • MAILX_ARGS - Optional. If specified, the content of /var/log/backup-last.log and /var/log/check-last.log is sent via mail after each backup and data integrity check using an external SMTP. To have maximum flexibility, you have to specify the mail/smtp parameters on your own. Have a look at the mailx manpage for further information. Example value: -e "MAILX_ARGS=-r '[email protected]' -s 'Result of the last restic run' -S smtp='smtp.example.com:587' -S smtp-use-starttls -S smtp-auth=login -S smtp-auth-user='username' -S smtp-auth-password='password' '[email protected]'".
  • OS_AUTH_URL - Optional. When using restic with OpenStack Swift container.
  • OS_PROJECT_ID - Optional. When using restic with OpenStack Swift container.
  • OS_PROJECT_NAME - Optional. When using restic with OpenStack Swift container.
  • OS_USER_DOMAIN_NAME - Optional. When using restic with OpenStack Swift container.
  • OS_PROJECT_DOMAIN_ID - Optional. When using restic with OpenStack Swift container.
  • OS_USERNAME - Optional. When using restic with OpenStack Swift container.
  • OS_PASSWORD - Optional. When using restic with OpenStack Swift container.
  • OS_REGION_NAME - Optional. When using restic with OpenStack Swift container.
  • OS_INTERFACE - Optional. When using restic with OpenStack Swift container.
  • OS_IDENTITY_API_VERSION - Optional. When using restic with OpenStack Swift container.

Volumes

  • /data - This is the data that gets backed up. Just mount it to wherever you want.
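
As a minimal sketch (host paths and the password are placeholders), a plain docker run combining the /data volume with the most important environment variables could look like this; RESTIC_REPOSITORY is left at its default /mnt/restic and backed by a host directory:

docker run -d --name restic-backup \
  -v /path/to/data:/data:ro \
  -v /path/to/repo:/mnt/restic \
  -e "RESTIC_PASSWORD=changeme" \
  -e "BACKUP_CRON=0 */6 * * *" \
  ghcr.io/lobaro/restic-backup-docker:latest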

Set the hostname

Since restic saves the hostname with each snapshot and the hostname of a docker container is derived from its id, you might want to customize this by setting the hostname of the container to another value.

Set --hostname in the network settings
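
For example, with plain docker run (other volumes and environment variables omitted; my-nas is a placeholder):

docker run -d --hostname my-nas ... ghcr.io/lobaro/restic-backup-docker:latest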

Backup via SFTP

Since restic needs a passwordless login to the SFTP server, make sure you can run sftp user@host from inside the container. If you can do so from your host system, the easiest way is to mount your .ssh folder containing the authorized key into the container by specifying -v ~/.ssh:/root/.ssh as an argument to docker run.

Now you can simply specify the restic repository to be an SFTP repository.

-e "RESTIC_REPOSITORY=sftp:user@host:/tmp/backup"

Backup via OpenStack Swift

Restic can back up data to an OpenStack Swift container. Because Swift supports various authentication methods, credentials are passed through environment variables. In order to help integration with existing OpenStack installations, the naming convention of those variables follows the official Python Swift client.

Now you can simply specify the restic repository to be a Swift repository.

-e "RESTIC_REPOSITORY=swift:backup:/"
-e "RESTIC_PASSWORD=password"
-e "OS_AUTH_URL=https://auth.cloud.ovh.net/v3"
-e "OS_PROJECT_ID=xxxx"
-e "OS_PROJECT_NAME=xxxx"
-e "OS_USER_DOMAIN_NAME=Default"
-e "OS_PROJECT_DOMAIN_ID=default"
-e "OS_USERNAME=username"
-e "OS_PASSWORD=password"
-e "OS_REGION_NAME=SBG"
-e "OS_INTERFACE=public"
-e "OS_IDENTITY_API_VERSION=3"

Backup via rclone

To use rclone as a backend for restic, simply add the rclone config file as a volume with -v /absolute/path/to/rclone.conf:/root/.config/rclone/rclone.conf.

Note that for some backends (among them Google Drive and Microsoft OneDrive), rclone writes data back to the rclone.conf file. In that case the file needs to be writable from inside the container.

If the container fails to write the new rclone.conf file with the error message Failed to save config after 10 tries: Failed to move previous config to backup location, add the entire rclone directory as a volume: -v /absolute/path/to/rclone-dir:/root/.config/rclone.
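
With the config in place, point the repository at an rclone remote using restic's rclone backend syntax (the remote name myremote and the path are placeholders):

-v /absolute/path/to/rclone.conf:/root/.config/rclone/rclone.conf
-e "RESTIC_REPOSITORY=rclone:myremote:backup/restic"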

Example docker-compose

This is an example docker-compose.yml. The container will back up two directories to an SFTP server and check data integrity once a week.

version: '3'

services:
  restic:
    image: lobaro/restic-backup-docker:latest
    hostname: nas                                     # This will be visible in restic snapshot list
    restart: always
    privileged: true
    volumes:
      - /volume1/Backup:/data/Backup:ro               # Backup /volume1/Backup from host
      - /home/user:/data/home:ro                      # Backup /home/user from host
      - ./post-backup.sh:/hooks/post-backup.sh:ro     # Run script post-backup.sh after every backup
      - ./post-check.sh:/hooks/post-check.sh:ro       # Run script post-check.sh after every check
      - ./ssh:/root/.ssh                              # SSH keys and config so we can login to "storageserver" without password
    environment:
      - RESTIC_REPOSITORY=sftp:storageserver:/storage/nas  # Backup to server "storageserver" 
      - RESTIC_PASSWORD=passwordForRestic                  # Password restic uses for encryption
      - BACKUP_CRON=0 22 * * 0                             # Start backup every Sunday 22:00 UTC
      - CHECK_CRON=0 22 * * 3                              # Start check every Wednesday 22:00 UTC
      - RESTIC_DATA_SUBSET=50G                             # Download 50G of data from "storageserver" every Wednesday 22:00 UTC and check the data integrity
      - RESTIC_FORGET_ARGS=--prune --keep-last 12          # Only keep the last 12 snapshots

Versioning

Starting with v1.3.0, versioning follows Semantic Versioning.

restic-backup-docker's Issues

Backup not aborting correctly after error

Hi all,

my backup failed today but the script didn't abort correctly. I think it's better to use exit 1 instead of the kill in the following line:

The kill command is more useful for killing processes than for quitting scripts.

What do you think?

EDIT: By the way... with kill 1 you are trying to kill the process with PID 1 😉

Permission denied on OpenShift

Hi,
I receive a permission denied error when running on OpenShift.

It occurs when the init script tries to perform mkdir in a subpath of /mnt and again when it tries to set up the cron job.

This is a known bug, with an official fix. OpenShift Container Platform runs containers using an arbitrarily assigned user ID instead of a privileged user (root). Since the user inside the container is not root, it doesn't have the permissions to perform some of the operations in the bootstrap script, like creating a directory inside /mnt or setting up the cron job (which, by the way, I really enjoy, thanks!).

The bug affects all the OpenShift users.
I'll open a PR, I hope you will consider merging the fix, to allow your image to be run on OpenShift.

Multiple folders

Can I back up multiple folders with this, or must I add as many containers as folders I want to back up?

Add WOL support in hooks

I would like to be able to wake a local server in the pre-backup hook and shut it down once the backup is done using the post-backup hook. The only thing I need for this to work is some form of wake-on-lan client, such as Awake. I thus propose the following modification to the Dockerfile (change is on line number 9):

- RUN apk add --update --no-cache heirloom-mailx fuse curl
+ RUN apk add --update --no-cache heirloom-mailx fuse curl awake

Would this change be okay? I can open a PR if you would like.
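
With awake available in the image, a pre-backup hook could then wake the server before the backup starts. A hedged sketch (the MAC address and the boot delay are placeholders, and it assumes the awake CLI takes the target MAC as its argument):

#!/bin/sh
# hypothetical /hooks/pre-backup.sh: wake the backup target before restic runs
awake AA:BB:CC:DD:EE:FF
sleep 60   # give the server time to boot; the duration is a guess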

Push Docker HUB

Can you please update the latest build to docker hub?

The latest update there is 7 months old, and rclone is not included in that build.

Support AWS backups

It should simply be possible to back up data to AWS. Restic already has support for this.

Cannot mount backups in container, fuse issues

I haven't been able to get restic mount to work properly inside of the container. It seems like fuse isn't able to load the module as expected:

$ docker exec -it restic sh
/ # restic mount /mnt/restic/
repository 1bcd9682 opened successfully, password is correct
fusermount: exit status 1
also, the following messages were logged by a library:
2021/04/29 14:53:53 mount helper error: fusermount: fuse device not found, try 'modprobe fuse' first
unable to umount (maybe already umounted or still in use?): exit status 1: fusermount: failed to clone namespace: Operation not permitted
/ # modprobe fuse
modprobe: can't change directory to '/lib/modules': No such file or directory

"ssh": executable file not found in $PATH

I'm trying to use the docker image tagged as latest for backing up to SFTP. It seems like there is an issue with the $PATH variable. My output looks like the following.

Starting container ...
Restic repository 'sftp:username@server:/some/path' does not exists. Running restic init.
Setup backup cron job with cron expression BACKUP_CRON: */5 * * * *
Container started.
create backend at sftp:username@server:/some/path failed: cmd.Start: exec: "ssh": executable file not found in $PATH

The images v1.0 and v1.1 are working fine. Am I doing something wrong?

restic snapshots exit code check

This has caused me problems:

restic snapshots &>/dev/null
status=$?
echo "Check Repo status $status"
if [ $status != 0 ]; then
    echo "Restic repository '${RESTIC_REPOSITORY}' does not exists. Running restic init."
    restic init
    init_status=$?
    echo "Repo init status $init_status"
    if [ $init_status != 0 ]; then
        echo "Failed to init the repository: '${RESTIC_REPOSITORY}'"
        exit 1
    fi
fi

Specifically, I've had backups interrupted mid-process (during the post-backup cleanup or the backup itself, I'm not sure), and they left the repository locked, causing restic snapshots to return a non-zero exit code. The entry script then tries to init a new repository, which fails because it already exists. Because I've got restart=always, the container ends up in a restart loop.

I'm not sure how to fix this since restic does not provide meaningful exit codes (restic/restic#956). String parsing is not a great idea either...
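
One possible direction (only a hedged sketch, not what the image currently ships): probe the repository with restic cat config, which fails when the repository has not been initialized and should not be affected by a stale lock, and only fall back to restic init when that probe fails:

# hedged sketch, not the current entry script
if ! restic cat config >/dev/null 2>&1; then
    echo "Restic repository '${RESTIC_REPOSITORY}' does not exist or is not reachable. Running restic init."
    if ! restic init; then
        echo "Failed to init the repository: '${RESTIC_REPOSITORY}'"
        exit 1
    fi
fi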

Create hook to run a script before notification email is sent

I'd like to have a hook to insert a script which runs before a notification email is sent.

Motivation: I'd like to see the total space occupied by all restic snapshots on the target server. I need this because my target server has a quota, and I'd like to make sure I do not exceed the quota with the backup snapshots. As a hack-y workaround I modified /bin/backup and inserted four lines before the notifications are generated when I do rclone backups via sftp:

logLast " "
logLast " "
logLast "Total size occupied by all restic snapshots:"
ssh -i /volume1/docker/privatekey.pem [email protected] 'du -h restic/ | tail -1' >> ${lastLogfile} 2>&1

This is for my web space at Domain Factory's ManagedHosting 64 Medium, which includes ssh access. It has a 50 GB quota. The snapshots are in the sub-folder "restic/". The generated text in the email I get ends with:

Total size occupied by all restic snapshots:
8.4G	restic/

Of course, it would be much nicer and more sustainable to have this built in.
And maybe there is a much more elegant way to find out if I get close to exceeding the available space on the target medium?

default cron backs up every minute of every 6th hour

* */6 * * *

“At every minute past every 6th hour.”

resulting in snapshots like

836de08e  2018-06-11 12:00:03  redis-backup              /data
72e44f90  2018-06-11 12:01:01  redis-backup              /data
0d57c594  2018-06-11 12:02:01  redis-backup              /data
f19ea71a  2018-06-11 12:03:01  redis-backup              /data
6574aa5d  2018-06-11 12:04:01  redis-backup              /data
4ced03e9  2018-06-11 12:05:01  redis-backup              /data
fd723ce7  2018-06-11 12:06:02  redis-backup              /data
eae91422  2018-06-11 12:07:01  redis-backup              /data
ff7a345c  2018-06-11 12:08:01  redis-backup              /data
b9b08acf  2018-06-11 12:09:01  redis-backup              /data
61cf5d5b  2018-06-11 12:10:01  redis-backup              /data
559c09f5  2018-06-11 12:11:02  redis-backup              /data
42d50421  2018-06-11 12:12:01  redis-backup              /data
1372cafd  2018-06-11 12:13:01  redis-backup              /data
4288cb40  2018-06-11 12:14:01  redis-backup              /data
682eee64  2018-06-11 12:15:01  redis-backup              /data
42d741e2  2018-06-11 12:16:01  redis-backup              /data
4418a91b  2018-06-11 12:17:01  redis-backup              /data
a7aa6b92  2018-06-11 12:18:01  redis-backup              /data
c6c2b806  2018-06-11 12:19:02  redis-backup              /data
084f6462  2018-06-11 12:20:01  redis-backup              /data
b2e42d49  2018-06-11 12:21:01  redis-backup              /data
2710c626  2018-06-11 12:22:01  redis-backup              /data
474079e2  2018-06-11 12:23:01  redis-backup              /data
79a5cb38  2018-06-11 12:24:01  redis-backup              /data
09da118c  2018-06-11 12:25:01  redis-backup              /data
189411de  2018-06-11 12:26:01  redis-backup              /data
166d82fb  2018-06-11 12:27:01  redis-backup              /data
0d1b8fd8  2018-06-11 12:28:01  redis-backup              /data
faf5d64a  2018-06-11 12:29:01  redis-backup              /data
a21429d9  2018-06-11 12:30:01  redis-backup              /data
71930c1e  2018-06-11 12:31:01  redis-backup              /data
2d853bc6  2018-06-11 12:32:01  redis-backup              /data
45b797a5  2018-06-11 12:33:01  redis-backup              /data
b9ab968e  2018-06-11 12:34:01  redis-backup              /data
b9d9ae51  2018-06-11 12:35:01  redis-backup              /data
88848d44  2018-06-11 12:36:01  redis-backup              /data
24e53b3f  2018-06-11 12:37:01  redis-backup              /data
b095cfeb  2018-06-11 12:38:01  redis-backup              /data
6e0b562f  2018-06-11 12:39:02  redis-backup              /data
e853273e  2018-06-11 12:40:01  redis-backup              /data
c04d796c  2018-06-11 12:41:01  redis-backup              /data
784f1481  2018-06-11 12:42:01  redis-backup              /data
9974783e  2018-06-11 12:43:01  redis-backup              /data
af519b17  2018-06-11 12:44:01  redis-backup              /data
241a4cbc  2018-06-11 12:45:01  redis-backup              /data
2422324c  2018-06-11 12:46:01  redis-backup              /data
8c452c54  2018-06-11 12:47:01  redis-backup              /data
c46c2b14  2018-06-11 12:48:01  redis-backup              /data
52e9840d  2018-06-11 12:49:01  redis-backup              /data
8fe25822  2018-06-11 12:50:01  redis-backup              /data
89655c7c  2018-06-11 12:51:01  redis-backup              /data
e24aec9a  2018-06-11 12:52:01  redis-backup              /data
3387961f  2018-06-11 12:53:01  redis-backup              /data
b7a5a907  2018-06-11 12:54:01  redis-backup              /data
00bb303e  2018-06-11 12:55:01  redis-backup              /data
341a49ae  2018-06-11 12:56:01  redis-backup              /data
be1a914f  2018-06-11 12:57:01  redis-backup              /data
a5f183e4  2018-06-11 12:58:01  redis-backup              /data

I think you want a cron like

0 */6 * * *

which is “At minute 0 past every 6th hour.”

Cheers

Cameron

How to check backup progress?

I would like to check the progress of my backups but have been unable to get the progress to print using SIGUSR1, as documented in the restic docs.

Additionally on Unix systems if restic receives a SIGUSR1 signal the current progress will be written to the standard output so you can check up on the status at will.

The steps I followed inside the Docker container are:

/home # ps aux
PID   USER     TIME  COMMAND
    1 root      0:01 tail -fn0 /var/log/cron.log
   40 root      0:00 crond
   41 root      0:00 ash
   46 root      0:21 ash
   91 root      0:00 /usr/bin/flock -n /var/run/backup.lock /bin/backup
   92 root      0:00 {backup} /bin/sh /bin/backup
   97 root      0:00 restic backup /data --exclude="@eaDir/" --tag=
  106 root      0:00 rclone serve restic --stdio --b2-hard-delete --drive-use-trash=fa
  123 root      0:00 ps aux

Send SIGUSR1 signal to PID 97

kill -SIGUSR1 97

No progress is printed to /var/log/cron.log.

I also tried viewing stdout for PID 97 with tail -f /proc/97/fd/1, but still nothing, unfortunately.

Support cron that runs restic check regularly

Since it might take a long time, having a second cron expression to run restic check on the repo would be nice. In case of failure, the server should send out some mail to notify an admin.

Pulling the docker with unraid doesn't enable rclone

I was able to follow your guide from: https://github.com/lobaro/restic-backup-docker

Since I'm using unraid, I didn't have to run ./build.sh or ./run.sh.

I added some volumes from my unraid shares to your docker.
After following the "Backup via rclone" part, I see that rclone is not in the $PATH.

Below is my unraid template for your docker:
unraid template config

Below is the problem with rclone:
restic docker rclone not found

Let me know if you need more information on this, ready to help!

I want this docker to function with rclone at full, for my backups :-)

ARM version

Any chance of an armhf (arm32v7) flavour?

Support for Apprise notifications

I would love to be able to use my already running Gotify instance for notifications.
Rather than implement notifications in a bespoke way, it might make sense to integrate with a system like Apprise.

It's got a CLI that you can just include in the container and call with the specified URI when notifications need to be sent.

If you need help implementing this feature, I can probably make a PR.

Restic thinks it has 128 TiB (that's Tebibytes!!!) to process

Restic is giving me a crazy storage estimate (128 TiB) as well as ETA to process /var/lib/docker/aufs (/data/var/lib/docker/aufs from within docker). Help!!!!

ETA 2398:36:46
128.019 TiB

P.S. I even have "aufs" in my excludes.txt file

/ # restic version
restic 0.8.1
compiled with go1.9.2 on linux/amd64

/ # env
NFS_TARGET=
RESTIC_CLEANUP_KEEP_DAILY=7
HOSTNAME=ubuntu
SHLVL=1
HOME=/root
RESTIC_REPOSITORY=/volume1/backups/restic-repo
TERM=xterm
RESTIC_CLEANUP_KEEP_WEEKLY=5
RESTIC_TAG=
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RESTIC_CLEANUP_KEEP_YEARLY=75
BACKUP_CRON=* */6 * * *
RESTIC_JOB_ARGS=
RESTIC_FORGET_ARGS=
RESTIC_PASSWORD_FILE=/config/restic_password.txt
RESTIC_BACKUP_OPTIONS="--exclude-file=/config/excludes.txt --one-file-system"
RESTIC_CLEANUP_KEEP_MONTHLY=12
RESTIC_CLEANUP_OPTIONS="--prune"
PWD=/
RESTIC_PASSWORD=
RESTIC_VERSION=0.8.1

/ # restic check
password is correct
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

/ # du -hs /data/var/lib/docker/aufs
17.8G   /data/var/lib/docker/aufs

/ # find /data/var/lib/docker/aufs | wc -l
764254

/ # restic backup /data/var/lib/docker/aufs
password is correct
using parent snapshot 23234a03
scan [/data/var/lib/docker/aufs]
scanned 72154 directories, 693743 files in 0:10
[2:02] 0.00%  15.546 MiB/s  1.852 GiB / 128.019 TiB  60589 / 765897 items  0 errors  ETA 2398:36:46

Include rclone

Hi

Would you mind adding rclone to this container? I'd be interested so I can back up to an FTP repository.

Best regards,

Add (optional) call for restic self-update

The container could regularly run restic self-update to keep restic up to date.

I would keep the feature optional (default on or off?) to keep risks in production environments as low as possible.

notification mails not being sent

After upgrading, I noticed all backups were running fine but notification mails were no longer sent. It appears the Alpine Linux mail command does not support the -S option to provide an external mailserver.

/ # cat /var/log/mail-last.log
mail: unrecognized option: S
usage: mail [-dEIinv] [-a header] [-b bcc-addr] [-c cc-addr] [-r from-addr] [-s subject] [--] to-addr ...
mail [-dEIiNnv] -f [name]
mail [-dEIiNnv] [-u user]

msmtp is an alternative.

Pin version of base image

We should pin the version of the base image to have reproducible builds.

This should make issues like #27 less likely.

Publish new release

Hi,
This is a very nice Dockerfile. The only caveat is that the last release is now 1.5 years old, and cool new features have been added since. A new release would be great.
Cheers and regards!

Related: #68 #59

failed to create shim task: OCI runtime create failed

When trying to run the container, I get this error:

Error response from daemon: failed to create task for container:
failed to create shim task: OCI runtime create failed:
runc create failed: unable to start container process:
error during container init: error setting cgroup config
for procHooks process: resulting devices cgroup doesn't
match target mode: unknown

(I added line breaks)

docker info: https://bin.gy/isconegran

And the compose file:

version: '3'
services:
  restic:
    image: ghcr.io/lobaro/restic-backup-docker:latest
    hostname: [REDACTED] # This will be visible in restic snapshot list
    restart: unless-stopped
    privileged: true
    volumes:
      - ../activity-roles/db:/data/activity-roles:ro
      - ./restic:/restic
    environment:
      - RESTIC_REPOSITORY=/restic
      - 'RESTIC_PASSWORD=[REDACTED]
      - 'BACKUP_CRON=0 */6 * * *'
      - 'CHECK_CRON=0 4 * * *'
      - 'RESTIC_FORGET_ARGS=--prune --keep-daily 30 --keep-weekly 20 --keep-monthly 24'

exec user process caused "no such file or directory"

I always get a crash with the following message in the log

panic: standard_init_linux.go:178: exec user process caused "no such file or directory"

on :latest build from Docker Hub.

Also, I got this on a fresh local build:

 ~/d/restic-backup-docker  ./run.sh

Removing old container names 'backup-test' if exists
backup-test
Start backup-test container. Backup of ~/test-data/ to repository ~/test-repo/ every minute
panic: standard_init_linux.go:178: exec user process caused "no such file or directory" [recovered]
        panic: standard_init_linux.go:178: exec user process caused "no such file or directory"

goroutine 1 [running, locked to thread]:
panic(0x6f3080, 0xc4201b0150)
        /usr/lib/golang/src/runtime/panic.go:500 +0x1a1
github.com/urfave/cli.HandleAction.func1(0xc420089748)
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/Godeps/_workspace/src/github.com/urfave/cli/app.go:478 +0x247
panic(0x6f3080, 0xc4201b0150)
        /usr/lib/golang/src/runtime/panic.go:458 +0x243
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization.func1(0xc420089198, 0xc420026088, 0xc420089238)
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:259 +0x18f
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization(0xc420056780, 0xaac9c0, 0xc4201b0150)
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:277 +0x353
main.glob..func8(0xc42008c780, 0x0, 0x0)
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/main_unix.go:26 +0x66
reflect.Value.call(0x6dde00, 0x769e48, 0x13, 0x73c329, 0x4, 0xc420089708, 0x1, 0x1, 0x4d1818, 0x732160, ...)
        /usr/lib/golang/src/reflect/value.go:434 +0x5c8
reflect.Value.Call(0x6dde00, 0x769e48, 0x13, 0xc420089708, 0x1, 0x1, 0xac2720, 0xc4200896e8, 0x4da7f6)
        /usr/lib/golang/src/reflect/value.go:302 +0xa4
github.com/urfave/cli.HandleAction(0x6dde00, 0x769e48, 0xc42008c780, 0x0, 0x0)
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/Godeps/_workspace/src/github.com/urfave/cli/app.go:487 +0x1e0
github.com/urfave/cli.Command.Run(0x73c4f5, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x74db36, 0x51, 0x0, ...)
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/Godeps/_workspace/src/github.com/urfave/cli/command.go:191 +0xc3b
github.com/urfave/cli.(*App).Run(0xc4200ec000, 0xc42000c120, 0x2, 0x2, 0x0, 0x0)
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/Godeps/_workspace/src/github.com/urfave/cli/app.go:240 +0x611
main.main()
        /builddir/build/BUILD/docker-ae7d637fcad9be396e75af430405446f9e6ab099/runc-3819cd61f5263275788f7279fe9d2bc13f086aa6/main.go:137 +0xbd6

Disable CRON

Is there any option to disable cron? I want to manage this via scripts to support multiple backend backups. So having a default cron will not work with my use case.

exec format error on Pi docker image

Running this docker container on a raspberry pi4 but getting these errors on boot:

standard_init_linux.go:219: exec user process caused: exec format error
standard_init_linux.go:219: exec user process caused: exec format error
standard_init_linux.go:219: exec user process caused: exec format error
standard_init_linux.go:219: exec user process caused: exec format error

Any ideas what might cause this, or is it not ARM compatible?

Docker hub test fails with docker-compose.test.yml

Successfully built 8d978235620b
Successfully tagged lobaro/restic-backup-docker:1.3.0-TEST
Starting Test in docker-compose.test.yml...
test uses an image, skipping
Creating network "b2zf5macpkktsmfbdwqlux3_default" with the default driver
No such service: sut
starting "sut" service in docker-compose.test.yml (1)

Test the container: /bin/sh: apk: not found

I tried to "Test the container" using ./build.sh and got this :

UPDATE: Problem similar to #39

Sending build context to Docker daemon  144.9kB
Step 1/28 : FROM alpine:3.10.1 as certs
3.10.1: Pulling from library/alpine
050382585609: Already exists 
Digest: sha256:6a92cd1fcdc8d8cdec60f33dda4db2cb1fcdcacf3410a8e05b3741f44a9b5998
Status: Downloaded newer image for alpine:3.10.1
 ---> b7b28af77ffe
Step 2/28 : RUN apk add --no-cache ca-certificates
 ---> Running in 258a0c4f1a78
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/1) Installing ca-certificates (20190108-r0)
Executing busybox-1.30.1-r2.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 6 MiB in 15 packages
Removing intermediate container 258a0c4f1a78
 ---> fdce6f8f97e9
Step 3/28 : ENV RESTIC_VERSION=0.9.5
 ---> Running in f6c3ab01c723
Removing intermediate container f6c3ab01c723
 ---> 4e811e109509
Step 4/28 : ADD https://github.com/restic/restic/releases/download/v${RESTIC_VERSION}/restic_${RESTIC_VERSION}_linux_amd64.bz2 /
Downloading [==================================================>]  6.391MB/6.391MB
 ---> 9d31db891613
Step 5/28 : RUN bzip2 -d restic_${RESTIC_VERSION}_linux_amd64.bz2 && mv restic_${RESTIC_VERSION}_linux_amd64 /bin/restic && chmod +x /bin/restic
 ---> Running in 5158bd4d1769
Removing intermediate container 5158bd4d1769
 ---> fa88fdf75277
Step 6/28 : FROM alpine as rclone
latest: Pulling from library/alpine
89d9c30c1d48: Pull complete 
Digest: sha256:c19173c5ada610a5989151111163d28a67368362762534d8a8121ce95cf2bd5a
Status: Downloaded newer image for alpine:latest
 ---> 965ea09ff2eb
Step 7/28 : ADD https://downloads.rclone.org/rclone-current-linux-amd64.zip /
Downloading [==================================================>]  11.56MB/11.56MB

 ---> 09f8c9716466
Step 8/28 : RUN unzip rclone-current-linux-amd64.zip && mv rclone-*-linux-amd64/rclone /bin/rclone && chmod +x /bin/rclone
 ---> Running in 8834c250f8cc
Archive:  rclone-current-linux-amd64.zip
   creating: rclone-v1.49.5-linux-amd64/
  inflating: rclone-v1.49.5-linux-amd64/README.html
  inflating: rclone-v1.49.5-linux-amd64/rclone.1
  inflating: rclone-v1.49.5-linux-amd64/git-log.txt
  inflating: rclone-v1.49.5-linux-amd64/README.txt
  inflating: rclone-v1.49.5-linux-amd64/rclone
Removing intermediate container 8834c250f8cc
 ---> b8bd33c3d35a
Step 9/28 : FROM busybox:glibc
glibc: Pulling from library/busybox
aff645f24c1e: Pull complete 
Digest: sha256:7c15dc145873c379dc0b1771da742b64754a9b4d3437d243e4d9f44f496cf6e5
Status: Downloaded newer image for busybox:glibc
 ---> 8dacfc772af7
Step 10/28 : RUN apk add --update --no-cache heirloom-mailx
 ---> Running in b10c22379eb4
/bin/sh: apk: not found
The command '/bin/sh -c apk add --update --no-cache heirloom-mailx' returned a non-zero code: 127

run restic prune

Snapshots already get deleted via restic forget, but from time to time you might want to run restic prune automatically as well.
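
For reference, the documented RESTIC_FORGET_ARGS variable already allows passing --prune so a prune runs after each backup's forget, e.g.:

-e "RESTIC_FORGET_ARGS=--keep-last 10 --prune"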
