
eea.docker.rsync's Introduction

Simple rsync container based on Alpine

A simple rsync server/client Docker image to easily rsync data within Docker volumes

Simple Usage

Get files from a remote server into a Docker volume:

$ docker run --rm -v blobstorage:/data/ eeacms/rsync \
         rsync -avzx --numeric-ids user@remote.host:/var/local/blobs/ /data/

Get files from a remote server into a data container:

$ docker run -d --name data -v /data busybox
$ docker run --rm --volumes-from=data eeacms/rsync \
         rsync -avz user@remote.host:/var/local/blobs/ /data/
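
To confirm the files arrived, you can list the volume from a throwaway container (a quick check, reusing the same --volumes-from mount):

$ docker run --rm --volumes-from=data busybox ls -l /data/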

Advanced Usage

Client setup

Start client to pack and sync every night:

$ docker run --name=rsync_client -v client_vol_to_sync:/data \
             -e CRON_TASK_1="0 1 * * * /data/pack-db.sh" \
             -e CRON_TASK_2="0 3 * * * rsync -e 'ssh -p 2222' -aqx --numeric-ids root@foo.bar.com:/data/ /data/" \
         eeacms/rsync client

Copy the client's SSH public key printed in the console.
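
If the console output has scrolled away, the same key can be read back from the container logs:

$ docker logs rsync_client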

SSH key persistence

To use the same generated keys when the Docker container is re-created, you need to persist the key directory (/root/.ssh) in a Docker volume. On first start the keys are created; on all subsequent starts they are reused.

For example, you can use a volume called ssh-key like this:

$ docker run --name=rsync_client -v ssh-key:/root/.ssh -v client_vol_to_sync:/data \
         eeacms/rsync client
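
On later starts you can confirm the persisted keys are present by inspecting the volume, for example:

$ docker run --rm -v ssh-key:/root/.ssh busybox ls -l /root/.ssh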

Server setup

Start server on foo.bar.com:

# docker run --name=rsync_server -d -p 2222:22 -v server_vol_to_sync:/data \
             -e SSH_AUTH_KEY_1="<SSH KEY FROM rsync_client>" \
             -e SSH_AUTH_KEY_n="<SSH KEY FROM rsync_client_n>" \
         eeacms/rsync server
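
Before relying on the nightly cron task, you can check from the client that the server accepts the generated key (a quick sanity check; assumes the client's key was added via SSH_AUTH_KEY_1):

$ docker exec -it rsync_client ssh -p 2222 root@foo.bar.com true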

Verify that it works

Add test file on server:

$ docker exec -it rsync_server sh
  $ touch /data/test

Bring the file on client:

$ docker exec -it rsync_client sh
  $ rsync -e 'ssh -p 2222' -avz root@foo.bar.com:/data/ /data/
  $ ls -l /data/

Rsync data between containers in Rancher

  1. Request TCP access on port 2222 from the source container's host to an accessible server in the new installation's environment.

  2. Start the rsync client on the host from which you want to migrate data (e.g. production).

    Infrastructures -> Hosts -> Add Container

    • Select image: eeacms/rsync
    • Command: sh
    • Volumes -> Volumes from: Select source container
  3. Open the container's logs and copy the SSH key from the message.

  4. Start the rsync server on the host to which you want to migrate data (e.g. devel). The destination container should be temporarily moved to an accessible server (if it's not on one already).

    Infrastructures -> Hosts -> Add Container

    • Select image: eeacms/rsync
    • Port map -> +(add) : 2222:22
    • Command: server
    • Add environment variable: SSH_AUTH_KEY_1="<SSH KEY FROM STEP 3>"
    • Volumes -> Volumes from: Select destination container
  5. Within the rsync client container from step 2, run:

  $ rsync -e 'ssh -p 2222' -avz <SOURCE_DUMP_LOCATION> root@<TARGET_HOST_IP_ON_DEVEL>:<DESTINATION_LOCATION>
  6. The rsync containers can be deleted, and the destination container can be moved back (if needed).

eea.docker.rsync's People

Contributors

avoinea, rvanlaak, snoopotic, valentinab25


eea.docker.rsync's Issues

PasswordAuth not disabled

docker-entrypoint.sh:10 tries to disable password authentication, but fails because sshd_config contains whitespace between the # and PasswordAuthentication.

Allowing for optional spaces in the sed pattern fixes this:

sed -i "s/#\s*PasswordAuthentication yes/PasswordAuthentication no/g" /etc/ssh/sshd_config
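
After applying the fix, the resulting setting can be verified with, e.g.:

$ grep PasswordAuthentication /etc/ssh/sshd_config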

Deploy in portainer.io

Hi!
I am working with portainer.io and trying to deploy a new container based on the eeacms/rsync:latest image, with CMD | sh and ENTRYPOINT | /docker-entrypoint.sh, but the container shows Status: Stopped after a few seconds, with exit code 0.

I think this is possibly because the container can't keep running.

Do you know of any additional command or environment variable I need to set on the container for it to deploy?

Persistent SSH host keys

Hey, there!

First of all, congratulations on the excellent work. This image fits my needs almost perfectly!

I said almost because there is one issue: the SSH host keys are regenerated every time the container is recreated. This is a problem on my stack because I then have to manually add the new public key to the client servers' known_hosts files (which ultimately results in downtime for my users).

To solve this, I created an image from your image with a slight variation: it checks a certain directory for existing SSH host keys and, if present, uses those keys instead of generating new ones. It also copies the keys it generates on the first run over to this directory. This allows me to add a volume in the docker-compose file and map it to this directory, so that the SSH host keys are generated on the first run and then backed up to persistent storage. When the container is recreated, the previous keys are used instead of generating new ones, thus achieving "persistent SSH host keys".

Would you consider adding this to your image? If so, and if you are interested in how I implemented it, here it is.

I basically changed this part of the original docker-entrypoint.sh:

# Generate host SSH keys
if [ ! -e /etc/ssh/ssh_host_rsa_key.pub ]; then
  ssh-keygen -A
fi

To this:

if [ -e /ssh_host_keys/ssh_host_rsa_key.pub ]; then
  # Copy persistent host keys
  echo "Using existing SSH host keys"
  cp /ssh_host_keys/* /etc/ssh/
elif [ ! -e /etc/ssh/ssh_host_rsa_key.pub ]; then
  # Generate host SSH keys
  echo "Generating SSH host keys"
  ssh-keygen -A
  if [ -d /ssh_host_keys ]; then
    # Store generated keys on persistent volume
    echo "Persisting SSH host keys"
    cp -u /etc/ssh/ssh_host_* /ssh_host_keys/
  fi
fi

My docker-compose.yml file looks like this:

...
volumes:
  transfer:
  rsync_ssh_host_keys:
...
services:
  rsync_server:
    image: custom-rsync:latest
    volumes:
      - transfer:/data
      - rsync_ssh_host_keys:/ssh_host_keys
    environment:
      SSH_AUTH_KEY_1: "ssh-rsa ..."
    ports:
      - "2222:22"
    command: server

I am by no means a bash script expert, so feel free to point out any shortcomings :)

Build for ARM64?

Hello,

I'm working on a cluster of 4 RPi 3s running 64-bit openSUSE Leap.
I'd like to use your image on ARM; I've tested deployment, but ARM doesn't seem to be supported.
How could I build your Docker image to run it on ARM64?

Thanks

I have found another solution, Syncthing.
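
For reference, a cross-platform build with Docker buildx would look roughly like this (a sketch, assuming the repository's Dockerfile builds cleanly against an arm64 Alpine base; run from a checkout of this repository):

$ docker buildx build --platform linux/arm64 -t rsync:arm64 .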

Docker Compose File Example

Hello,

I think this project is really great.

I've also read the documentation.

Would it be possible to show the examples as a docker-compose.yml in the documentation?

It would really simplify things.

Thank you and keep it up
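
For reference, a minimal compose sketch of the server setup from this README might look like the following (a hypothetical file mirroring the docker run flags shown above):

volumes:
  server_vol_to_sync:
services:
  rsync_server:
    image: eeacms/rsync
    command: server
    ports:
      - "2222:22"
    environment:
      SSH_AUTH_KEY_1: "<SSH KEY FROM rsync_client>"
    volumes:
      - server_vol_to_sync:/data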

Container is pausing with no apparent reason

Hi,

I'm experimenting with your container (thanks for the nice job). I'm having trouble understanding why the following docker-compose configuration starts a container that gets paused after a few seconds:

...
  rsync:
    image: eeacms/rsync
    command: [ "client" ]
...

This is what I get when filtering the output of the docker ps command:

user@host:~/git/project$ docker ps | grep rsync
a5af0b17883a        eeacms/rsync             "/docker-entrypoint.…"   20 seconds ago      Up 18 seconds                                                     project_rsync_1
...
user@host:~/git/project$ docker ps | grep rsync
a5af0b17883a        eeacms/rsync             "/docker-entrypoint.…"   41 seconds ago      Up 40 seconds (Paused)                                                 project_rsync_1

You can see that the container project_rsync_1 is created just fine, but it goes into a (Paused) state without me doing anything. Checking the container log, I can only see the usual SSH key output:

user@host:~/git/project$ docker logs project_rsync_1 
ssh-keygen: generating new host keys: RSA DSA ECDSA ED25519 
Please add this ssh key to your server /home/user/.ssh/authorized_keys        
================================================================================
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRmI7wc61m4gaDnaJQVq2TfS2Rj/avt0Oxw1ZRmvDYkzVXmkBJIUt0aycLcqJSrcu9oaVg0m/t5oFwTxX7rGCl1JObhCiibSn8FI+X30VZRxPmMt4fKybGgFAcTSY9C1TsIny9+jbLo+lO24dUWP05wlD6UPFxWhUyFxLIXbk8HT03Z+GI/1gLZ7cjS3jhYtrCxP5MO6T//jRksRj0qYyGtGWt84fRptFCMd9/HFVa09m95N9ASteHTLAz+t8TKaxqrt0otpsTPAAbMFHy/lmIIW9QCjvRfE56nVYvwd9vak6sR6GLhLdW62vVr3EF/qQP64cmgvygnOWqCQH4asajb9i2e3Jxn1xJ91O5ZbX8UMLplnFArIh//lPRClR7zRMvGNzKo/f6BwsEUE5Of4vGH6Xkwj9bk0FCQp8IbQAydSRaPNxuBeEczXtBi9GdWZYAsONJW1Oie5x5kXWPW5jZI6ZU+fe45kieVCjDeES8n4wMG8nm6lRSErfw8UY5KzU= root@a5af0b17883a
================================================================================

If I (manually) unpause the container and check the running processes within the container, I can see the following:

$ docker-compose exec rsync ash
... (enter the container) ...
Mem: 14428704K used, 1373116K free, 936804K shrd, 952276K buff, 5104148K cached
CPU:   2% usr   4% sys   0% nic  91% idle   0% io   0% irq   1% sirq
Load average: 1.20 1.36 1.43 2/1716 20
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
   15     0 root     S     1624   0%   1   0% ash
   20    15 root     R     1560   0%   3   0% top
    1     0 root     S     1552   0%   4   0% /usr/sbin/crond -f

The container's crontab -l is actually empty, because I did not set the CRON_TASK_1 variable, but the same problem happens if I do set it.

Is it a problem, or is it supposed to work that way? Am I missing something obvious?

Thanks

Stops instantly with no logs on unraid

Howdy folks, I'm using this container to try and pull data from one server to this one every 30 minutes. This is running on unraid, if that makes any difference.

I've set it up as follows

[screenshot of the container configuration]

CRON_TASK_1 = */30 * * * *  rsync -av --ignore-existing --remove-source-files user@host:~/files/completed/  /data/ 

When I first run it, the logs spit out an SSH public key for me to add to my server, and then it stops. Even after I've added the public key, the only thing the container wants to do is print that key and quit. No errors from the box.

Any ideas?

Cheers
