docker-volume-rclone's Introduction

Use Rclone as a backend for Docker volumes. This makes it easy to mount many cloud providers (https://rclone.org/overview/).

Status: BETA (works and is in use, but still needs improvements)

The plugin uses the Rclone CLI inside the plugin container, so it depends on FUSE on the host.

Docker plugin (easy method)

docker plugin install sapk/plugin-rclone
docker volume create --driver sapk/plugin-rclone --opt config="$(base64 ~/.config/rclone/rclone.conf)" --opt remote=some-remote:bucket/path --name test
docker run -v test:/mnt --rm -ti ubuntu

Build

make

Start daemon

./docker-volume-rclone daemon
Or in a Docker container:
docker run -d --device=/dev/fuse:/dev/fuse --cap-add=SYS_ADMIN --cap-add=MKNOD  -v /run/docker/plugins:/run/docker/plugins -v /var/lib/docker-volumes/rclone:/var/lib/docker-volumes/rclone:shared sapk/docker-volume-rclone

For more advanced parameters: ./docker-volume-rclone --help or ./docker-volume-rclone daemon --help

Run the volume driver daemon to listen for mount requests

Usage:
  docker-volume-rclone daemon [flags]

Global Flags:
  -b, --basedir string   Mounted volume base directory (default "/var/lib/docker-volumes/rclone")
  -v, --verbose          Turns on verbose logging

Create and mount a volume

docker volume create --driver rclone --opt config="$(base64 ~/.config/rclone/rclone.conf)" --opt remote=some-remote:bucket/path --name test
docker run -v test:/mnt --rm -ti ubuntu

Allow access to non-root users

Some images don't run as the root user (and for good reason). To make the volume accessible to the container user, you need to add some mount options: --opt args="--uid 1001 --gid 1001 --allow-root --allow-other".

For example, to run an ubuntu image with a non-root user (uid 33) and mount a volume:

docker volume create --driver sapk/plugin-rclone --opt config="$(base64 ~/.config/rclone/rclone.conf)" --opt args="--uid 33 --gid 33 --allow-root --allow-other" --opt remote=some-remote:bucket/path --name test
docker run -i -t -u 33:33 --rm -v test:/mnt ubuntu /bin/ls -lah /mnt

Docker-compose

First, put your rclone config in an environment variable:

export RCLONE_CONF_BASE64=$(base64 ~/.config/rclone/rclone.conf)

Then set up your docker-compose.yml file like this:

volumes:
  some_vol:
    driver: sapk/plugin-rclone
    driver_opts:
      config: "${RCLONE_CONF_BASE64}"
      args: "--read-only --fast-list"
      remote: "some-remote:bucket/path"

You can also hard-code your config in the docker-compose file in place of the environment variable.
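
For reference, here is a minimal sketch of a complete compose file with a service consuming that volume (the service name and image are placeholders):

services:
  app:
    image: ubuntu
    command: ls /mnt
    volumes:
      - some_vol:/mnt

volumes:
  some_vol:
    driver: sapk/plugin-rclone
    driver_opts:
      config: "${RCLONE_CONF_BASE64}"
      remote: "some-remote:bucket/path"

Note that any literal $ hard-coded in the compose file must be escaped as $$, otherwise compose fails with "Invalid interpolation format" (see the related issue below).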

Healthcheck

The Docker plugin volume protocol doesn't allow the plugin to inform the container or the Docker host that the volume is no longer available. To ensure that the volume is still alive, it is recommended to set up a healthcheck that verifies the mount is responding.

You can add a healthcheck like in this example:

services:
  server:
    image: my_image
    healthcheck:
      test: ls /my/rclone/mount/folder || exit 1
      interval: 1m
      timeout: 15s
      retries: 3
      start_period: 15s

Inspired from:

How to debug a Docker managed plugin:

# Restart the plugin in debug mode
docker plugin disable sapk/plugin-rclone
docker plugin set sapk/plugin-rclone DEBUG=1
docker plugin enable sapk/plugin-rclone

# Get the files under /var/log of the plugin
runc --root /var/run/docker/plugins/runtime-root/plugins.moby list
runc --root /var/run/docker/plugins/runtime-root/plugins.moby exec -t $CONTAINER_ID cat /var/log/rclone.log
runc --root /var/run/docker/plugins/runtime-root/plugins.moby exec -t $CONTAINER_ID cat /var/log/docker-volume-rclone.log
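
If only one plugin container is running, its ID can be captured in one step; a small sketch assuming runc's -q flag (which prints only container IDs):

CONTAINER_ID=$(runc --root /var/run/docker/plugins/runtime-root/plugins.moby list -q | head -n1)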

docker-volume-rclone's People

Contributors

dependabot-preview[bot], dependabot-support, dependabot[bot], robertbaker, sapk

docker-volume-rclone's Issues

Sonarr

Hey,

I have been using your plugin for a few days in combination with Sonarr, Docker Compose and an encrypted GDrive, and it works fine so far.
After downloading, files get moved to the volume and are then uploaded to GDrive.

When a file gets moved to the volume, a partial file is created first, which is then edited and renamed.
I was wondering if it would be more efficient to filter out those partial files, so that only video files etc. get uploaded.

Rclone has arguments to filter uploads, but does this plugin actually support them?

Anyway, thanks for the work you've done so far; even though this plugin is still WIP, it works!

Stefan Fransen.
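
Since the plugin forwards the args option to rclone (see the README section above), rclone's standard filter flags could in principle be passed there. A hypothetical sketch; the exclude pattern and remote name are assumptions:

docker volume create --driver sapk/plugin-rclone \
  --opt config="$(base64 ~/.config/rclone/rclone.conf)" \
  --opt args="--exclude *.partial~" \
  --opt remote=some-remote:media --name media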

Add rclone cache support

I have been using this for the past day and this volume plugin has worked flawlessly. Thanks!

In rclone you can specify a cache to save network IO and the costs associated with accessing the same files many times.

Would it be possible for this volume plugin to accept a host directory to use as a cache?
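
Rclone's VFS cache flags could in principle be passed through the plugin's args option, and the with-mount plugin variant requests a mount of /var/cache/rclone from the host (see the with-mount issue below); a hypothetical sketch combining the two:

docker plugin install sapk/plugin-rclone:with-mount
docker volume create --driver sapk/plugin-rclone:with-mount \
  --opt config="$(base64 ~/.config/rclone/rclone.conf)" \
  --opt args="--vfs-cache-mode writes --cache-dir /var/cache/rclone" \
  --opt remote=some-remote:bucket/path --name cached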

Slow operations and cannot remove volume

  • Plugin version (or commit ref) : rclone:latest (ID from docker inspect 4ed13fa184fb0b79afa2a5d2f301764a1705ee7e8bb108c473726abfe4185085 PluginReference docker.io/rclone/docker-volume-rclone:amd64)
  • Docker version : 20.10.7, build f0df350
  • Plugin type : legacy/managed
  • Operating system: ubuntu 20.04

Description

I create a service which spawns some volumes managed via rclone. I use Ceph as an S3 backend.
When creating the volume the options look something like this:

"VolumeOptions": {
    "DriverConfig": {
        "Name": "rclone",
        "Options": {
            "allow-other": "true",
            "dir-cache-time": "10s",
            "path": "master-simcore/e5751e46-8f09-11ec-a814-02420a041bec/50b5e822-3c89-4809-aebe-03302ed656a6/home_jovyan_work_workspace",
            "poll-interval": "9s",
            "s3-access_key_id": "****************",
            "s3-endpoint": "https://ceph_endpoint_address",
            "s3-location_constraint": "",
            "s3-provider": "Minio",
            "s3-region": "us-east-1",
            "s3-secret_access_key": "****************",
            "s3-server_side_encryption": "",
            "type": "s3",
            "vfs-cache-mode": "minimal"
        }
    }
}

I've just noticed that the s3-provider is set to Minio and not Ceph. Maybe this is what is causing all the issues.

Each of the operations below takes a very long time:

$ docker volume ls
DRIVER          VOLUME NAME
rclone:latest   dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace
....
$ docker volume inspect dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace
[]
Error response from daemon: get dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace: error while checking if volume "dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace" exists in driver "rclone:latest": Post "http://%2Frun%2Fdocker%2Fplugins%2F4ed13fa184fb0b79afa2a5d2f301764a1705ee7e8bb108c473726abfe4185085%2Frclone.sock/VolumeDriver.Get": context deadline exceeded
$ docker volume rm -f  dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace
Error response from daemon: get dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace: error while checking if volume "dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace" exists in driver "rclone:latest": Post "http://%2Frun%2Fdocker%2Fplugins%2F4ed13fa184fb0b79afa2a5d2f301764a1705ee7e8bb108c473726abfe4185085%2Frclone.sock/VolumeDriver.Get": context deadline exceeded

How would I remove a volume in such a situation?

Logs

Tests

Both Nginx and Apache access forbidden

I'm using your driver with Docker in swarm mode.

When using Nginx or Apache I'm unable to access the files mounted with the volume using your driver.
If I exec a bash inside an instantiated container, I can see and read the mounted content.
Apache gives me a more detailed error: "access denied because search permissions are missing on a component of the path".

I checked the permissions on the mounted folder and the accessed file:

drwxr-xr-x  1 root root    0 Jun 17 17:32 uploads
-rw-r--r--  1 root root  129 Jul 24  2015 test.txt

The only strange thing I see is that there are no . and .. directory entries when I run ls -a inside the 'uploads' directory or any of its subdirectories.

RClone Union Support

I apologize that this isn't related to this rclone volume driver, but I couldn't find another way to make this request.

MergerFS is commonly used with rclone; take a look at trapexit/mergerfs for more details on what it does. It basically creates a union of other directories: a union FUSE mount.

Such a driver is needed because it would remove the need to run a separate container for a mergerfs mount.

A common use-case is to have a merged directory that merges a local folder (a cache) and an rclone remote together, then you bind your apps to the merged directory and things go into one of the local branches based on the rules you set. Typically, a script is used to have rclone upload the data.

I may take a stab at forking this code and seeing if I can adapt it for mergerfs, but I have no experience with volume drivers, so I thought I would ask first if you had any plans to do this. I'm not even sure it could work; there are probably technical limitations. You would basically need a way to have the mergerfs volume depend on / use another volume as a mountpoint (a branch, as mergerfs calls it).

RClone has a union backend, but using it might be tricky with this plugin.

Add an example

like how to quickly set up a package mirror from an FTP (or HTTP) remote.
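
A hypothetical sketch of such an example, assuming a standard rclone FTP remote (the remote name, host and path are placeholders, and the password must be generated with rclone obscure):

# Append an FTP remote to the rclone config
cat >> ~/.config/rclone/rclone.conf <<EOF
[mirror]
type = ftp
host = ftp.example.org
user = anonymous
pass = $(rclone obscure guest)
EOF
# Create a read-only volume backed by the FTP remote and browse it
docker volume create --driver sapk/plugin-rclone \
  --opt config="$(base64 ~/.config/rclone/rclone.conf)" \
  --opt args="--read-only" \
  --opt remote=mirror:pub --name pkg-mirror
docker run -v pkg-mirror:/mirror --rm -ti ubuntu ls /mirror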

Error on install- can't enable plugin: rclone.sock no such file or directory

  • Plugin version (or commit ref) :
  • Docker version : Docker version 19.03.12-ce, build 48a66213fe
  • Plugin type : legacy/managed
    Legacy plugins do not work in swarm mode. However, plugins written using the v2 plugin system do work in swarm mode, as long as they are installed on each swarm worker node.
  • Operating system: ubuntu 20.04 on RPI4 4GB

Description

I am running in swarm mode. If the plugin type is actually legacy, as this template defaults to, then that explains why this doesn't work.
Though if I didn't want to run in swarm mode, I could just give it cap_add SYS_ADMIN and be done with it; at least that works in Kubernetes, so I assume I could get it to work in swarm.

Logs

 $ docker plugin install sapk/plugin-rclone
Plugin "sapk/plugin-rclone" is requesting the following privileges:
 - network: [host]
 - device: [/dev/fuse]
 - capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
latest: Pulling from sapk/plugin-rclone
e7d99237d18e: Download complete 
Digest: sha256:f4a9025fbb59f70f9d3fb58ad092371accfd08370a5de60c813c73cc773ae803
Status: Downloaded newer image for sapk/plugin-rclone:latest
Error response from daemon: dial unix /run/docker/plugins/e56de56989076030fdd8c57952983ce859c2bd1d78abf753daa7716e4cf08468/rclone.sock: connect: no such file or directory

The last line repeats if I run the docker plugin enable sapk/plugin-rclone command.

Tests

make: *** No rule to make target 'test-integration'. Stop.
Yeah, uh... I see a Go file that should be it, but honestly I don't know how to use it off the top of my head. This is probably because it's legacy instead of v2, so I'll leave it there.

Mount existing S3 bucket as volume

  • Plugin version (or commit ref) : sapk/plugin-rclone:latest
  • Docker version : 20.10.8, build 3967b7d
  • Plugin type : legacy/managed <= not sure, I am just installing latest
  • Operating system: ubuntu 18.04

Description

Can you please suggest how a pre-existing S3 bucket can be mounted in Docker? Ideally this would be done directly inside a volume statement, but the bucket and its data already exist.

The current description shows how to install the plugin and how to create a volume, but in my use case the data is pre-existing. I cannot figure it out from the docs.
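
Rclone mounts whatever the remote path points at, so pointing remote at the existing bucket should be enough; a hypothetical sketch (the remote name, bucket and credentials are placeholders):

# ~/.config/rclone/rclone.conf
[myS3]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-1

docker volume create --driver sapk/plugin-rclone \
  --opt config="$(base64 ~/.config/rclone/rclone.conf)" \
  --opt remote=myS3:existing-bucket/some/prefix --name s3data
docker run -v s3data:/data --rm -ti ubuntu ls /data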

Using the no-seek argument causes a bug

  • Plugin version (or commit ref) : latest
  • Docker version : 24.0.6, build ed223bc
  • Plugin type : legacy
  • Operating system: Fedora 38

Description

If I create a volume passing the --no-seek option in args, the volume appears empty (files do not get listed).
SEEK should be avoided due to its high processing load.
...

Logs

  • Don't use --no-seek

In this case, the files are listed as expected.

root@thinkcentre-any ~# docker volume rm nextcloud_dav && docker volume create --driver sapk/plugin-rclone --opt config="$(base64 ~/.config/rclone/rclone.conf)" --opt remote=nextcloud: --name nextcloud_dav --opt args="--allow-root --allow-other"
nextcloud_dav
nextcloud_dav

root@thinkcentre-any ~# docker run -itd --name testvolume -v nextcloud_dav:/nc_dav busybox && docker exec -it testvolume /bin/sh && docker stop testvolume && docker rm testvolume
6a36381e1127a4876653e6df99d86fc5f85c0baef14bd72ee8169b9835ef7bb1
/ # ls nc_dav
Audio      Books      Documents  Photos     Video      public
/ #
  • Use --no-seek

In this case, the files are not listed.

root@thinkcentre-any ~# docker volume rm nextcloud_dav && docker volume create --driver sapk/plugin-rclone --opt config="$(base64 ~/.config/rclone/rclone.conf)" --opt remote=nextcloud: --name nextcloud_dav --opt args="--allow-root --allow-other --no-seek"
nextcloud_dav
nextcloud_dav

root@thinkcentre-any ~# docker run -itd --name testvolume -v nextcloud_dav:/nc_dav busybox && docker exec -it testvolume /bin/sh && docker stop testvolume && docker rm testvolume
b7f961e9f8290f9faedbfd07adcb8f39a70018a6d1ccd855c3ca460ac7586532
/ # ls nc_dav/
/ #

RFE: Support additional Docker Volume options

  • Plugin version (or commit ref) : sapk/plugin-rclone:latest
  • Docker version : 19.03.13
  • Plugin type : managed
  • Operating system: Ubuntu Server 20.02

Description

I am trying to use docker-volume-rclone as an alternative to RexRay/S3FS.
It almost works, but there are a few things missing:

  • Ability to specify GID, UID and umask. These are accepted via --opt, but do not seem to actually be used.
  • Ability to specify rclone options such as allow-non-empty and allow-other. As before, the --opt is accepted but does not seem to do anything.
  • Ability to use the plugin with Scope: Global for Docker in Swarm mode. Right now the volume scope is local, which stops a container from moving between Docker hosts.

Logs

Test command:

docker volume create --driver sapk/plugin-rclone \
--opt config="$(base64 ~/.config/rclone/rclone.conf)" \
--opt remote=wasabi-s3:captain--test-s3 \
--opt uid=1001 \
--opt gid=root \
--opt umask=0022 \
--opt allow_other \
--opt use_path_request_style \
--opt allow_non_empty \
--name captain--test-s3

Docker volume inspect result:

[
    {
        "CreatedAt": "2020-11-10T11:00:05Z",
        "Driver": "sapk/plugin-rclone:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker-volumes/rclone/captain--test-s3",
        "Name": "captain--test-s3",
        "Options": {
            "allow_non_empty": "",
            "allow_other": "",
            "config": "[edited out]",
            "gid": "root",
            "remote": "wasabi-s3:captain--test-s3",
            "uid": "1001",
            "umask": "0022",
            "use_path_request_style": ""
        },
        "Scope": "local"
    }
]

This is in comparison to the RexRay/S3FS docker volume inspect result:

[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "rexray/s3fs:latest",
        "Labels": null,
        "Mountpoint": "",
        "Name": "captain--test2-s3",
        "Options": null,
        "Scope": "global",
        "Status": {
            "availabilityZone": "",
            "fields": null,
            "iops": 0,
            "name": "captain--test2-s3",
            "server": "s3fs",
            "service": "s3fs",
            "size": 0,
            "type": ""
        }
    }
]

The UID and GID of the docker-volume-rclone volume contents remain root:root. I tried to chown the volume contents from an intermediate container, with no effect (see the log below). This is probably due to FUSE, so I should set the UID, GID and umask at the point of volume creation; see the sketch after the log.

docker run -v captain--test-s3:/mnt/s3-endpoint -it --rm --name ubuntu-permissions-fixer ubuntu /bin/bash
root@45395027f995:/# ls -lah /mnt/s3-endpoint/
total 0
drwxr-xr-x 1 root root 0 Nov 10 11:22 data
root@45395027f995:/# chown 1001:root -Rv /mnt/s3-endpoint/*
changed ownership of '/mnt/s3-endpoint/data' from root:root to 1001:root
root@45395027f995:/# ls -lah /mnt/s3-endpoint/
total 0
drwxr-xr-x 1 root root 0 Nov 10 11:22 data
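
For reference, the README documents passing these flags through a single args option rather than as separate volume options; a sketch adapting the test command above under that assumption (numeric gid 0 replaces "root"):

docker volume create --driver sapk/plugin-rclone \
  --opt config="$(base64 ~/.config/rclone/rclone.conf)" \
  --opt args="--uid 1001 --gid 0 --umask 0022 --allow-other" \
  --opt remote=wasabi-s3:captain--test-s3 \
  --name captain--test-s3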

Permissions workaround for postgres

Hi there,

thanks for the extremely useful plugin. I am trying to use a volume created by the plugin in docker-compose with postgres:

version: '3'

services:
  db:
    user: "postgres:postgres"
    image: postgres:13-alpine
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    # Un-comment to access the db service directly
#   ports:
#     - 5432:5432
    networks:
      - db
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/postgresql/data

When it tries to write some data I get:

mkdir: can't create directory '/var/lib/postgresql/data/': Permission denied

My guess is that it has to do with the chown that the postgres container performs. Is there a way to work around this?

Thanks!
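
Following the README's non-root section, one possible workaround is to bake the postgres user's uid/gid into the volume at creation time via args; a hypothetical sketch (uid/gid 70 for the alpine postgres image is an assumption, verify with id -u postgres inside the container). Note that chown itself will still have no effect, since FUSE ownership is fixed at mount time:

docker volume create --driver sapk/plugin-rclone \
  --opt config="$(base64 ~/.config/rclone/rclone.conf)" \
  --opt args="--uid 70 --gid 70 --allow-other" \
  --opt remote=some-remote:pgdata --name db_data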

rclone zombie process

  • Plugin version (or commit ref) : latest
  • Docker version : 20.10.1
  • Plugin type : managed
  • Operating system: Ubuntu 20.04.1 LTS

Description

I found that I have 15214 zombie processes and all of them are rclone; they must have been created by the rclone driver plugin.

Logs

~ ps -ef | grep defunct
......
root     4174981 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4175109 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4175111 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4177840 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4177842 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4177964 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4177966 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4179121 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4179123 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4179257 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4179260 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4183322 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4183324 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4183446 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4183448 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4184028 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4184030 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4184169 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4184171 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4186349 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4186351 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4186477 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4186479 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4189842 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4189844 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4189991 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4189994 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4193821 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4193823 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4193946 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4193948 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
root     4194268 3423939  0 Jan04 ?        00:00:00 [rclone] <defunct>
root     4194270 3423939  0 Jan04 ?        00:00:00 [bash] <defunct>
......

Dependabot can't resolve your Go dependency files

Dependabot can't resolve your Go dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

github.com/sapk/docker-volume-rclone/rclone: cannot find module providing package github.com/sapk/docker-volume-rclone/rclone

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

can't enable plugin

  • Plugin version (or commit ref) : latest (?)
  • Docker version : 19.03.13
  • Plugin type : legacy/managed
  • Operating system: Ubuntu 20

Description

can't enable the plugin

running:
docker plugin set sapk/plugin-rclone DEBUG=1
docker plugin enable sapk/plugin-rclone

and getting:
Error response from daemon: dial unix /run/docker/plugins/c3d4d472e7deffd29d83c35ead453f32f71b242315822dad8a82a46671aae982/rclone.sock: connect: no such file or directory

Any $ symbols need to be escaped with a second $ symbol.

This is regarding the Docker Compose section in the readme.
Current state:

    cloud_storage_data:
      driver: sapk/plugin-rclone
      driver_opts:
        config: "$(base64 ~/.config/rclone/rclone.conf)"
        args: "--read-only --fast-list"
        remote: "https://sos-ch-dk-2.exo.io:rmcsearchresiliotestrz"

According to moby/moby#30629, this is how it should look:

    cloud_storage_data:
      driver: sapk/plugin-rclone
      driver_opts:
        config: "$$(base64 ~/.config/rclone/rclone.conf)"
        args: "--read-only --fast-list"
        remote: "https://sos-ch-dk-2.exo.io:rmcsearchresiliotestrz"

After using two $ signs, my docker-compose file worked. Before that I got the error:
Invalid interpolation format for "driver_opts"

How to remove a volume created by rclone?

  • Plugin version (or commit ref) : latest
  • Docker version : 20.10.1
  • Plugin type : managed
  • Operating system: Ubuntu 20.04.1 LTS

Description

I can't remove the volume, even if I use the -f option. Are there any other ways to work around this?

Logs

~ docker volume rm -f bitwarden_bitwarden-data
Error response from daemon: remove bitwarden_bitwarden-data: VolumeDriver.Remove: remove /var/lib/docker-volumes/rclone/bitwarden_bitwarden-data: directory not empty
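
One possible cleanup sketch, reusing the runc debug technique from the README above (the exact in-plugin mount path is an assumption): unmount the stale FUSE mount inside the plugin container, then retry the removal.

runc --root /var/run/docker/plugins/runtime-root/plugins.moby list
runc --root /var/run/docker/plugins/runtime-root/plugins.moby exec -t $CONTAINER_ID fusermount -u /var/lib/docker-volumes/rclone/bitwarden_bitwarden-data
docker volume rm bitwarden_bitwarden-data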

sapk/plugin-rclone:with-mount not working

See my command log below:

# mkdir /var/cache/rclone
# docker plugin install sapk/plugin-rclone:with-mount
Plugin "sapk/plugin-rclone:with-mount" is requesting the following privileges:
 - network: [host]
 - mount: [/var/cache/rclone]
 - device: [/dev/fuse]
 - capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
with-mount: Pulling from sapk/plugin-rclone
1195935e8be5: Download complete 
Digest: sha256:596d4bdbc6bc3d4fa6f9ac8a4bceb160efe2e32e3aa6bef3f650b2c4a1d5fc89
Status: Downloaded newer image for sapk/plugin-rclone:with-mount
Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/cache/rclone\\\" to rootfs \\\"/var/lib/docker/plugins/8f29633936add4a89e93dc331cb5070e26562b0504cf8db5066ede970604f485/rootfs\\\" at \\\"/var/lib/docker/plugins/8f29633936add4a89e93dc331cb5070e26562b0504cf8db5066ede970604f485/rootfs/var/cache/rclone\\\" caused \\\"no such device\\\"\"": unknown
# ls -l /var/cache/rclone
total 0
# ls -l /var/lib/docker/plugins/8f29633936add4a89e93dc331cb5070e26562b0504cf8db5066ede970604f485/rootfs/var/cache/rclone
total 0

My docker logs:

time="2020-05-29T11:00:31.725468540+02:00" level=info msg="shim containerd-shim started" address="/containerd-shim/plugins.moby/8f29633936add4a89e93dc331cb5070e26562b0504cf8db5066ede970604f485/shim.sock" debug=false pid=18185 
time="2020-05-29T11:00:31.799542311+02:00" level=info msg="shim reaped" id=8f29633936add4a89e93dc331cb5070e26562b0504cf8db5066ede970604f485 
time="2020-05-29T11:00:31.822600929+02:00" level=error msg="Handler for POST /v1.40/plugins/sapk/plugin-rclone:with-mount/enable returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/cache/rclone\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/plugins/8f29633936add4a89e93dc331cb5070e26562b0504cf8db5066ede970604f485/rootfs\\\\\\\" at \\\\\\\"/var/lib/docker/plugins/8f29633936add4a89e93dc331cb5070e26562b0504cf8db5066ede970604f485/rootfs/var/cache/rclone\\\\\\\" caused \\\\\\\"no such device\\\\\\\"\\\"\": unknown"
