docker-volume-plugins's People

Contributors

franciscoda, glorpen, mahmoudfarid, marcelo-ochoa, trajano

docker-volume-plugins's Issues

Additional property name is not allowed

Describe the bug
I'm simply trying to use this plugin to mount a glusterfs volume. I configured my compose file exactly as described in the documentation:

volumes:
  jellyfin-data:
    driver: glusterfs
    name: "swarm-volumes/jellyfin-beta_jellyfin-data"

After running docker stack deploy it throws this error:
volumes.jellyfin-data Additional property name is not allowed

To Reproduce
Steps to reproduce the behavior:

  1. Install plugin
  2. Add volume to compose file
  3. docker stack deploy
  4. See error

Expected behavior
The stack should be deployed and volume mounted without error.

Server (please complete the following information):

  • OS: CentOS Stream
  • GlusterFS 8.3
  • Docker version 20.10.2, build 2291f61
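A likely cause here (an assumption based on Compose file-format history, not on anything in this report): the volume-level `name` key was only added in Compose file format 3.4, so a stack file declaring an older `version` makes `docker stack deploy` reject `name` as an unknown property. A minimal sketch of a stack file that accepts it:

```yaml
version: "3.4"   # `name` under volumes requires file format 3.4 or newer

volumes:
  jellyfin-data:
    driver: glusterfs
    name: "swarm-volumes/jellyfin-beta_jellyfin-data"
```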

Can't enable the plugin in ARM (Raspberry)

Describe the bug
When enabling the plugin on a Raspberry Pi 4, the command fails:

root@rpi4:~# docker plugin enable glusterfs
Error response from daemon: dial unix /run/docker/plugins/4963ba381884b7af616e4340f57e44e296671b3b23cd061d4c1431b7f4d0f126/gfs.sock: connect: no such file or directory

To Reproduce
Steps to reproduce the behavior:

  1. docker plugin install --alias glusterfs mochoa/glusterfs-volume-plugin --grant-all-permissions --disable
  2. docker plugin set glusterfs SERVERS=x,y
  3. docker plugin enable glusterfs

Server (please complete the following information):

  • OS: Raspbian
  • Version 10.9
  • Raspberry Pi 4

Additional context
It's reported on: trajano#11

Thanks in advance

fstype should be glusterfs

Hello

Describe the bug
When I try to run a container with the glusterfs volume driver:

# docker run -it --rm --volume-driver glusterfs -v swarm-glust-vol1/dir1:/data alpine
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/594396d66d9a36561f8c81cdc1e0ffe260b1d868311760852e8d36cb7cd3f6ba/rootfs': VolumeDriver.Mount: error mounting swarm-glust-vol1/dir1:
Type
ext4

fstype should be glusterfs.

Server (please complete the following information):

  • OS: Debian 10
  • Docker : Docker version 20.10.7
  • glusterfs-common 5.5-3
  • swarm initialised
# gluster volume info
 
Volume Name: swarm-glust-vol1
Type: Replicate
Volume ID: 05e14e69-3a9d-4245-bd3d-a776482a46aa
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 5 = 5
Transport-type: tcp
Bricks:
Brick1: swarm-man01:/gluster/volume1/data
Brick2: swarm-man02:/gluster/volume1/data
Brick3: swarm-wk01:/gluster/volume1/data
Brick4: swarm-wk02:/gluster/volume1/data
Brick5: swarm-wk03:/gluster/volume1/data
Options Reconfigured:
auth.allow: docker-compose.yml
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Glusterfs works fine

Thanks
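For context on the "fstype should be glusterfs" message: the plugin verifies a mount by running df --output=fstype on the mount point and comparing the result; when the gluster mount silently fails, df reports the underlying filesystem (here ext4) and the check rejects it. A rough sketch of that comparison (a hypothetical helper assuming the df output shape shown in the error, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// checkFSType parses the output of `df --output=fstype <mountpoint>` and
// verifies the mounted filesystem matches the expected type. Hypothetical
// helper written to illustrate the check, not the plugin's code.
func checkFSType(dfOutput, expected string) error {
	lines := strings.Split(strings.TrimSpace(dfOutput), "\n")
	if len(lines) < 2 {
		return fmt.Errorf("unexpected df output: %q", dfOutput)
	}
	// The first line is the "Type" header; the last line is the fstype.
	fstype := strings.TrimSpace(lines[len(lines)-1])
	if !strings.Contains(fstype, expected) {
		return fmt.Errorf("fstype should be %s, got %s", expected, fstype)
	}
	return nil
}

func main() {
	// "Type\next4" is what df reports when the gluster mount silently
	// failed and df sees the underlying ext4 filesystem instead.
	fmt.Println(checkFSType("Type\next4", "glusterfs"))
	fmt.Println(checkFSType("Type\nfuse.glusterfs", "glusterfs"))
}
```

So the error in this report means the mount itself never took effect, which usually points at the mount arguments or at glusterfs silently failing rather than at the check.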

Plugin freeze with nonexistent gluster volume

First of all, thank you for providing and maintaining this plugin for glusterfs. It is working great but unfortunately I discovered an issue which can be pretty annoying in production.

Describe the bug
The plugin freezes if a nonexistent gluster volume is specified. The invalid docker volume can then be neither removed nor inspected. A Docker restart is required; force-disabling and re-enabling the plugin also corrects the problem.

The plugin is installed and configured like this:

docker plugin install --alias ts-gfs-prod mochoa/glusterfs-volume-plugin --grant-all-permissions --disable
docker plugin set ts-gfs-prod SERVERS=removedserver1,removedserver2
docker plugin set ts-gfs-prod SECURE_MANAGEMENT=yes
docker plugin enable ts-gfs-prod

To Reproduce
I have a gluster volume named containers1-digidev-data with a file file1 containing the string 'volume mounted'.

Working test

$ docker volume create -d ts-gfs-prod containers1-digidev-data/foobar
containers1-digidev-data/foobar
$ docker run -it --rm -v containers1-digidev-data/foobar:/mnt alpine
/ # cat /mnt/file1
volume mounted
$ docker volume rm containers1-digidev-data/foobar
containers1-digidev-data/foobar

Failing test

$ docker volume create -d ts-gfs-prod containers1-digidev-data-invalid/foobar
containers1-digidev-data-invalid/foobar
$ docker run -it --rm -v containers1-digidev-data-invalid/foobar:/mnt alpine
docker: Error response from daemon: Post http://%2Frun%2Fdocker%2Fplugins%2F67f9742d15bf7d90d0ab71cd0243d5b202d0cb9a7f524a13ba71915c8ed31bbe%2Fgfs.sock/VolumeDriver.Mount: net/http: request canceled (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.

Plugin does not respond anymore

I can't remove the failing volume.

$ docker volume rm containers1-digidev-data-invalid/foobar
Error response from daemon: get containers1-digidev-data-invalid/foobar: error while checking if volume "containers1-digidev-data-invalid/foobar" exists in driver "ts-gfs-prod:latest": Post http://%2Frun%2Fdocker%2Fplugins%2F67f9742d15bf7d90d0ab71cd0243d5b202d0cb9a7f524a13ba71915c8ed31bbe%2Fgfs.sock/VolumeDriver.Get: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

The plugin stops working for any existing volume. I can't inspect any other volume managed by this plugin.

By disabling and re-enabling the plugin, I am able to remove the failing volume:
$ docker plugin disable -f ts-gfs-prod
$ docker plugin enable ts-gfs-prod

$ docker volume rm containers1-digidev-data-invalid/foobar
containers1-digidev-data-invalid/foobar

Expected behavior
The error should be handled the way it is for a nonexistent folder:

$ docker run -it --rm -v containers1-digidev-data/foobar-invalid:/mnt alpine
docker: Error response from daemon: VolumeDriver.Mount: error mounting containers1-digidev-data/foobar-invalid:
df: /var/lib/docker-volumes/b940aae4af507f6555698f19d91087116742379bde13f0bcd3f13b78fc384884: Transport endpoint is not connected

fstype should be glusterfs.
$ docker volume rm containers1-digidev-data/foobar-invalid
containers1-digidev-data/foobar-invalid

Server (please complete the following information):

  • Docker host OS: CentOS 7.9
  • Docker version: 19.03.13
  • GlusterFS server version: glusterfs 8.4
  • GlusterFS cluster op-code: 60000 (version 6)

S3FS: Volume mounting fails outside us-east-1

Describe the bug
When mounting a volume that references an S3 bucket outside of the AWS us-east-1 region, the container initialization fails.

To Reproduce
Steps to reproduce the behavior:

  1. Create an AWS bucket in a region outside of us-east-1 (e.g. ap-northeast-1). Create a subdirectory for kicks
  2. Configure the plugin with credentials: docker plugin install mochoa/s3fs-volume-plugin:latest AWSACCESSKEYID=<keyid> AWSSECRETACCESSKEY=<secret> --grant-all-permissions
  3. Create volume and service:
volumes:
  myvol:
    driver: mochoa/s3fs-volume-plugin:latest
    name: "mybucket/mysubdir"

services:
  ubuntu:
    image: ubuntu:latest
    volumes:
      - myvol:/mybucket
  4. Initialize the container: docker-compose run --rm ubuntu bash
  5. Container creation fails:
$ docker-compose run --rm ubuntu bash
[+] Running 1/0
 ⠿ Volume "myvol/mysubdir"  Created                                                                                                                                       0.0s
Error response from daemon: error while mounting volume '': VolumeDriver.Mount: error mounting myvol/mysubdir:
df: /var/lib/docker-volumes/<volumehash>: Software caused connection abort

fstype should be s3fs

Expected behavior
The volume should be mounted correctly inside the container

Server (please complete the following information):

  • OS: Arch linux

Additional context
logs from journalctl -f -u docker.service:

May 26 00:36:15 SWIFT dockerd[74543]: time="2022-05-26T00:36:15-03:00" level=error msg="2022/05/26 03:36:15 [/var/lib/docker-volumes/93fc9e5c754fcea78015467b7aa6341a034cdce9465cff3bbfd618c1d60f4412 -o nomultipart,bucket=mybucket:/mysubdir]" plugin=b89c599f4771fb877bbbfd08746965a044a36fe6c7c7d9ee08b81532bf762dc5
May 26 00:36:19 SWIFT dockerd[74543]: time="2022-05-26T00:36:19-03:00" level=info msg="df --output=fstype: df: /var/lib/docker-volumes/93fc9e5c754fcea78015467b7aa6341a034cdce9465cff3bbfd618c1d60f4412: Software caused connection abort" plugin=b89c599f4771fb877bbbfd08746965a044a36fe6c7c7d9ee08b81532bf762dc5
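A workaround worth trying (an assumption on my part, using standard s3fs-fuse options rather than anything specific to this plugin): pass the bucket's region to s3fs explicitly via `driver_opts`, since s3fs defaults to us-east-1 when no endpoint is given:

```yaml
volumes:
  myvol:
    driver: mochoa/s3fs-volume-plugin:latest
    driver_opts:
      # endpoint and url are standard s3fs-fuse options;
      # the region ap-northeast-1 is assumed from the reproduction steps.
      s3fsopts: "endpoint=ap-northeast-1,url=https://s3.ap-northeast-1.amazonaws.com"
    name: "mybucket/mysubdir"
```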

S3fs: ZeroByte size with ls

Hi,
First of all, thank you for this nice project. It is very useful.

I use the S3fs plugin for remote access to the Wasabi cloud storage service. The volume is mounted correctly in the container and my application writes successfully to the remote bucket. After writing, I need to check the file size, but the file system shows a zero-byte size and 1 Jan 1970 as the default date. I have checked with an S3 client: the file metadata is correct on the remote S3 storage.

My docker-compose volume declaration is:

volumes:
  archive:
    driver: mochoa/wasabi.listeners:latest
    driver_opts:
      s3fsopts: "use_path_request_style,allow_other,uid=1001,gid=1001,url=https://s3.eu-central-.wasabisys.com/,bucket=listeners"
    name: "listeners"

The file list command returns:

~ docker exec -ti listener ls -alh /data/archive
total 16K
drwxrwxrwx 1 listeneruser listeneruser 0 Jan 1 1970 .
drwxr-xr-x 4 listeneruser listeneruser 38 Oct 29 16:53 ..
drwxr-x--- 1 listeneruser listeneruser 0 Jan 1 1970 xxx
drwxr-x--- 1 listeneruser listeneruser 0 Jan 1 1970 yyy
drwxr-x--- 1 listeneruser listeneruser 0 Jan 1 1970 zzz
....

The plugin inspection:

~ docker plugin inspect mochoa/wasabi.listeners:latest
...
"Description": "S3FS plugin for Docker v2.0.8",
"DockerVersion": "19.03.15",
"Documentation": "https://github.com/marcelo-ochoa/docker-volume-plugins/",
"Entrypoint": [
"/usr/bin/tini",
"--",
"/s3fs-volume-plugin"
],
...

Could you please tell me how to get/show the right file metadata from the file system?

Thank you in advance for help and assistance.

Glusterfs driver doesn't fail if the mount was not successful

Hi @marcelo-ochoa, thanks for enabling issues. I asked for it with a bit of self-interest :) The issue is as follows: glusterfs does not properly report failures via mount exit codes (see my glusterfs issue at gluster/glusterfs#1693; there might be even more cases where this happens). Since glusterfs wrongly reports zero as the exit code, the docker driver does not know that mounting failed, since it only checks the error code:

cmd := exec.Command(p.mountExecutable, args...)
if out, err := cmd.CombinedOutput(); err != nil {
    fmt.Printf("Command output: %s\n", out)
    return &volume.MountResponse{}, fmt.Errorf("error mounting %s: %s", req.Name, err.Error())
}

Do you think it would make sense to execute findmnt mountPoint (https://linux.die.net/man/8/findmnt) to verify that the path actually got mounted, and error out otherwise? This could be made optional by adding a flag to

func NewDriver(mountExecutable string, mountPointAfterOptions bool, dockerSocketName string, scope string) *Driver {

As far as I understand it, the PostMount hook cannot be utilized for that?
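The findmnt check suggested here could be sketched as follows (assuming findmnt from util-linux is available inside the plugin rootfs; the optional flag on NewDriver is not shown):

```go
package main

import (
	"fmt"
	"os/exec"
)

// isMounted reports whether path is an active mount point, using findmnt
// as proposed in the issue. Sketch of the suggested check only.
func isMounted(path string) bool {
	// findmnt exits non-zero when the path is not a mount point.
	return exec.Command("findmnt", "-n", path).Run() == nil
}

func main() {
	fmt.Println(isMounted("/"))            // the root filesystem is always a mount point
	fmt.Println(isMounted("/no/such/dir")) // not a mount point
}
```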

To Reproduce
Steps to reproduce the behavior:

  1. Enable the glusterfs plugin
  2. Create a volume with a subfolder that does not exist; vol1/sub1
  3. Run a container with that volume attached
  4. Note that the container starts without a mount

Expected behavior
An error should be thrown because the mount could not be established

Server (please complete the following information):

  • OS: Debian 10
  • Docker: 19.03.13


Cannot create docker volume

Describe the bug
Hi,
I have a system with 2 nodes (a third is on the way), both with glusterfs and docker swarm configured.
I have 1 gluster volume online, named "gfs", with some subfolders like "nginx-proxy-manager" (I use this as the example below).

I can mount this folder with the command:
sudo glusterfs --volfile-server=localhost --volfile-id=gfs --subdir-mount=/nginx-proxy-manager ./test
and the following command runs without error:
docker volume create -d glusterfs --opt "glusteropts=--volfile-server=localhost --volfile-id=gfs --subdir-mount=/nginx-proxy-manager" test

BUT:
The services that use this volume can't run:

VolumeDriver.Mount: error mounting sh_gfs_nginx-proxy-manager: df: /var/lib/docker-volumes/0a20292622d3ced85dba291fd7491f2b953cc14c7c6c79771e0e7cfbdf38b287: Transport endpoint is not connected fstype should be glusterfs

The volume in the cluster's sh.yaml file is configured with the following lines:

gfs_nginx-proxy-manager: 
  driver: glusterfs
  driver_opts:
    glusteropts: "--volfile-server=localhost --volfile-id=gfs --subdir-mount=/nginx-proxy-manager"

What am I doing wrong?

Server (please complete the following information):

  • OS: Raspberry OS
  • Version 10.9
  • Docker version: 20.10.7, build f0df350
  • Plugin version: latest, installed with the following command today 06/11:
    docker plugin install --alias glusterfs mochoa/glusterfs-volume-plugin-armv7l --grant-all-permissions --disable
  • Glusterfs server version: glusterfs 5.5

One plugin for each GlusterFS major version

Is your feature request related to a problem? Please describe.
The current latest tag only provides a plugin built against glusterfs 7, which is the latest glusterfs-client version available in Ubuntu.
It would be interesting to have a build for each glusterfs major version, in order to match the glusterfs cluster version.

Despite the op-version mechanism, the documentation still recommends keeping the same version between clients and servers:
"It is recommended to have the same client and server, major versions running eventually" - Generic Upgrade procedure

Describe the solution you'd like
Specifying the glusterfs-client package version in the Dockerfile and creating 4 different builds (at least versions 6, 7, 8, 9).
On Ubuntu, Glusterfs community PPA can be used: https://launchpad.net/~gluster

For instance with version 8:

RUN apt update && apt install -yq software-properties-common && \
    add-apt-repository -y ppa:gluster/glusterfs-8 && \
    apt install -y glusterfs-client curl rsyslog tini && \
    apt clean && rm -rf /var/lib/apt/lists/* && \
    rm -rf "/tmp/*" "/root/.cache" /var/log/lastlog && \
    mkdir -p /var/lib/glusterd /etc/glusterfs && \
    touch /etc/glusterfs/logger.conf

cifs plugin naming issue

Describe the bug
I'm trying to use the cifs plugin with Docker Swarm. The issue seems to be that the plugin names the volumes according to the variable volumes.volume-name.name. From my understanding, volumes.volume-name.name should represent the cifs path, e.g. fileserver/share, and the volume name should be taken from volumes.volume-name, but this is not the case. The problem is that if the plugin names the volume fileserver/share, I can't mount it within a service / container, because docker thinks this is a host path. Volume names are not allowed to contain /.

To Reproduce
Steps to reproduce the behavior:

  1. Create the credentials file /root/credentials/file01
  2. Create a docker compose with:
services:
  servicename:
    ...
    volumes:
      - type: volume
        source: 'file01d'
        target: '/mnt/smbdata'

volumes:
  file01d:
    driver: cifs
    name: "file01/d"

  3. Run docker stack deploy
  4. Check the container and see the error that it can't mount file01d

Expected behavior
Volume should be named file01d instead of file01/d

Server (please complete the following information):

  • OS: CentOS Stream
  • Version: latest
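For reference on why the slash breaks the mount: in a compose/swarm service mount, a volume source must match Docker's restricted-name pattern, which has no place for /, so a source such as file01/d is interpreted as a host path instead. A quick sketch of that pattern (the regexp below mirrors Docker's RestrictedNamePattern and should be treated as an approximation, not the engine's exact validation path):

```go
package main

import (
	"fmt"
	"regexp"
)

// Swarm service mounts require volume sources to match this restricted-name
// pattern; "file01/d" fails it, which is why the engine falls back to
// treating the string as a host path.
var restrictedName = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_.-]*$`)

func isValidVolumeName(name string) bool {
	return restrictedName.MatchString(name)
}

func main() {
	fmt.Println(isValidVolumeName("file01d"))  // valid
	fmt.Println(isValidVolumeName("file01/d")) // invalid: contains '/'
}
```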

Unable to mount volume with s3fs plugin

I am trying to add persistent storage to my RPi Swarm cluster, and I first wanted to start with MinIO as it is more modern than other options. I am unable to even mount the created volume, though.

Steps to reproduce the behavior:

  1. Install the plugin with docker plugin install --alias minio mochoa/s3fs-volume-plugin-armv7l --grant-all-permissions --disable DEFAULT_S3FSOPTS='allow_other,url=http://swarm:9002,use_path_request_style,nomultipart'
  2. Set access/secret keys
  3. Enable the plugin
  4. Create volume with docker volume create -d minio octoprint
  5. Got an error while trying to mount the volume with:
docker run --rm -it --volume-driver minio -v octoprint:/mnt arm32v7/alpine:3.12 /bin/sh
docker: Error response from daemon: VolumeDriver.Mount: error mounting octoprint:
df: /var/lib/docker-volumes/fc6cb3928d55a6a14b7280e768285e888351c91996c1aa464c7d0d2d7a584361: Software caused connection abort

fstype should be s3fs.
  6. No bucket created in MinIO (according to s3 ls).
  7. Created it manually with mc mb minio/octoprint.
  8. Now the error is different:
docker run --rm -it --volume-driver minio -v octoprint:/mnt arm32v7/alpine:3.12 /bin/sh
docker: Error response from daemon: failed to chmod on /var/lib/docker/plugins/5c1d484c3d37486708844217a3d7f230e27d1b2e52285f696fcc3f9d69248e98/propagated-mount/c6d472d807b5e4c0f6c0a34480e20682f97bdc399c9c62e03e6e3e4ca1097476: chmod /var/lib/docker/plugins/5c1d484c3d37486708844217a3d7f230e27d1b2e52285f696fcc3f9d69248e98/propagated-mount/c6d472d807b5e4c0f6c0a34480e20682f97bdc399c9c62e03e6e3e4ca1097476: input/output error.

Server (please complete the following information):

  • OS: Raspbian
  • Version: buster
  • Arch: armv7

Additional context
I have tried many s3fs option combinations, but nothing worked. When I removed use_path_request_style or added uid=0,gid=0 (to prevent chown), it failed the same way as when the bucket didn't exist in MinIO.
I tried searching around a bit, and it seems it is incorrect to try to chown on S3, as it is not a POSIX filesystem. I am not sure why the plugin does it, or whether it is the plugin or docker itself. Not sure how other S3-compatible filesystems deal with this, but MinIO obviously can't handle it...
