
docker-s3fs-client's Introduction

Dear Reader 📖

I am slowly migrating all my Docker-related work to an organisation, whalimeter.

  • 🗞 I seldom update my blog
  • 🛠 I often create or update private OSS projects here, at Bitbucket and GitLab
  • 💰 I sometimes create or update corporate OSS projects, formerly at Yanzi, HealthIntegrator and Mitigram, now at GpsGate
  • 💖 I have a special love for the misunderstood Tcl
  • 🐚 Lately I have focused on POSIX shell to better target the embedded and DevOps spaces
  • 🐳 I have a gist that, from a running container, prints out the docker command line used to create it.
  • 🐋 I have a script to rebase a slim Docker image on top of another one without changing its original configuration.
  • 🏢 I am always open to opportunities in beautiful Stockholm, or remote. But I am enjoying my time at GpsGate.

docker-s3fs-client's Issues

manifest for efrecon/s3fs:latest not found: manifest unknown: manifest unknown

Based on the README file, if you run a command like:

docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=<bucketName>" \
    --env "AWS_S3_ACCESS_KEY_ID=<accessKey>" \
    --env "AWS_S3_SECRET_ACCESS_KEY=<secretKey>" \
    --env UID=$(id -u) \
    --env GID=$(id -g) \
    -v /mnt/tmp:/opt/s3fs/bucket:rshared \
    efrecon/s3fs

It will not work: a bare efrecon/s3fs will try to pull the latest image. However, the documentation at https://hub.docker.com/r/efrecon/s3fs clearly says not to use the latest image, but a specific version instead.
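A hedged illustration of the fix: pin an explicit, published tag instead of relying on latest (the tag below is only an example; pick one that actually exists on Docker Hub):

docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=<bucketName>" \
    --env "AWS_S3_ACCESS_KEY_ID=<accessKey>" \
    --env "AWS_S3_SECRET_ACCESS_KEY=<secretKey>" \
    --env UID=$(id -u) \
    --env GID=$(id -g) \
    -v /mnt/tmp:/opt/s3fs/bucket:rshared \
    efrecon/s3fs:1.91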

Possibility to load credentials from env file

Hey,
I'm currently trying to set up s3fs inside Kubernetes with IAM Roles for Service Accounts support.
Since s3fs-fuse doesn't use the AWS C++ SDK, it's kind of a hack to pull off.

My current setup looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  serviceAccount: svc-acc
  volumes:
  - name: devfuse
    hostPath:
      path: /dev/fuse
  - name: tmp
    emptyDir: {}
  - name: mntdumps3fs
    hostPath:
      path: /mnt/s3fs/test-rocket-rocketchat-s3fs
  initContainers:
  - name: get-aws-credentials
    image: amazon/aws-cli:latest
    volumeMounts:
    - mountPath: /tmp
      name: tmp
    securityContext:
      runAsUser: 0
      runAsNonRoot: false
      runAsGroup: 0
    command: ["/bin/sh", "-c"]
    args:
      - STS=$(aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name s3-fuse --web-identity-token $(cat ${AWS_WEB_IDENTITY_TOKEN_FILE}) --query 'Credentials.[AccessKeyId,SecretAccessKey]' --output text);
        AWS_ACCESS_KEY_ID=$(echo $STS | cut -d' ' -f1);
        AWS_SECRET_ACCESS_KEY=$(echo $STS | cut -d' ' -f2);
        echo -e "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" > /tmp/s3_passwd;
        chmod 600 /tmp/s3_passwd;
  containers:
  - name: s3fs
    image: ghcr.io/efrecon/s3fs:1.91
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
      capabilities:
        add:
          # needed for fuse
          - SYS_ADMIN
          - MKNOD
      runAsUser: 0
      runAsNonRoot: false
      runAsGroup: 0
      seccompProfile:
        type: "Unconfined"
    env:
      - name: AWS_S3_BUCKET
        value: organisation-rocketchat-backups-stage
      - name: AWS_S3_AUTHFILE
        value: /tmp/s3_passwd
      - name: S3FS_ARGS
        # Kubernetes keeps only one value per env name, so the two original
        # S3FS_ARGS entries are merged into a single comma-separated list
        value: "endpoint=eu-central-1,curldbg"
      - name: S3FS_DEBUG
        value: "1"
    volumeMounts:
    - name: devfuse
      mountPath: /dev/fuse
    - name: mntdumps3fs
      mountPath: /opt/s3fs/bucket
    - name: tmp
      mountPath: /tmp

If the entrypoint provided a way to load the needed environment variables from a .env file, I could simply do the following in my init container:

STS=$(aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name s3-fuse --web-identity-token $(cat ${AWS_WEB_IDENTITY_TOKEN_FILE}) --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
AWS_ACCESS_KEY_ID=$(echo $STS | cut -d' ' -f1)
AWS_SECRET_ACCESS_KEY=$(echo $STS | cut -d' ' -f2)
AWS_SESSION_TOKEN=$(echo $STS | cut -d' ' -f3)
echo -e "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID\nAWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY\nAWS_SESSION_TOKEN=$AWS_SESSION_TOKEN" > /tmp/creds.env

Using that approach, I could provide the path to the credentials file and the entrypoint would source it.
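For illustration, a minimal sketch of what such entrypoint support could look like (the AWS_S3_ENVFILE variable name is hypothetical, not an existing option of this image):

# Hypothetical: source extra environment variables from a file, if requested.
if [ -n "${AWS_S3_ENVFILE:-}" ] && [ -f "${AWS_S3_ENVFILE}" ]; then
    set -a                 # auto-export every variable the file assigns
    . "${AWS_S3_ENVFILE}"
    set +a
fi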

But having s3fs-fuse support this natively would still be much better.
@gaul do you have any plans on adding support soon?

Is anyone able to umount automatically in k8s?

I also experienced the "transport endpoint is not connected" problem in my k8s cluster and am unable to umount the mount point from the host. However, I would like to make the recovery process automatic. Here is my attempt:

      initContainers:
        - name: umount
          image: efrecon/s3fs:1.93
          command: ["sh", "-euxc", "! mountpoint -q \"${MY_MOUNT_PATH}\" | umount ${MY_MOUNT_PATH}"]
          securityContext:
            privileged: true

The idea is to umount the fs if it is still mounted before the pod gets started. Unfortunately, it doesn't work. Please let us know if anyone has a better idea. Thx.

PS: I also tried things like the code below, and variations of it. All attempts fail.

mountpoint -q \"${MY_MOUNT_PATH}\" && fusermount -u ${MY_MOUNT_PATH} || exit 0
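For reference, a hedged sketch of an idempotent unmount step. Note that the first attempt above pipes mountpoint into umount where the commands need to be chained, and that a wedged FUSE mount can make mountpoint itself fail, so checking /proc/mounts is more robust; fusermount -u -z and umount -l perform lazy unmounts, assuming those tools are present in the image:

# Unmount only if an s3fs mount is actually recorded for the path; a lazy
# unmount detaches even when the endpoint reports "not connected".
if grep -qs "${MY_MOUNT_PATH} fuse.s3fs" /proc/mounts; then
    fusermount -u -z "${MY_MOUNT_PATH}" || umount -l "${MY_MOUNT_PATH}"
fi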

UID and GID work correctly in latest; appear to be ignored in recent tags

Using the latest tag, the UID and GID environment variables behave correctly: setting them allows only the specified user to access the mounted directory.

However, using any of the other recent tags, e.g. 1.91, 1.90, 1.89, etc., the UID and GID environment variables do nothing at all: the user whose UID and GID are specified will be granted access to the mounted directory only if the allow_other option is used. Otherwise, they get a "permission denied" error.

BTW, which s3fs version is used in the latest tag?
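One way to check which s3fs version a given tag ships, assuming the s3fs binary is on the image's PATH:

docker run --rm --entrypoint s3fs efrecon/s3fs --version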

s3fs: unable to access MOUNTPOINT error

Hi! I'm getting the following error while trying to incorporate this image into my workflow:

Mounting bucket my-bucket-name onto /opt/s3fs/bucket, owner: 0:0
my-bucket-name-s3fs-1         | s3fs: unable to access MOUNTPOINT my-bucket-name: No such file or directory

The core idea is to have some data from Prometheus and Grafana stored in an S3 bucket.

This is a sample of my docker-compose.yml file:

services:
  s3fs:
    image: efrecon/s3fs:1.91
    restart: unless-stopped
    user: root
    cap_add:
      - SYS_ADMIN
    security_opt:
      - 'apparmor:unconfined'
    devices:
      - /dev/fuse
    environment:     
      S3FS_DEBUG: 1
      S3FS_ARGS: "-o nonempty"
      AWS_S3_BUCKET: $AWS_S3_BUCKET
      AWS_S3_ACCESS_KEY_ID: $AWS_S3_ACCESS_KEY_ID
      AWS_S3_SECRET_ACCESS_KEY: $AWS_S3_SECRET_ACCESS_KEY     
    volumes:
      - ./bucket:/opt/s3fs/bucket:rshared

  prometheus:
    image: prom/prometheus:latest
    user: root
    volumes:
      - ./bucket/prometheus:/etc/prometheus
      - ./bucket/prometheus-data:/prometheus
    command: --config.file=/etc/prometheus/prometheus.yml --web.config.file=/etc/prometheus/web.yml
    ports:
      - 9999:9090
    depends_on: ["s3fs"]

It seems it can't find the directory to mount, even though, from what I saw in the entrypoint file, the image tries to create the directory: https://github.com/efrecon/docker-s3fs-client/blob/master/docker-entrypoint.sh#L82-L85. I'm not a Docker expert; is there something else I should be doing to make this work properly?

Error when starting containers: 'exec /sbin/tini: exec format error'

Hi,
Congrats on the initiative. However, I wasn't able to make it work. When I start the s3fs container, the logs show 'exec /sbin/tini: exec format error'. I'm trying to run the following services:

version: '3.8'
services:
  s3fs:
    privileged: true
    image: efrecon/s3fs:1.91
    restart: always
    environment:
      - AWS_S3_BUCKET=media
      - AWS_S3_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxx
      - AWS_S3_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxx
      - AWS_S3_URL=xxxxxxxxxxxxxxxxxxxxxxxxxxx
    volumes:
    # This also mounts the S3 bucket to `/mnt/s3data` on the host machine
      - /media/:/opt/s3fs/bucket:rshared

  test:
    image: bash:latest
    restart: always
    depends_on:
      - s3fs
    # Just so this container won't die and you can test the bucket from within
    command: sleep infinity
    volumes:
      - /media:/data:rshared

What am I doing wrong?
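For what it's worth, 'exec /sbin/tini: exec format error' usually indicates an architecture mismatch between the pulled image and the host. A sketch of one way to compare the two:

# architecture the pulled image was built for
docker image inspect efrecon/s3fs:1.91 --format '{{.Os}}/{{.Architecture}}'
# architecture of the host
uname -m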

Session Token

Does this support session tokens? I have a bucket that requires a session token and can do it from the CLI with s3fs, but adding the variable to the docker-compose or calling it from the command line didn't seem to do anything.

Unable to force UID/GID

When trying to force the UID/GID, I keep getting errors:
s3fs_1 | adduser: uid '33' in use
s3fs_1 | su: unknown user 33
s3fs_1 | Mount failure
s3fs_1 exited with code 0

Add arm64 arch. build to DockerHub releases

Just a suggestion: could you add an arm64 architecture build when you do releases on Docker Hub? This is becoming more popular on AWS instances and for the Apple M1 chip.

Specifying platform: linux/arm64 for this container in the Dockerfile/Compose file causes the build to fail for me in 1.92. A standard x86_64 setting like platform: linux/amd64 builds normally. Thank you.

https://hub.docker.com/r/efrecon/s3fs
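For reference, a sketch of how multi-arch images are commonly built and pushed with buildx (tag shown for illustration; S3FS_VERSION is the build argument this repo's Dockerfile already uses):

docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --build-arg S3FS_VERSION=v1.92 \
    -t efrecon/s3fs:1.92 \
    --push .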

Volume on Host / Containers not reflecting Bucket Contents

OS: Ubuntu 22.04
Docker Version: 20.10.22

Sample Docker-Compose:

version: "3.6"

services:
  php-fpm:
    container_name: "php-fpm"
    build:
      context: ./services/php-fpm
      dockerfile: Dockerfile
    volumes:
      ...
      - $VOLUME_S3FS_PUBLIC:/var/www/html/sites/default/files
      ...
    depends_on:
      - s3fs-public
  ...
  s3fs-public:
    container_name: "s3fs-public"
    image: efrecon/s3fs:1.91
    environment:
      AWS_S3_BUCKET: $MEDIA_S3_BUCKET_PUBLIC
      AWS_S3_ACCESS_KEY_ID: $MEDIA_S3_KEY
      AWS_S3_SECRET_ACCESS_KEY: $MEDIA_S3_SECRET
      AWS_S3_MOUNT: '/opt/s3fs/bucket'
      S3FS_DEBUG: 1
      S3FS_ARGS: ''
    devices:
      - /dev/fuse
    cap_add:
      - SYS_ADMIN
    security_opt:
      - "apparmor:unconfined"
    volumes:
      - '${VOLUME_S3FS_PUBLIC}:/opt/s3fs/bucket:rshared'

The issue I'm having is that when I run docker compose up against the above config (some other containers and env vars omitted), the s3fs volumes don't appear to be shared with the host or the other containers.

This is the output from docker compose logs for the s3fs-public container:

s3fs-public   | Mounting bucket dev-website-public onto /opt/s3fs/bucket, owner: 0:0
s3fs-public   | FUSE library version: 2.9.9
s3fs-public   | nullpath_ok: 0
s3fs-public   | nopath: 0
s3fs-public   | utime_omit_ok: 1
s3fs-public   | unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
s3fs-public   | INIT: 7.34
s3fs-public   | flags=0x33fffffb
s3fs-public   | max_readahead=0x00020000
s3fs-public   |    INIT: 7.19
s3fs-public   |    flags=0x00000039
s3fs-public   |    max_readahead=0x00020000
s3fs-public   |    max_write=0x00020000
s3fs-public   |    max_background=0
s3fs-public   |    congestion_threshold=0
s3fs-public   |    unique: 2, success, outsize: 40

If I docker exec s3fs-public sh and navigate to ./bucket, I can see the contents of the remote S3 bucket. But if I am on the host and navigate to $VOLUME_S3FS_PUBLIC (which the container creates, in this case /media/s3fs-public), I can't see the contents of the remote S3 bucket. Similarly, if I docker exec php-fpm bash and navigate to /var/www/html/sites/default/files, I can't see the contents of the remote S3 bucket either.

I have also tried cloning this repo, setting my S3 credentials in a .env, and running docker compose up against the untouched docker-compose.yml file, but I get the same result, i.e. I can't see the remote S3 files in ./bucket.

Is there additional configuration I need in order for the mounted s3fs to be shared with the host and other containers?

Thanks.
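In case it helps readers with the same symptom: for rshared propagation to push the FUSE mount back out to the host, the host directory must itself sit under a shared mount. A sketch of the host-side preparation, mirroring the bind/--make-shared commands quoted in a later issue on this page (path taken from the report above):

sudo mount -o bind /media/s3fs-public /media/s3fs-public
sudo mount --make-shared /media/s3fs-public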

Dockerfile doesn't build for S3FS_VERSION=v1.87

Cloning into 's3fs-fuse'...
Note: switching to 'tags/v1.87'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

git switch -c <new-branch-name>

Or undo this operation with:

git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 194262c Update ChangeLog and configure.ac for 1.87
--- Make commit hash file -------
--- Finished commit hash file ---
--- Start autotools -------------
configure.ac:30: installing './compile'
configure.ac:26: installing './config.guess'
configure.ac:26: installing './config.sub'
configure.ac:27: installing './install-sh'
configure.ac:27: installing './missing'
src/Makefile.am: installing './depcomp'
parallel-tests: installing './test-driver'
--- Finished autotools ----------
checking build system type... x86_64-pc-linux-musl
checking host system type... x86_64-pc-linux-musl
checking target system type... x86_64-pc-linux-musl
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... ./install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for g++... g++
checking whether the C++ compiler works... yes
checking for C++ compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking whether make supports the include directive... yes (GNU style)
checking dependency style of g++... gcc3
checking for gcc... gcc
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking sys/xattr.h usability... yes
checking sys/xattr.h presence... yes
checking for sys/xattr.h... yes
checking attr/xattr.h usability... no
checking attr/xattr.h presence... no
checking for attr/xattr.h... no
checking sys/extattr.h usability... no
checking sys/extattr.h presence... no
checking for sys/extattr.h... no
checking s3fs build with nettle(GnuTLS)... no
checking s3fs build with OpenSSL... no
checking s3fs build with GnuTLS... no
checking s3fs build with NSS... no
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for common_lib_checking... yes
checking compile s3fs with... OpenSSL
checking for DEPS... yes
checking for malloc_trim... no
checking for library containing clock_gettime... none required
checking for clock_gettime... yes
checking pthread mutex recursive... PTHREAD_MUTEX_RECURSIVE
checking checking CURLOPT_TCP_KEEPALIVE... yes
checking checking CURLOPT_SSL_ENABLE_ALPN... yes
checking checking CURLOPT_KEEP_SENDING_ON_ERROR... yes
checking for git... yes
checking github short commit hash... 194262c
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating test/Makefile
config.status: creating doc/Makefile
config.status: creating config.h
config.status: executing depfiles commands
make all-recursive
make[1]: Entering directory '/s3fs-fuse'
Making all in src
make[2]: Entering directory '/s3fs-fuse/src'
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT s3fs.o -MD -MP -MF .deps/s3fs.Tpo -c -o s3fs.o s3fs.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT curl.o -MD -MP -MF .deps/curl.Tpo -c -o curl.o curl.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT cache.o -MD -MP -MF .deps/cache.Tpo -c -o cache.o cache.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT string_util.o -MD -MP -MF .deps/string_util.Tpo -c -o string_util.o string_util.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT s3fs_util.o -MD -MP -MF .deps/s3fs_util.Tpo -c -o s3fs_util.o s3fs_util.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT fdcache.o -MD -MP -MF .deps/fdcache.Tpo -c -o fdcache.o fdcache.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT common_auth.o -MD -MP -MF .deps/common_auth.Tpo -c -o common_auth.o common_auth.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT addhead.o -MD -MP -MF .deps/addhead.Tpo -c -o addhead.o addhead.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT sighandlers.o -MD -MP -MF .deps/sighandlers.Tpo -c -o sighandlers.o sighandlers.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT openssl_auth.o -MD -MP -MF .deps/openssl_auth.Tpo -c -o openssl_auth.o openssl_auth.cpp
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT test_string_util.o -MD -MP -MF .deps/test_string_util.Tpo -c -o test_string_util.o test_string_util.cpp
mv -f .deps/common_auth.Tpo .deps/common_auth.Po
mv -f .deps/openssl_auth.Tpo .deps/openssl_auth.Po
fdcache.cpp: In static member function 'static bool PageList::GetSparseFilePages(int, size_t, fdpage_list_t&)':
fdcache.cpp:478:33: error: 'SEEK_HOLE' was not declared in this scope
478 | int hole_pos = lseek(fd, 0, SEEK_HOLE);
| ^~~~~~~~~
fdcache.cpp:479:33: error: 'SEEK_DATA' was not declared in this scope; did you mean 'SEEK_SET'?
479 | int data_pos = lseek(fd, 0, SEEK_DATA);
| ^~~~~~~~~
| SEEK_SET
mv -f .deps/sighandlers.Tpo .deps/sighandlers.Po
make[2]: *** [Makefile:630: fdcache.o] Error 1
make[2]: *** Waiting for unfinished jobs....
mv -f .deps/string_util.Tpo .deps/string_util.Po
mv -f .deps/addhead.Tpo .deps/addhead.Po
mv -f .deps/test_string_util.Tpo .deps/test_string_util.Po
mv -f .deps/s3fs_util.Tpo .deps/s3fs_util.Po
mv -f .deps/cache.Tpo .deps/cache.Po
mv -f .deps/s3fs.Tpo .deps/s3fs.Po
mv -f .deps/curl.Tpo .deps/curl.Po
make[2]: Leaving directory '/s3fs-fuse/src'
make[1]: *** [Makefile:400: all-recursive] Error 1
make[1]: Leaving directory '/s3fs-fuse'
make: *** [Makefile:341: all] Error 2
The command '/bin/sh -c apk --no-cache add ca-certificates build-base git alpine-sdk libcurl automake autoconf libxml2-dev libressl-dev fuse-dev curl-dev && git clone https://github.com/s3fs-fuse/s3fs-fuse.git && cd s3fs-fuse && git checkout tags/${S3FS_VERSION} && ./autogen.sh && ./configure --prefix=/usr && make -j && make install' returned a non-zero code: 2
make: *** [build] Error 2

Is --volumes-from a possibility?

I tried using docker run --volumes-from <s3fs-container-id>; the base directory is available, but the bucket content is not there. Is this because of the lack of rshared bind propagation? Is there any way to do something like mounting from another container without having to mount on the host?

Thanks!

Update latest docker image

It took me a long time to realize that even though this issue is solved:

#11

it is not solved in the latest Docker image. You should push an update when you can, or add a note.

Also add git sha to tag

In order to distinguish this project from the source project, we should consider:

  • Generating tags such as 1.90-96052ad, i.e. combining the version from s3fs and the sha of the commit here.
  • Generating tags such as 1.90, corresponding to the latest build from here.

This would allow external users to revert to earlier images if modern additions broke things.

s3fs: unable to access MOUNTPOINT {bucketname}: No such file or directory

Greetings,

I've tried your container with the latest version of s3fs, running 1.89.

When I try to set up the container using a MinIO backend endpoint, I run into the following error:

s3fs: Unable to access MOUNTPOINT my-bucket: No such file or directory
Mount failure

I guess your startup script created the directory /opt/s3fs/bucket, but not the directory which actually holds the bucket, which would need to be /opt/s3fs/bucket/my-bucket if I understand correctly.

Thus, you'd have to change the entrypoint script to mkdir -p $DEST/$AWS_S3_BUCKET or something like that.

I used an ACCESS_KEY/SECRET_KEY combination instead of a keyfile and added S3FS_ARGS: "use_path_request_style,allow_other,default_acl=public".

Do you have any other hints on this?

[HCP] Hitachi Content Platform

Hello,

I can mount HCP's S3 bucket on the host from the CLI:

s3fs bucket-test /mnt/bucket-test -o passwd_file=${HOME}/.passwd -o url=https://test.s3-hcp.lan/ -o use_path_request_style

${HOME}/.passwd contains <key-id>:<access-key>

but when I try to mount the bucket within docker:

docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=bucket-test" \
    --env "AWS_S3_ACCESS_KEY_ID=<key-id>" \
    --env "AWS_S3_SECRET_ACCESS_KEY=<access-key>" \
    --env "AWS_S3_URL=https://video.s3-hcp.bank.lan/" \
    --env "S3FS_ARGS=use_path_request_style" \
    --env UID=$(id -u) \
    --env GID=$(id -g) \
    -v /mnt/bucket-test:/opt/s3fs/bucket:rshared \
    efrecon/s3fs:1.91
the following error appears:

ls: /opt/s3fs/bucket: Connection aborted


Could you help me find out the cause of the abort?

BR,
Serhiy.
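One way to gather more detail on such an abort is the S3FS_DEBUG switch this image supports; a sketch that reruns the command above with debugging enabled:

docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=bucket-test" \
    --env "AWS_S3_ACCESS_KEY_ID=<key-id>" \
    --env "AWS_S3_SECRET_ACCESS_KEY=<access-key>" \
    --env "AWS_S3_URL=https://video.s3-hcp.bank.lan/" \
    --env "S3FS_ARGS=use_path_request_style" \
    --env S3FS_DEBUG=1 \
    -v /mnt/bucket-test:/opt/s3fs/bucket:rshared \
    efrecon/s3fs:1.91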

Run docker-s3fs-client as a non-root user

I agree with another user: thank you for providing (and maintaining!) this Docker image.
I am currently testing the image on a Red Hat OpenShift cluster, to bypass the requirement of having the s3fs code installed directly on the cluster.
One of the main security objectives is to run containers on the OpenShift cluster as a non-root user. I understand that root can be allowed if required, although a lot of admin staff do get upset; hence I am trying to run with either a random or pre-defined UID/GID.
Normally with storage you can get away with just using a GID of '0'.
However, in this case a lot of the commands in docker-entrypoint.sh require root permission to execute.

Any chance we could get extra flags or recommendations on how to run the image as a non-root user?
PS: You can test by adding a USER setting at the end of the current Dockerfile.

S3FS_ARGS syntax in README.md

Hi,

I get "Mount failure" error messages following the syntax given in

`use_path_request_style,allow_other,default_acl=public-read`

When I adapt it to the following syntax, '-oallow_other' (instead of 'allow_other'), it works.

Is it a doc problem, or could your code auto-inject that missing (and ugly) '-o'?
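For reference, a sketch of both forms; the entrypoint quoted further down this page splices ${S3FS_ARGS} verbatim into the s3fs command line, which is why the bare option list fails:

# as documented in the README (fails, the list is not recognised as options):
S3FS_ARGS="use_path_request_style,allow_other,default_acl=public-read"
# adapted form that works, since -o marks the list as mount options:
S3FS_ARGS="-o use_path_request_style,allow_other,default_acl=public-read"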

COPY failed: no source files were specified

Hey,

Thanks for providing this. While trying to build the image with the provided Dockerfile, I get this error:

Step 13/23 : COPY *.sh /usr/local/bin/
COPY failed: no source files were specified

Any idea how to fix this?

Is manual unmount required?

I am creating 2 containers:

  1. s3fs-client container that binds to -v /mnt/tmp/myFolder:/opt/s3fs/bucket:rshared
  2. nginx container that uses the s3 directory from the host -v /mnt/tmp/myFolder/:/s3data:rw

After I stop and kill both containers and look at the /mnt/tmp/myFolder folder, it usually throws:

/mnt/tmp# ls
ls: cannot access 'myFolder': Transport endpoint is not connected

Until I do fusermount -u myFolder. Is this the expected way to do it? Or are there any recommendations?

Note: The /mnt/tmp/ was mounted like this (if it could matter)

sudo mount -o bind /mnt/tmp /mnt/tmp
sudo mount --make-shared /mnt/tmp

Thank you.

GROUP_NAME vs GID

Hi,

I need to mount the s3fs directory as 50000:0, so I pass UID=50000 and GID=0 as env variables. I see that there is a line in your docker-entrypoint.sh that creates a GROUP_NAME variable. That code (maybe it was a fix already) is not present in the latest image:

docker pull efrecon/s3fs
Using default tag: latest
latest: Pulling from efrecon/s3fs
Digest: sha256:56c87521168d51da8bd81a6ecbfa5097c30c839fa449858724cbca7a15fea926
Status: Image is up to date for efrecon/s3fs:latest
docker.io/efrecon/s3fs:latest
docker run -it --entrypoint="" efrecon/s3fs sh
/opt/s3fs # cat /usr/local/bin/docker-entrypoint.sh 
#! /usr/bin/env sh

# Where are we going to mount the remote bucket resource in our container.
DEST=${AWS_S3_MOUNT:-/opt/s3fs/bucket}

# Check variables and defaults
if [ -z "${AWS_S3_ACCESS_KEY_ID}" -a -z "${AWS_S3_SECRET_ACCESS_KEY}" -a -z "${AWS_S3_SECRET_ACCESS_KEY_FILE}" -a -z "${AWS_S3_AUTHFILE}" ]; then
    echo "You need to provide some credentials!!"
    exit
fi
if [ -z "${AWS_S3_BUCKET}" ]; then
    echo "No bucket name provided!"
    exit
fi
if [ -z "${AWS_S3_URL}" ]; then
    AWS_S3_URL="https://s3.amazonaws.com"
fi

if [ -n "${AWS_S3_SECRET_ACCESS_KEY_FILE}" ]; then
    AWS_S3_SECRET_ACCESS_KEY=$(read ${AWS_S3_SECRET_ACCESS_KEY_FILE})
fi

# Create or use authorisation file
if [ -z "${AWS_S3_AUTHFILE}" ]; then
    AWS_S3_AUTHFILE=/opt/s3fs/passwd-s3fs
    echo "${AWS_S3_ACCESS_KEY_ID}:${AWS_S3_SECRET_ACCESS_KEY}" > ${AWS_S3_AUTHFILE}
    chmod 600 ${AWS_S3_AUTHFILE}
fi

# forget about the password once done (this will have proper effects when the
# PASSWORD_FILE-version of the setting is used)
if [ -n "${AWS_S3_SECRET_ACCESS_KEY}" ]; then
    unset AWS_S3_SECRET_ACCESS_KEY
fi

# Create destination directory if it does not exist.
if [ ! -d $DEST ]; then
    mkdir -p $DEST
fi

# Add a group
if [ $GID -gt 0 ]; then
    addgroup -g $GID -S $GID
fi

# Add a user
if [ $UID -gt 0 ]; then
    adduser -u $UID -D -G $GID $UID
    RUN_AS=$UID
    chown $UID $AWS_S3_MOUNT
    chown $UID ${AWS_S3_AUTHFILE}
    chown $UID /opt/s3fs
fi

# Debug options
DEBUG_OPTS=
if [ $S3FS_DEBUG = "1" ]; then
    DEBUG_OPTS="-d -d"
fi

# Mount and verify that something is present. davfs2 always creates a lost+found
# sub-directory, so we can use the presence of some file/dir as a marker to
# detect that mounting was a success. Execute the command on success.

su - $RUN_AS -c "s3fs $DEBUG_OPTS ${S3FS_ARGS} \
    -o passwd_file=${AWS_S3_AUTHFILE} \
    -o url=${AWS_S3_URL} \
    -o uid=$UID \
    -o gid=$GID \
    ${AWS_S3_BUCKET} ${AWS_S3_MOUNT}"

# s3fs can claim to have a mount even though it didn't succeed.
# Doing an operation actually forces it to detect that and remove the mount.
ls "${AWS_S3_MOUNT}"

mounted=$(mount | grep fuse.s3fs | grep "${AWS_S3_MOUNT}")
if [ -n "${mounted}" ]; then
    echo "Mounted bucket ${AWS_S3_BUCKET} onto ${AWS_S3_MOUNT}"
    exec "$@"
else
    echo "Mount failure"
fi

Could you please upload a new image with that "fix"?

Regards

Running the container with docker service

The docs say it is possible to run in a Swarm environment. I have 2 nodes joined in an active swarm. The simplest trial is running the container as a service with docker service create --env bla blah blah --constraint node

And I am still confused by this part of the docker-compose.yml file:

   security_opt:
         - 'apparmor:unconfined'
   devices:
         - /dev/fuse

These options are not available when running as a service.
My plan is simple: I want to try this s3 container service in swarm mode.
Example:

  • a container on node 1 backs up log files
  • the s3 container on node 2 accepts the backed-up log files via the mount

How to add mount to another container

Right now I have this Docker container running successfully, exposing my Wasabi (S3) bucket. I would like to use it in my Plex container, but I can't use the /mnt/ path that I have defined in the Plex container.

Socket not connected

I got this twice, after a while:

ls: /opt/s3fs/bucket: Socket not connected

Relaunching the Docker image fixes the problem, but this disconnection is really problematic for me.
It may be related to this issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/152

Mounted volume is empty

Hi,
I'm running the docker-compose file from the repo and everything works fine: I can see the bucket files inside the container, but the mounted host directory is empty. What's wrong?

Some kinds of s3 failures not detected

Hi,

I had a problem where, due to some configuration issue, an S3 bucket was not available, yet the container still considered the mount to be successful.

This is because s3fs might only realise that there's a problem a few moments after the command goes into the background.

So if you check the output of "mount" right after s3fs exits, it shows the mount point even though mounting didn't actually succeed.

I added a workaround for that in my fork. The solution doesn't seem perfectly clean, but it appears to work. I can create a pull request if you would like to merge it:
totycro@012d136

s3fs: unable to access MOUNTPOINT {bucketname}: No such file or directory

Hey,

I am trying to mount S3 sub-directories using the following command:
s3fs -o iam_role="mediation-ocomc-lit" -o url="https://s3-eu-central-1.amazonaws.com" -o endpoint=eu-central-1 -o dbglevel=info -o curldbg -o allow_other -o use_cache=/tmp mp_umask=002 -o multireq_max=5 iotbigdatarawfilesft-lit:/InputFiles/OCOMC-mnt/ /opt/SP/users/oracle/data/input/

but I am stuck now with this error:
s3fs: unable to access MOUNTPOINT iotbigdatarawfilesft-lit:/InputFiles/OCOMC-mnt/: No such file or directory

I have tried a lot of fixes but with no success; actually, I don't know what the issue is.
Please advise

Thanks & BR,
Haytham

s3fs: Failed to access bucket.

Hi

I tried to start the container, but I am facing this issue:

docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=sample-bucket" \
    --env "AWS_S3_ACCESS_KEY_ID=myname" \
    --env "AWS_S3_SECRET_ACCESS_KEY=mypass" \
    --env "AWS_S3_URL=http://myip:9000" \
    --env UID=$(id -u) \
    --env S3FS_DEBUG=1 \
    --env GID=$(id -g) \
    -v $(pwd)/tmp1111111111:/opt/s3fs/bucket:rshared \
    efrecon/s3fs:1.78

Add group 1000
Add user 1000, turning on rootless-mode
Mounting bucket sample-bucket onto /opt/s3fs/bucket, owner: 1000:1000
FUSE library version: 2.9.9
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 2, opcode: INIT (26), nodeid: 0, insize: 104, pid: 0
INIT: 7.38
flags=0x73fffffb
max_readahead=0x00020000
s3fs: Failed to access bucket.

I checked the network and the credentials; both are fine.

Please advise.

s3fs | adduser: user '1000' in use

If you start the container and need to restart it for whatever reason, the user has already been created and the start-up fails. You appear to need to manually delete the user before restarting the container.
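A hedged sketch of how the entrypoint could make user creation idempotent across restarts (guarding the adduser call instead of running it unconditionally):

# Only create the user when it does not exist yet; the entrypoint names the
# user after the numeric UID, so id can look it up by that name.
if ! id "$UID" >/dev/null 2>&1; then
    adduser -u "$UID" -D -G "$GID" "$UID"
fi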

Permissions using an IAM role

First of all thank you for providing (and maintaing!) this docker image.

We have a minor usability problem: we are using IAM roles to handle permissions on AWS. Unfortunately, the Docker container has a hard check that there is either an authfile or a key pair.

My current workaround is to provide an empty authfile so that the Docker entrypoint is happy, and then to pass "iam_role=auto" via S3FS_ARGS. This works fine, but I thought I would report it anyway, as this use case doesn't seem to be on the radar.

Super low priority.
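For the record, a sketch of the workaround described above: an empty auth file keeps the entrypoint's credentials check happy while iam_role=auto does the real work (whether S3FS_ARGS needs a leading -o depends on the image version; see the S3FS_ARGS issue earlier on this page):

touch empty-passwd && chmod 600 empty-passwd
docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    -v "$(pwd)/empty-passwd:/opt/s3fs/passwd-s3fs:ro" \
    --env "AWS_S3_AUTHFILE=/opt/s3fs/passwd-s3fs" \
    --env "AWS_S3_BUCKET=<bucketName>" \
    --env "S3FS_ARGS=iam_role=auto" \
    -v /mnt/tmp:/opt/s3fs/bucket:rshared \
    efrecon/s3fs:1.91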

docker-entrypoint.sh: reading AWS_S3_SECRET_ACCESS_KEY variable from file not working, request for reading variable from file for AWS_S3_ACCESS_KEY_ID

Question 1:
In docker-entrypoint.sh should:
AWS_S3_SECRET_ACCESS_KEY=$(read -r "${AWS_S3_SECRET_ACCESS_KEY_FILE}")
on line 66 not be:
read -r AWS_S3_SECRET_ACCESS_KEY < "${AWS_S3_SECRET_ACCESS_KEY_FILE}"
so that it works properly (i.e. reads from the file path indicated by AWS_S3_SECRET_ACCESS_KEY_FILE and stores the result in the variable AWS_S3_SECRET_ACCESS_KEY)?

Question 2:
Could this same behavior (i.e. reading from a file) also be implemented for AWS_S3_ACCESS_KEY_ID?
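A hedged sketch of both points together: the corrected read for the secret, plus a hypothetical AWS_S3_ACCESS_KEY_ID_FILE following the same convention (the latter is a proposal, not an existing option of this image):

# Question 1: corrected form, reads the file content into the variable
if [ -n "${AWS_S3_SECRET_ACCESS_KEY_FILE}" ]; then
    read -r AWS_S3_SECRET_ACCESS_KEY < "${AWS_S3_SECRET_ACCESS_KEY_FILE}"
fi
# Question 2: hypothetical counterpart for the key id
if [ -n "${AWS_S3_ACCESS_KEY_ID_FILE}" ]; then
    read -r AWS_S3_ACCESS_KEY_ID < "${AWS_S3_ACCESS_KEY_ID_FILE}"
fi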

GID must be specified when UID is specified

Hi!

Thanks for providing this very convenient docker image!

I just noticed a minor issue: if you specify the UID but not the GID, the mount actually fails.
The first problem is here, where adduser fails because GID is 0, so you get adduser: unknown group 0:
https://github.com/efrecon/docker-s3fs-client/blob/master/docker-entrypoint.sh#L48

Therefore the user is not created and the subsequent commands fail.

This is not really a bug, but it might be nice to, for example, get a more precise error message, or have GID set to some default value.
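A hedged sketch of the suggested guard, failing early with a precise message instead of the cryptic adduser error:

# UID without GID currently breaks adduser; make the constraint explicit
if [ "${UID:-0}" -gt 0 ] && [ "${GID:-0}" -eq 0 ]; then
    echo "GID must be specified when UID is specified" >&2
    exit 1
fi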
