kube-s3's Issues

Setting AWS IMDSv2 API response 404

Hi.
I'm getting a 404 when I try to apply the daemonset.
This worked the first time, but I get this error every time I try to create the daemonset again.

kubectl apply -f .daemonset.yaml -n up

kubectl -n up get pods
NAME READY STATUS RESTARTS AGE
s3-provider-54wr8 0/1 CrashLoopBackOff 1 6s
s3-provider-64qtq 0/1 CrashLoopBackOff 1 6s

kubectl -n upiloto logs s3-provider-54wr8

2021-12-22T13:23:13.186Z [INF] s3fs version 1.90(cd466eb) : s3fs -d -f -o endpoint=us-east-1,iam_role=arn:aws:iam:::role/S3All2,allow_other,retries=5 up /var/s3
2021-12-22T13:23:13.188Z [CRT] s3fs_logger.cpp:LowSetLogLevel(240): change debug level from [CRT] to [INF]
2021-12-22T13:23:13.188Z [INF] s3fs.cpp:set_mountpoint_attribute(4093): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
2021-12-22T13:23:13.194Z [INF] curl.cpp:InitMimeType(434): Loaded mime information from /etc/mime.types
2021-12-22T13:23:13.194Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(79): The path to cache top dir is empty, thus not need to check permission.
2021-12-22T13:23:13.197Z [INF] s3fs.cpp:s3fs_init(3382): init v1.90(commit:cd466eb) with OpenSSL
2021-12-22T13:23:13.197Z [INF] s3fs.cpp:s3fs_check_service(3516): check services.
2021-12-22T13:23:13.197Z [INF] curl.cpp:CheckIAMCredentialUpdate(1770): IAM Access Token refreshing...
2021-12-22T13:23:13.197Z [INF] curl.cpp:GetIAMCredentials(2822): [IAM role=arn:aws:iam:::role/S3All2]
2021-12-22T13:23:13.198Z [INF] curl.cpp:RequestPerform(2316): HTTP response code 200
2021-12-22T13:23:13.198Z [INF] curl.cpp:SetIAMv2APIToken(1727): Setting AWS IMDSv2 API token to AQAEAIXa1q5anhlf3OFws-8QjuE2vpwzshqDm8rHiIyobu6LANpz8A==
2021-12-22T13:23:13.198Z [INF] curl.cpp:RequestPerform(2368): HTTP response code 404 was returned, returning ENOENT
2021-12-22T13:23:13.198Z [ERR] curl.cpp:CheckIAMCredentialUpdate(1774): IAM Access Token refresh failed
2021-12-22T13:23:13.198Z [INF] curl_handlerpool.cpp:ReturnHandler(110): Pool full: destroy the oldest handler
2021-12-22T13:23:13.199Z [CRT] s3fs.cpp:s3fs_check_service(3520): Failed to check IAM role name(arn:aws:iam::*:role/S3All2).
2021-12-22T13:23:13.199Z [ERR] s3fs.cpp:s3fs_exit_fuseloop(3372): Exiting FUSE event loop due to errors

2021-12-22T13:23:13.199Z [INF] s3fs.cpp:s3fs_destroy(3440): destroy
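
For anyone debugging the same thing: the 404 is returned by the instance metadata credentials endpoint, so it can help to reproduce the lookup by hand from the node. A minimal sketch, not from this repo; <role-name> is a placeholder, and note that this endpoint is keyed by the role name rather than the full ARN:

# Get an IMDSv2 session token
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# List the role names exposed by the instance profile
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch credentials for one of the listed names; an unknown name returns 404
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>"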

Indentation issue in the daemonset YAML file

There is an indentation issue in the daemonset.yaml file, which results in the following errors:

error: error validating "daemonset.yaml": error validating data: [ValidationError(DaemonSet.spec): unknown field "matchLabels" in io.k8s.api.apps.v1.DaemonSetSpec, ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec]; if you choose to ignore these errors, turn validation off with --validate=false

So, if we try to ignore these errors, it results in the following error:

kubectl apply -f daemonset.yaml --validate=false

The DaemonSet "s3-provider" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"s3-provider"}: `selector` does not match template `labels`

To resolve it, update the top of the daemonset.yaml file (everything down through the template metadata) with the snippet below:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: s3-provider
  name: s3-provider
spec:
  selector:
    matchLabels:
      app: s3-provider
  template:
    metadata:
      labels:
        app: s3-provider
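
As a quick check, not from the repo itself, kubectl can validate the fixed manifest client-side without touching the cluster:

kubectl apply --dry-run=client -f daemonset.yaml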

Can it be used on containerd?

I built an image based on the documentation and deployed it to k3s running on EC2.
ERROR:
IAM_ROLE is not set - mounting S3 with credentials from ENV
/docker-entrypoint.sh: line 19: /usr/bin/s3fs: No such file or directory

My Dockerfile:
###############################################################################
# The FUSE driver needs elevated privileges, run Docker with --privileged=true
###############################################################################

FROM alpine:latest

ENV MNT_POINT /var/s3
ENV IAM_ROLE=none
ENV S3_REGION 'ap-east-1'

VOLUME /var/s3

ARG S3FS_VERSION=v1.89

RUN apk --update add bash fuse libcurl libxml2 libstdc++ libgcc alpine-sdk automake autoconf libxml2-dev fuse-dev curl-dev git; \
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git; \
    cd s3fs-fuse; \
    git checkout tags/${S3FS_VERSION}; \
    ./autogen.sh; \
    ./configure --prefix=/usr; \
    make; \
    make install; \
    make clean; \
    rm -rf /var/cache/apk/*; \
    apk del git automake autoconf;

RUN sed -i s/"#user_allow_other"/"user_allow_other"/g /etc/fuse.conf

COPY docker-entrypoint.sh /
RUN chmod 777 docker-entrypoint.sh
CMD /docker-entrypoint.sh
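
Since the runtime error is a missing /usr/bin/s3fs, a quick sanity check after building (the image tag here is just an example) is to confirm the binary actually made it into the image:

docker build -t s3-mounter-test .
docker run --rm --entrypoint /usr/bin/s3fs s3-mounter-test --version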

does not build on alpine:latest - update to s3fs version 1.93

#6 20.18 g++ -DHAVE_CONFIG_H -I. -I..  -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2    -g -O2 -Wall -fno-exceptions -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT s3fs_global.o -MD -MP -MF .deps/s3fs_global.Tpo -c -o s3fs_global.o s3fs_global.cpp
#6 20.59 mv -f .deps/s3fs_global.Tpo .deps/s3fs_global.Po
#6 20.59 g++ -DHAVE_CONFIG_H -I. -I..  -I/usr/include/fuse -D_FILE_OFFSET_BITS=64 -I/usr/include/libxml2    -g -O2 -Wall -fno-exceptions -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -MT s3fs_help.o -MD -MP -MF .deps/s3fs_help.Tpo -c -o s3fs_help.o s3fs_help.cpp
#6 21.01 In file included from s3fs_help.cpp:26:
#6 21.01 common.h:34:14: error: 'int64_t' does not name a type

I didn't dig deep into the issue, because the easy fix is to update s3fs to version 1.93.
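
For anyone hitting the same build error, the change is just the version argument in the Dockerfile posted in the earlier issue (sketch):

ARG S3FS_VERSION=v1.93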

Pod not supported on Fargate

The pod stays in the Pending state for longer than an hour, and kubectl describe pod shows the following error:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  2m2s  fargate-scheduler  Pod not supported on Fargate: invalid SecurityContext fields: Privileged, volumes not supported: mntdatas3fs is of an unsupported volume Type

Using ConfigMaps for credentials is an anti-pattern.

You shouldn't be posting examples publicly of putting secrets in a ConfigMap. You're supposed to use a Kubernetes Secret for anything which should be kept secret. By showing an example of using a ConfigMap containing API keys, you're teaching people to deploy on K8s in an incredibly insecure way. You should rework this, and your original article, to use Secrets instead.

Additionally, secret credentials should never be checked into a git repository or included in any source code. They can be kept separately in encrypted password storage, such as Keeper or even a password-protected keyring. There are also services designed specifically for keeping corporate credentials secure. You shouldn't be posting examples online of mishandling credentials by carelessly putting them into a ConfigMap.
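
For reference, a minimal sketch of the Secret-based alternative (the names and keys here are illustrative, and the real values should come from an external secret store rather than a file committed to git):

apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<redacted>"
  AWS_SECRET_ACCESS_KEY: "<redacted>"

The daemonset can then load it with envFrom, the same way the ConfigMap is loaded today:

        envFrom:
        - secretRef:
            name: s3-credentials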

pod restart causes error while creating mount source path

I have cases where the s3-provider pod dies and tries to restart. When that happens, it fails to restart, and kubectl describe pod shows:

  Warning  Failed     7s (x4 over 47s)  kubelet, ip-192-168-7-42.eu-west-1.compute.internal  Error: failed to start container "s3fuse": Error response from daemon: error while creating mount source path '/mnt/data-s3fs-shapedo-tools-eu': mkdir /mnt/data-s3fs: file exists
  Warning  BackOff    6s (x3 over 21s)  kubelet, ip-192-168-7-42.eu-west-1.compute.internal  Back-off restarting failed container

On the node I can see that the folder is still mounted but shows "Transport endpoint is not connected".

If I unmount the folder, the pod runs normally.
I am using EKS 1.15.
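
One workaround sketch, borrowed from the init-container pattern posted in the "After some time mount disappears" issue below (the path comes from the error message above; adjust to the real mount point): lazily unmount any stale FUSE mount before the main container starts, so the pod can restart without manual intervention on the node. It needs a hostPath volume for /mnt, as in that daemonset.

      initContainers:
      - name: cleanup-stale-mount
        image: bash
        # lazily unmount a possibly stale fuse mount left by the previous pod; ignore errors
        command: ['bash', '-c', 'umount -l /mnt/data-s3fs || true']
        securityContext:
          privileged: true
        volumeMounts:
        - name: mntdatas3fs-init
          mountPath: /mnt
          mountPropagation: Bidirectional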

After some time mount disappears

Hi, I am using this repository as a reference in my project. I have a pod with an S3 mount running an FFmpeg microservice that reads a file from S3 and then uploads the transcoded HLS stream. On small files this works just fine, but when the files are big and take more than 5 minutes to transcode, at some point the transcoder pod fails its liveness check and gets restarted, losing the transcoding progress. Do you know what the issue might be? Here are my YAML files for reference:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rival-transcoder-deployment
  labels:
    app: rival-transcoder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rival-transcoder
  template:
    metadata:
      labels:
        app: rival-transcoder
    spec:
      serviceAccountName: rival-server-service-account
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: dedicated
                operator: NotIn
                values:
                - infra
      containers:
        - name: rival-transcoder
          image: our transcoder image
          securityContext:
            privileged: true
          volumeMounts:
            - name: mntdatas3fs
              mountPath: /var/s3
              mountPropagation: Bidirectional
          livenessProbe:
            exec:
              command: ["ls", "/var/s3"]
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              memory: "24Gi"
              cpu: "1"
            limits:
              memory: "24Gi"
              cpu: "4"
          ports:
          - containerPort: 1050
          imagePullPolicy: Always
          env: 
          - name: API_URL
            value: {{ .Values.api_url }}
          - name: AWS_S3_BUCKET
            valueFrom:
              secretKeyRef:
                name: s3-secret
                key: s3-bucket
          - name: AWS_REGION
            value: {{ .Values.aws_region }}
          - name: RABBITMQ_HOST
            valueFrom:
              secretKeyRef:
                name: rabbitmq-default-user
                key: host
          - name: RABBITMQ_PORT
            valueFrom:
              secretKeyRef:
                name: rabbitmq-default-user
                key: port
          - name: RABBITMQ_USER
            valueFrom:
              secretKeyRef:
                name: rabbitmq-default-user
                key: username
          - name: RABBITMQ_PASSWORD
            valueFrom:
              secretKeyRef:
                name: rabbitmq-default-user
                key: password
          - name: env
            value: "{{ .Values.environment }}" 
      volumes:
        - name: mntdatas3fs
          hostPath:
            path: /mnt/data-s3-fs/root
      imagePullSecrets:
      - name: ghcr-login-secret
---
apiVersion: v1
kind: Service
metadata:
  name: rival-transcoder-service
  labels:
    run: rival-transcoder
spec:
  type: ClusterIP
  selector:
    app: rival-transcoder
  ports:
    - protocol: TCP
      port: 1050
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: s3-provider
  labels:
    app: s3-provider
spec:
  selector:
    matchLabels:
      app: s3-provider
  template:
    metadata:
      labels:
        app: s3-provider
    spec:
      initContainers:
      - name: init-myservice
        image: bash
        command: ['bash', '-c', 'umount -l /mnt/data-s3-fs/root ; true']
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        envFrom:
        - secretRef:
            name: s3-credentials
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs-init
          mountPath: /mnt
          mountPropagation: Bidirectional
      containers:
      - name: s3fuse
        image: ghcr.io/rival-xr/rival-s3-mount:1.0 
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh","-c","umount -f /var/s3/root"]
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        envFrom:
        - secretRef:
            name: s3-credentials
        env:
        - name: MNT_POINT
          value: /var/s3/root
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3/root
          mountPropagation: Bidirectional
      volumes:
      - name: devfuse
        hostPath:
          path: /dev/fuse
      - name: mntdatas3fs
        hostPath:
          type: DirectoryOrCreate
          path: /mnt/data-s3-fs/root
      - name: mntdatas3fs-init
        hostPath:
          type: DirectoryOrCreate
          path: /mnt
      imagePullSecrets:
      - name: ghcr-login-secret

Docker image is built as per instructions in this repo.
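
Not a root-cause fix, but one thing worth checking in the Deployment above: the probe runs ls against the FUSE mount with timeoutSeconds: 1 and periodSeconds: 5, so three slow s3fs responses within about 15 seconds are enough to restart the pod mid-transcode. A more forgiving probe (the values are only an illustration) might look like:

          livenessProbe:
            exec:
              command: ["ls", "/var/s3"]
            failureThreshold: 5
            initialDelaySeconds: 30
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10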

failed to start container

Hi
I'm trying to use kube-s3 on WSL2 with Ubuntu 20.04 and I'm getting the following error:
Error: failed to start container "s3-test-container": Error response from daemon: path /mnt/data-s3-fs is mounted on / but it is not a shared mount
Can you advise what I can do in order to fix this?
Thanks
Guy
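
Not sure about WSL2 specifically, but that error normally means the root mount on the node is not shared, so the hostPath bind cannot be created with shared propagation. One thing to try on the node (a guess, not verified under WSL2):

sudo mount --make-rshared /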

Help: Using K3S+containerd does not work for this solution

I created the required resources based on the documentation. The mount shows up inside the container, but not on the host. It seems that containerd does not support the shared mount. How can I solve this problem?


## Inside the container

s3fs        fuse.s3fs  16.0E  0     16.0E      0%    /var/s3

## Host

root@test:~# df -hT /mnt/data-s3-fs/

Filesystem  Type       Size   Used  Available  Use%  Mounted on
/dev/sda3   ext4       145G   113G  25G        82%   /
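
One way to narrow this down is to check the mount propagation of the host paths involved; if they are private, mounts created inside the container will never appear on the host. A quick check (sketch):

findmnt -o TARGET,PROPAGATION /
findmnt -o TARGET,PROPAGATION /mnt/data-s3-fs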

Add top-level LICENSE

My employer does not allow contributing to projects that lack a proper license. Can you add one, preferably Apache or MIT? This prevents me from fixing #7 myself.

Upgrade to s3fs 1.89

1.84 is a few years old and has several data corruption and crashing bugs that may have caused #6.

Docs: "Why does it work?" is outdated

From the README:

Why does this work?

Docker engine 1.10 added a new feature which allows containers to share the host mount namespace.

This is outdated, since Kubernetes no longer uses Docker.

Volume mount does not work in Azure AKS

I followed the steps in the blog post https://blog.meain.io/2020/mounting-s3-bucket-kube/, but after the daemonset gets created, when I bash into the pod I see the following in the /var directory.

bash-4.3# ls /var
cache   db      empty   git     lib     lock    log     run     s3fs    s3fs:shared  spool   tmp

Just wanted to check if I am missing anything here that needs to be done for Azure AKS, or does this setup not work on Azure AKS?

Daemonset.yaml

spec:
      containers:
      - name: s3fuse
        image: meain/s3-mounter
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh","-c","umount -f /var/s3fs"]
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        envFrom:
        - configMapRef:
            name: s3-config
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3fs:shared
      volumes:
      - name: devfuse
        hostPath:
          path: /dev/fuse
      - name: mntdatas3fs
        hostPath:
          path: /mnt/s3-data

examplepod.yaml

spec:
  containers:
  - image: nginx
    name: s3-test-container
    securityContext:
      privileged: true
    volumeMounts:
    - name: mntdatas3fs
      mountPath: /var/s3fs:shared
  volumes:
  - name: mntdatas3fs
    hostPath:
      path: /mnt/s3-data
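
One thing that stands out in the manifests above, independent of AKS: the :shared suffix is appended to mountPath, which (as far as I understand) is a Docker-runtime-specific trick; the portable way to express mount sharing in Kubernetes is the mountPropagation field, as in the daemonset posted in another issue here. A sketch of the volumeMounts written that way (untested on AKS):

        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3fs
          mountPropagation: Bidirectional

and on the example pod:

    volumeMounts:
    - name: mntdatas3fs
      mountPath: /var/s3fs
      mountPropagation: HostToContainer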

Crash boot loop because of "Transport endpoint is not connected", umount -l

Hey,
Sometimes I hit an issue that puts the pod into CrashLoopBackOff.
The workaround is to SSH to the node and run umount -l (lazy), then delete the pod and let it get recreated (commands below).
During that time the mount is down.

Debugging results:

  • Trying to unmount returns "device or resource busy"
  • The pod describe shows "Failed to create directory (of mount) already exists"
  • When I ran lsof (which had to be installed with yum), I got the error "Transport endpoint is not connected"
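
For reference, the workaround described above as commands; the path and names are placeholders taken from the other issues in this repo, so adjust them to the actual mount point and pod:

# on the affected node
sudo umount -l /mnt/data-s3-fs

# then recreate the pod
kubectl delete pod <s3-provider-pod> -n <namespace>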

Unable to access mounted directory.

I followed similar steps on our AWS EKS cluster, but while accessing the mounted S3 directories I am getting a permission denied error:

d????????? ? ? ? ? ? s3fs

Mounts are working fine in the s3fs daemonset pods I have created. The issue is only in the additional containers where I am adding the s3fs mount as a volume.

Let me know what additional information I can provide.
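
One way to isolate whether the problem is the FUSE mount itself or the bind into the extra containers is to check it from the node first (a sketch; <host-mount-path> is a placeholder for the hostPath used by the daemonset):

# on the node
findmnt -t fuse.s3fs
ls -la <host-mount-path>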

Host share on example pod is lost when s3-provider is restarted

If you set the host share to the same path as the mountpoint on the host, and the host remounts the FUSE volume, the example pod loses the folder link and gets "Transport endpoint is not connected".

The workaround I found was to mount a folder one level up the tree in the example pod; that folder remains static and does not get a different mapping if the FUSE volume gets remounted. That way the folder stays in place and the mounted subfolder has its node updated correctly.

Ideally you would want something like

  volumes:
  - name: mntdatas3fs
    hostPath:
      path: /mnt
      subpath: data-s3-fs

I am not sure if that is a feature available in Kubernetes. Will test.
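
For what it's worth, hostPath volumes only take path and type, so there is no subpath field there. The workaround described above can be written roughly as mounting the parent directory and relying on propagation (sketch, paths from the earlier issues):

  volumes:
  - name: mntdatas3fs
    hostPath:
      path: /mnt

and in the container spec:

    volumeMounts:
    - name: mntdatas3fs
      mountPath: /mnt
      mountPropagation: HostToContainer

so the bucket then shows up under /mnt/data-s3-fs inside the example pod even after the FUSE volume is remounted.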
