wiserain / docker-rclone
Docker image for rclone mount with scripts
Home Page: https://hub.docker.com/r/wiserain/rclone
When using the mergerfs option, the container ignores the user-provided PUID & PGID environment variables.
Using the following:
environment:
  - PUID=1000
  - PGID=1000
volumes:
  - ${USERDIR}/test-rclone:/cloud:shared
  - ${USERDIR}/test-local:/local:shared
  - ${USERDIR}/test-merged:/data:shared
Creates the following output folder permissions:
drwxr-xr-x 2 root root 4.0K May 6 15:14 test-local
drwxr-xr-x 2 user user 4.0K May 6 15:14 test-merged
drwxr-xr-x 2 user user 4.0K May 6 15:14 test-rclone
My current workaround is to create the local folder with the right user permissions before starting the container, but this is not ideal for portability reasons.
I've never experienced an issue like this before. I've been through hours of troubleshooting with users and mount points and haven't worked out the cause yet, so I thought I'd post an issue in case someone else is experiencing the same.
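A sketch of that workaround as a pre-start step (the path and 1000:1000 are the values from the compose snippet above, not anything the image requires; chown only takes effect when run as root):

```shell
# Hypothetical pre-start step: create the local folder before the
# container does, so it ends up owned by the desired user.
USERDIR="${USERDIR:-$HOME}"
mkdir -p "${USERDIR}/test-local"
# 1000:1000 matches PUID/PGID above; needs root, so failure is tolerated.
chown 1000:1000 "${USERDIR}/test-local" 2>/dev/null || true
```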
All of the files that rclone is syncing from my Google Drive are visible in the container folders (/cloud, /data). However, they're not being passed through to the docker host folders where I've bound them (/mergerfs:/data, for example).
I've got a user on my host called abc with GID and UID 911 to match the container, in case it was a permissions issue.
If I manually create a file in any of the container folders, it's immediately visible on the host. Only files created by rclone or the mergerfs/unionfs aren't coming through to the host, and are therefore not visible to other containers either.
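For what it's worth, mounts created inside a container only propagate back to the host when the bind mount actually has shared propagation; one way to inspect a mount point's propagation mode is sketched below (the / path is just an example, substitute the host directory you bind into the container):

```shell
# Show the propagation mode (shared/private/slave) of a mount point.
# Replace / with the host directory bound into the container.
findmnt -o TARGET,PROPAGATION /
```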
Any chance you could provide some examples for this?
I basically want to refresh the file list at startup, in a similar way to the command below:
/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5574 _async=true
but I have been unable to work out how to correctly use the ENV variable above to trigger a command like that for the whole mount.
I have tried / and . but both result in:
2022/05/05 12:13:13 REFRES: >>> refreshing "."
2022/05/05 12:13:16 REFRES: ".": file does not exist
Since rclone does not keep the file list over restarts, this would allow it to use fast-list to quickly cache the remote filesystem structure and reduce the overall number of calls needed when an app then scans the filesystem. This is useful for remote filesystems that provide changes to the filesystem structure as updates.
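A hedged sketch of how such a refresh could be scripted (the 127.0.0.1:5574 address is an assumption taken from the command above; note that leaving out the dir= argument makes rclone refresh from the mount root):

```shell
# Sketch: refresh the whole VFS cache via the rclone rc API.
refresh_vfs() {
  # The default address is the one used in the command above; adjust it.
  addr="${1:-127.0.0.1:5574}"
  # No dir= argument: the refresh starts at the mount root, sidestepping
  # the 'REFRES: ".": file does not exist' error shown above.
  rclone rc vfs/refresh recursive=true _async=true --rc-addr "$addr"
}
```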
I'm trying to make a setup like the one in this post, and started the config, but I keep getting kicked out of the terminal because the container keeps restarting.
The docker log:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
GID/UID
-------------------------------------
User uid: 1000
User gid: 1000
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 40-config: executing...
[cont-init.d] 40-config: exited 0.
[cont-init.d] 50-rclone: executing...
*** checking rclone.conf
Waiting for rclone configuration file in /config/rclone.conf. Retrying in 30s ...
RUN: docker exec -it <DOCKER_CONTAINER> rclone_setup
Waiting for rclone configuration file in /config/rclone.conf. Retrying in 30s ...
RUN: docker exec -it <DOCKER_CONTAINER> rclone_setup
It loops through the last two lines over and over.
Here is my setup in the docker-compose:
rclone:
  container_name: rclone
  image: wiserain/rclone
  restart: always
  network_mode: "bridge"
  privileged: true
  environment:
    - PUID=$PUID
    - PGID=$PGID
    - TZ=$TZ
    # - RCLONE_REMOTE_PATH=gmedia-crypt:/cloud
  volumes:
    - $ROOT/config/rclone:/config
    - $ROOT/config/rclone/logs:/log
    - $ROOT/cache/rclone:/cache
    # - $ROOT/config/rclone/mounts/gmedia-crypt:/cloud:shared
    - $ROOT/config/rclone/mounts/gmedia-local:/local:shared # Optional: if you have a folder to be mergerfs/unionfs with
    - $ROOT/media:/data:shared
  devices:
    - /dev/fuse
  cap_add:
    - MKNOD
    - SYS_ADMIN
Is it a bug in the docker image or in my setup?
This is really neat. It's a bit janky having to call the rclone move from a cron job on the host, so this is a much nicer solution.
But I've found that a few things in the readme appear to have changed over time and no longer match.
This line:
Along with the rclone folder, you can specify one local directory to be mergerfs with by POOLING_FS=mergerfs.
It should state:
"...you have to specify one local directory"
Otherwise the script doesn't appear to call mergerfs; both have to have values.
And this seems to be incorrect too:
RCLONE_REMOTE_PATH=remote_name:path/to/mount
Should be:
RCLONE_REMOTE_PATH=remote_name:
The /path/to/mount part is hardcoded as /cloud, which can be defined in the docker-compose as - /path/to/mount:/cloud
And these two:
COPY_LOCAL_SCHEDULE
MOVE_LOCAL_SCHEDULE
Should be:
COPY_LOCAL_CRON
MOVE_LOCAL_CRON
I also found that MOVE_LOCAL_AFTER_DAYS has to be set to a number for the move script to work. I don't understand the code well, but I think MOVE_LOCAL_EXCEEDS_GB has a null-value check while MOVE_LOCAL_AFTER_DAYS doesn't, so it errors. Or it could default to a sensible number like 5, the same way PUID and PGID do.
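Putting those corrections together, a compose environment sketch (the remote name, schedule, and host path are made-up examples, not values from the README):

```yaml
environment:
  - POOLING_FS=mergerfs
  - RCLONE_REMOTE_PATH=remote_name:   # remote only; it always mounts at /cloud
  - MOVE_LOCAL_CRON=0 4 * * *         # not MOVE_LOCAL_SCHEDULE
  - MOVE_LOCAL_AFTER_DAYS=5           # must be set, or the move script errors
volumes:
  - /path/to/mount:/cloud             # host side of the hardcoded /cloud
```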
I see an option RCLONE_LOG_FILE, but this variable appears nowhere in the code other than the README. Is this variable possible to use?
Can you please update to 22.04?
Why is the rclone remote control API turned on, and what is it used for? I understand the ports are exposed only to the container itself.
TODO
mergerfs \
  -o uid=${PUID:-911},gid=${PGID:-911},umask=022,allow_other \
  -o ${MFS_USER_OPTS} \
  /local=RW:/cloud=NC /data
If that is executed, it means mergerfs looks for /local and /cloud inside the container,
but the compose that I copied from the GitHub says:
- /volume1/mediaserver/remote:/data:shared
- /volume1/mediaserver/local:/local
i.e. /data:shared and /local.
Checking in the docker, /cloud is indeed empty and the cloud files are on /data.
I don't get it. Is that correct?
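If it helps anyone comparing, the branch mapping in the mergerfs command above implies three container paths; a volumes sketch wiring all of them (the host paths here are hypothetical):

```yaml
volumes:
  - /volume1/mediaserver/cloud:/cloud:shared   # rclone mount (NC branch)
  - /volume1/mediaserver/local:/local:shared   # local writes (RW branch)
  - /volume1/mediaserver/merged:/data:shared   # merged pool that apps should use
```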