janko / tus-ruby-server
Ruby server for tus resumable upload protocol
Home Page: https://tus.io
License: MIT License
For security purposes, I have separate buckets with different encryption keys per client in my app. As clients sign up, they get a bucket and a key specific to them. Is it possible to create new S3 storage adapters on the fly?
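For illustration, what I have in mind is a small registry that lazily builds and caches one storage object per client (a sketch; the factory block stands in for constructing a Tus::Storage::S3 with that client's bucket and key, and StorageRegistry is a made-up name):

```ruby
# Hypothetical registry: lazily builds and caches one storage object
# per client ID. In practice the factory would return e.g.
# Tus::Storage::S3.new(bucket: client.bucket, ...) for that client.
class StorageRegistry
  def initialize(&factory)
    @factory  = factory
    @storages = {}
  end

  def fetch(client_id)
    @storages[client_id] ||= @factory.call(client_id)
  end
end
```

Whether Tus::Server can consult such a registry per request is exactly the open question, since opts[:storage] appears to be set once for the whole server class.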
We are evaluating this server for large file uploads and would need the SHA256 checksum of the full file at the end for validation. Computing it only after the upload is finished is ugly and a performance issue (e.g. for a 50 GB file).
As I understand the tus protocol, checksumming is only done for chunk verification. Is there any way to get a full-file checksum after the last chunk? Or are there any hooks we could use to compute a kind of "running" checksum while chunks are uploaded?
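For what it's worth, the expensive part is only re-reading the file; the hash itself can be computed incrementally. Ruby's Digest::SHA256 can be fed chunk by chunk and finalized once at the end, so if there were a per-chunk hook it could look something like this (the hook wiring is the hypothetical part):

```ruby
require "digest"

# Incrementally hash chunks as they arrive; the final hexdigest equals
# the SHA256 of the whole concatenated file.
digest = Digest::SHA256.new

["first chunk", "second chunk"].each do |chunk|
  digest.update(chunk)   # would be called from a per-PATCH hook
end

running = digest.hexdigest
whole   = Digest::SHA256.hexdigest("first chunksecond chunk")
running == whole  # => true
```

The running state is tiny, so it could in principle be persisted alongside the upload's info between PATCH requests.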
If a file was previously uploaded but no longer exists on the server because of the expirator, and the client then uploads the exact same file (so the fingerprint matches and the localStorage URL is reused), tus-js runs an OPTIONS call against the server to fetch headers. However, tus-ruby-server returns a 404, which makes the upload fail because the preflight check failed. The OPTIONS call should always return 204; when the client then does a HEAD to the URL and gets the 404, it realises it needs to start a new upload. In short, this server needs to return 204 for the preflight (OPTIONS) request, never a 404, or the whole flow fails.
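As a temporary workaround, this behaviour can be forced with a tiny Rack middleware in front of the server that short-circuits OPTIONS requests (a sketch only; CORS headers and the full set of tus capability headers are omitted, and AlwaysOkPreflight is a made-up name):

```ruby
# Rack middleware: answer every OPTIONS request with 204 before it
# reaches the tus server, so preflight never sees a 404.
class AlwaysOkPreflight
  def initialize(app)
    @app = app
  end

  def call(env)
    if env["REQUEST_METHOD"] == "OPTIONS"
      [204, { "Tus-Resumable" => "1.0.0" }, []]
    else
      @app.call(env)
    end
  end
end
```

In a config.ru this would sit above the mounted Tus::Server, so HEAD/PATCH/POST still reach the server untouched.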
At the current state, we upload (PATCH) a chunk or a whole file, and only after that, if headers are wrong or missing, is a 412 error returned. However, we should let the client know before we receive the file.
Currently, the whole body is uploaded regardless of whether it has invalid or missing headers.
How should we handle errors before we accept the file?
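One way to fail fast would be to validate headers in middleware before the request body is ever read, e.g. (a sketch; EarlyHeaderCheck is a made-up name, and the server's real validation lives elsewhere):

```ruby
# Rack middleware: reject a PATCH whose Tus-Resumable header is
# missing or unsupported with 412, without touching the request body.
class EarlyHeaderCheck
  SUPPORTED = "1.0.0"

  def initialize(app)
    @app = app
  end

  def call(env)
    if env["REQUEST_METHOD"] == "PATCH" &&
       env["HTTP_TUS_RESUMABLE"] != SUPPORTED
      [412, { "Tus-Version" => SUPPORTED }, []]
    else
      @app.call(env)
    end
  end
end
```

Note that even with such a check, a client that doesn't use Expect: 100-continue will transmit the body anyway; the server can only refuse to read it.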
Not sure this is the best place to ask, but I'm implementing this and wanted to test the configuration to find the fastest possible upload speeds. I'm not sure whether concurrency is actually on, even though I set it in the Tus::Storage::S3.new options. Also, is there a recommendation for chunk size?
Tus::Server.opts[:storage] = Tus::Storage::S3.new(
bucket: "....",
access_key_id: .....,
secret_access_key: .....,
region: "us-east-1",
use_accelerate_endpoint: true,
logger: Logger.new(STDOUT),
retry_limit: 5,
http_open_timeout: 10,
concurrency: { concatenation: 20 }
)
Tus::Server.opts[:redirect_download] = true
This is a placeholder issue until we can get another CI going.
Perhaps GitHub Actions CI?
Hey buddy!
I came across the tus protocol recently, as I need to upload huge files and then convert them to mp3. I'm using Companion, AWS S3 Multipart, Lambda & Elastic Transcoder at the moment, but the costs seem too high, so I want to run my own tus server for the files. I checked your package first, as I prefer Ruby over Node or Go, but I then started using the Go implementation "tusd" because it offers a hook system, which is very similar to my current AWS Lambda trigger on "S3 Object created" that kicks off the conversion of audio files to mp3.
Is there any possibility of implementing something like a before-create hook for things like authentication, and an after-create hook for things like "move this file somewhere and process it"? Or did you choose not to on purpose?
Cheers from snowy Germany!
I'm trying to run tus-ruby-server behind a reverse proxy via SSL and would like to use your approach instead of tusd's Docker solution. Is there an equivalent of tusd's -behind-proxy flag, so that the server pays attention to the special headers set by the proxy?
I'm trying to upload 100+ GB files to S3 storage, but I'm getting this error:
FATAL -- : Aws::S3::Errors::InvalidArgument (Part number must be an integer between 1 and 10000, inclusive)
Is there any workaround or fix for this error?
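For context, S3 multipart uploads are limited to 10,000 parts, so the part size determines the maximum object size. The minimum part size for a given file works out as follows (plain arithmetic, not tied to this gem's internals):

```ruby
MAX_PARTS = 10_000  # S3 multipart upload hard limit on part numbers

# Smallest part size (in bytes) that fits a file into 10,000 parts.
def min_part_size(file_size)
  (file_size.to_f / MAX_PARTS).ceil
end

gib = 1024**3
min_part_size(100 * gib)  # => 10737419 bytes, roughly 10.24 MiB
```

So a 100 GB file needs parts of at least about 10.3 MB, comfortably above S3's 5 MiB minimum part size; the practical fix is to raise the part/chunk size rather than the part count.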
Sorry, I want to ask: I want to use tus in my local development.
First I added gem "tus-server", "~> 2.0" to my Gemfile,
and I added this to my routes.rb:
Rails.application.routes.draw do
mount Tus::Server => "/files"
end
When my iOS app hits my localhost:3000/files, it gets error code 400.
Can you help me?
We're using an S3 API 'compatible' storage which seems to occasionally take a second or two to actually persist an update. This seems to be causing our 409 "Upload-Offset header doesn't match current offset" errors: when subsequent chunks are uploaded, the change to the info hasn't been persisted yet.
I'm thinking of subclassing the S3 storage adapter to add a delay for subsequent chunks, or potentially adding some kind of automatic retry mechanism on the client side. Is there a better solution?
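If it helps, the server-side retry variant could be as simple as a generic backoff helper wrapped around the read that computes the current offset (a sketch; with_retries is a made-up helper and the timings are placeholders):

```ruby
# Retry a block up to `attempts` times, sleeping base * 2^n seconds
# between tries, re-raising once the attempts are exhausted.
def with_retries(attempts: 3, base: 0.2, on: StandardError)
  tries = 0
  begin
    yield
  rescue on
    tries += 1
    raise if tries >= attempts
    sleep(base * 2**(tries - 1))
    retry
  end
end
```

In a subclassed storage adapter this would wrap the info read, retrying while the offset still looks stale instead of immediately failing the PATCH with a 409.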
I'd like to be able to do something like the following in Rack:
map '/uploads/s3' do
run Tus::Server, storage: Tus::Storage::S3.new(...)
end
map '/uploads/filesystem' do
run Tus::Server, storage: Tus::Storage::Filesystem.new(...)
end
I've been digging through the Roda internals, and as far as I can make out, this isn't possible because opts is stored against the class.
I managed to work around the limitation by writing some middleware which injects the intended storage destination into the Upload-Metadata header, and then implementing a custom storage adapter whose behaviour depends on which storage destination was specified in the metadata. Workable, but it would have been a lot simpler to do something like the above instead.
Is there a way to do what I've sketched out above that I could pursue? I'm not well-versed in Roda, but from my reading it might be possible for Tus::Server to implement its own Roda::Base::InstanceMethods#opts, which would open the door to instance-level configuration.
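One avenue that may already work: since Roda apps are classes and (as far as I can tell from the Roda source) each subclass gets its own copy of opts, subclassing Tus::Server per mount point might sidestep the problem entirely. A sketch, assuming that subclass-copies-opts behaviour holds:

```ruby
require "tus/server"
require "tus/storage/s3"
require "tus/storage/filesystem"

# Each subclass would get its own copy of opts, so storage can differ.
class S3TusServer < Tus::Server
  opts[:storage] = Tus::Storage::S3.new(bucket: "my-bucket")
end

class FilesystemTusServer < Tus::Server
  opts[:storage] = Tus::Storage::Filesystem.new("data")
end

# config.ru
map("/uploads/s3")         { run S3TusServer }
map("/uploads/filesystem") { run FilesystemTusServer }
```

If the opts copy happens at subclass-creation time, configuring one subclass shouldn't leak into the other, which is exactly the isolation the map blocks need.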
Hi!
I am experiencing this error when I try to upload a file:
I, [2020-05-14T18:08:46.914272 #12057] INFO -- : [Aws::S3::Client 400 0.080107 0 retries] create_multipart_upload(content_type:"text/csv",content_disposition:"inline; filename=\"tmp.csv\"; filename*=UTF-8''tmp.csv",bucket:"weopt-local",key:"[FILTERED]") Aws::S3::Errors::InvalidArgument Invalid argument.
I am using this config in an initializer:
require "tus/server"
require "tus/storage/s3"
Tus::Server.opts[:storage] = Tus::Storage::S3.new(
access_key_id: ENV['S3_ACCESS_KEY'], # "AccessKey" value
secret_access_key: ENV['S3_SECRET'], # "SecretKey" value
endpoint: ENV['S3_ENDPOINT'], # "Endpoint" value
bucket: ENV['GCLOUD_BUQUET'], # name of the bucket you created
region: ENV['GCLOUD_REGION'],
# prefix: 'files',
logger: Logger.new(STDOUT),
force_path_style: true,
)
Tus::Server.opts[:redirect_download] = true # redirect download requests to S3
Trying with the AWS SDK directly, I can upload a file with this code:
s3=Aws::S3::Resource.new(
access_key_id: ENV['S3_ACCESS_KEY'], # "AccessKey" value
secret_access_key: ENV['S3_SECRET'], # "SecretKey" value
endpoint: ENV['S3_ENDPOINT'], # "Endpoint" value
# bucket: ENV['GCLOUD_BUQUET'], # name of the bucket you created
region: ENV['GCLOUD_REGION'],
# prefix: 'files',
force_path_style: true,
)
obj = s3.bucket(ENV['GCLOUD_BUQUET']).object('key')
obj.upload_file('tmp/tmp.csv')
Can you help, please?
Hello!
I'm going to use your tus-ruby-server gem in my Rails app. Do I need to use goliath-rack_proxy with Rails?
I was about to use this but ran into a Bundler conflict.
Rails 4.2.7.1 has a rack dependency of "~> 1.6", while tus-ruby-server has a rack dependency of "~> 2.0".
However, from what I could see, this gem's other dependency, roda, doesn't require such a recent rack version.
It would be good to get this gem's rack constraint relaxed so it works with Rails 4.
It seems that the method that expires files is not checking the prefix. This can potentially expire files outside the scope of tus.
I believe bucket.objects should be bucket.objects(prefix: @prefix) here:
https://github.com/janko/tus-ruby-server/blob/master/lib/tus/storage/s3.rb#L212
And likewise bucket.multipart_uploads should be bucket.multipart_uploads(prefix: @prefix) here:
https://github.com/janko/tus-ruby-server/blob/master/lib/tus/storage/s3.rb#L218
We handle many GBs of video footage uploads per day. When an upload is finished, we move the file, upload it to a CDN, then delete it. But if a person stops the upload halfway through and doesn't resume, we have, say, 20 GB sitting there for 24 hours taking up disk space.
What we need is a setting that updates the upload's expiration timestamp with each PATCH request. The PATCH requests keep the file fresh, and if the client stops uploading, the timestamp stops being refreshed. That way we can run the expirator every 15 minutes without fear of it deleting files that are still being uploaded.
@janko-m Thoughts?