
nginx-big-upload's People

Contributors

mmatuska, pgaertig


nginx-big-upload's Issues

Resumable upload with loadbalancer

The scenario is a load balancer in front of a few backend servers, with resumable upload support. As far as I can see, nginx stores the upload state on individual backends, which prevents us from scaling. I thought of having a shared folder between the nodes, but that adds performance overhead, etc.
Suggestions are welcome.
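One common direction is to route all chunks of a given upload session to the same backend node, for example by hashing the session id (nginx's upstream hash directive can do this server-side). A minimal sketch of the idea in plain Python, with hypothetical backend addresses and the helper name `pick_backend` invented for illustration:

```python
import hashlib

def pick_backend(session_id, backends):
    """Deterministically map an upload session to one backend,
    so every chunk of that session reaches the same node."""
    digest = hashlib.sha1(session_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
node = pick_backend("uid_hash_timestamp", backends)
# repeated calls with the same session id always return the same node
assert pick_backend("uid_hash_timestamp", backends) == node
```

This avoids any shared storage: the state for a session only ever lives on the node the hash selects.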

SPDY support (external issue)

This is a placeholder to test SPDY support, available from nginx 1.4.0. So far I have experienced the problems described in openresty/lua-nginx-module#252.

I see two workarounds currently:

  1. A different domain/server block for uploads with SPDY disabled. CORS must then be handled on the client side.
  2. HAProxy 1.5-dev12+ in front of nginx as a SPDY/SSL terminator.

Looking forward to more tests and contributions to the ngx_lua module.

in-core solution

nginx-upload-module is a widely known solution for reliable big-file uploads with a resume option, and it's really great to see it still supported, even under a new project name.

But I still don't understand why not simply use the in-core client_body_in_file_only functionality. It lacks documentation and hardly anybody uses it, but we tried it and had some success. I can share my experience and configuration here if necessary.

pass the initial params to final url

I need some params to be passed to the final URL when the file upload is finished,

e.g. a config option like upload_pass_form_field in nginx-upload-module.

global lua variables race condition

Using this with LuaJIT 2.1-20190912 from https://github.com/openresty/luajit2 produces warnings about global variables:

[warn] 6#6: *1 [lua] _G write guard:12: __newindex(): writing a global Lua variable ('file_size') which may lead to race conditions between concurrent requests, so prefer the use of 'local' variables

and

[warn] 6#6: *1 [lua] _G write guard:12: __newindex(): writing a global Lua variable ('file_data') which may lead to race conditions between concurrent requests, so prefer the use of 'local' variables

Upload alternates between 200 OK and 412 PRECONDITION FAILED

Hi Piotr,

I managed to get the file uploads working earlier. I now have everything running in Docker swarm mode. However, when I repeatedly upload the same (chunked) file, I first get a 200 OK, then a 412 Precondition Failed, then a 200 OK again, and so on; it alternates perfectly between the two. I noticed you use the 411 and 412 codes in your Lua scripts, so my feeling is that the problem is either somewhere in the scripts or in how I'm calling the endpoints.

Below is some output from my Python test script talking to the Nginx upload endpoint. The first part shows a successful scenario (uploading 11 chunks). The second test run fails with a 412 error right after the 11th chunk. After that it passes again, then fails again.

Do you have any idea what could cause this strange but regular behavior? I have attached the test output, my nginx.conf, and the test code to this issue, in case you want to reproduce it.

Thanks,

Ralph

============================================================================ test session starts ============================================================================
platform darwin -- Python 2.7.11, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/Ralph/development/flask-microservices, inifile:
collected 8 items

backend/tests/test_auth.py ...
backend/tests/test_compute.py ..
backend/tests/test_scenarios.py .
backend/tests/test_storage.py ..

========================================================================= 8 passed in 0.20 seconds ==========================================================================
ralph@macpro:~$ ./manage.sh test
============================================================================ test session starts ============================================================================
platform darwin -- Python 2.7.11, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/Ralph/development/flask-microservices, inifile:
collected 8 items

backend/tests/test_auth.py ...
backend/tests/test_compute.py ..
backend/tests/test_scenarios.py .
backend/tests/test_storage.py .sent chunk 1
sent chunk 2
sent chunk 3
sent chunk 4
sent chunk 5
sent chunk 6
sent chunk 7
sent chunk 8
sent chunk 9
sent chunk 10
sent chunk 11
.

========================================================================= 8 passed in 0.41 seconds ==========================================================================
ralph@macpro:~$ ./manage.sh test
============================================================================ test session starts ============================================================================
platform darwin -- Python 2.7.11, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/Ralph/development/flask-microservices, inifile:
collected 8 items

backend/tests/test_auth.py ...
backend/tests/test_compute.py ..
backend/tests/test_scenarios.py .
backend/tests/test_storage.py .sent chunk 1
sent chunk 2
sent chunk 3
sent chunk 4
sent chunk 5
sent chunk 6
sent chunk 7
sent chunk 8
sent chunk 9
sent chunk 10
sent chunk 11
F

================================================================================= FAILURES ==================================================================================
_____________________________________________________________________________ test_upload_file ______________________________________________________________________________

def test_upload_file():

    response = requests.post(uri('auth', '/tokens'), headers=login_header('ralph', 'secret'))
    assert response.status_code == 201
    token = response.json()['token']

    f_name = 'data.nii.gz'
    f_path = os.path.join(os.getenv('DATA_DIR', os.path.abspath('data')), f_name)
    session_id = None

    with open(f_path, 'rb') as f:
        i = 0
        j = 1
        n = os.path.getsize(f_path)
        for chunk in read_chunks(f, 1024*1024):
            content_range = 'bytes {}-{}/{}'.format(i, i + len(chunk) - 1, n)
            headers = token_header(token)
            headers.update({
                'Content-Length': '{}'.format(len(chunk)),
                'Content-Type': 'application/octet-stream',
                'Content-Disposition': 'attachment; filename={}'.format(f_name),
                'X-Content-Range': content_range,
                'X-Session-ID': session_id,
            })
            response = requests.post(uri('file', '/files'), headers=headers, data=chunk)
            if response.status_code == 200:
                break
            print('sent chunk {}'.format(j))
            assert response.status_code == 201

E assert 412 == 201
E + where 412 = <Response [412]>.status_code

backend/tests/test_storage.py:39: AssertionError
==================================================================== 1 failed, 7 passed in 0.39 seconds =====================================================================
ralph@macpro:~$ ./manage.sh test
============================================================================ test session starts ============================================================================
platform darwin -- Python 2.7.11, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/Ralph/development/flask-microservices, inifile:
collected 8 items

backend/tests/test_auth.py ...
backend/tests/test_compute.py ..
backend/tests/test_scenarios.py .
backend/tests/test_storage.py .sent chunk 1
sent chunk 2
sent chunk 3
sent chunk 4
sent chunk 5
sent chunk 6
sent chunk 7
sent chunk 8
sent chunk 9
sent chunk 10
sent chunk 11
.

========================================================================= 8 passed in 0.40 seconds ==========================================================================
ralph@macpro:~$ ./manage.sh test
============================================================================ test session starts ============================================================================
platform darwin -- Python 2.7.11, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/Ralph/development/flask-microservices, inifile:
collected 8 items

backend/tests/test_auth.py ...
backend/tests/test_compute.py ..
backend/tests/test_scenarios.py .
backend/tests/test_storage.py .sent chunk 1
sent chunk 2
sent chunk 3
sent chunk 4
sent chunk 5
sent chunk 6
sent chunk 7
sent chunk 8
sent chunk 9
sent chunk 10
sent chunk 11
F

================================================================================= FAILURES ==================================================================================
_____________________________________________________________________________ test_upload_file ______________________________________________________________________________

def test_upload_file():

    response = requests.post(uri('auth', '/tokens'), headers=login_header('ralph', 'secret'))
    assert response.status_code == 201
    token = response.json()['token']

    f_name = 'data.nii.gz'
    f_path = os.path.join(os.getenv('DATA_DIR', os.path.abspath('data')), f_name)
    session_id = None

    with open(f_path, 'rb') as f:
        i = 0
        j = 1
        n = os.path.getsize(f_path)
        for chunk in read_chunks(f, 1024*1024):
            content_range = 'bytes {}-{}/{}'.format(i, i + len(chunk) - 1, n)
            headers = token_header(token)
            headers.update({
                'Content-Length': '{}'.format(len(chunk)),
                'Content-Type': 'application/octet-stream',
                'Content-Disposition': 'attachment; filename={}'.format(f_name),
                'X-Content-Range': content_range,
                'X-Session-ID': session_id,
            })
            response = requests.post(uri('file', '/files'), headers=headers, data=chunk)
            if response.status_code == 200:
                break
            print('sent chunk {}'.format(j))
            assert response.status_code == 201

E assert 412 == 201
E + where 412 = <Response [412]>.status_code

backend/tests/test_storage.py:39: AssertionError
==================================================================== 1 failed, 7 passed in 0.35 seconds =====================================================================

info.zip
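Worth noting about the quoted test: the offset `i` and counter `j` are never incremented inside the loop, so every chunk after the first reports the same byte range in X-Content-Range, which could plausibly trip the module's precondition checks. A small sketch (the helper name `content_ranges` is invented here) of how the range header values should advance per chunk:

```python
def content_ranges(chunk_sizes, total):
    """Yield 'bytes start-end/total' values for successive chunks,
    advancing the offset after each one (the quoted test never
    advances i, so every chunk claims the same range)."""
    offset = 0
    for size in chunk_sizes:
        yield 'bytes {}-{}/{}'.format(offset, offset + size - 1, total)
        offset += size

list(content_ranges([4, 4, 2], 10))
# ['bytes 0-3/10', 'bytes 4-7/10', 'bytes 8-9/10']
```

Whether this explains the alternating 200/412 depends on how the module compares the reported range against its stored state, which the thread doesn't show.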

How to use 'auth_request' directive with this module?

I cannot use auth_request. Nginx proxies the request body to my backend at /upload instead of performing the subrequest to /api/v1/check_auth.
Can you help me? @pgaertig

location = /upload {
        auth_request    /api/v1/check_auth;
        uwsgi_pass_request_body off;
        uwsgi_pass      uwsgi://api:8001;

        set $storage backend_file;
        set $file_storage_path /var/www/data;
        set $backend_url /api/v1/upload_finish;

        set $bu_sha1 on;
        set $bu_checksum on;

        set $package_path '/etc/nginx/?.lua';
        content_by_lua_file /etc/nginx/big-upload.lua;
    }

MAX_FILESIZE Check...

I need a way to determine the file size so that, if the file exceeds a certain limit, the next chunk is refused and an error is returned. In addition, the partial upload should be deleted.
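The module itself doesn't expose such an option in this thread, but since the client declares the total size in the X-Content-Range header ("bytes start-end/total"), a backend or pre-check can reject an upload early. A sketch, with the function name and limit chosen here for illustration:

```python
import re

MAX_FILESIZE = 100 * 1024 * 1024  # example limit: 100 MiB

def exceeds_limit(x_content_range, limit=MAX_FILESIZE):
    """Parse 'bytes start-end/total' and report whether the declared
    total size exceeds the limit, so the chunk can be refused before
    any more data is accepted."""
    m = re.match(r'bytes (\d+)-(\d+)/(\d+)$', x_content_range)
    if not m:
        raise ValueError('malformed X-Content-Range: %r' % x_content_range)
    return int(m.group(3)) > limit

exceeds_limit('bytes 0-1048575/209715200')  # 200 MiB total -> True
exceeds_limit('bytes 0-1048575/52428800')   # 50 MiB total -> False
```

Deleting the already-stored partial file on rejection would still need to happen server-side, next to wherever $file_storage_path points.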

crash with backend enabled

I'm using Ubuntu Server 12.04 LTS with Lua 5.1. No request to the backend happens, although it works well in file-only mode. Any ideas how to debug this problem? The only line in the nginx error log is:

worker process xxx exited on signal 11

clean-up files after backend error

Add something like:

syntax: upload_cleanup <HTTP status/range> [<HTTP status/range>...]
default: none
severity: optional
context: server, location
Specifies HTTP statuses after which all files successfully uploaded in the current request will be removed. Used for cleanup after a backend or server failure. The backend may also explicitly signal an erroneous status if it doesn't need the uploaded files for some reason. An HTTP status must be a numerical value in the range 400-599; no leading zeroes are allowed. Ranges of statuses can be specified with a dash.

example:
upload_cleanup 400 404 499 500-505;
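The proposed directive's argument syntax is easy to pin down with a small parser sketch (plain Python, helper name invented here) that expands single codes and dash ranges into the set of statuses that would trigger cleanup:

```python
def parse_cleanup_statuses(spec):
    """Expand 'upload_cleanup'-style arguments, e.g.
    '400 404 499 500-505', into the set of HTTP statuses
    after which uploaded files should be removed."""
    statuses = set()
    for token in spec.split():
        if '-' in token:
            lo, hi = map(int, token.split('-'))
            statuses.update(range(lo, hi + 1))
        else:
            statuses.add(int(token))
    return statuses

codes = parse_cleanup_statuses('400 404 499 500-505')
assert 502 in codes and 404 in codes and 200 not in codes
```

Validation against the 400-599 range and the no-leading-zeroes rule from the spec above could be added at the same spot.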

Prevent file uploading before condition.

How can we make this work? For example, the backend (in any language) creates Redis keys:
the client asks whether a file upload is available -> the server behind nginx checks this and creates a Redis key, e.g. uid_hash_timestamp with a 30 s TTL -> it sends the upload URL to the client -> the client starts uploading to the given URL -> nginx checks the second part of the URL, e.g. (location upload/url_key_in_redis) -> if the key exists, hand off to the file upload module -> ok.

If another client starts uploading to the same URL (which is not his), or the URL is not present in Redis, we should immediately block the request without any parsing (no nginx processing, no upload processing).
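The flow above can be sketched without nginx or Redis: the class below is an in-memory stand-in for the Redis keys described (class and method names are invented for illustration), showing the issue/check protocol with a TTL and an ownership check:

```python
import time
import uuid

class UploadGate:
    """In-memory stand-in for the Redis keys described above: a key
    is issued per client with a TTL, and checked before any upload
    processing happens."""
    def __init__(self, ttl=30):
        self.ttl = ttl
        self.keys = {}  # key -> (client_id, expiry timestamp)

    def issue(self, client_id):
        """Create a one-time upload key, as the backend would on request."""
        key = uuid.uuid4().hex
        self.keys[key] = (client_id, time.time() + self.ttl)
        return key

    def allowed(self, key, client_id):
        """Check the key before accepting any upload data."""
        entry = self.keys.get(key)
        if entry is None:
            return False  # unknown URL: block immediately
        owner, expiry = entry
        return owner == client_id and time.time() < expiry

gate = UploadGate(ttl=30)
key = gate.issue('client-a')
assert gate.allowed(key, 'client-a')      # the owner may upload
assert not gate.allowed(key, 'client-b')  # someone else's URL is blocked
assert not gate.allowed('missing', 'client-a')
```

In the real setup the `allowed` check would live in an access-phase Lua handler or an auth subrequest in front of the upload location, querying Redis instead of a dict.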

Error when trying to build docker with nginx 1.15.x

Makefile:8: recipe for target 'build' failed
The command '/bin/sh -c DOCKER=1 /tmp/build_nginx.sh && apt-get install dumb-init zlib1g-dev libssl1.0-dev && rm -rf /usr/src /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc /usr/share/doc-base /usr/share/man /usr/share/locale /usr/share/zoneinfo /usr/src && groupadd -g 1000 -o nginx && useradd --shell /usr/sbin/nologin -u 1000 -o -c "" -g 1000 -G www-data nginx' returned a non-zero code: 2

Test with Nginx 1.3.15

Do some production testing with Nginx 1.3.x,
especially with ngx_http_spdy_module available with Nginx 1.3.15.

$file_storage_path other than /tmp does not seem to work

Hi,

First of all, thanks for the great work on this Nginx extension. It looks really promising. So far, I managed to get the example configuration running inside a Docker container. My files are nicely uploaded in chunks and reported back to the upload page where I can download them again. I changed the nginx.conf slightly to use an alias directive in the /tmp location instead of a root directive. Also, the alias is pointing to /tmp/ instead of /. This seems to work.

However, when I change $file_storage_path and the alias to something else, e.g. /home, I get the following weird Nginx errors:

webserver | nginx: [alert] lua_code_cache is off; this will hurt performance in /usr/local/nginx/conf/nginx.conf:24
webserver | 2016/08/05 10:17:24 [error] 8#0: *1 [lua] big-upload.lua:41: Failed to open file /home/88224118406027090, client: 192.168.99.1, server: , request: "POST /upload HTTP/1.1", host: "192.168.99.100", referrer: "http://192.168.99.100/"

webserver | 192.168.99.1 - - [05/Aug/2016:10:17:24 +0000] "POST /upload HTTP/1.1" 500 5 "http://192.168.99.100/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
webserver | 192.168.99.1 - - [05/Aug/2016:10:17:29 +0000] "\x1F\x8B\x08\x08H\xFC\x88W\x00\x03data.nii\x00\x9C\xBDy\xD8m\xDBU\xD69\xE6\x5C\xFDZ{\xEF\xEF\xFB\xCE9\xF7\x9C\xDB\xF7\xB9\x897\xC9MCB\xE0\x92\x10\x08\x84\x10 \x01\x02A\x90F\x1AC\x80\xD0\xB7\x16\x8F\x1A\x04\xA1xP\x14\x0C E\x1FD\xA0D\xA4\x8BF\x90\x18\x10\x8CE!" 400 173 "-" "-"

So it looks like big-upload.lua suddenly can't open files in the /home directory, even though /home exists in the Docker image.

Any idea what the cause of this might be? Is there a hard-coded dependency on the /tmp directory? In the end I'd like to use a shared Docker data volume container with a mounted directory like /mnt/shared/files, but this assumes I can point $file_storage_path at that mounted directory.

I attached my nginx.conf to this issue. Thanks in advance for your time!

Ralph
nginx-yoda.conf.zip

Multiple File Upload

Thanks for working on this!

I wanted to know if you had any plans to add support for HTML5 multiple file upload.
There's a page describing it at http://css.dzone.com/articles/working-html5s-multiple-file
The Spec is at http://www.w3.org/TR/html-markup/input.file.html

The short version is it lets users add multiple files to a single upload, which is rather convenient. I had talked with the nginx-upload-module author about adding them, but he seems to have lost interest in the project.

In any event, thanks for building this.
-CPD

Hangs if used together with auth_request

I'm trying to use this module together with the auth_request module, so I can authorize uploads before they happen instead of on the final chunk. However, if I include an auth_request directive in my config, nginx just hangs without performing a request. If I remove the auth_request directive, everything works perfectly. Do you have any idea what could cause this?

My config:

worker_processes  1;
pid /tmp/nginx-upload.pid;

error_log  /dev/stderr;
daemon off;

events {
    worker_connections  10;
}

http {
    server {
        listen 9000;

        access_log /dev/stdout combined;
        error_log /dev/stderr;

        # Max size of chunks
        client_max_body_size 10m;



        location = /upload {
            auth_request /auth; # This is the line that messes it up, if I remove this, the upload succeeds
            lua_code_cache off;
            set $storage backend_file;
            set $bu_checksum on;

            set $file_storage_path /srv/http/uploads;
            set $backend_url /finish;
            set $package_path '../?.lua';
            content_by_lua_file ../big-upload.lua;
        }

        location = /auth {
            internal;
            proxy_pass http://127.0.0.1/uploadtokens/auth;
            proxy_pass_request_body off;
        }

        location = /finish {
            internal;
            proxy_pass http://127.0.0.1/finish;
        }
    }
}

I'm on 4fb4541 and I have used the build_nginx_with_docker.sh script to build nginx, the only change I made was adding the --with-http_auth_request_module flag.

Errors when uploading multiple files simultaneously

I use nginx-big-upload on my site for mass video uploading. But when somebody tries to upload 3-5 files simultaneously, the server starts responding with 500 errors in the middle of the upload. Is there a way to fix this, or at least to resume the upload after the error?
