benoitc / gunicorn
gunicorn 'Green Unicorn' is a WSGI HTTP Server for UNIX, fast clients and sleepy applications.
Home Page: http://www.gunicorn.org
License: Other
Gunicorn should support switching user/group for workers. I see no reason to change master?
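A minimal sketch of what per-worker privilege dropping could look like (the function name and call site are assumptions, not gunicorn's actual code):

```python
import grp
import os
import pwd

def drop_privileges(user, group):
    """Switch the current process to the given user/group when started as root."""
    if os.geteuid() != 0:
        return  # non-root processes cannot switch identity
    gid = grp.getgrnam(group).gr_gid
    uid = pwd.getpwnam(user).pw_uid
    os.setgid(gid)  # drop the group first, while we still have the privilege
    os.setuid(uid)  # then irreversibly drop to the target user
```

The arbiter would call something like this in each worker immediately after fork, leaving the master untouched.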
I'm using a command like this one to run a gunicorn server for a Django site: gunicorn_django -w 2 -b 0.0.0.0:8000 -p gunicorn.pid -D
This works fine and the server is started properly. If I run kill -HUP 12345
with the actual PID of the master process everything also works fine -- the server gracefully restarts and all is well.
The problem is that I don't want to look up the PID of the master process every time I want to restart it. I'd like to use something like this to figure it out automatically:
kill -HUP `cat gunicorn.pid`
When I do this the existing processes are always killed, but a new process is only spawned about 50% of the time.
I'm wondering if there's a problem with the new process trying to lock gunicorn.pid for writing while the cat command still has it open, or something similar. It's entirely possible that something else is wrong, though, so I'm not really sure.
I'm on OS X 10.6 in case that matters.
I started up a server with 5 worker processes, then increased that number by 1 by sending a TTIN signal, and then attempted to decrease that number by 1 by sending a TTOU signal. However, I noticed that no processes were actually killed, so I sent another TTOU signal, at which point the arbiter crashed.
Here's the console output from that session:
INFO: Booted Arbiter: 62992
INFO: Worker 62993 booting
INFO: Worker 62994 booting
INFO: Worker 62995 booting
INFO: Worker 62996 booting
INFO: Worker 62997 booting
INFO: Handling signal: ttin
INFO: Worker 62999 booting
INFO: Handling signal: ttou
INFO: Handling signal: ttou
ERROR: Unhandled exception in main loop.
Traceback (most recent call last):
File "/Users/ericflo/.virtualenvs/realitypick/src/gunicorn/gunicorn/arbiter.py", line 143, in run
self.murder_workers()
File "/Users/ericflo/.virtualenvs/realitypick/src/gunicorn/gunicorn/arbiter.py", line 207, in murder_workers
for (pid, worker) in self.WORKERS.iteritems():
RuntimeError: dictionary changed size during iteration
INFO: Worker 62995 exiting.
INFO: Worker 62994 exiting.
INFO: Worker 62997 exiting.
INFO: Worker 62996 exiting.
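The RuntimeError above points at self.WORKERS being mutated while it is iterated; a sketch of the likely fix (simplified stand-ins, not the actual patch) is to iterate over a snapshot of the dict:

```python
# Simplified stand-in for the arbiter's worker table
WORKERS = {62993: "worker-a", 62994: "worker-b"}

def murder_workers(workers):
    # list(...) snapshots the items, so entries can be removed mid-loop
    for pid, worker in list(workers.items()):
        workers.pop(pid, None)  # reaping a worker shrinks the dict safely

murder_workers(WORKERS)
```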
Sending a HUP or TERM signal to the arbiter process probably should not result in zombie worker processes. HUP shouldn't kill the arbiter process, either. INT seems to work properly.
If anyone's running into this, this bash-fu helped me :)
ps aux | grep python | awk '{print $2}' | xargs kill
If you send a TTOU or do anything else that calls arbiter:reap_workers(), gunicorn will kill the newest worker first. In cases where you want to cycle workers periodically (because of memory leaks or whatnot), it's probably saner to kill the oldest first.
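A sketch of oldest-first selection, assuming each worker record carries an age counter that increases with each spawn (names are illustrative):

```python
# pid -> worker record; a lower age means the worker was booted earlier
workers = {101: {"age": 3}, 102: {"age": 1}, 103: {"age": 2}}

def oldest_pid(workers):
    # pick the pid whose worker has the smallest age counter
    return min(workers, key=lambda pid: workers[pid]["age"])
```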
If a client disconnects while gunicorn is either sending a response or processing a request, the worker turns up a broken-pipe traceback. This is pretty common on crowded servers and results in a lot of noise in your logs.
Gunicorn should handle these exceptions by logging at the INFO or WARN level (I'd go with whatever nginx uses; I can't recall right now) and closing the socket, instead of letting the socket generate that broken-pipe traceback.
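A sketch of the proposed handling (the helper and the exact errno set are assumptions, not gunicorn's code):

```python
import errno
import logging
import socket

log = logging.getLogger(__name__)

def safe_send(sock, data):
    """Send data, downgrading client disconnects to an INFO log line."""
    try:
        sock.sendall(data)
    except socket.error as e:
        if e.args[0] in (errno.EPIPE, errno.ECONNRESET):
            log.info("client disconnected mid-response")
            sock.close()
        else:
            raise  # anything else is still a real error
```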
Do you think it's a good idea to have some way to run many applications in the same gunicorn instance, so that they share the same Python interpreter(s)?
At least for me this would be very useful, because it would save a lot of RAM: I have many small applications (with very low load) running on the same machine.
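A sketch of what path-based dispatch inside one instance could look like (the dispatcher is hypothetical, not a gunicorn feature):

```python
def make_dispatcher(mounts):
    """Route requests to one of several WSGI apps by URL prefix."""
    def dispatcher(environ, start_response):
        path = environ.get("PATH_INFO", "/")
        for prefix, app in mounts.items():
            if path.startswith(prefix):
                return app(environ, start_response)
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    return dispatcher
```

gunicorn would then serve `make_dispatcher({"/app1": app1, "/app2": app2})` as a single WSGI callable.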
Gunicorn fails to start when using run_gunicorn 0.0.0.0:8001, for example. The error is on row 47:
host, port = host.split(':', 1)
should be
host, port = bind.split(':', 1)
Cheers!
Peppe
At the moment it just uses settings.py, and the following:
gunicorn_django -b 0.0.0.0:8000 production_settings
doesn't seem to make any difference.
The example gunicorn.conf.py at http://gunicorn.org/configuration.html doesn't seem to be valid Python. The lambda statements don't close properly. Trying to use that file as a basis for my own config, I got this:
RuntimeError: before_fork hook isn't a callable
Are lines being truncated or something?
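A config written with plain def functions avoids the truncated-lambda problem entirely; a minimal sketch using the before_fork hook named in the error above (signature assumed):

```python
# gunicorn.conf.py
workers = 2

def before_fork(server, worker):
    # runs in the master just before a worker is forked
    server.log.info("about to fork a worker")

def after_fork(server, worker):
    # runs in the new worker right after the fork
    server.log.info("worker spawned (pid: %s)", worker.pid)
```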
Latest gunicorn on a karmic box:
gunicorn_django -b 127.0.0.1:8000 -D --workers=4
kill -TTOU ****
Now 3 are running.
kill -HUP ****
4 are back again.
It's not 'critical' though.
In order to be able to use any WSGI app with tornado-based workers, apps not based on tornado.web.Application have to be wrapped with tornado.wsgi.WSGIContainer. See: http://github.com/ckreutzer/gunicorn/commit/aaf9e1168ae1a1bab02ee860bfec45e5dde28278
gunicorn crashes when used with the suds lib: https://fedorahosted.org/suds/ ; it probably comes from "socket.setdefaulttimeout(tm)" with Python 2.5.2.
Usage: http://www.friendpaste.com/ONxIlbMgLvXObUVP5Iz9L
Error : http://www.friendpaste.com/1tnKdLI9wzHVG52mvWonF3
Python : 2.5.2
Gunicorn : Latest tip ... issue #7 fixed
Running : gunicorn_django --port=3001 --workers=4 --host=127.0.0.1
Add the following line in your settings.py:
http://friendpaste.com/4Igb3sDGzdY7PR860OgX1f
Gives you the following trace:
http://friendpaste.com/5NHucaCHLjXTt22SH8luMe
This used to work with gunicorn 0.2
Regards,
xav
Lots of people have been asking how to configure gunicorn for use with a virtualenv. We should add a FAQ or a section to the deployment docs. We should also add a note on using gunicorn from a virtualenv under runit or supervisord.
I've been doing some load testing of my app on gunicorn with the gevent arbiter, but under concurrent load the server will completely lock up if I exhaust the available file descriptors (in my case it's opening too many sockets to talk to the database). I can do some tuning on the system and my code to help avoid this, but the app server shouldn't completely lock up.
I've included a minimal test case to reproduce this by quickly creating a bunch of tempfiles to use up the file descriptors.
breakage.py:
import tempfile

files = []

def application(environ, start_response):
    files.append(tempfile.mkstemp())
    start_response('200 OK', [('Content-type', 'text/plain'), ('Content-length', '2')])
    return ['ok']
Benchmark command:
$ ab -n 3000 -c 100 http://127.0.0.1:8000/
Log:
$ gunicorn -k egg:gunicorn#gevent breakage
2010-05-12 01:03:34 [32292] [INFO] Arbiter booted
2010-05-12 01:03:34 [32292] [INFO] Listening at: http://0.0.0.0:80
2010-05-12 01:03:34 [32295] [INFO] Worker spawned (pid: 32295)
Traceback (most recent call last):
File "/mnt/application/lib/python2.6/site-packages/gevent/greenlet.py", line 388, in run
File "/mnt/application/lib/python2.6/site-packages/gunicorn/workers/ggevent.py", line 55, in acceptor
File "/mnt/application/lib/python2.6/site-packages/gevent/socket.py", line 255, in accept
error: [Errno 24] Too many open files
<Greenlet at 0x9a286ec: <bound method GEventWorker.acceptor of <gunicorn.workers.ggevent.GEventWorker object at 0x99af26c>>(<Pool at 0x9a30c8c set([<Greenlet at 0x9a28a6c: <b)> failed with error
2010-05-12 01:03:53 [32295] [ERROR] Error processing request.
Traceback (most recent call last):
File "/mnt/application/lib/python2.6/site-packages/gunicorn/workers/async.py", line 29, in handle
File "/mnt/application/lib/python2.6/site-packages/gunicorn/workers/async.py", line 58, in handle_request
File "/mnt/application/lib/python2.6/site-packages/gunicorn/http/request.py", line 181, in read
File "/mnt/application/lib/python2.6/site-packages/gunicorn/http/request.py", line 68, in read
File "/mnt/application/lib/python2.6/site-packages/gevent/socket.py", line 333, in recv
File "/mnt/application/lib/python2.6/site-packages/gevent/socket.py", line 137, in wait_read
File "core.pyx", line 293, in gevent.core.read_event.__init__ (gevent/core.c:3255)
File "core.pyx", line 243, in gevent.core.event.add (gevent/core.c:2414)
IOError: [Errno 2] No such file or directory
2010-05-12 01:03:53 [32295] [ERROR] Error processing request.
Traceback (most recent call last):
File "/mnt/application/lib/python2.6/site-packages/gunicorn/workers/async.py", line 29, in handle
File "/mnt/application/lib/python2.6/site-packages/gunicorn/workers/async.py", line 61, in handle_request
File "/mnt/application/breakage.py", line 5, in application
File "/usr/local/encap/python-2.6.5/lib/python2.6/tempfile.py", line 293, in mkstemp
File "/usr/local/encap/python-2.6.5/lib/python2.6/tempfile.py", line 228, in _mkstemp_inner
OSError: [Errno 24] Too many open files: '/tmp/tmpuJ_8V1'
Specifically, for the various frameworks like werkzeug, web.py, itty, and so on.
Latest HEAD (d2561ae) crashes if gunicorn is run without the "--pid" option and the pid file is None:
Traceback (most recent call last):
  File "/mnt/application/mopho-v5/bin/gunicorn", line 8, in
    load_entry_point('gunicorn==0.8.1', 'console_scripts', 'gunicorn')()
  File "/mnt/application/mopho-v5/lib/python2.6/site-packages/gunicorn-0.8.1-py2.6.egg/gunicorn/main.py", line 98, in run
    main("%prog [OPTIONS] APP_MODULE", get_app)
  File "/mnt/application/mopho-v5/lib/python2.6/site-packages/gunicorn-0.8.1-py2.6.egg/gunicorn/main.py", line 80, in main
    Arbiter(cfg, app).run()
  File "/mnt/application/mopho-v5/lib/python2.6/site-packages/gunicorn-0.8.1-py2.6.egg/gunicorn/arbiter.py", line 123, in run
    self.start()
  File "/mnt/application/mopho-v5/lib/python2.6/site-packages/gunicorn-0.8.1-py2.6.egg/gunicorn/arbiter.py", line 96, in start
    self.pidfile.create(self.pid)
  File "/mnt/application/mopho-v5/lib/python2.6/site-packages/gunicorn-0.8.1-py2.6.egg/gunicorn/pidfile.py", line 20, in create
    oldpid = self.validate()
  File "/mnt/application/mopho-v5/lib/python2.6/site-packages/gunicorn-0.8.1-py2.6.egg/gunicorn/pidfile.py", line 54, in validate
    with open(self.path, "r") as f:
TypeError: coercing to Unicode: need string or buffer, NoneType found
There are situations where you want to run stuff after forking. random.seed() is one good example, which would improve situations like this one: http://pb.lericson.se/p/awtguy/
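A sketch of what that could look like as a config hook (the hook name and signature are assumptions; they varied across early gunicorn versions):

```python
# gunicorn.conf.py sketch: reseed each worker's PRNG after fork so forked
# children don't share the master's random state
import random

def after_fork(server, worker):
    random.seed()
```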
Depending on the operating system, the owner of the pid file and unix socket file may be the group of the process (which can be changed via the -g option) or the parent directory's group.
This behavior can be chosen on Linux, but FreeBSD and OS X force the group to be that of the parent dir.
When I use the -g GROUP flag, I expect the socket file to be owned by GROUP and not by the parent directory group. Unfortunately, the behavior is OS dependent.
With an empty gunicorn.conf.py
$ gunicorn_django --config gunicorn.conf.py settings
Traceback (most recent call last):
File "/usr/bin/gunicorn_django", line 9, in
load_entry_point('gunicorn==0.8.0', 'console_scripts', 'gunicorn_django')()
File "/usr/lib/pymodules/python2.5/gunicorn/main.py", line 146, in run_django
main("%prog [OPTIONS] [SETTINGS_PATH]", get_app)
File "/usr/lib/pymodules/python2.5/gunicorn/main.py", line 70, in main
cfg = Config(opts.dict, opts.config)
File "/usr/lib/pymodules/python2.5/gunicorn/config.py", line 56, in __init__
    self.cfg.pop("__builtins__")
Don't really know why my module doesn't have a __builtins__, but replacing that line with
self.cfg.pop("__builtins__", None)
... makes it all work.
File "/Users/gawel/eggs/gunicorn-0.6.4-py2.6.egg/gunicorn/worker.py", line 163, in handle
except UnexpectedShutdown:
NameError: global name 'UnexpectedShutdown' is not defined
Hi,
(Thanks so much for writing gnunicorn. I've only played with a little bit but so far it's awesome!)
I'm wondering if it's possible to define/pass stuff from the gunicorn command-line down to an actual worker application that's being spawned. The example I'm thinking of is this:
http://github.com/migurski/TileStache/blob/master/examples/tilestache_gunicorn.py
Where the only thing that really needs to change is the actual config defined outside of app(). Is there a way that I can define a path to a JSON file (or whatever) as a CLI argument and then read it inside def app?
For example:
$> gunicorn --foo bar.json test:app
And then:
def app(environ, start_response):
    data = json.loads( <-- where somehow I have access to --foo here --> )
Does that make sense?
Cheers,
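One workaround (a sketch; the environment-variable name is made up) is to pass the path through the environment instead of the CLI:

```python
import json
import os

def app(environ, start_response):
    # TILESTACHE_CONFIG is a hypothetical variable set before launching gunicorn,
    # e.g.  TILESTACHE_CONFIG=bar.json gunicorn test:app
    path = os.environ.get("TILESTACHE_CONFIG", "bar.json")
    body = "config path: %s" % path
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```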
cStringIO will be considerably faster than StringIO, when it's available.
Changing:
from StringIO import StringIO
to
try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO
here: http://github.com/benoitc/gunicorn/blob/master/gunicorn/http/parser.py#L1 should be all that's needed
Add a way to specify the SCRIPT_NAME env variable (or other environment variables), to be able to use a prefix in URLs.
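Until then, a WSGI middleware can fake the mount point; a sketch (not a gunicorn feature, names made up):

```python
def prefixed(app, prefix):
    """Mount a WSGI app under a URL prefix by rewriting SCRIPT_NAME/PATH_INFO."""
    def wrapped(environ, start_response):
        environ["SCRIPT_NAME"] = prefix
        path = environ.get("PATH_INFO", "")
        if path.startswith(prefix):
            # strip the prefix so the app sees paths relative to its mount
            environ["PATH_INFO"] = path[len(prefix):]
        return app(environ, start_response)
    return wrapped
```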
gunicorn should have some way to keep any memory leaks in the app under control. A max_requests option should be enough.
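A sketch of how a max_requests cap could work inside a worker (the names and the respawn mechanism are assumptions):

```python
class Worker:
    def __init__(self, max_requests):
        self.max_requests = max_requests
        self.handled = 0
        self.alive = True

    def after_request(self):
        # once the cap is hit, the worker stops accepting and the arbiter
        # would respawn a fresh one, releasing any leaked memory
        self.handled += 1
        if self.handled >= self.max_requests:
            self.alive = False
```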
To make the string substitution complete, row 67 should be changed from:
print "Development server is running at unix:/" % addr
to:
print "Development server is running at unix:/%s" % addr
Cheers!
Peppe
:)
Since last week it no longer seems possible to kill a daemonized application (spotted around Friday). It seems to get the QUIT event but doesn't quit. It works as expected with eventlet or the default worker.
Thanks for taking a look at this.
$ cat app.py
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return ["Hello, World!"]
$ gunicorn --log-level=DEBUG --log-file=app.log --pid=app.pid -D -k "egg:gunicorn#gevent" app:application
$ kill -QUIT `cat app.pid`
$ cat app.log
2010-05-09 20:22:47 [7061] [INFO] Arbiter booted
2010-05-09 20:22:47 [7061] [INFO] Listening at: http://127.0.0.1:8000
2010-05-09 20:22:47 [7063] [DEBUG] Booting worker: 7063 (age: 1)
2010-05-09 20:22:47 [7063] [INFO] Worker spawned (pid: 7063)
2010-05-09 20:23:02 [7061] [INFO] Handling signal: quit
$ ps -A | grep gunicorn
7061 ? 00:00:00 gunicorn
7063 ? 00:00:00 gunicorn
The following code reproduces the issue. No matter the arbiter, timeout, or keepalive values, gunicorn crashes. Even the try/except does not prevent the crash.
:::Python
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
#
# Example code from Eventlet sources
from wsgiref.validate import validator
import urllib2
import eventlet
#
#eventlet.monkey_patch(all=True)
#@validator
def app(environ, start_response):
    """Simplest possible application object"""
    data = ''
    try:
        f = urllib2.urlopen('http://google.fr/')
    except:
        data += 'Unable to open an url with urllib'
    data += 'Hello, WorldWideWeb!\n'
    status = '200 OK'
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data)))
    ]
    start_response(status, response_headers)
    return iter([data])
As per my understanding, when gunicorn workers listen on a unix socket, the available worker picks up the request and serves the response.
If we are running gunicorn on an ip:port, is the load balancing done by gunicorn? Isn't it nginx proxying requests to each upstream with simple round robin?
Let's say you do something like this in your app: headers.append(("P3P", 'CP="CAO PSA OUR"'))
... gunicorn incorrectly rewrites my header as:
% curl -i -s http://127.0.0.1:8000 | grep "P3"
P3p: CP="CAO PSA OUR"
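The mangled case is consistent with Python's str.capitalize(), which uppercases the first character and lowercases the rest (a guess at the cause, not confirmed from gunicorn's source):

```python
# capitalize() lowercases everything after the first character, so any
# header-name normalization built on it would mangle "P3P" this way
assert "P3P".capitalize() == "P3p"
assert "Content-type".capitalize() == "Content-type"
```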
Looking at top/htop I can see that every time I send the master process a HUP signal (kill -HUP pid), new master and worker processes are created. I expected it to kill workers only. I was hoping this would be useful for lightweight reloads of Python code. (I have scripted deployments that take advantage of the HUP reload feature, and my vserver easily ran out of memory after a couple of deployments because of rogue processes.)
I have updated to 0.8 and still notice the same behavior. I'm using the runit daemonize scheme.
I'd like to see some kind of "status" page which allows for interaction with gunicorn and its awesome signalling system. On top of this, I'd like to see some statistics collecting.
Since I'm still not sure about some of the stuff proposed here, feel free to add ideas to the stack.
I imagine a status page which allows GETs for viewing info and POSTs for sending a signal. The page has a simple default-deny rule based on remote_addr, configurable from the config file. It defaults to 127.0.0.1 but turns into 0.0.0.0 when gunicorn is run in debug mode. The status page is only accessible if turned on in the config, but is on by default in debug mode. A potential todo could be a "read only" user.
Doing a GET /_status (or whatever) returns a JSON dict with the following info:
This piece of software would also accept POSTs - for instance:
# restart master
$ curl -i -s -d "signal=HUP" http://127.0.0.1/_status | head -n1
HTTP/1.1 200 OK

# kill non-existing worker with pid 1432
$ curl -i -s -d "signal=HUP" http://127.0.0.1/_status
HTTP/1.0 500 INTERNAL SERVER ERROR
Server: gunicorn/0.6
Date: Tue, 23 Feb 2010 21:02:23 GMT
Status: 500 INTERNAL SERVER ERROR
Connection: close
Content-Type: text/plain
Content-Length: 31

{ "errmsg": "no such worker" }
This would allow you to handle workers (read: restarting those who have gone haywire) or spawn additional ones.
I should start off by noting that this happens in 0.6.2+ (i.e., not released). Currently at commit 05d4673.
After killing a couple of workers manually (kill -INT ), the master (and all the workers) bails with the following traceback:
2010-02-28 21:38:49 [3154] [INFO] Unhandled exception in main loop:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/site-packages/gunicorn-0.6.2-py2.6.egg/gunicorn/arbiter.py", line 179, in run
    self.murder_workers()
  File "/usr/lib64/python2.6/site-packages/gunicorn-0.6.2-py2.6.egg/gunicorn/arbiter.py", line 355, in murder_workers
    diff = time.time() - os.fstat(worker.tmp.fileno()).st_ctime
ValueError: I/O operation on closed file
Using latest tip :
Seems that there is a typo :
http://github.com/benoitc/gunicorn/blob/master/gunicorn/util.py#L116
http should be replaced with html
traceback:
http://friendpaste.com/1pBVq1SRpROjEO28O7scC6
Regards,
xav
Because the PyPI version is failing when loading a gunicorn.conf.py, I'm trying to use the git version, which fails with pip / virtualenv.
$ virtualenv --no-site-packages foo
$ pip -E foo install -e git+git://github.com/benoitc/gunicorn.git#egg=gunicorn
/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/runner.py run on Thu Apr 29 08:30:24 2010
Obtaining gunicorn from git+git://github.com/benoitc/gunicorn.git#egg=gunicorn
Cloning git://github.com/benoitc/gunicorn.git to ./foo/src/gunicorn
Found command git at /usr/bin/git
Running command /usr/bin/git clone -q git://github.com/benoitc/gunicorn.git /home/yoan/tmp/foo/src/gunicorn
Running command /usr/bin/git checkout -q origin/master
Running setup.py egg_info for package gunicorn
running egg_info
creating gunicorn.egg-info
writing gunicorn.egg-info/PKG-INFO
writing top-level names to gunicorn.egg-info/top_level.txt
writing dependency_links to gunicorn.egg-info/dependency_links.txt
writing entry points to gunicorn.egg-info/entry_points.txt
writing manifest file 'gunicorn.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
reading manifest file 'gunicorn.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'gunicorn.egg-info/SOURCES.txt'
Exception:
Traceback (most recent call last):
File "/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/basecommand.py", line 115, in main
self.run(options, args)
File "/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/commands/install.py", line 155, in run
requirement_set.install_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/req.py", line 868, in install_files
finder.add_dependency_links(req_to_install.dependency_links)
File "/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/req.py", line 295, in dependency_links
return self.egg_info_lines('dependency_links.txt')
File "/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/req.py", line 274, in egg_info_lines
data = self.egg_info_data(filename)
File "/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/req.py", line 236, in egg_info_data
filename = self.egg_info_path(filename)
File "/usr/lib64/python2.6/site-packages/pip-0.6.3-py2.6.egg/pip/req.py", line 269, in egg_info_path
assert len(filenames) == 1, "Unexpected files/directories in %s: %s" % (base, ' '.join(filenames))
AssertionError: Unexpected files/directories in /home/yoan/tmp/foo/src/gunicorn: /home/yoan/tmp/foo/src/gunicorn/gunicorn.egg-info /home/yoan/tmp/foo/src/gunicorn/examples/pylonstest/pylonstest.egg-info
Finishing the install manually works, though. The latest release works as well.
$ cd foo/src/gunicorn
$ python setup.py install
…
Installed /home/yoan/tmp/foo/lib/python2.6/site-packages/gunicorn-0.8.0-py2.6.egg
Processing dependencies for gunicorn==0.8.0
Finished processing dependencies for gunicorn==0.8.0
Gunicorn currently adds a "Status" header (see http://github.com/benoitc/gunicorn/blob/master/gunicorn/http/response.py#L27 ) to all responses.
This header is absent from both HTTP 1.0 and 1.1 and is, afaik, only used in CGI "speak".
Don't see this as a demand to remove it, just as an invitation to discuss its relevance, so that the developers can add a more useful description of why gunicorn actually needs it.
On well-trafficked sites, temporarily storing, for instance, file uploads could severely affect your disk performance, so you usually store them in tmpfs instead.
I would like to change the path where gunicorn stores these things, for instance through a config setting.
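Until such a setting exists, one stopgap (a sketch, assuming the app's uploads go through Python's tempfile module) is to repoint the module at a tmpfs mount:

```python
# e.g. at the top of a config or startup module; the mount point is an
# assumption, adjust per system
import tempfile

tempfile.tempdir = "/dev/shm"  # all subsequent NamedTemporaryFile/mkstemp
                               # calls land on tmpfs instead of disk
```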
I was looking around the web to see how people have tested Unicorn and found this: http://cmelbye.github.com/2009/10/04/thin-vs-unicorn.html
I decided to run this on my Macbook, just to pound on gunicorn a bit. I ran this to start gunicorn:
cd examples; gunicorn test:app
And this apache bench test:
ab -n 10000 -c 1000 http://127.0.0.1:8000/
Besides the too-many-sockets issue (argh, Apple), once the test fails the arbiter proc is locked at around 100% CPU usage and never recedes.
I ran the test on my FreeBSD 7 machine and got this error:
ERROR: Error processing request. [[Errno 32] Broken pipe]
Traceback (most recent call last):
File "build/bdist.freebsd-7.1-RELEASE-i386/egg/gunicorn/worker.py", line 112, in run
self.handle(conn, addr)
File "build/bdist.freebsd-7.1-RELEASE-i386/egg/gunicorn/worker.py", line 125, in handle
response.send()
File "build/bdist.freebsd-7.1-RELEASE-i386/egg/gunicorn/http/response.py", line 54, in send
self.write(chunk)
File "build/bdist.freebsd-7.1-RELEASE-i386/egg/gunicorn/http/response.py", line 38, in write
self.io.send(data)
File "build/bdist.freebsd-7.1-RELEASE-i386/egg/gunicorn/http/iostream.py", line 68, in send
return self.sock.send(data)
error: [Errno 32] Broken pipe
Which leaves the 'arbiter' process pegged at 100% CPU usage for quite a long time. It does appear to drop eventually on FreeBSD, though. I did a ktrace of the process and this was returned:
22968 python RET recvfrom 0
22968 python CALL recvfrom(0x7,0x2857f014,0x1000,0,0,0)
22968 python GIO fd 7 read 0 bytes
""
I've yet to figure out a simple way to get dtrace output on my Mac.
I'll take a look tomorrow morning after work to see if I can figure out where the issue is, but wanted to make note incase anyone else ran into something similar. I'll also test on a fresh Debian VM tomorrow to make sure this isn't a BSD socket issue.
There needs to be a sane method of logging for new code. I'd like to work on this, adding quite a bit of debug logging available via a CLI switch. I spoke to davisp in IRC and figured I'd post an issue to point out what I was working on. Ideas and comments welcome.
gunicorn configures the so-called root logger, and then creates its subloggers as descendants of that one root logger rather than of its own single sub-root logger. In other words, it currently looks like this:
root  <- configured
/ | \
A B C
The problem with this is that the root logger gets configured with handlers and whatnot, which makes logging.basicConfig (a common way for many applications to quickly configure the logging package) stop working -- it just does nothing.
The suggestion is to make it look like this:
root
  |
gunicorn  <- configured
 / | \
 A B C
This would then allow gunicorn to configure the gunicorn sublogger, making it not interfere with other stuff. One could also potentially stop logs from going to the parent application's logs.
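A sketch of the proposed layout in plain logging calls (the handler choice is illustrative):

```python
import logging

# configure one "gunicorn" sub-root logger and leave the root logger alone
log = logging.getLogger("gunicorn")
log.addHandler(logging.StreamHandler())
log.propagate = False  # gunicorn's records stay out of the root logger

# applications can still use basicConfig on the untouched root logger
logging.basicConfig(level=logging.INFO)
```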
In main.py, lines ~70-71, gunicorn loads the app before loading any specified configuration file.
This ordering doesn't allow mangling the Python path before (e.g.) Django is imported, requiring virtualenv nonsense and/or a separate file to do that, or exporting PYTHONPATH from the parent process. These aren't so pretty, especially when simply swapping the "get_app(...)" and "Config(...)" lines lets me put all of that in my gunicorn.conf.py.
Just like unicorn, gunicorn should be able to listen on a unix:// socket.
Hello,
Patch is downloadable here : http://1cafe.fr/gunicorn.diff.txt
Patch applied preview is available here : http://1cafe.fr/gunicorn/
SVG logo is downloadable here : http://1cafe.fr/gunicorn.svg
Regards,
xav
I'm trying to serve Trac with gunicorn. Unfortunately, this does not work out of the box, so I'm looking for a bit of help here.
I get this traceback while using this WSGI file.
BTW, Trac runs fine with tracd. Could somebody help me out on this one?
Regards, Sander
start_response() in http/request:Request refers to a non-existent attribute on self.response at line 52:
def start_response(self, status, headers, exc_info=None):
    if exc_info:
        try:
            if self.response and self.response.sent_headers:
                raise exc_info[0], exc_info[1], exc_info[2]
        finally:
            exc_info = None
I assume what's wanted here is headers_sent.
I plan a transition of several django-based sites from using fcgi to gunicorn. I encountered several issues. The setup of the web/application server:
FreeBSD 8.0-RELEASE-p2 (16cores); Python 2.6.2; nginx/0.7.65; gunicorn 0.7.2 (16 workers); django 1.1
When not running gunicorn in daemon mode, there are no fatal problems at all. When detached using --daemon, some requests (most probably POSTs) cause "[Errno 38] Socket operation on non-socket" on a worker, which results in a broken pipe -> unhandled exception in the master. Gunicorn goes down completely.
Logs: https://gist.github.com/8a0c82fd79ed4f86107c
When I put the server under real traffic (avg 40 rq/s), I occasionally see the following issues (which nonetheless add up to a lot of noise daily):
Traceback (most recent call last):
  File "/data/sw/python/current/lib/python2.6/site-packages/gunicorn-0.7.2-py2.6.egg/gunicorn/worker.py", line 168, in handle
    req.response.write(item)
  File "/data/sw/python/current/lib/python2.6/site-packages/gunicorn-0.7.2-py2.6.egg/gunicorn/http/response.py", line 53, in write
    write(self.req.socket, arg, self.chunked)
  File "/data/sw/python/current/lib/python2.6/site-packages/gunicorn-0.7.2-py2.6.egg/gunicorn/util.py", line 129, in write
    sock.sendall(data)
  File "<string>", line 1, in sendall
error: [Errno 57] Socket is not connected
or another one (not so common) ...
File "/data/sw/python/current/lib/python2.6/site-packages/Django-1.1-py2.6.egg/django/http/multipartparser.py", line 375, in next
data = self.flo.read(self.chunk_size)
File "/data/sw/python/current/lib/python2.6/site-packages/Django-1.1-py2.6.egg/django/http/multipartparser.py", line 405, in read
return self._file.read(num_bytes)
File "/data/sw/python/current/lib/python2.6/site-packages/gunicorn-0.7.2-py2.6.egg/gunicorn/http/tee.py", line 103, in read
return self._ensure_length(dest, length)
File "/data/sw/python/current/lib/python2.6/site-packages/gunicorn-0.7.2-py2.6.egg/gunicorn/http/tee.py", line 210, in _ensure_length
data = self._tee(length - len(dest.getvalue()))
File "/data/sw/python/current/lib/python2.6/site-packages/gunicorn-0.7.2-py2.6.egg/gunicorn/http/tee.py", line 180, in _tee
raise UnexpectedEOF("remote closed the connection")
TypeError: object.__new__() takes no parameters
Besides this, I quite often see "[WARNING] Ignoring EPIPE" or get "WORKER TIMEOUT".
The first problem mentioned is, imho, a bug. As for the others, I don't know. Still, most of the time I'm not getting that many errors, just a couple of "broken pipes" here and there, which is OK.
I get the following exception when I try to start tornado workers:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.6/bin/gunicorn", line 8, in
    load_entry_point('gunicorn==0.8.0', 'console_scripts', 'gunicorn')()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/gunicorn-0.8.0-py2.6.egg/gunicorn/main.py", line 98, in run
    main("%prog [OPTIONS] APP_MODULE", get_app)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/gunicorn-0.8.0-py2.6.egg/gunicorn/main.py", line 80, in main
    Arbiter(cfg, app).run()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/gunicorn-0.8.0-py2.6.egg/gunicorn/arbiter.py", line 62, in __init__
    self.worker_class = cfg.worker_class
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/gunicorn-0.8.0-py2.6.egg/gunicorn/config.py", line 90, in worker_class
    worker_class.setup()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/gunicorn-0.8.0-py2.6.egg/gunicorn/workers/gtornado.py", line 34, in setup
    patch_request_handler()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/gunicorn-0.8.0-py2.6.egg/gunicorn/workers/gtornado.py", line 18, in patch_request_handler
    web = sys.modules.pop("tornado.web")
KeyError: 'tornado.web'
my setup: