julia

A small yet high-performance HTTP server and reverse proxy. You may view it as a tiny nginx.

Environment

  • gcc 5.4.0
  • linux 4.4.0

Dependency

Dependencies are vendored as git submodules; they are fetched by the first install step below.

Install

  $ git submodule update --remote --recursive
  $ cd src && make all
  $ make install # root required

Run

$ julia # root required, listens on port 8000 by default

You may modify the config file to specify a different port if there is a conflict.
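For example, the port setting might look like this in config.json (the "port" key name is an assumption; only the "debug" key is documented in this README, so adjust to the actual schema):

```json
{
    "port": 8000,
    "debug": false
}
```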

Debug

Since julia runs as a daemon, it is not convenient to debug. Follow these steps to run it in debug mode:

  1. Change INSTALL_DIR in the Makefile to your local repo, e.g.:
INSTALL_DIR = /home/foo/julia/ # the trailing slash is required
  2. Turn on the debug option in config.json:
"debug": true,

Exciting

Making an HTTP server and hosting your site is not cool enough? Then what if your server is compiled by your own compiler?

Yes, I wrote a C11 compiler (let me call it wgtcc). It compiles julia, and the site runs well :) You can install wgtcc, then compile julia:

  $ make CC=wgtcc all

It surprised me that the server compiled by wgtcc runs extremely fast!

have fun :)

Todo

  1. fastcgi
  2. chunked transfer encoding
  3. benchmark

Reference

  1. nginx
  2. lighttpd
  3. juson

中文 (translated)

julia is a small but powerful high-concurrency HTTP server and reverse proxy. You can think of it as a tiny nginx. I use it to host my blog: static files are served by julia, and dynamic content is passed to a uwsgi backend (similar to fpm).

For installation and debugging, see above.

The coolest part is that this server, including the running blog, is compiled by my own C compiler. If you are also interested in compilers, take a look at wgtcc. After installing wgtcc per its README, you can compile julia like this:

  $ make CC=wgtcc all

have fun :)

Performance

Benchmark tool: ab

All runs use 8 workers and a 1 KiB static page.

  Concurrency   nginx   julia
  10            52K     54K
  100           56K     56K
  1000          46K     44K

julia's People

Contributors

wgtdkp


julia's Issues

Questions and possible bugs?

I came here from Zhihu; reading the code has been very instructive. A few questions:
1. The hashmap structure — this is the first time I have seen it written this way. Is it a known idiom?
2. Is it possible that one read fills the buffer with a complete packet plus part of the next one (is this what is called "sticky packets"?)? How should that be handled?
3. In server.c, why is get_pid() called at line 124? At line 132 there is daemon(1, 0), but I cannot find that function's implementation?
Below are a few things that look like bugs to me:
4. get_pid does not close the file it opens.
5. request_handle_uri uses r->port = atoi(uri->port.data), but uri->port.data is not a '\0'-terminated string; the same applies to int len = atoi(val->data) in header_handle_content_length.
6. In parse_uri, uri->port.data is never set, and uri->host.len is never set.

Thanks again for the code.
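Question 5 above (atoi on a non-NUL-terminated string view) can be avoided with a length-bounded conversion. A minimal sketch; the helper name str_n_atoi is hypothetical and not part of julia:

```c
#include <assert.h>
#include <stddef.h>

// Hypothetical helper: convert at most n leading decimal digits of a
// non-NUL-terminated string view (e.g. uri->port.data with its length),
// stopping at the first non-digit.
static int str_n_atoi(const char* s, size_t n) {
    int val = 0;
    for (size_t i = 0; i < n && s[i] >= '0' && s[i] <= '9'; ++i)
        val = val * 10 + (s[i] - '0');
    return val;
}
```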

Cannot close connection if buffer_recv fails

If the client closes the connection, buffer_recv returns an error, so we know the connection is closed. But at the same time there may be a pending EPOLLOUT event, and we will try to send through this connection, which yields "Bad file descriptor" because the connection structure has already been released. There should be another data structure to track cases like this.
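One common fix is a tombstone flag on the connection, so that late epoll events are ignored instead of touching a released descriptor. A minimal sketch with assumed type and field names (conn_t here, not julia's actual connection_t):

```c
#include <assert.h>
#include <stdbool.h>

// Hypothetical connection with a 'closed' tombstone flag.
typedef struct {
    int fd;
    bool closed;
} conn_t;

// Mark the connection closed instead of releasing it immediately;
// the actual release happens once no events can still reference it.
static void conn_close(conn_t* c) {
    c->closed = true;
}

// Event handlers check the flag before touching the descriptor.
static bool conn_writable(const conn_t* c) {
    return !c->closed;
}
```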

Error when the backend returns a small message

The function handle_upstream does not handle a small message correctly:

int handle_upstream(connection_t* uc) {
    request_t* r = uc->r;
    buffer_t* b = &r->sb;
    int err = buffer_recv(b, uc->fd);
    if (err == OK) {
        // The connection has been closed by peer
        return ERROR; // didn't connection_enable_out(r->c) when first called the err is ok !!!
    } else if (err == ERROR) {
        // Error in backend
        response_build_err(r, 503);
        return ERROR;
    } else if (buffer_full(b)) {
        //connection_disable_in(uc);
    }
    connection_enable_out(r->c);
    return err;
}

test code:

# server side
from flask import Flask
app = Flask(__name__)

@app.route('/small')
def test_small():
    return "small message"

Supporting pipelining

The server does not support pipelining. A new request may arrive before the response to the previous one has been sent, so two separate requests can end up in a single recv buffer.
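A pipelining-aware parser has to keep consuming the buffer past the first request terminator instead of assuming one request per recv. A sketch that just counts complete header blocks by scanning for "\r\n\r\n" (julia's real parser is stateful; this only illustrates the framing):

```c
#include <assert.h>
#include <string.h>

// Count complete (header-only) HTTP requests in one recv buffer by
// scanning for the "\r\n\r\n" header terminator.
static int count_requests(const char* buf) {
    int n = 0;
    const char* p = buf;
    while ((p = strstr(p, "\r\n\r\n")) != NULL) {
        ++n;
        p += 4; // skip past the terminator and keep scanning
    }
    return n;
}
```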

Infinite loop

test code:

# server side
from flask import Flask
app = Flask(__name__)
@app.route('/inf')
def test_inf():
    return "a" * 2048 + "bcd"

the log msg is:

root@VM-148-6-ubuntu:/home/ubuntu/code/python/web_test# tail worker.log 
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:35:06] [pid: 24348]handle response
root@VM-148-6-ubuntu:/home/ubuntu/code/python/web_test# tail worker.log 
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle response
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle upstream
[DEBUG][2017: 04: 02: 16:37:37] [pid: 24348]handle response

It seems to have fallen into an infinite loop.

epoll, but still blocking I/O

epoll is now in place, but the original multi-threaded blocking I/O code has not been converted yet. The change is fairly large and is still in progress.

ab test keep-alive failed

It seems that "ab -n 10000 -k -c 1000" results in no keep-alive requests, which means julia does not really handle keep-alive well.

Dead loop if r->uc expires and r->c is activated later

The following situation brings the server into a dead loop:
handle_upstream -> OK -> expire(uc) -> handle_response -> send_response_buffer -> r->uc != NULL -> AGAIN -> connection_activate(c), which also calls connection_activate(uc).

Add timer to close long-lived connections

If the client does not close the connection and no error occurs, the connection is held forever.
The maximum time a connection can live should be configurable in the config file.
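Such a limit would presumably live in config.json next to the existing "debug" flag; a sketch (the "timeout" key and its unit, seconds, are assumptions — this option does not exist yet):

```json
{
    "debug": false,
    "timeout": 30
}
```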
