
bud's Issues

Connection reset by peer

Hi,
we have had bud in production for quite some time now and we're pretty happy with it (still waiting for a core dump on #74).

There is one customer who complains about regular "Connection reset by peer" errors. We were unable to reproduce this with our monitoring, but our monitoring is not as demanding as a browser.

I'm pretty clueless about how best to debug this. There are quite a few -104 errors in the logs:

Sep 22 16:55:19 ssl-fe01 bud[1981]: client 0x1135e260 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:55:40 ssl-fe01 bud[1981]: client 0x1135e260 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:55:40 ssl-fe01 bud[9625]: client 0xdbfb300 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:55:40 ssl-fe01 bud[19663]: client 0xe55b760 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:55:40 ssl-fe01 bud[1997]: client 0xe064170 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:55:40 ssl-fe01 bud[1981]: client 0x1142c490 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:55:40 ssl-fe01 bud[19663]: client 0xe6d4a90 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:55:40 ssl-fe01 bud[1997]: client 0xe104e10 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 16:56:41 ssl-fe01 bud[1981]: client 0x113ac260 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:01:39 ssl-fe01 bud[1981]: client 0x11468170 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:01:47 ssl-fe01 bud[19663]: client 0xe871e60 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:01:49 ssl-fe01 bud[1981]: client 0x114a8ca0 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:01:49 ssl-fe01 bud[1997]: client 0xe22a8a0 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:03:29 ssl-fe01 bud[19663]: client 0xe8f8c00 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:07:45 ssl-fe01 bud[1981]: client 0x1142c490 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:07:47 ssl-fe01 bud[9625]: client 0xdb3bf30 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:07:56 ssl-fe01 bud[19663]: client 0xe6d4a90 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:08:10 ssl-fe01 bud[19663]: client 0xe8adff0 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:10:12 ssl-fe01 bud[9625]: client 0xd9807b0 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:11:26 ssl-fe01 bud[19663]: client 0xe6d4a90 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:19:15 ssl-fe01 bud[16419]: client 0x14b2400 on frontend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:19:25 ssl-fe01 bud[16419]: client 0x14b2400 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:21:19 ssl-fe01 bud[16418]: client 0x1e4a1b0 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)

If I read that correctly, the client and/or bud closes or drops the connection, then bud tries to communicate on that connection, gets the -104 error, and forcefully closes both the frontend and backend connections.

I checked with nginx (backend):

  • There is no "Bad request" (not fully received/processed HTTP request) in the access logs
  • There is not a single error recorded in the nginx error.log for any connection received from bud

I'm pretty baffled, because over plain HTTP we don't see any connection resets by nginx, and I can't find any clues in the nginx logs. Dropped connections by iptables or conntrack etc. are unlikely because we monitor the relevant stats. The timing in the logs (everything from backend connect and frontend new to the force close happens within the same second) suggests that no timeouts are involved.

Sample log:

Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend connecting to 127.0.0.1:10010
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend new
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend after read_cb() => 517
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend SSL_read() => -1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend uv_write(137) iovcnt: 1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend immediate write
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend write_cb => 137
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend recycle
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend SSL_read() => -1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend connect 0
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend SSL_read() => -1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend after read_cb() => 1158
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend SSL_read() => 1078
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend uv_write(1129) iovcnt: 1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend immediate write
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend write_cb => 1129
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend SSL_read() => -1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend recycle
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend SSL_read() => -1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend after read_cb() => -104
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend SSL_read() => -1
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend closed because: uv_read_start(client) cb returned -104 (connection reset by peer)
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend force closing (and waiting for other)
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on backend force closing (and waiting for other)
Sep 22 17:21:19 ssl-fe01 bud[16419]: client 0x15016b0 on frontend close_cb

Any ideas how to debug this further?
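For anyone reading along: a minimal, bud-independent sketch of what -104 means at the socket level. Closing a socket with SO_LINGER enabled and a zero timeout makes the kernel abort the connection with an RST instead of a normal FIN, and the peer's next read then fails with ECONNRESET (errno 104) — the same condition libuv surfaces as -104.

```python
import errno
import socket
import struct

# Set up a loopback connection pair.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# SO_LINGER with linger-on and timeout 0: close() sends RST, not FIN.
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
conn.close()

try:
    cli.recv(1)  # the RST surfaces here
    reset_errno = None
except ConnectionResetError as e:
    reset_errno = e.errno  # ECONNRESET, i.e. 104 on Linux
```

Capturing the traffic with tcpdump on the frontend and backend sides and looking for who emits the RST packets would narrow down whether the reset originates from the browser, a middlebox, or bud itself.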

handshake failures produce strange log output

I've got Bud running in front of a CouchDb instance. Curling it like so:

curl -X HEAD https://<couchurl>

results in the following bud log…

(ntc) [1] client 0x1b9bc70 on frontend closed because: uv_shutdown(client) cb returned -107 (socket is not connected)

…and eventually, the following curl response:

curl: (18) transfer closed with 61 bytes remaining to read

Yet,

curl https://<couchurl>

…works just fine.

I'm not 100% sure this is a bud error, so maybe this is a support request to explain the error message? :)
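One plausible explanation (assuming the backend answers HEAD correctly): a HEAD response legitimately carries the Content-Length the GET body *would* have, but no body follows. `curl -X HEAD` only swaps the method string, so curl still waits for Content-Length bytes and reports "transfer closed with N bytes remaining to read" when the connection is shut down; `curl -I`/`--head` tells curl not to expect a body. A small sketch with a stand-in body (the 21-byte JSON here is hypothetical, not CouchDB's actual greeting):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b'{"couchdb":"Welcome"}'  # hypothetical stand-in, 21 bytes

class Handler(BaseHTTPRequestHandler):
    def _send_head(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()

    def do_GET(self):
        self._send_head()
        self.wfile.write(BODY)

    def do_HEAD(self):
        # Same headers as GET, including Content-Length, but no body.
        self._send_head()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("HEAD", "/")
resp = conn.getresponse()
content_length = resp.getheader("Content-Length")  # advertised body size
body = resp.read()  # http.client knows HEAD has no body, so this is empty
```

A client that doesn't special-case HEAD (like curl with a forced method) would instead block on those advertised-but-absent bytes.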

x-forward option not working

It looks like my backend is never receiving the x-forwarded-for header.

Headers it does get:

{ host: 'localhost:1443',
  connection: 'keep-alive',
  'cache-control': 'max-age=0',
  accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
  'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2046.3 Safari/537.36',
  'accept-encoding': 'gzip,deflate,sdch',
  'accept-language': 'en-US,en;q=0.8',
  dnt: '1' }

My config: https://gist.github.com/joeybaker/c7746eab0cce51dee3e2

OpenSSL Engine Support

It would be awesome if bud supported changing the OpenSSL engine at runtime. For example, we currently utilize a hardware security module which implements an SSL engine so that keys are stored in hardware, but can be utilized by any program which uses OpenSSL (and allows for the engine to be set). Would it be possible to add another configuration item for bud?

ticket key rotation

Bud should be capable of synchronized TLS ticket key rotation across instances through an external service, perhaps by polling that service.

x-forwarded-for

I'd love to see the ability for x-forwarded-for, etc. headers to be injected into the backend request.

Admittedly, I'm unfamiliar with the code base, so I don't know if the request is parsed as HTTP (I see http_parser.c as a dep), but if it is, I'd love to see the ability to inject these (or other) headers into the backend request.

If the request is not HTTP-parsed, then the overhead of doing so may be too substantial to be worth it.
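As a sketch of what the injection itself would involve (this is not bud's actual code): once the request head is parsed, an X-Forwarded-For line can be spliced in right after the request line before the bytes are forwarded to the backend.

```python
def inject_xff(raw, client_ip):
    """Splice an X-Forwarded-For header after the HTTP request line."""
    request_line, sep, rest = raw.partition(b"\r\n")
    return request_line + sep + b"X-Forwarded-For: " + client_ip.encode() + b"\r\n" + rest

req = b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n"
out = inject_xff(req, "203.0.113.7")
```

The real cost is not the splice but the HTTP parsing needed to find the boundary on every request, which is presumably the overhead concern above.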

benchmarks

It'd be nice to see some kind of numbers with regards to how bud compares performance-wise to some popular TLS/SSL terminators like pound, stud, nginx, haproxy, etc.

undefined reference to bud_trace_*

I'm trying to build the latest version (0.8.1) and am getting some errors.

uname -a
Linux ip-10-0-0-152 3.2.0-58-virtual #88-Ubuntu SMP Tue Dec 3 17:58:13 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

  LINK(target) /home/ubuntu/sources/bud/out/Release/bud
/home/ubuntu/sources/bud/out/Release/obj.target/bud/src/avail.o: In function `bud_client_connect_cb':
avail.c:(.text+0x763): undefined reference to `bud_trace_backend_connect'
/home/ubuntu/sources/bud/out/Release/obj.target/bud/src/client.o: In function `bud_client_create':
client.c:(.text+0x5d0): undefined reference to `bud_trace_frontend_accept'
/home/ubuntu/sources/bud/out/Release/obj.target/bud/src/client.o: In function `bud_client_close_cb':
client.c:(.text+0xbad): undefined reference to `bud_trace_end'
collect2: ld returned 1 exit status
make: *** [/home/ubuntu/sources/bud/out/Release/bud] Error 1
make: Leaving directory `/home/ubuntu/sources/bud/out'

npm install on linux (ubuntu) hangs

The npm based install hangs forever after printing out:

> [email protected] preinstall /usr/local/lib/node_modules/bud-tls
> node-gyp configure && node-gyp rebuild && node npm/locate.js

gyp WARN EACCES user "undefined" does not have permission to access the dev dir "/root/.node-gyp/0.10.33"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/usr/local/lib/node_modules/bud-tls/.node-gyp"

Compiling from source works fine.

JSON parse fails with leading newline

If the json config specified by --conf has a leading newline it fails to parse.

Not a huge issue, just an interesting side effect of some JSON stringification/template logic I'm using.
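For what it's worth, the JSON grammar itself permits leading whitespace, as standard parsers demonstrate; if bud's embedded parser is stricter, stripping the text before parsing is a cheap workaround on either side. (The config keys below are just an illustrative fragment.)

```python
import json

# A config string with the offending leading newline.
raw = '\n{"frontend": {"port": 1443}}'

# Spec-compliant parsers accept leading whitespace as-is...
cfg = json.loads(raw)
port = cfg["frontend"]["port"]

# ...and stripping first sidesteps a parser that does not.
stripped_port = json.loads(raw.strip())["frontend"]["port"]
```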

Proxyline is broken when using workers

proxyline_fmt is an empty string when bud_client_prepend_proxyline uses it. It looks like bud_server_format_proxyline is either not called in the workers or called after the configuration is passed to them.

Proxyline works if I set workers to zero in the configuration.
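For reference, the line that the proxyline feature prepends to the backend connection follows the haproxy PROXY protocol v1 format; a sketch of the expected formatting (not bud's actual code) makes it easy to see what an empty proxyline_fmt would break:

```python
def format_proxyline(family, src, dst, sport, dport):
    """Build a PROXY protocol v1 header line (family is TCP4 or TCP6)."""
    return f"PROXY {family} {src} {dst} {sport} {dport}\r\n"

line = format_proxyline("TCP4", "203.0.113.7", "10.0.0.1", 51234, 443)
```

If the format string is empty in the workers, the backend would receive the raw request with no such line, which matches the symptom of the bug.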

glibc: corrupted double-linked list

In one of our deployments we saw that all SSL websites stopped working. Attempts to connect failed:

curl -v https://www.[foobar].net/
* About to connect() to www.[foobar] port 443 (#0)
*   Trying 91.216.248.**... connected
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to www.[foobar].net:443 
* Closing connection #0
curl: (35) Unknown SSL protocol error in connection to www.[foobar].net:443 

The bud log showed the following output:

(wrn) [9845] client 0x4c9acd0 on frontend SNI from json failed: "Failed to load or parse JSON: <SNI Response>"
*** glibc detected *** bud: corrupted double-linked list: 0x00000000116daa00 ***

Kernel ring buffer/dmesg:

[345812.084141] traps: bud[3256] general protection ip:5cad48 sp:7fffdcf1ca30 error:0 in bud[400000+2ae000]

A restart of bud solved the problem. What do you need from us to dig further into the problem?

support passphrases

Please add support for SSL certs that have passphrases. Ideally it would have a mechanism like Apache's, where it can execute a file or script that echoes the passphrase. Although I'm sure some folks would also like it just in plaintext in the config, depending on their security model.
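At the OpenSSL level this usually takes the shape of a password callback handed to the key loader; here is a sketch using Python's `ssl` module as a stand-in (the script path in the comment is hypothetical — the point is that the callback could exec a program, much like Apache's `SSLPassPhraseDialog exec:`, instead of keeping the phrase in the config file):

```python
import ssl
import subprocess  # only needed if the callback shells out

def passphrase():
    """Password callback: return the key's passphrase as bytes."""
    # In a real deployment this might shell out, e.g.:
    #   return subprocess.check_output(["/etc/bud/passphrase.sh"]).strip()
    return b"hunter2"  # placeholder for the demo

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# With real files on disk, the callback decrypts the key at load time:
#   ctx.load_cert_chain("cert.pem", "key.pem", password=passphrase)
```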

bud on multiple boxes

We're going to deploy bud on multiple boxes in different DCs for DNS round-robin failover.

The docs hint that ticket rotation will be a problem in this case (if I understand correctly). What is the setup for synchronized ticket rotation?

And: will there be any other problems?
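For context on why synchronization matters: OpenSSL session ticket keys are 48 bytes (a 16-byte key name, a 16-byte AES key, and a 16-byte HMAC key), and with DNS round-robin a ticket issued by one box may be presented to another, so every terminator must hold the same current key. A rotation service typically generates one key and serves it to all instances, e.g. base64-encoded; a sketch of that wire round-trip:

```python
import base64
import os

# One shared ticket key for the whole fleet: 16B name + 16B AES + 16B HMAC.
key = os.urandom(48)

# What a central rotation service might serve to each terminator.
wire = base64.b64encode(key).decode("ascii")

# Each instance decodes the same 48 bytes and installs them.
restored = base64.b64decode(wire)
```

If the keys diverge across boxes, resumption silently fails and clients fall back to full handshakes, so it degrades performance rather than breaking connections outright.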

Do we have proxy chain support?

Let's say we are already behind a proxy (in my case Cloudflare) and it has added x-forwarded headers. In that case, does bud build a proxy chain for x-forwarded-for like this?

headers['x-forwarded-for'] = 'client-ip, cloudflare-ip';
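What chaining means in practice (a sketch of the convention, not a statement about bud's behavior): each hop appends the peer address it saw to whatever X-Forwarded-For value it received, or starts the list if none was present.

```python
def chain_xff(existing, peer_ip):
    """Append peer_ip to an inherited X-Forwarded-For value, if any."""
    return f"{existing}, {peer_ip}" if existing else peer_ip

chained = chain_xff("198.51.100.2", "203.0.113.7")  # behind another proxy
first = chain_xff(None, "203.0.113.7")              # direct client
```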

Documentation for testing

Mostly just an FYI until I can make a PR.

  • Raise ulimit
  • ab on OSX is borked
  • use ab -f tls1 or ab -f ssl3

TLS false start & SSL_MODE_HANDSHAKE_CUTTHROUGH

Perhaps more of a question / clarification... My understanding is that there is nothing to enable on the server to support false-start -- this is a client-side decision? E.g. Chrome requires forward secrecy + NPN.

/* Enable TLS False Start */
- perhaps I'm wrong, but this seems unnecessary. The config flag in bud is what threw me off, as I didn't expect to see that there... Am I missing something?

http://boinc.berkeley.edu/android-boinc/libssl/patches/handshake_cutthrough.patch

bud leaks file descriptors (heavily)

So a few hours ago we saw the first symptoms of a new problem where clients can't connect to bud anymore. The debug output from curl looks like this:

* About to connect() to system-zeus.lima-city.de port 443 (#0)
*   Trying 212.83.45.137... connected
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to system-zeus.lima-city.de:443 
* Closing connection #0

Bud debug output says:

(dbg) [3436] client 0x61b9640 on backend connecting to 127.0.0.1:10010
(dbg) [3436] client 0x61b9640 on frontend close_cb
(dbg) [3436] received handle on ipc
(dbg) [3436] client 0x61b9640 on backend connecting to 127.0.0.1:10010
(dbg) [3436] client 0x61b9640 on frontend close_cb
(dbg) [3436] received handle on ipc
(dbg) [3436] client 0x61b9640 on backend connecting to 127.0.0.1:10010
(dbg) [3436] client 0x61b9640 on frontend close_cb
(dbg) [3436] received handle on ipc
(dbg) [3436] client 0x61b9640 on backend connecting to 127.0.0.1:10010
(dbg) [3436] client 0x61b9640 on frontend close_cb
(dbg) [3436] received handle on ipc

Other workers seem to work normally. Over time more and more of the workers start to do the same.

So I was thinking that some problem in an earlier request left the worker in a bad state, and I found that bud opens too many files (output from the same worker before):

(dbg) [3436] received handle on ipc
(dbg) [3436] client 0x5fc5390 on backend connecting to 127.0.0.1:10010
(dbg) [3436] client 0x5fc5390 on frontend new
(dbg) [3436] client 0x5fc5390 on backend connect 0
(dbg) [3436] client 0x5fc5390 on frontend SSL_read() => -1
(dbg) [3436] client 0x5fc5390 on frontend after read_cb() => 222
(dbg) [3436] client 0x5fc5390 on backend ssl_cert_cb {0}
(ntc) [3436] client 0x5fc5390 on frontend failed to request SNI: "uv_tcp_connect(http_req) returned -24 (too many open files)"
(dbg) [3436] client 0x5fc5390 on frontend SSL_read() => -1
(ntc) [3436] client 0x5fc5390 on frontend closed because: SSL_read(client) - 1 (cert cb error)
(dbg) [3436] client 0x5fc5390 on frontend force closing (and waiting for other)
(dbg) [3436] client 0x5fc5390 on backend force closing (and waiting for other)
(ntc) [3436] client 0x5fc5390 on frontend closed because: SSL_read(client) - 1 ((null))
(dbg) [3436] client 0x5fc5390 on frontend close_cb

Since this problem started only after #76 was fixed, maybe the fix introduced a new bug where HTTP requests are no longer closed, thus leaking file descriptors. Could it be that the fix introduced this problem? Right now the problem seems to emerge very fast, as we see a worker hit the ulimit (1024) in only a couple of minutes.

It is possible, though, that there is another problem, as I see strange debug output from the workers:

(dbg) [20210] received handle on ipc
(dbg) [20210] client 0x4182a80 on backend connecting to 127.0.0.1:10010
(dbg) [20210] client 0x4182a80 on frontend new
(dbg) [20210] client 0x4182a80 on backend connect 0
(dbg) [20210] client 0x4182a80 on frontend SSL_read() => -1
(dbg) [20210] received handle on ipc
(dbg) [20210] client 0x41a1720 on backend connecting to 127.0.0.1:10010
(dbg) [20210] client 0x41a1720 on frontend new
(dbg) [20210] received handle on ipc
(dbg) [20210] client 0x41b2550 on backend connecting to 127.0.0.1:10010
(dbg) [20210] client 0x41b2550 on frontend new
(dbg) [20210] client 0x41a1720 on backend connect 0
(dbg) [20210] client 0x41a1720 on frontend SSL_read() => -1
(dbg) [20210] client 0x41b2550 on backend connect 0
(dbg) [20210] client 0x41b2550 on frontend SSL_read() => -1
(dbg) [20210] received handle on ipc
(dbg) [20210] client 0x41dfca0 on backend connecting to 127.0.0.1:10010
(dbg) [20210] client 0x41dfca0 on frontend new
(dbg) [20210] client 0x41dfca0 on backend connect 0
(dbg) [20210] client 0x41dfca0 on frontend SSL_read() => -1
(dbg) [20210] received handle on ipc
(dbg) [20210] client 0x41fef60 on backend connecting to 127.0.0.1:10010
(dbg) [20210] client 0x41fef60 on frontend new
(dbg) [20210] client 0x41fef60 on backend connect 0
(dbg) [20210] client 0x41fef60 on frontend SSL_read() => -1

So something is going on there that may be the cause of the leak. Any ideas?
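One cheap way to confirm and track the leak while it develops (assuming a Linux /proc filesystem) is to watch the worker's descriptor count climb toward the ulimit that produces -24 (EMFILE):

```python
import os

def fd_count(pid):
    """Number of file descriptors currently open in the given process."""
    return len(os.listdir(f"/proc/{pid}/fd"))

n = fd_count(os.getpid())  # sample it on ourselves for the demo
```

Sampling this per worker PID every few seconds, and diffing the `/proc/<pid>/fd` symlink targets between samples, would show which kind of descriptor (sockets vs. pipes vs. the SNI HTTP requests) is accumulating.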

Bud binds after dropping privileges

Hi,
thanks for your great work!

I have noticed that if I run bud with privilege dropping enabled, it appears to bind after dropping privileges. So it is not possible to use bud on port 443 without running as root, which is IMHO not acceptable for a public-facing server.

I thought that it would be possible to bind to 1443 and forward the packets with iptables, but that sounds like a crazy hack to me. I'd rather see that bud can listen on 443 and drop privileges. Is that possible or is there a special problem that prevents bud from doing so?

Btw the output with the user config option set looks like this:

(dbg) [15057] master starting
uv_tcp_bind(server) returned -13
permission denied
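The conventional ordering (sketched below, not bud's code) is the other way around: bind the privileged port while still root, then drop privileges; binding after setuid() is exactly what yields `uv_tcp_bind(server) returned -13` (EACCES).

```python
import os
import socket

def listen_then_drop(port, uid, gid):
    """Bind first (needs root for port < 1024), then shed privileges."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", port))  # do this while still privileged
    s.listen(128)
    os.setgid(gid)  # drop the group first; after setuid() this would fail
    os.setuid(uid)
    return s
```

An alternative on Linux is granting the binary CAP_NET_BIND_SERVICE so it never needs root at all, but bind-then-drop is the portable pattern.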

allow cli to accept a piped config file

It would be great if we could pipe the contents of the config json to the cli instead of passing a file path. This would allow for dynamic configuration.

e.g. instead of

bud -c bud.json

this would be great

# or similar…
cat bud.json | bud -c 
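The common CLI convention for this is to treat `-` as "read from stdin"; a sketch of what the config loader would do (the function name and the `workers` key are illustrative, not bud's API):

```python
import json
import sys

def load_config(path):
    """Load a JSON config from a file path, or from stdin when path is '-'."""
    if path == "-":
        return json.load(sys.stdin)
    with open(path) as f:
        return json.load(f)
```

With that in place, both `bud -c bud.json` and `cat bud.json | bud -c -` would work, enabling dynamic configuration from a generating process.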

Bud sends no X-Forwarded-For for a portion of the Chrome requests

We have a working bud setup now but we are seeing some connections where the first requests don't have an X-Forwarded-For header.

This is very rare but breaks quite a lot of stuff when it happens.

Setup is a "normal" bud setup on 1.2.4 and an nginx-lua backend that reads the X-Forwarded-For header and stores it in a SHM dictionary on the first request. To make sure we don't miss the first request, we added a "seen" key that is set on the first request of the connection. We additionally dump all headers.

After a while I noticed that I can't reproduce it with curl or Firefox, only in Chrome. The headers confirmed that only Chrome (like Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36) is affected.
Accept-Encoding is always "gzip, deflate, sdch" (maybe helpful?), and we strip the Accept-Encoding header at the nginx layer because of legacy system behaviour.

Reproduction is simple: just hit F5 a few times in Chrome, and on a few requests the IP is not set.

Any ideas?

Segmentation fault (core dumped)

$ gdb --args ./out/Release/bud --config default-config.json
GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://bugs.launchpad.net/gdb-linaro/>...
Reading symbols from /home/christian/bud/out/Release/bud...done.
(gdb) run
Starting program: /home/christian/bud/out/Release/bud --config default-config.json
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
0x000000000054a048 in uv__loop_alive (loop=0x0) at ../deps/uv/src/unix/core.c:252
252       return uv__has_active_handles(loop) ||
(gdb) backtrace
#0  0x000000000054a048 in uv__loop_alive (loop=0x0) at ../deps/uv/src/unix/core.c:252
#1  0x000000000054a0a0 in uv_run (loop=0x0, mode=UV_RUN_NOWAIT) at ../deps/uv/src/unix/core.c:262
#2  0x000000000040aa22 in bud_config_destroy ()
#3  0x000000000040abdf in bud_config_free ()
#4  0x000000000040968f in bud_config_cli_load ()
#5  0x0000000000405069 in main ()

Error code when cyaSSL tries to connect to bud

When using cyaSSL (https://github.com/cyassl/cyassl) as a client, I get an error code returned, saying "INCOMPLETE_DATA".
Anyway, bud-tls otherwise works as expected. It is the first time I have encountered such a problem.

  • Steps to reproduce :
git clone https://github.com/cyassl/cyassl.git
cd cyassl
./autogen.sh
./configure --enable-debug --disable-shared
make test
### Test against a bud-tls server
./examples/client/client -h {yourserver.xyz} -p {port_number} -d -g
  • In cyaSSL, the "INCOMPLETE_DATA" error is related to this check in src/internal.c:

        /* make sure can read the message */
        if (*inOutIdx + size > totalSz)
            return INCOMPLETE_DATA;

  • bud-tls 0.34.2 running on Centos 6.6, with node v0.12.1 and npm v2.5.1 (same with node v0.10.6 and npm v1.3.6)
