
coolgpus's People

Contributors: andyljones, v-iashin


coolgpus's Issues

console blanks when running coolgpus

Thanks so much for creating coolgpus. It's been a great tool for keeping our multi-GPU machine cool.

I have a monitor plugged into my first GPU. It usually shows a console, which is useful for getting access to our server if there are any hardware problems.

But after launching coolgpus, that display goes blank and unresponsive. When I kill coolgpus, the console returns.

Is there a way to configure the xorg.conf files or make any other changes to prevent this from happening?

Thanks for any help you can offer and for the great tool you've written.

Server already active and failed to close it

I'm trying to set all fans to 100%.

Result of sudo $(which coolgpus) --speed 99 99 :

Traceback (most recent call last):
  File "/usr/local/bin/coolgpus", line 266, in <module>
    run()
  File "/usr/local/bin/coolgpus", line 259, in run
    with xservers(buses) as displays:
  File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/usr/local/bin/coolgpus", line 172, in xservers
    kill_xservers()
  File "/usr/local/bin/coolgpus", line 163, in kill_xservers
    raise IOError('There are already X servers active. Either run the script with the `--kill` switch, or kill them yourself first')
OSError: There are already X servers active. Either run the script with the `--kill` switch, or kill them yourself first

Result of sudo $(which coolgpus) --speed 99 99 --kill :

Killing all running X servers, including 1671
Awaiting X server shutdown
Awaiting X server shutdown
Awaiting X server shutdown
Awaiting X server shutdown

I don't know how to find these servers and close them myself.
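For reference, a minimal sketch of what the `--kill` switch does, so the servers can be found and signalled by hand. This is my own sketch, not coolgpus's exact code, and it assumes `pgrep` is available:

```python
import os
import signal
import subprocess

def kill_xservers(sig=signal.SIGTERM):
    """Find running X servers (roughly what coolgpus does, via pgrep)
    and send each one a signal. Returns the PIDs found. Note a server
    stuck in uninterruptible sleep may not respond even to SIGKILL."""
    out = subprocess.run(['pgrep', 'Xorg'],
                         capture_output=True, text=True).stdout
    pids = [int(p) for p in out.split()]
    for pid in pids:
        os.kill(pid, sig)
    return pids
```

Running it with the default SIGTERM and then re-running coolgpus without `--kill` should then succeed, assuming the servers actually exit.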

More precise calibration

Problem
Let's assume I have two GPUs. I run my code on the first one (87C). The second one is idle (30C).

I would like the first one to follow the rule --temp 70 90 --speed 50 75.
On the second one: --temp 10 70 --speed 20 20.

If I use the rule --temp 70 90 --speed 50 75, I end up with the second GPU's fan running at 70% all the time despite it being idle.

The more GPUs you have, the more annoying the problem becomes.

Suggestion
The idea is to allow a piecewise fan curve. Two possible interfaces:

  1. sudo $(which coolgpus) --temp 10 69 70 90 --speed 20 20 50 75
  2. sudo $(which coolgpus) --temp_gpu0 70 90 --speed_gpu0 50 75 --temp_gpu1 10 70 --speed_gpu1 20 20
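For what it's worth, option 1 amounts to linear interpolation over an arbitrary list of (temp, speed) points. A minimal sketch of such a curve (the function name and shape are mine, not coolgpus's):

```python
def fan_speed(temp, temps, speeds):
    """Piecewise-linear fan curve through the (temps[i], speeds[i])
    points, clamped at the endpoints. temps must be increasing and the
    two lists must have the same length."""
    if temp <= temps[0]:
        return speeds[0]
    if temp >= temps[-1]:
        return speeds[-1]
    for t0, t1, s0, s1 in zip(temps, temps[1:], speeds, speeds[1:]):
        if t0 <= temp <= t1:
            # linear interpolation within this segment
            return s0 + (s1 - s0) * (temp - t0) / (t1 - t0)
```

With --temp 10 69 70 90 --speed 20 20 50 75, an idle GPU at 30C stays at 20% while a busy one at 87C runs at about 71%.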

Defunct Xorg processes

After running coolgpus for 15 days, we've encountered a problem where:

  • there were defunct (Z) or uninterruptible IO (D) Xorg processes
  • coolgpus would keep waiting for them to be killed (which is not possible)
  • fan speeds were not updated, and temperatures reached 88 degrees Celsius
  • restarting coolgpus would result in an error

coolgpus detects running Xorg processes with:

# pgrep Xorg
1422
1423
1424
1425

But these are defunct or uninterruptible processes:

# ps aux | grep Xorg
root        1422  0.1  0.0      0     0 ?        Ds   Jul22  32:01 [Xorg]
root        1423  0.1  0.0      0     0 ?        Zsl  Jul22  35:03 [Xorg] <defunct>
root        1424  0.0  0.0      0     0 ?        Ds   Jul22  15:43 [Xorg]
root        1425  0.1  0.0      0     0 ?        Zsl  Jul22  29:37 [Xorg] <defunct>

At this point coolgpus was hung, i.e., still running but with no logs for 4 days:

Aug 02 21:27:44 hal1 coolgpus[1134]: GPU :0, 27C -> [30%-30%]. Leaving speed at 30%
Aug 02 21:27:44 hal1 coolgpus[1134]: GPU :1, 29C -> [30%-30%]. Leaving speed at 30%
Aug 02 21:27:44 hal1 coolgpus[1134]: GPU :2, 60C -> [43%-49%]. Setting speed to 43%
Aug 02 21:27:44 hal1 coolgpus[1134]: GPU :3, 30C -> [30%-30%]. Leaving speed at 30%

After restarting:

# systemctl status coolgpus
● coolgpus.service - Headless GPU Fan Control
     Loaded: loaded (/etc/systemd/system/coolgpus.service; enabled; vendor preset: enabled)
     Active: deactivating (stop-sigterm) (Result: exit-code) since Thu 2020-08-06 18:47:08 PDT; 146ms ago
    Process: 2303229 ExecStart=/usr/local/bin/coolgpus/coolgpus --kill (code=exited, status=1/FAILURE)
   Main PID: 2303229 (code=exited, status=1/FAILURE)
      Tasks: 6 (limit: 154078)
     Memory: 39.8M
     CGroup: /system.slice/coolgpus.service
             ├─1422 [Xorg]
             └─1424 [Xorg]
Aug 06 18:47:08 hal1 coolgpus[2303229]:   File "/usr/local/bin/coolgpus/coolgpus", line 238, in run
Aug 06 18:47:08 hal1 coolgpus[2303229]:     with xservers(buses) as displays:
Aug 06 18:47:08 hal1 coolgpus[2303229]:   File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
Aug 06 18:47:08 hal1 coolgpus[2303229]:     return next(self.gen)
Aug 06 18:47:08 hal1 coolgpus[2303229]:   File "/usr/local/bin/coolgpus/coolgpus", line 159, in xservers
Aug 06 18:47:08 hal1 coolgpus[2303229]:     kill_xservers()
Aug 06 18:47:08 hal1 coolgpus[2303229]:   File "/usr/local/bin/coolgpus/coolgpus", line 148, in kill_xservers
Aug 06 18:47:08 hal1 coolgpus[2303229]:     raise IOError('Failed to kill existing X servers. Try killing them yourself before running this script')
Aug 06 18:47:08 hal1 coolgpus[2303229]: OSError: Failed to kill existing X servers. Try killing them yourself before running this script
Aug 06 18:47:08 hal1 systemd[1]: coolgpus.service: Main process exited, code=exited, status=1/FAILURE

Any idea why we ended up with defunct processes?
For the ones in the uninterruptible IO state (D), I could collect these stack traces:

# cat /proc/1422/stack
[<0>] down+0x47/0x60
[<0>] nvkms_close_common+0x1a/0x70 [nvidia_modeset]
[<0>] nvkms_close+0x6d/0xa0 [nvidia_modeset]
[<0>] nvidia_frontend_close+0x2f/0x50 [nvidia]
[<0>] __fput+0xcc/0x260
[<0>] ____fput+0xe/0x10
[<0>] task_work_run+0x8f/0xb0
[<0>] do_exit+0x351/0xac0
[<0>] do_group_exit+0x47/0xb0
[<0>] get_signal+0x169/0x890
[<0>] do_signal+0x34/0x6c0
[<0>] exit_to_usermode_loop+0xbf/0x160
[<0>] do_syscall_64+0x163/0x190
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

# cat /proc/1424/stack
[<0>] down+0x47/0x60
[<0>] nvkms_close_common+0x1a/0x70 [nvidia_modeset]
[<0>] nvkms_close+0x6d/0xa0 [nvidia_modeset]
[<0>] nvidia_frontend_close+0x2f/0x50 [nvidia]
[<0>] __fput+0xcc/0x260
[<0>] ____fput+0xe/0x10
[<0>] task_work_run+0x8f/0xb0
[<0>] do_exit+0x351/0xac0
[<0>] rewind_stack_do_exit+0x17/0x20

At the least, could coolgpus be made to ignore defunct processes?
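One direction: before waiting on the PIDs that pgrep returns, filter out processes whose state shows they can never die. A minimal sketch of that check, taking (pid, state) pairs as parsed from ps output (the function name and shape are mine, not coolgpus's):

```python
def waitable_pids(procs):
    """procs: iterable of (pid, state) pairs, where state is the ps
    STAT field, e.g. 'Zsl' (defunct) or 'Ds' (uninterruptible sleep).
    Zombies can never be killed, and D-state processes may never wake,
    so there is no point waiting on either kind."""
    return [pid for pid, state in procs if state[:1] not in ('Z', 'D')]
```

Whether D-state Xorg processes (stuck in nvkms_close per the stack traces above) should really be skipped is debatable, since they still hold the device open.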

An improvement to the systemd unit file

Based on testing on my machine, the kill mode and kill signal should be specified in the systemd unit file (added to the [Service] section):

ExecStop=/bin/kill -2 $MAINPID
KillMode=none

Otherwise, when executing "systemctl stop", the default SIGTERM signal is sent to both the main Python process and the child Xorg processes. As a result, fan speed control is never released: the assign(display, '[gpu:0]/GPUFanControlState=0') statement never executes, and the fans never return to the stock automatic mode.
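For reference, the [Service] section would then look something like the following (the ExecStart path is an assumption; adjust it to wherever coolgpus is installed):

```ini
[Service]
ExecStart=/usr/local/bin/coolgpus
# Send SIGINT (signal 2) so coolgpus can restore GPUFanControlState=0
# before exiting, per the author's testing above.
ExecStop=/bin/kill -2 $MAINPID
# Stop systemd from SIGTERM-ing the Xorg children out from under it.
KillMode=none
```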

Anyway, thanks very much. It's a wonderful tool and it solved my fan speed control problem.

Partially-headed servers

Servers where some GPUs have displays attached and some don't are trickier than the fully-headless case. The first and obvious issue is that some display IDs are already occupied. That's fixed by picking displays :10, :11, :12, etc., which aren't commonly used. Then the script will actually run!

Unfortunately, it also blanks your physical display, presumably because it's nicking the GPU off of your primary X server. That's fixed by only looking at GPU buses that nvidia-smi reports as 'not displayed'. That leaves these problems:

  • How do you figure out which physical display IDs correspond to which PCI buses, so the script can manage those too? This seems easy, but Google's failed me so far. xdpyinfo seemed promising, but the extension that presumably carries the bus info, NV-CONTROL, isn't supported.
  • When launching X servers for non-displayed GPUs, the monitor blanks and you need to hit Ctrl+Alt+F2 to get back to the desktop. I think this is something to do with X 'resetting' VTs, because the same problem originally showed up every time nvidia-settings was called. That was suppressed by -novtswitch and passing a new VT ID, but the blank-on-launch persists.

So yeah, if you've got a partially headless box and want to try this fix:

git clone https://github.com/andyljones/coolgpus.git
cd coolgpus
git checkout partial-head
sudo $(which coolgpus)

and Ctrl+Alt+F2 back to your desktop.

There's an error.

When I run the program, the web console screen no longer appears on IPMI.

Couldn't connect to accessibility bus: Failed to connect to socket

I run coolgpus on an 8-GPU machine (Ubuntu 18.04, NVIDIA-SMI 440.33.01, Driver Version 440.33.01, CUDA Version 10.2).

It reports some errors, and the main one is:

(nvidia-settings:38841): dbind-WARNING **: 13:57:53.241: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-OxChDaN4Rm: Connection refused

I tried the methods in https://unix.stackexchange.com/questions/230238/x-applications-warn-couldnt-connect-to-accessibility-bus-on-stderr, but they didn't work, so I'm hoping to find some help here.

The full log is here log.LOG

Errors despite Xorg and nvidia-settings/smi on PATH.

I just looked at this again. I got some weird behavior where I could run it twice just fine, but then I aborted it with Ctrl+C twice (not a clean termination of the X servers) and now I can no longer run it. On another machine, I get the same error from the start (it never worked):

Traceback (most recent call last):
  File "/home/tim/anaconda3/bin/coolgpus", line 177, in <module>
    run()
  File "/home/tim/anaconda3/bin/coolgpus", line 174, in run
    manage_fans(displays)
  File "/home/tim/anaconda3/bin/coolgpus", line 168, in manage_fans
    assign(display, '[gpu:0]/GPUFanControlState=0')
  File "/home/tim/anaconda3/bin/coolgpus", line 143, in assign
    check_output(['nvidia-settings', '-a', command, '-c', display], stderr=STDOUT)
  File "/home/tim/anaconda3/lib/python3.7/subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "/home/tim/anaconda3/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['nvidia-settings', '-a', '[gpu:0]/GPUFanControlState=0', '-c', ':0']' returned non-zero exit status 1.

I tried to figure it out but wasn't able to (Xorg, nvidia-settings, and nvidia-smi are all on PATH). The X servers seem to be starting just fine. Do you have any ideas on how to debug this?
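One way to debug: CalledProcessError hides the command's output by default, and nvidia-settings usually prints the real reason to stderr. A small wrapper (my own sketch, not part of coolgpus) that surfaces it:

```python
from subprocess import CalledProcessError, STDOUT, check_output

def run_verbose(cmd):
    """Run cmd and return its output; on failure, print the combined
    stdout/stderr that CalledProcessError normally swallows, then
    re-raise so the caller still sees the failure."""
    try:
        return check_output(cmd, stderr=STDOUT)
    except CalledProcessError as e:
        print(e.output.decode(errors='replace'))
        raise
```

For example, run_verbose(['nvidia-settings', '-a', '[gpu:0]/GPUFanControlState=0', '-c', ':0']) would show the actual nvidia-settings error message instead of only "returned non-zero exit status 1".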

coolgpus works but has a limit

I set the fan speed to 90%, and I can see the fan speed did rise from 20%. However, when it reached 50%, it stopped climbing and stayed fixed (at 55%).

Why is this?

Fan control for cards with multiple addressable fans

It would be better if there were a way to control both fan zones on cards that support this. In a local copy of the repo I get around it by parsing the addressable fans for each card from the output of

$> nvidia-settings -q all

and then issuing the appropriate update commands in set_speed. This is a relatively minor fix, but handling this appropriately on multi-GPU machines may be more involved.
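For the single-GPU case, the per-fan update could be sketched like this. The function name is mine, and it assumes the fan indices have already been parsed out of the nvidia-settings -q all output; [fan:N]/GPUTargetFanSpeed is the per-fan attribute nvidia-settings exposes:

```python
def fan_speed_commands(display, fan_indices, percent):
    """Build one nvidia-settings invocation that sets every addressable
    fan on `display` to `percent`. fan_indices would come from parsing
    `nvidia-settings -q all` (e.g. [0, 1] for a dual-fan card)."""
    cmd = ['nvidia-settings', '-c', display]
    for i in fan_indices:
        cmd += ['-a', '[fan:{}]/GPUTargetFanSpeed={}'.format(i, percent)]
    return cmd
```

As the report says, mapping fan indices back to the right GPU on a multi-GPU machine is the harder part and isn't handled here.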

f'temp and speed should have the same length'

sudo $(which coolgpus) --speed 99 99
  File "/usr/bin/coolgpus", line 28
    assert len(args.temp) == len(args.speed), f'temp and speed should have the same length'
                                              ^
SyntaxError: invalid syntax

Huh??

coolgpus Driver Mismatch error

Hi guys,
I'm having trouble making this module work. Are there any restrictions on using it?
I have an RTX 3090, with the following drivers:
(screenshot of nvidia-smi showing the installed driver versions)
I'm using this machine as a server, connected through SSH, for training deep learning models. I assembled it a couple of weeks ago, and over the past few days of intensive training the temperatures haven't been that high (around 65C at full load), but I notice the fan speed stays close to 70%.

I installed coolgpus but got a driver mismatch error with exit code 255. It gave me a hell of a scare, because nvidia-smi stopped working with an NVML driver mismatch message and I thought I'd ruined my drivers and would need to reinstall everything on the server. So I quickly uninstalled coolgpus and rebooted, and now everything works again, but I really like the idea of a pip-installable tool for controlling fan speed.

Any help to make this work will be deeply appreciated.

Alfonso

Running produces FileNotFoundError: [Errno 2] No such file or directory: 'Xorg': 'Xorg'

(base) feadre@e2680v2:$ sudo $(which coolgpus) --temp 20 55 80 --speed 5 30 99
[sudo] password for feadre:
No existing X servers, we're good to go
Starting xserver: Xorg :0 -once -config /tmp/cool-gpu-00000000:03:00.03f8wzkac/xorg.conf
Traceback (most recent call last):
  File "/home/feadre/anaconda3/bin/coolgpus", line 266, in <module>
    run()
  File "/home/feadre/anaconda3/bin/coolgpus", line 259, in run
    with xservers(buses) as displays:
  File "/home/feadre/anaconda3/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/home/feadre/anaconda3/bin/coolgpus", line 177, in xservers
    servers[bus] = xserver(displays[bus], bus)
  File "/home/feadre/anaconda3/bin/coolgpus", line 137, in xserver
    p = Popen(xorgargs)
  File "/home/feadre/anaconda3/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/home/feadre/anaconda3/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'Xorg': 'Xorg'
(base) feadre@e2680v2:$ sudo $(which coolgpus)
No existing X servers, we're good to go
Starting xserver: Xorg :0 -once -config /tmp/cool-gpu-00000000:03:00.0g2ui0rm2/xorg.conf
Traceback (most recent call last):
  File "/home/feadre/anaconda3/bin/coolgpus", line 266, in <module>
    run()
  File "/home/feadre/anaconda3/bin/coolgpus", line 259, in run
    with xservers(buses) as displays:
  File "/home/feadre/anaconda3/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/home/feadre/anaconda3/bin/coolgpus", line 177, in xservers
    servers[bus] = xserver(displays[bus], bus)
  File "/home/feadre/anaconda3/bin/coolgpus", line 137, in xserver
    p = Popen(xorgargs)
  File "/home/feadre/anaconda3/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/home/feadre/anaconda3/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'Xorg': 'Xorg'

Limit GPU binding with CUDA_VISIBLE_DEVICES or so

Hello, and first of all I'd like to thank you for the project; it's still the best way we've found to work around NVIDIA cooling issues.

To the point. With the latest NVIDIA driver updates, the nvidia-smi tool now displays all created contexts instead of only the usual primary ones. So where earlier we got output like this:

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3541      G   /usr/libexec/Xorg                   8MiB |
|    1   N/A  N/A      3543      G   /usr/libexec/Xorg                   8MiB |
|    2   N/A  N/A      3544      G   /usr/libexec/Xorg                   8MiB |
|    3   N/A  N/A      3546      G   /usr/libexec/Xorg                   8MiB |
|    4   N/A  N/A      3548      G   /usr/libexec/Xorg                   8MiB |
|    5   N/A  N/A      3549      G   /usr/libexec/Xorg                   8MiB |
|    6   N/A  N/A      3550      G   /usr/libexec/Xorg                   8MiB |
|    7   N/A  N/A      3552      G   /usr/libexec/Xorg                   8MiB |
+-----------------------------------------------------------------------------+

...now we have:

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A    400553      G   /usr/libexec/Xorg                   8MiB |
|    0   N/A  N/A    400554      G   /usr/libexec/Xorg                   0MiB |
|    0   N/A  N/A    400555      G   /usr/libexec/Xorg                   0MiB |
|    0   N/A  N/A    400556      G   /usr/libexec/Xorg                   0MiB |
|    0   N/A  N/A    400557      G   /usr/libexec/Xorg                   0MiB |
|    0   N/A  N/A    400558      G   /usr/libexec/Xorg                   0MiB |
|    0   N/A  N/A    400559      G   /usr/libexec/Xorg                   0MiB |
|    0   N/A  N/A    400560      G   /usr/libexec/Xorg                   0MiB |
|    1   N/A  N/A    400553      G   /usr/libexec/Xorg                   0MiB |
|    1   N/A  N/A    400554      G   /usr/libexec/Xorg                   8MiB |
|    1   N/A  N/A    400555      G   /usr/libexec/Xorg                   0MiB |
|    1   N/A  N/A    400556      G   /usr/libexec/Xorg                   0MiB |
|    1   N/A  N/A    400557      G   /usr/libexec/Xorg                   0MiB |
|    1   N/A  N/A    400558      G   /usr/libexec/Xorg                   0MiB |
|    1   N/A  N/A    400559      G   /usr/libexec/Xorg                   0MiB |
|    1   N/A  N/A    400560      G   /usr/libexec/Xorg                   0MiB |
|    2   N/A  N/A    400553      G   /usr/libexec/Xorg                   0MiB |
|    2   N/A  N/A    400554      G   /usr/libexec/Xorg                   0MiB |
|    2   N/A  N/A    400555      G   /usr/libexec/Xorg                   8MiB |
|    2   N/A  N/A    400556      G   /usr/libexec/Xorg                   0MiB |
|    2   N/A  N/A    400557      G   /usr/libexec/Xorg                   0MiB |
|    2   N/A  N/A    400558      G   /usr/libexec/Xorg                   0MiB |
|    2   N/A  N/A    400559      G   /usr/libexec/Xorg                   0MiB |
|    2   N/A  N/A    400560      G   /usr/libexec/Xorg                   0MiB |
|    3   N/A  N/A    400553      G   /usr/libexec/Xorg                   0MiB |
|    3   N/A  N/A    400554      G   /usr/libexec/Xorg                   0MiB |
|    3   N/A  N/A    400555      G   /usr/libexec/Xorg                   0MiB |
|    3   N/A  N/A    400556      G   /usr/libexec/Xorg                   8MiB |
|    3   N/A  N/A    400557      G   /usr/libexec/Xorg                   0MiB |
|    3   N/A  N/A    400558      G   /usr/libexec/Xorg                   0MiB |
|    3   N/A  N/A    400559      G   /usr/libexec/Xorg                   0MiB |
|    3   N/A  N/A    400560      G   /usr/libexec/Xorg                   0MiB |
|    4   N/A  N/A    400553      G   /usr/libexec/Xorg                   0MiB |
|    4   N/A  N/A    400554      G   /usr/libexec/Xorg                   0MiB |
|    4   N/A  N/A    400555      G   /usr/libexec/Xorg                   0MiB |
|    4   N/A  N/A    400556      G   /usr/libexec/Xorg                   0MiB |
|    4   N/A  N/A    400557      G   /usr/libexec/Xorg                   8MiB |
|    4   N/A  N/A    400558      G   /usr/libexec/Xorg                   0MiB |
|    4   N/A  N/A    400559      G   /usr/libexec/Xorg                   0MiB |
|    4   N/A  N/A    400560      G   /usr/libexec/Xorg                   0MiB |
|    5   N/A  N/A    400553      G   /usr/libexec/Xorg                   0MiB |
|    5   N/A  N/A    400554      G   /usr/libexec/Xorg                   0MiB |
|    5   N/A  N/A    400555      G   /usr/libexec/Xorg                   0MiB |
|    5   N/A  N/A    400556      G   /usr/libexec/Xorg                   0MiB |
|    5   N/A  N/A    400557      G   /usr/libexec/Xorg                   0MiB |
|    5   N/A  N/A    400558      G   /usr/libexec/Xorg                   8MiB |
|    5   N/A  N/A    400559      G   /usr/libexec/Xorg                   0MiB |
|    5   N/A  N/A    400560      G   /usr/libexec/Xorg                   0MiB |
|    6   N/A  N/A    400553      G   /usr/libexec/Xorg                   0MiB |
|    6   N/A  N/A    400554      G   /usr/libexec/Xorg                   0MiB |
|    6   N/A  N/A    400555      G   /usr/libexec/Xorg                   0MiB |
|    6   N/A  N/A    400556      G   /usr/libexec/Xorg                   0MiB |
|    6   N/A  N/A    400557      G   /usr/libexec/Xorg                   0MiB |
|    6   N/A  N/A    400558      G   /usr/libexec/Xorg                   0MiB |
|    6   N/A  N/A    400559      G   /usr/libexec/Xorg                   8MiB |
|    6   N/A  N/A    400560      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400553      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400554      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400555      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400556      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400557      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400558      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400559      G   /usr/libexec/Xorg                   0MiB |
|    7   N/A  N/A    400560      G   /usr/libexec/Xorg                   8MiB |
+-----------------------------------------------------------------------------+

Is it possible to limit Xorg processes with something like CUDA_VISIBLE_DEVICES environment variable ( https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/ )?

I guess some minor changes are needed somewhere around this line so that each Xorg instance runs like CUDA_VISIBLE_DEVICES=1 Xorg ... .
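Mechanically, the change might look like the sketch below (the names are mine, not coolgpus's). Note one open question with this approach: CUDA_VISIBLE_DEVICES is interpreted by the CUDA runtime, and it is unverified whether the X driver honors it at all.

```python
import os

def xserver_invocation(display, config, gpu_index):
    """Return (argv, env) for launching one Xorg instance with
    CUDA_VISIBLE_DEVICES pinned to a single GPU; pass both to
    subprocess.Popen(argv, env=env)."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_index))
    argv = ['Xorg', display, '-once', '-config', config]
    return argv, env
```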
