
traceroute-caller's Introduction

traceroute-caller


Local Development

Using docker-compose, you can run a local instance of traceroute-caller that operates in concert with events from measurementlab/tcpinfo and annotations from measurement-lab/uuid-annotator.

You must have a recent version of the Docker server configured and running in your local environment. Your local environment must also include a recent version of docker-compose.

$ docker-compose version
docker-compose version 1.27.4, build 40524192
docker-py version: 4.3.1
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.1g  21 Apr 2020

In the root directory of traceroute-caller, start a local build using sample files in ./testdata.

docker-compose up

This will create and run three containers. Container names are prefixed by the current working directory name (i.e., traceroute-caller). After the containers are running, trigger a network connection from within one of those containers. For example:

docker exec -it traceroute-caller_traceroute-caller_1 apt-get update

The logs from traceroute-caller should indicate that files are being saved under ./local/*.

ls -lR ./local

Use docker-compose down to stop the containers and remove resources before restarting your docker-compose environment.

docker-compose down
docker-compose up

Traceroute Examiner Tool: trex

The trex command-line tool in this repo can examine scamper MDA traceroutes in .jsonl format and do the following:

  1. Extract single-path traceroutes from an MDA traceroute.
  2. List traceroutes that took longer than a specified duration.
  3. List complete and incomplete traceroutes.

Note:

  • Not all traceroutes are complete. That is, not all traceroutes trace all the way to the destination IP address.
  • Different hops associated with the same flow ID constitute a single path (see the sketch after this list).
  • The order of hops in a path is determined by the TTL.
  • Unresponsive hops are marked with an asterisk ("*").
  • It is possible for a hop to return multiple replies to a probe. Therefore, for the same flow ID and TTL, there may be zero, one, or more than one reply.
  • When showing single paths, only complete paths (if any) are printed.
  • If you need to see all paths, use the "-v" flag to enable verbose mode.
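
As a rough illustration of how single paths are assembled from the replies in an MDA traceroute, here is a minimal, hedged sketch. The Hop struct and the sample data are hypothetical simplifications, not trex's actual types: replies sharing a flow ID are grouped into one path and ordered by TTL.

package main

import (
	"fmt"
	"sort"
)

// Hop is a hypothetical, simplified view of one reply in an MDA traceroute.
type Hop struct {
	FlowID int
	TTL    int
	Addr   string // "*" for an unresponsive hop
}

func main() {
	hops := []Hop{
		{FlowID: 1, TTL: 2, Addr: "213.248.100.57"},
		{FlowID: 1, TTL: 1, Addr: "209.170.110.193"},
		{FlowID: 1, TTL: 3, Addr: "199.19.248.6"},
		{FlowID: 2, TTL: 1, Addr: "209.170.110.193"},
		{FlowID: 2, TTL: 2, Addr: "213.248.100.57"},
	}

	// Hops sharing a flow ID form one path; the TTL determines hop order.
	paths := map[int][]Hop{}
	for _, h := range hops {
		paths[h.FlowID] = append(paths[h.FlowID], h)
	}
	for flowid, path := range paths {
		sort.Slice(path, func(i, j int) bool { return path[i].TTL < path[j].TTL })
		fmt.Printf("flowid %d:", flowid)
		for _, h := range path {
			fmt.Printf(" %s", h.Addr)
		}
		fmt.Println()
	}
}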

The easiest way to get started with trex is to first fetch an archive of M-Lab's MDA traceroutes to examine. This can be done as shown below:

$ mkdir ~/traceroutes
$ cd ~/traceroutes
$ gsutil cp gs://archive-measurement-lab/ndt/scamper1/2021/10/01/20211001T003000.005106Z-scamper1-mlab1-lis02-ndt.tgz .
$ tar xzf 20211001T003000.005106Z-scamper1-mlab1-lis02-ndt.tgz

The above commands extract the individual traceroute files into a directory called 2021. Now build the trex tool as shown below:

$ git clone https://github.com/m-lab/traceroute-caller
$ cd traceroute-caller/cmd/trex
$ go build

The above commands build trex, which you can now use to examine the traceroute files that you extracted. If trex examines more than one file, it prints statistics: how many files were found, how many were skipped because they were not .jsonl files, how many had errors, etc.

# Show usage message.
$ ./trex -h
Usage: ./trex [-cehv] [-d <seconds>] path [path...]
path  a pathname to a file or directory (if directory, all files are processed recursively)
-h    print usage message and exit
-c    print flow IDs and file names of traceroutes that completed ("--" for incomplete traceroutes)
-d    print times and file names of traceroutes that took more than the specified duration
-e    print examples how to use this tool and exit
-v    enable verbose mode (mostly for debugging)

# Show examples.
$ ./trex -e
Examples:
# Extract and print a single-path traceroute (if it exists) from a traceroute file
$ trex /traceroutes/2022/04/01/20220401T001905Z_ndt-qqvlt_1647967485_000000000009379D.jsonl

file: /traceroutes/2022/04/01/20220401T001905Z_ndt-qqvlt_1647967485_000000000009379D.jsonl
src: 209.170.110.216
dst: 199.19.248.6
scamper start: 1648772345
tracelb start: 1648772345 (0 seconds after scamper start)
scamper stop:  1648772346 (1 seconds after scamper start)
flowid: 1
TTL    TX(ms)   RX(ms)    RTT(ms)  IP address
  1       N/A      N/A      0.000  209.170.110.193
  2       150      151      0.653  213.248.100.57
  3      1055     1062      7.244  199.19.248.6  <=== destination

The TX and RX columns are elapsed transmit and receive times since the tracelb
command was started.


# Same command as above but enable the verbose mode (useful for debugging).
$ trex -v /traceroutes/2022/04/01/20220401T001905Z_ndt-qqvlt_1647967485_000000000009379D.jsonl

/traceroutes/2022/04/01/20220401T001905Z_ndt-qqvlt_1647967485_000000000009379D.jsonl
Tracelb.Src: 209.170.110.216
Tracelb.Dst: 199.19.248.6
Tracelb.Nodes[0] 209.170.110.193
  Tracelb.Nodes[0].Links[0][0] 213.248.100.57
    Tracelb.Nodes[0].Links[0][0].Probes[0].Flowid: 1
    Tracelb.Nodes[0].Links[0][0].Probes[1].Flowid: 2
    Tracelb.Nodes[0].Links[0][0].Probes[2].Flowid: 3
    Tracelb.Nodes[0].Links[0][0].Probes[3].Flowid: 4
    Tracelb.Nodes[0].Links[0][0].Probes[4].Flowid: 5
    Tracelb.Nodes[0].Links[0][0].Probes[5].Flowid: 6
Tracelb.Nodes[1] 213.248.100.57
  Tracelb.Nodes[1].Links[0][0] 199.19.248.6
    Tracelb.Nodes[1].Links[0][0].Probes[0].Flowid: 1

file: /traceroutes/2022/04/01/20220401T001905Z_ndt-qqvlt_1647967485_000000000009379D.jsonl
src: 209.170.110.216
dst: 199.19.248.6
scamper start: 1648772345
tracelb start: 1648772345 (0 seconds after scamper start)
scamper stop:  1648772346 (1 seconds after scamper start)
flowid: 1
TTL    TX(ms)   RX(ms)    RTT(ms)  IP address
  1       N/A      N/A      0.000  209.170.110.193
  2       150      151      0.653  213.248.100.57
  3      1055     1062      7.244  199.19.248.6  <=== destination

flowid: 2
TTL    TX(ms)   RX(ms)    RTT(ms)  IP address
  1       N/A      N/A      0.000  209.170.110.193
  2       301      302      0.644  213.248.100.57

flowid: 3
TTL    TX(ms)   RX(ms)    RTT(ms)  IP address
  1       N/A      N/A      0.000  209.170.110.193
  2       452      453      0.707  213.248.100.57

flowid: 4
TTL    TX(ms)   RX(ms)    RTT(ms)  IP address
  1       N/A      N/A      0.000  209.170.110.193
  2       603      604      0.608  213.248.100.57

flowid: 5
TTL    TX(ms)   RX(ms)    RTT(ms)  IP address
  1       N/A      N/A      0.000  209.170.110.193
  2       754      754      0.621  213.248.100.57

flowid: 6
TTL    TX(ms)   RX(ms)    RTT(ms)  IP address
  1       N/A      N/A      0.000  209.170.110.193
  2       904      905      0.673  213.248.100.57


# Print all traceroute files in a directory hierarchy that took longer than 5 minutes
$ trex -d 300 /traceroutes/2021
 428 /traceroutes/2021/10/01/20211001T000053Z_ndt-292jb_1632518393_00000000000516D4.jsonl
 386 /traceroutes/2021/10/01/20211001T000151Z_ndt-292jb_1632518393_000000000005160D.jsonl
...

files found:                          425
files skipped (not .jsonl):             0
files that could not be read:           0
files that could not be parsed:         0
files successfully parsed:            425
files with no traceroute data:          0

minimum duration:                       4 seconds
maximum duration:                     456 seconds
average duration:                     220 seconds


# Print flow ID of complete traceroutes ("--" if incomplete) in a directory hierarchy
$ ./trex -c /traceroutes/2021
 1 /traceroutes/2021/10/01/20211001T000014Z_ndt-292jb_1632518393_00000000000516C8.jsonl
 1 /traceroutes/2021/10/01/20211001T000015Z_ndt-292jb_1632518393_00000000000516C9.jsonl
-- /traceroutes/2021/10/01/20211001T000023Z_ndt-292jb_1632518393_00000000000516C4.jsonl
...

files found:                          425
files skipped (not .jsonl):             0
files that could not be read:           0
files that could not be parsed:         0
files successfully parsed:            425
files with no traceroute data:          0
files with complete traceroutes:      149  (35%)

traceroute-caller's People

Contributors

cristinaleonr, dependabot[bot], gfr10598, matthieugouel, maxmouchet, nkinkade, pboothe, robertodauria, stephen-soltesz, yachang


traceroute-caller's Issues

Race between different tests

==================
WARNING: DATA RACE
Write at 0x00000109b118 by goroutine 18:
  github.com/m-lab/traceroute-caller.TestMainWithConnectionListener()
      /home/travis/gopath/src/github.com/m-lab/traceroute-caller/caller_test.go:52 +0x522
  testing.tRunner()
      /home/travis/.gimme/versions/go1.11.13.linux.amd64/src/testing/testing.go:827 +0x162

Previous read at 0x00000109b118 by goroutine 39:
  github.com/m-lab/traceroute-caller.main.func1()
      /home/travis/gopath/src/github.com/m-lab/traceroute-caller/caller.go:76 +0x70

Goroutine 18 (running) created at:
  testing.(*T).Run()
      /home/travis/.gimme/versions/go1.11.13.linux.amd64/src/testing/testing.go:878 +0x659
  testing.runTests.func1()
      /home/travis/.gimme/versions/go1.11.13.linux.amd64/src/testing/testing.go:1119 +0xa8
  testing.tRunner()
      /home/travis/.gimme/versions/go1.11.13.linux.amd64/src/testing/testing.go:827 +0x162
  testing.runTests()
      /home/travis/.gimme/versions/go1.11.13.linux.amd64/src/testing/testing.go:1117 +0x4ee
  testing.(*M).Run()
      /home/travis/.gimme/versions/go1.11.13.linux.amd64/src/testing/testing.go:1034 +0x2ee
  main.main()
      _testmain.go:46 +0x221

Goroutine 39 (finished) created at:
  github.com/m-lab/traceroute-caller.main()
      /home/travis/gopath/src/github.com/m-lab/traceroute-caller/caller.go:74 +0x978
  github.com/m-lab/traceroute-caller.TestMain()
      /home/travis/gopath/src/github.com/m-lab/traceroute-caller/caller_test.go:37 +0x301
  testing.tRunner()
      /home/travis/.gimme/versions/go1.11.13.linux.amd64/src/testing/testing.go:827 +0x162
==================

The two tests in caller_test.go have race conditions around the global cancel() function.

Readability improvements

These are just a few suggestions for improving the readability of the Tracer and cachedTest code. They aren't fully fleshed out, so a little more thought is required.

The fact that Trace() and CreateCacheTrace both write files is confusing. It would be much clearer if there were one function that generates a trace and another that creates a file.

If that were the case, you could replace the channel with a sync.Once and add a GetData(ip string) that uses once.Do() to create the trace in the cache entry. Then you would get the cache entry, call GetData() on it, and then call the function that creates the file.

There is some question as to whether we should be saving the original UUID, since that reduces anonymity. Ideally, only the IP address would be needed to create a trace, and only the trace data (without any connection metadata or the original UUID) would be stored, which would simplify things further. The IP address would be passed to Trace, and the Connection would only be passed to SaveTrace.

So:
type Tracer interface {
	// Trace creates just the body, not the metadata header.
	Trace(ip string) string
}

// SaveTrace creates the header, appends the body, and saves the file.
func SaveTrace(conn Connection, t time.Time, data string) {
	...
}

type cachedTest struct {
	timeStamp time.Time
	uuid      string // If we really need the original uuid.
	data      string
	once      sync.Once
}

func (ct *cachedTest) getTest(t Tracer, ip string, uuid string) (data, cachedUUID string) {
	ct.once.Do(func() {
		ct.uuid = uuid // kinda ugly - better ways to do this.
		ct.data = t.Trace(ip)
	})
	return ct.data, ct.uuid
}

scamper.go:86: Scamper exited with error: signal: segmentation fault

The frequency of scamper segfaults in staging is still very high. Each segfault restarts the traceroute container. The restarts result in a noisy summary of available containers relative to the pre-release of v0.3.3. That alone might be okay, but about 25% of the host containers are not monitorable, which suggests the highest rate of segfaults is in the host deployment.

[image: traceroute-host dashboard]

mlab3-lax03 has tons of traceroute tests w/ 0 probec

https://pantheon.corp.google.com/storage/browser/_details/archive-mlab-oti/ndt/traceroute/2019/11/01/20191101T020014.044139Z-traceroute-mlab3-lax03-ndt.tgz?project=mlab-oti&organizationId=433637338589

All tests in this tarball are <1k bytes.

A typical one looks like this:

======================

{"UUID":"ndt-9fw2l_1572433802_000000000001C6A1","TracerouteCallerVersion":"bc092be","CachedResult":true,"CachedUUID":"ndt-9fw2l_1572433802_000000000001C6A3"}
{"type":"cycle-start", "list_name":"/tmp/scamperctrl:16626", "id":1, "hostname":"ndt-9fw2l", "start_time":1572739620}
{"type":"tracelb", "version":"0.1", "userid":0, "method":"icmp-echo", "src":"::ffff:173.205.3.101", "dst":"::ffff:68.190.243.205", "start":{"sec":1572739620, "usec":959242, "ftime":"2019-11-03 00:07:00"}, "probe_size":60, "firsthop":1, "attempts":3, "confidence":95, "tos":0, "gaplimit":3, "wait_timeout":5, "wait_probe":250, "probec":0, "probec_max":3000, "nodec":0, "linkc":0}
{"type":"cycle-stop", "list_name":"/tmp/scamperctrl:16626", "id":1, "hostname":"ndt-9fw2l", "stop_time":1572739620}

scamper segfault on mlab2.lga0t

There are a good number of scamper segfaults on mlab2.lga0t. Is this expected? Log messages look like: scamper[5963]: segfault at 0 ip 0000558ffcd236f0 sp 00007ffed36cd0f8 error 4 in scamper[558ffcd14000+92000]

Traces in traceroute-caller may be timing out a lot

I made two changes - adding a metric for trace latency, and changing panics to errors.

In testing on sandbox, I noticed that roughly 90% of traces are timing out:

# TYPE trace_time_seconds histogram

trace_time_seconds_bucket{outcome="error",le="10"} 0
trace_time_seconds_bucket{outcome="error",le="21.5"} 0
trace_time_seconds_bucket{outcome="error",le="46.4"} 0
trace_time_seconds_bucket{outcome="error",le="100"} 0
trace_time_seconds_bucket{outcome="error",le="215"} 0
trace_time_seconds_bucket{outcome="error",le="464"} 90
trace_time_seconds_bucket{outcome="error",le="1000"} 90
trace_time_seconds_count{outcome="error"} 90
trace_time_seconds_bucket{outcome="success",le="10"} 1
trace_time_seconds_bucket{outcome="success",le="21.5"} 2
trace_time_seconds_bucket{outcome="success",le="46.4"} 2
trace_time_seconds_bucket{outcome="success",le="100"} 3
trace_time_seconds_bucket{outcome="success",le="215"} 8
trace_time_seconds_bucket{outcome="success",le="464"} 12
trace_time_seconds_bucket{outcome="success",le="1000"} 12
trace_time_seconds_count{outcome="success"} 12

I'm running again after fixing a bug in error handling, but I expect that we have been overlooking this problem for some time, unaware of it because traceroute parsing has also been having a lot of problems.

ipcache is not fully threadsafe

It is designed to be used like:

if !cache.Has(ip) {
  cache.Add(ip)
  Trace(ip)
}

but this check-then-add pattern is not threadsafe. Instead, we should use an unconditional Add that returns true if the item was not previously in the cache. Then the usage would be:

if cache.Add(ip) {
  Trace(ip)
}

which (as long as Add is mutex-protected) is threadsafe.
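
A minimal sketch of that unconditional, mutex-protected Add. The type and function names here are illustrative, not the actual ipcache API:

package ipcache

import "sync"

// Cache is an illustrative, minimal IP cache; it is not the real ipcache type.
type Cache struct {
	mu   sync.Mutex
	seen map[string]bool
}

func New() *Cache {
	return &Cache{seen: make(map[string]bool)}
}

// Add records ip and reports whether it was newly added. Because the check
// and the insert happen under one lock, concurrent callers cannot both see
// "not present" for the same IP.
func (c *Cache) Add(ip string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.seen[ip] {
		return false
	}
	c.seen[ip] = true
	return true
}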

Number of tests drop dramatically compared to paris traceroute binary

I compared the site traffic in 2019/05 (Paris Traceroute) vs. 2019/08 (scamper):

mlab3-den04: 609300, 63800
mlab3-beg01: 660600, 91800
mlab2-mil02: 465900, 149437
mlab2-sea06: 260300, 21000

One possible reason is that scamper tests take longer to complete than Paris Traceroute, so the 60-second timeout makes most scamper tests fail partway through.

The BigQuery queries used are attached:

#legacySQL
SELECT
  COUNT(TestTime) AS num
FROM (
  SELECT TestTime
  FROM [mlab-oti.batch.traceroute]
  WHERE
    DATE(_PARTITIONTIME) BETWEEN DATE("2019-05-01") AND DATE("2019-06-01")
    AND uuid = ""
    AND Parseinfo.TaskFileName CONTAINS "mlab2-sea06"
)

====================

#legacySQL
SELECT
  COUNT(uuid) AS num
FROM (
  SELECT uuid
  FROM [mlab-oti.batch.traceroute]
  WHERE
    DATE(_PARTITIONTIME) BETWEEN DATE("2019-08-01") AND DATE("2019-09-01")
    AND Parseinfo.TaskFileName CONTAINS "mlab2-sea06"
)

ipcache does not cache results

As written, ipcache does not cache the results, which means that UUIDs corresponding to flows with the IP in the cache don't get any data. Instead, the system should cache the result and provide the cached result when a redundant request comes in.

It should also add a metadata field or fields to indicate that (a) the result was retrieved from the cache and (b) the UUID of the connection that initially populated the cache.
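
A hedged sketch of the desired behavior. The types and field layout are hypothetical; only the CachedResult and CachedUUID metadata names come from the archive format shown elsewhere on this page:

package ipcache

// cachedResult is a hypothetical cache entry that stores the trace output
// along with the UUID of the connection that originally populated it.
type cachedResult struct {
	data string // trace output, reused for later connections to the same IP
	uuid string // UUID of the connection that populated this entry
}

// metadata marks a reused result so consumers can tell it came from the
// cache; CachedResult and CachedUUID mirror the fields seen in archived files.
func metadata(r cachedResult, requestUUID string) map[string]interface{} {
	return map[string]interface{}{
		"UUID":         requestUUID,
		"CachedResult": true,
		"CachedUUID":   r.uuid,
	}
}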

Traceroute-caller misses some stuff

In particular, not every successful NDT test has a corresponding traceroute. We should understand why this is AND/OR make it so that no connections are missed.

daily tests data drop over past 10 days

date       | daily tests
-----------|------------
2020-01-26 |  8539584
2020-01-25 |  8802175
2020-01-24 |  8804368
2020-01-23 |  8989835
2020-01-22 |  9547450
2020-01-21 |  9553054
2020-01-20 |  9684823
2020-01-19 |  9787798
2020-01-18 | 10059376
2020-01-17 |  9847430
2020-01-16 | 10236158
2020-01-15 | 10198442
2020-01-14 | 10056294
2020-01-13 | 10208497

unexpected IP address in traceroute file name

This is from ETL parser log:

JSON parsing failed with error unexpected end of JSON input for 2019/10/31/ndt-9sng7/20191031T205313Z_2600:3c02::17:d802_16b47e.jsonl

pt.go:531: JSON parsing failed with error unexpected end of JSON input for 2019/10/29/ndt-9sng7/20191029T174213Z_2600:3c02::17:d802_162087.jsonl

pt.go:531: JSON parsing failed with error unexpected end of JSON input for 2019/10/31/ndt-vg4pn/20191031T212206Z_2600:3c02::17:d802_174d51.jsonl

Open file descriptor leak

Looking at the prometheus metrics for process_open_fds for traceroute, it appears there is an fd leak. See the image below:

[image: traceroute-fdleak, process_open_fds over time]

After a rollout on the 16th, the fd count has steadily increased until machine reboots.

Only lga0* nodes are shown for convenience. The pattern is global. Originally started on ~Nov 8th in staging and Nov 14th in production.

sum by(machine, container, deployment) (process_open_fds{machine=~".*", container=~"traceroute"})

Handling failed trace in IPCache

In the current IPCache implementation, other traces may be waiting on one that is already in progress.

If that trace fails with an error (scamper failure, timeout, etc.), the current channel implementation has no proper handling: no error message is sent back to the waiters.
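
A hedged sketch of one way to propagate the error to every waiter; the type and method names are hypothetical, not the current ipcache code:

package ipcache

// pendingTrace is a hypothetical cache entry that carries the trace error so
// that every waiter learns about a failed trace instead of blocking forever
// on a channel that never delivers.
type pendingTrace struct {
	data string
	err  error
	done chan struct{} // closed when the trace finishes, successfully or not
}

// run executes the trace and wakes all waiters, even on failure.
func (p *pendingTrace) run(ip string, trace func(string) (string, error)) {
	defer close(p.done)
	p.data, p.err = trace(ip)
}

// wait blocks until the trace finishes and returns its result or error.
func (p *pendingTrace) wait() (string, error) {
	<-p.done
	return p.data, p.err
}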

Traceroute Scamper .json should not have double quotes

The current sample output using "cat" is:

{"schema_version":""1"","uuid":""ndt-cbppg_1589996412_000000000000A94A"","testtime":"0001-01-01T00:00:00Z","start_time":1590086579,"stop_time":1590086579,"scamper_version":""0.1"","serverIP":""4.14.159.101"","clientIP":""4.14.159.86"","probe_size":44,"probec":1,"hop":null,"cached_result":true,"cached_uuid":""ndt-cbppg_1589996412_000000000000A689"","traceroutecaller_commit":""608d1a7""}

The code in tracer/scamper.go uses json.Marshal and ioutil.WriteFile to create the .json file.
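
One plausible (hypothetical) way such doubled quotes can appear: a string value is marshaled on its own, which already adds surrounding quotes, and the result is then wrapped in quotes again while building the JSON text. Marshaling the whole struct in a single json.Marshal call avoids this:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	version := "1"

	// Marshaling the string alone yields "1" with the quotes included...
	quoted, _ := json.Marshal(version)
	// ...so wrapping it in quotes again reproduces the doubled quotes.
	bad := fmt.Sprintf(`{"schema_version":"%s"}`, quoted)
	fmt.Println(bad) // {"schema_version":""1""}

	// Marshaling the whole struct in one call produces valid JSON.
	good, _ := json.Marshal(struct {
		SchemaVersion string `json:"schema_version"`
	}{version})
	fmt.Println(string(good)) // {"schema_version":"1"}
}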

Scamper mode now failing in sandbox

Tried out scamper mode on sandbox ndt pods. All traces are now failing:

2021/06/27 11:35:39 scamper.go:132: Trace failed in context 0xc000294540 (error: exit status 255: scamper -I "tracelb -P icmp-echo -q 3 -W 25 -O ptr 2600:3c02:e000:185:0:242:ac11:2" -o- -O json)

traceroute-caller sometimes calls traceroute twice for a given connection

The following BigQuery query discovers 300+ cases in which traceroute-caller was called multiple times for a given UUID:

with 
uuids as (
  SELECT COUNT(*) as count, uuid, Parseinfo.TaskFileName as fname
  FROM `mlab-staging.batch.traceroute`
  group by uuid, Parseinfo.TaskFileName
)
select uuid, fname from uuids where count > 1 and uuid != ""

This seems pretty obviously incorrect, and we should fix it. Note that the UUID appearing multiple times here is in fact correct - it's the same connection (and so the same UUID) causing multiple calls to scamper's traceroute system.

fast-mda-traceroute pycaracal build failures

Builds of fast-mda-traceroute have failed since 2022-12-16.

# RUN pip3 install --no-binary pycaracal --no-cache-dir --verbose fast-mda-traceroute==0.1.10

The build concludes with errors like the one below, but the resolution is unknown.

2022-12-17T23:45:42Z #25 173.1 pip._internal.exceptions.InstallationError: Could not build wheels for pycaracal which use PEP 517 and cannot be installed directly

The build step can be re-enabled once it is working as intended again; we intend to include fast-mda-traceroute in traceroute-caller (TRC).

Create output directories for generated datatypes

The traceroute-caller service should create output directories on startup so that archiving processes like pusher or jostler have access to these directories. If traceroute-caller creates the directories, it can guarantee that it has write permission and that the archiving processes can start with minimal races.

This functionality is preferred to externally managing the creation of these directories or waiting until traceroute-caller runs its first trace and writes the output. The timing of this is not always guaranteed.
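
A minimal sketch of creating the directories at startup. The root path and datatype list here are assumptions; only scamper1 appears in the archive paths shown earlier on this page, and the real values would come from the deployment configuration:

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical output root; additional datatypes would be added here.
	root := "/var/spool/ndt"
	for _, datatype := range []string{"scamper1"} {
		dir := filepath.Join(root, datatype)
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatalf("failed to create output directory %s: %v", dir, err)
		}
	}
}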

Proposal: Add admission controller to traceroute-caller

It looks like traceroute-caller, at least with scamper-daemon, has limited capacity and, even with -p 1000, seems to get slower and slower if there are too many requests coming in.

If we limit the number of concurrent traces, it will improve latency, and likely have little or no effect on throughput.

These dashboard panels for gru01 basically show that, for the current deployment, things work OK until about 60 to 70 concurrent traces, then rapidly get much worse. This happens at around 15 traces per minute, which is much too slow for our busier sites.

[screenshot: gru01 dashboard panels, 2021-06-26]

So, perhaps we should limit the number of concurrent traces we allow to start. We should evaluate the practical throughput with the pending deployment, and set a corresponding threshold for rejecting new traceroute requests. It appears that the limit can be fairly conservative - perhaps 30 or 40, as the throughput is quite insensitive to the concurrency.
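A minimal sketch of such an admission controller, using a buffered channel as a counting semaphore. The limit of 40 and the function names are illustrative, not the project's actual API:

package main

import "fmt"

const maxConcurrentTraces = 40 // illustrative limit; tune from measured throughput

var traceSlots = make(chan struct{}, maxConcurrentTraces)

// maybeTrace runs trace(ip) only if a slot is free; otherwise it rejects the
// request immediately instead of queueing it and making everything slower.
func maybeTrace(ip string, trace func(string) error) error {
	select {
	case traceSlots <- struct{}{}:
		defer func() { <-traceSlots }()
		return trace(ip)
	default:
		return fmt.Errorf("too many concurrent traces, rejecting trace to %s", ip)
	}
}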

Invalid test detected

pt.go:168: Invalid test gs://archive-mlab-staging/ndt/traceroute/2019/12/15/20191215T080346.438322Z-traceroute-mlab4-nuq07-ndt.tgz 2019/12/15/20191215T063922Z_ndt-qv4lg_1575424974_0000000000002F72.jsonl

===========================

Test seems truncated:

{"UUID":"ndt-78s2l_1575426876_0000000000001F25","TracerouteCallerVersion":"966aa47","CachedResult":false,"CachedUUID":""}
{"type":"cycle-start", "list_name":"/tmp/scamperctrl:1566", "id":1, "hostname":"ndt-78s2l", "start_time":1576352716}

Metrics needed

How long does it take to run a traceroute, from end to end?

How many traceroutes have been performed?

How many times has the scamper client exited with a non-zero error code?

We should have metrics for all of these.
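
A hedged sketch of how these metrics might be defined with the Prometheus Go client. The metric and label names are illustrative, except trace_time_seconds, which appears in the histogram output quoted in an earlier issue:

package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// TraceTimeSeconds records end-to-end traceroute duration by outcome.
	TraceTimeSeconds = promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "trace_time_seconds",
			Help:    "End-to-end duration of a traceroute.",
			Buckets: prometheus.ExponentialBuckets(10, 2.154, 7), // 10s .. ~1000s
		},
		[]string{"outcome"},
	)

	// TracesPerformed counts how many traceroutes have been performed.
	TracesPerformed = promauto.NewCounter(prometheus.CounterOpts{
		Name: "traces_performed_total",
		Help: "Number of traceroutes performed.",
	})

	// ScamperExitErrors counts non-zero scamper client exits.
	ScamperExitErrors = promauto.NewCounter(prometheus.CounterOpts{
		Name: "scamper_exit_errors_total",
		Help: "Number of times the scamper client exited with a non-zero code.",
	})
)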

Bring back Timeout for trace

There used to be a 60-second timeout in the Python version of traceroute-caller for the legacy Paris Traceroute binary.

We might bring it back with a 2 to 5 minute timeout in this Go implementation.
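
A minimal sketch of such a timeout using a context around the scamper invocation. The 2-minute value and the exact command line are illustrative; the flags mirror the scamper invocation quoted in an earlier issue on this page:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// traceWithTimeout runs one scamper trace and kills it if it exceeds timeout,
// e.g. traceWithTimeout(context.Background(), ip, 2*time.Minute).
func traceWithTimeout(ctx context.Context, ip string, timeout time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "scamper",
		"-I", fmt.Sprintf("tracelb -P icmp-echo -q 3 -W 25 -O ptr %s", ip),
		"-o-", "-O", "json")
	return cmd.Output()
}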
