
corpkit's Introduction

corpkit: sophisticated corpus linguistics

Gitter chat: https://gitter.im/interrogator/corpkit

NOTICE: corpkit is now deprecated and unmaintained. It is superseded by buzz, which is better in every way.

corpkit is a module for doing more sophisticated corpus linguistics. It links state-of-the-art natural language processing technologies to functional linguistic research aims, allowing you to easily build, search and visualise grammatically annotated corpora in novel ways.

The basic workflow involves making corpora, parsing them, and searching them. The results of searches are CONLL-U formatted files, represented as pandas objects, which can be edited, visualised or exported in many ways. The tool has three interfaces, each with its own documentation:

  1. A Python API
  2. A natural language interpreter
  3. A graphical interface

A quick demo for each interface is provided in this document.

Feature summary

From all three interfaces, you can do a lot of neat things. In general:

Parsing

Corpora are stored as Corpus objects, with methods for viewing, parsing, interrogating and concordancing (see the sketch after this list).

  • A very simple wrapper around the full Stanford CoreNLP pipeline
  • Automatically add annotations, speaker names and metadata to parser output
  • Detect speaker names and make these into metadata features
  • Multiprocessing
  • Store dependency parsed texts, parse trees and metadata in CONLL-U format
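
A minimal sketch of this workflow via the API (the folder name is illustrative, CoreNLP must be installed, and keyword names mirror the interpreter demo further below):

>>> from corpkit import Corpus

### build a Corpus object from a folder of (subfolders of) text files
>>> unparsed = Corpus('data/chapters')

### parse with the CoreNLP pipeline; speaker segmentation and metadata
### extraction are optional extras
>>> parsed = unparsed.parse(speaker_segmentation=True, metadata=True,
...                         multiprocess=2)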

Interrogating corpora

Interrogating a corpus produces an Interrogation object, with results as pandas DataFrame attributes (see the sketch after this list).

  • Search corpora using regular expressions, wordlists, CQL, Tregex, or a rich, purpose-built dependency searching syntax
  • Interrogate any dataset in CONLL-U format (e.g. the Universal Dependencies Treebanks)
  • Collocation, n-gramming
  • Restrict searches by metadata feature
  • Use metadata as symbolic subcorpora
  • Choose what search results return: show any combination of words, lemmata, POS, indices, distance from root node, syntax tree, etc.
  • Generate concordances alongside interrogations
  • Work with coreference annotation
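
A hedged sketch of a simple interrogation (the imported names follow the API demo further below; 'nsubj' is an illustrative dependency function):

>>> from corpkit import *
>>> from corpkit.dictionaries import *

### match nominal subjects, show lemma forms, and keep a concordance
>>> res = Corpus('chapters-parsed').interrogate(search={F: 'nsubj'},
...                                             show=[L],
...                                             conc=True)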

Editing results

Interrogation objects have edit, visualise and save methods, to name just a few. Editing creates a new Interrogation object (see the sketch after this list).

  • Quickly delete, sort, merge entries and subcorpora
  • Make relative frequencies (e.g. calculate results as percentage of all words/clauses/nouns ...)
  • Use linear regression sorting to find increasing, decreasing, turbulent or static trajectories
  • Calculate p values, etc.
  • Keywording
  • Simple multiprocessing available for parsing and interrogating
  • Results are Pandas objects, so you can do fast, good statistical work on them
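
For instance, continuing from the interrogation sketch above (SELF is imported as in the API demo below; the sort_by value follows the docs and is hedged):

### each entry as a percentage of its row total
>>> edited = res.edit('%', SELF)

### sort entries by falling trajectory across subcorpora (requires scipy)
>>> edited = edited.edit(sort_by='decrease')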

Visualising results

The visualise method of Interrogation objects uses matplotlib (and seaborn, if installed) to produce high-quality figures (see the sketch after this list).

  • Many chart types
  • Easily customise titles, axis labels, colours, sizes, number of results to show, etc.
  • Make subplots
  • Save figures in a number of formats
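
A sketch, continuing from the edited results above (keyword names mirror the feature list and may vary by version):

### line chart of the top results, with custom labels
>>> plt = edited.visualise('Falling subjects', kind='line',
...                        x_label='Chapter', y_label='Frequency (%)',
...                        num_to_plot=7)
>>> plt.show()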

Concordancing

When interrogating a corpus, concordances are produced alongside the results, allowing you to check that your query matches what you intend (see the sketch after this list).

  • Colour, sort, delete lines using regular expressions
  • Recalculate results from edited concordance lines (great for removing false positives)
  • Format lines for publication with TeX
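
A hedged sketch (the concordance attribute corresponds to the conc=True flag in the API demo; exact attribute and method names may differ by version):

### interrogations run with conc=True also store their concordance
### lines (left context, match, right context)
>>> lines = res.concordance
>>> lines.head(10)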

Other stuff

  • Language modelling
  • Save and load results, images, concordances
  • Export data to other tools
  • Switch between API, GUI and interpreter whenever you like

Installation

Via pip:

pip install corpkit

Via Git:

git clone https://github.com/interrogator/corpkit
cd corpkit
python setup.py install

Via Anaconda:

conda install -c interro_gator corpkit

Creating a project

Once you've got everything installed, you'll want to create a project---this is just a folder hierarchy that stores your corpora, saved results, figures and so on. You can do this in a number of ways:

Shell

new_project junglebook
cp -R chapters junglebook/data

Interpreter

> new project named junglebook
> add ../chapters

Python

>>> import shutil
>>> from corpkit import new_project
>>> new_project('junglebook')
>>> shutil.copytree('../chapters', 'junglebook/data')

You can create projects and add data via the file menu of the graphical interface as well.

Ways to use corpkit

As explained earlier, there are three ways to use the tool. Each has unique strengths and weaknesses. To summarise them, the Python API is the most powerful, but has the steepest learning curve. The GUI is the least powerful, but easy to learn (though it is still arguably the most powerful linguistics GUI available). The interpreter strikes a happy middle ground, especially for those who are not familiar with Python.

Interpreter

The first way to use corpkit is by entering its natural language interpreter. To activate it, use the corpkit command:

$ cd junglebook
$ corpkit

You'll get a lovely new prompt into which you can type commands:

corpkit@junglebook:no-corpus> 

Generally speaking, it has the comforts of home, such as history, search, backslash line breaking, variable creation and ls and cd commands. As in IPython, any command beginning with an exclamation mark will be executed by the shell. You can also write scripts and execute them with corpkit script.ck, or ./script.ck if you have a shebang.

Making projects and parsing corpora

# make new project
> new project named junglebook
# add folder of (subfolders of) text files
> add '../chapters'
# specify corpus to work on
> set chapters as corpus
# parse the corpus
> parse corpus with speaker_segmentation and metadata and multiprocess as 2

Searching and concordancing

# search and exclude
> search corpus for governor-function matching 'root' \
...    excluding governor-lemma matching 'be'

# show pos, lemma, index, (e.g. 'NNS/thing/3')
> search corpus for pos matching '^N' showing pos and lemma and index

# further arguments and dynamic structuring
> search corpus for word matching any \
...    with subcorpora as pagenum and preserve_case

# show concordance lines
> show concordance with window as 50 and columns as LMR

# colouring concordances
> mark m matching 'have' blue

# recalculate results
> calculate result from concordance

Variables, editing results

# variable naming
> call result root_deps
# skip some numerical subcorpora
> edit root_deps by skipping subcorpora matching [1,2,3,4,5]
# make relative frequencies
> calculate edited as percentage of self
# use scipy to calculate trends and sort by them
> sort edited by decrease

Visualise edited results

> plot edited as line chart \
...    with x_label as 'Subcorpus' and \
...    y_label as 'Frequency' and \
...    colours as 'summer'

Switching interfaces

# open graphical interface
> gui
# enter ipython with current namespace
> ipython
# use a new/existing jupyter notebook
> jupyter notebook findings.ipynb

API

Straight Python is the most powerful way to use corpkit, because you can manipulate results with Pandas syntax, construct loops, make recursive queries, and so on. Here are some simple examples of the API syntax:

Instantiate and search a parsed corpus

### import everything
>>> from corpkit import *
>>> from corpkit.dictionaries import *

### instantiate corpus
>>> corp = Corpus('chapters-parsed')

### search for anything participant with a governor that
### is a process, excluding closed class words, and 
### showing lemma forms. also, generate a concordance.
>>> sch = {GF: roles.process, F: roles.actor}
>>> part = corp.interrogate(search=sch,
...                         exclude={W: wordlists.closedclass},
...                         show=[L],
...                         conc=True)

You get an Interrogation object back, with a results attribute that is a Pandas DataFrame:

          daisy  gatsby  tom  wilson  eye  man  jordan  voice  michaelis  \
chapter1     13       2    6       0    3    3       0      2          0   
chapter2      1       0   12      10    1    1       0      0          0   
chapter3      0       3    0       0    3    8       6      1          0   
chapter4      6       9    2       0    1    3       1      1          0   
chapter5      8      14    0       0    3    3       0      2          0   
chapter6      7      14    9       0    1    2       0      3          0   
chapter7     26      20   35      10   12    3      16      9          5   
chapter8      5       4    1      10    2    2       0      1         10   
chapter9      1       1    1       0    3    3       1      1          0   

Edit and visualise the result

Below, we make normalised frequencies and plot:

### make relative frequencies (sorting by trend would also require scipy)
>>> part = part.edit('%', SELF)

### make line subplots for the first nine results
>>> plt = part.visualise('Processes, increasing', subplots=True, layout=(3,3))
>>> plt.show()

There are also some more detailed API examples over here. That document is fairly thorough, but now deprecated: the official docs live at ReadTheDocs.

Example figures

(Images not reproduced here; the original captions follow.)

  • Shifting register of scientific English
  • Participants and processes in online forum talk
  • Riskers and mood role of risk words in print news journalism

Graphical interface

Screenshots coming soon! For now, just head here.

Contact

Twitter: @interro_gator

Cite

McDonald, D. (2015). corpkit: a toolkit for corpus linguistics. Retrieved from https://www.github.com/interrogator/corpkit. DOI: http://doi.org/10.5281/zenodo.28361

corpkit's People

Contributors

gitter-badger, interrogator, jamesdavidson


corpkit's Issues

Docstrings need to be moved to classes

Docstrings are provided for functions like interrogator(), editor() and plotter(), but these are no longer the preferred way to access their functionality. Instead, the method approach of corpus.interrogate() or interrogation.edit() is now used. Is there a way, I wonder, to auto-migrate the docstrings? Or should they be swapped out from the functions and put into the methods?

AttributeError: module 'pip' has no attribute 'main'

I cannot run the interface; could anyone help me?

$ python -m corpkit.gui
Traceback (most recent call last):
  File "/home/rodriguesfas/anaconda3/lib/python3.6/site-packages/corpkit/gui.py", line 7171, in install
    importlib.import_module(name)
  File "/home/rodriguesfas/anaconda3/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'tkintertable'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/rodriguesfas/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/rodriguesfas/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/rodriguesfas/anaconda3/lib/python3.6/site-packages/corpkit/gui.py", line 7180, in <module>
    install(*tkintertablecode)
  File "/home/rodriguesfas/anaconda3/lib/python3.6/site-packages/corpkit/gui.py", line 7174, in install
    pip.main(['install', loc])
AttributeError: module 'pip' has no attribute 'main'
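
For reference, pip.main() was removed in pip 10, which is what triggers the final AttributeError above. A hedged sketch of a workaround (this install() helper is hypothetical, not corpkit's actual code): install missing packages by invoking pip through the current interpreter.

import importlib
import subprocess
import sys

def install(name, loc):
    # Try the import first; on failure, call pip as a subprocess instead
    # of pip.main(), which no longer exists in pip >= 10.
    try:
        importlib.import_module(name)
    except ImportError:
        subprocess.check_call([sys.executable, '-m', 'pip', 'install', loc])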

global name 'lexemes' is not defined

Trying to use corpkit with a 64 bit Python 2.7.11 Anaconda installation.

On 'from dictionaries.process_types import processes' I get:

NameError: global name 'lexemes' is not defined

plotter() legends getting partially cut off

When using plotter() without IPython, the legend might not completely show up when a figure is drawn.

A workaround is to use the save argument, which will save a version with the full legend. Or, use IPython, and the %matplotlib inline magic.

ValueError: Invalid control character at: line 1120 column 21 (char 28474)

The following error occurs:

17:42:39: Parsing finished. Moving parsed files into place ...
Traceback (most recent call last):
  File "/home/d/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 2168, in interpreter
    out = run_command(tokens)  
  File "/home/d/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 1113, in run_command
    out = command(tokens[1:])
  File "/home/d/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 1437, in parse_corpus
    parsed = to_parse.parse(**kwargs)  
  File "/home/d/anaconda2/lib/python2.7/site-packages/corpkit/corpus.py", line 930, in parse
    **kwargs
  File "/home/d/anaconda2/lib/python2.7/site-packages/corpkit/make.py", line 356, in make_corpus
    coref=coref, metadata=metadata)
  File "/home/d/anaconda2/lib/python2.7/site-packages/corpkit/conll.py", line 1113, in convert_json_to_conll
    data = json.load(fo)
  File "/home/d/anaconda2/lib/python2.7/json/__init__.py", line 291, in load
    **kw)
  File "/home/d/anaconda2/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/home/d/anaconda2/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/d/anaconda2/lib/python2.7/json/decoder.py", line 380, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Invalid control character at: line 1120 column 21 (char 28474)
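
The error means the CoreNLP JSON output contains a raw control character inside a string, which Python's json module rejects by default. A hedged sketch of a possible fix in convert_json_to_conll (assuming the stray characters sit inside string values; the filename is hypothetical):

import json

# strict=False lets the decoder accept raw control characters
# (\x00-\x1f) inside JSON strings instead of raising ValueError.
with open('parsed.json') as fo:  # hypothetical filename
    data = json.load(fo, strict=False)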

Upgrade to Python 3

I admit it, it's time to upgrade to Python 3. Probably would only take a couple of hours...

Documentation of internals

Looking at this as a programmer, the first thing I want to understand is the data structures. What are the inputs, outputs and intermediaries? Usually all that's necessary for this kind of documentation is a sketch of how the various entities are mapped to basic constructs like sets, lists, maps, tuples, booleans, numbers, strings, symbols or nested variants of the same (i.e. trees and the like), and perhaps a note about how they get encoded in files (i.e. CSV, Penn Treebank or Python pickles).

For example, I jotted down some notes (https://github.com/jamesdavidson/corpkit/blob/hacking/DATA.md) whilst going through the code. If you'd like, I can help you write this kind of documentation.

Main methods/functions should use `Corpus`/`Interrogation` classes

The API recently evolved into something class-heavy, with Corpus and Interrogation objects. The actual interrogating, concordancing, editing, etc., does not exploit any information in these classes, though it would simplify the code. That should get done, to help make things more easily extensible.

Plain text search error

When trying to run a plain text search I get an error in the console:

18:39:45: Interrogating corpus ...
Traceback (most recent call last):
  File "corpkit/corpkit-gui.py", line 520, in runner
    command()
  File "corpkit/corpkit-gui.py", line 1270, in do_interrogation
    interrodata = interrogator(corpus_fullpath.get(), selected_option, **interrogator_args)
  File "/Users/ourzhumt/projects/corpkit/corpkit/interrogator.py", line 1627, in interrogator
    result_from_file = plaintext_simple_search(query, data)
  File "/Users/ourzhumt/projects/corpkit/corpkit/interrogator.py", line 740, in plaintext_simple_search
    for m in range(matches):
TypeError: range() integer end argument expected, got list.

CoreNLP error

I've downloaded the standalone OSX corpkit app but I'm getting error messages that have to do with CoreNLP. After creating a new project, opening it, and adding a corpus, I get the following message when I try to parse the corpus: "CoreNLP parser not found. Download/install it?". When I click "Yes", nothing happens and no download is initiated. I've tried downloading CoreNLP separately and re-setting the CoreNLP path so as to make sure there are no mismatches, but when I try to change the path, I get a message that CoreNLP was not found in the selected folder, which I know for sure contains CoreNLP. Any thoughts on why I'm having these issues with CoreNLP and corpkit?

Multilingual support

corpkit is currently oriented toward English, but nothing stops at least some features from being extended to other languages. I should be able to get around to the basics (encodings, as well as multilingual tokenisation) soon.

Parsing errors - "EOFError" and "UnicodeDecodeError"

Hello,

I'm currently using corpkit as a research tool for my master's thesis in library and information science, as well as for experimenting in my spare time. It works well, but occasionally I get a few error messages when parsing a corpus which I don't understand. Could you perhaps explain them to me?

They're either

"EOFError: EOF when reading a line"

or

"UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)"

Most recently, these messages occurred when trying to parse a plain text file of James Joyce's Ulysses retrieved from Project Gutenberg (https://www.gutenberg.org/ebooks/4300). As far as I understand, the file is encoded as UTF-8 and should work fine.

Thanks in advance.
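
As a side note, byte 0xef at position 0 is the first byte of a UTF-8 byte order mark, which cannot be decoded by Python 2's default ASCII codec. A hedged sketch for pre-checking such a file outside corpkit (the filename is illustrative):

import io

# 'utf-8-sig' decodes UTF-8 and silently strips a leading BOM if present.
with io.open('ulysses.txt', encoding='utf-8-sig') as f:
    text = f.read()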

Saving settings on exit

When the program exits, the last used corpus path should be stored and loaded on next start.

Possible improvements:

  1. If two consecutive launches result in a crash, this value can be dropped.
  2. The last few corpora used can be stored as "recent".

UnicodeDecodeError

Hello,

When I try to parse a corpus, I get the following error message:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xcc in position 33: ordinal not in range(128)

I attach the log file.

Thank you for your help,

Stefania

log-02.txt

Resolve competing documentation

There's now a really large GitHub readme and a basic user guide on ReadTheDocs. Should probably soon move bits and pieces from GitHub into Example blocks in RTD, and keep GitHub for the basics.

Not getting prompted to download coreNLP (before deleting some large files)/Now can't parse after downloading coreNLP

I'm just getting started with corpkit. I loaded my first corpus called "Chapters" and clicked on "Parse: Chapters." Nothing happens.

I'm attaching the log file.
log-00.txt

Also, perhaps this is related to my problem:
Also, perhaps this is related to my problem: the files listed in the GUI under "Files in subcorpus: <name of the selected subcorpus>" are not the ones I put there, nor the ones in the project's directory. That is the case for one subcorpus. There are two; for the second one, no files are listed at all, even though there are several in the project's "data" folder.

Thank you for the help.

GUI resolution

It seems that the GUI is poorly sized on some screens. Tkinter doesn't do a lot to make this automatically fixable, but it's still something that can be addressed. Watch this space!

Code coverage

corpkit is now for Python 2/3, for terminal, notebook, GUI and environment, using or not using multiprocessing, and working with XML, .conll, plaintext and lists of tokens. Comprehensive tests are needed to check that the tools function correctly in all environments---current tests cover only core functionality.

IndentationError when byte-compiling model_interro.py

byte-compiling /home/d/anaconda2/lib/python2.7/site-packages/corpkit/model_interro.py to model_interro.pyc
Sorry: IndentationError: unexpected indent (model_interro.py, line 33)

The offending region of model_interro.py, as reported (note the stray indentation on the first line):

          name = subc.name
        dat = Counter(subc.to_dict())
        train(dat, name=name)
    print('Model created.')
    return model

        scores[subc.name] = score_text_with_model(trained)
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)

Documentation for GUI

There is now a working GUI, which also helps with downloading and running Stanford CoreNLP, in corpkit/interface.py. It lacks the kind of documentation that would make it useful to a possible end user. It has a lot of features, and they'll need to be explained!

Parse tree visualisation location

When viewing parse trees in the Build tab of the GUI, the trees don't always show up in the centre of their frame, and you have to drag them into view. This is not ideal. Currently not sure how to fix this.

Can't pause parallel-processing interrogations

Right now, an interrogation using multiprocessing can't be paused, because I don't know how to communicate simultaneously with each thread. I don't think I'll be fixing this one anytime soon.

User guide for ReadTheDocs

The GitHub readme is too long, and some of the documentation on ReadTheDocs is sparse. Some things should migrate over there (dedicated pages for Building, Interrogating, Concordancing, Editing and Visualising?) but this requires rewriting in RST format, which is a pain.

.app, .exe for GUI

I want to put together a nice app version of the GUI, so that people don't need to install and run via command line, etc. One problem is what to bundle: a lot of external modules are required that would make the thing HUGE if all bundled together! If anybody sees this and has experience with py2app or pyinstaller, let me know!

Legend issues when making area plots

Currently, plotter(x, y, kind='area') is not respecting the legend_pos keyword arg, and is duplicating the entry labels. Shouldn't be too tough to fix.
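
A generic matplotlib recipe for both symptoms (a sketch; plotter's internals may differ) is to deduplicate handle/label pairs and set the legend location explicitly:

import matplotlib.pyplot as plt

ax = plt.gca()
# dict() keeps one handle per label, dropping duplicated area-plot entries.
handles, labels = ax.get_legend_handles_labels()
by_label = dict(zip(labels, handles))
ax.legend(by_label.values(), by_label.keys(), loc='upper right')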

UnboundLocalError: local variable 'outpath' referenced before assignment

Now I get the following error on the command 'parse corpus':


ParserAnnotator: 173.3 sec.
NERCombinerAnnotator: 14.6 sec.
DeterministicCorefAnnotator: 8.2 sec.  
TOTAL: 197.9 sec. for 45060 tokens at 227.7 tokens/sec.
Pipeline setup: 6.7 sec.
Total time for StanfordCoreNLP pipeline: 211.0 sec.
14:34:22: Parsing finished. Moving parsed files into place ...
Traceback (most recent call last):
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 2168, in interpreter
    out = run_command(tokens)
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 1113, in run_command
    out = command(tokens[1:])
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 1437, in parse_corpus
    parsed = to_parse.parse(**kwargs) 
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/corpus.py", line 930, in parse
    **kwargs
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/make.py", line 388, in make_corpus
    rename_all_files(outpath)
UnboundLocalError: local variable 'outpath' referenced before assignment

TypeError: 'bool' object is not iterable on Linux

I get the following error on an Ubuntu 14 machine:


corpkit@cowtest:no-corpus> add ../  
concordance  search       testing        
corpkit@cowtest:no-corpus> add ../../  
concordance  search       testing        
corpkit@cowtest:no-corpus> add ../fulltxt/anonymized  
../fulltxt/anonymized added to /corpus/cowtest/data/anonymized.  
corpkit@cowtest:no-corpus> set anonymized as corpus  
Corpus: /corpus/cowtest/data/anonymized  
corpkit@cowtest:anonymized> parse corpus  
Making list of files ...   
Traceback (most recent call last):  
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 2168, in interpreter  
    out = run_command(tokens)  
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 1113, in run_command  
    out = command(tokens[1:])  
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/env.py", line 1437, in parse_corpus  
    parsed = to_parse.parse(**kwargs)  
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/corpus.py", line 930, in parse  
    **kwargs  
  File "/home/domas/anaconda2/lib/python2.7/site-packages/corpkit/make.py", line 228, in make_corpus  
    out_ext=kwargs.get('output_format'))  
TypeError: 'bool' object is not iterable  
corpkit@cowtest:anonymized> 

NameError: global name 'Corpus' is not defined

After installing the latest corpkit (2.1.1), I tried to parse my corpus (which worked in the previous version, at least for approx. 40% of the texts) and received the message NameError: global name 'Corpus' is not defined. What did I do wrong?
Below you will find the log - hoping that it provides a clue...
Thx for your help!

log-00.txt

Sustainable storage of larger files

At present, there are a few very large files weighing the repository down. Most notably, the corpkit .app, which actually contains corpkit, as well as matplotlib, etc. This, and the data files, need to go into Git Large File Storage, or similar, once that gets enabled for this repo.

Refactor edit() method

Since learning to use Pandas better, it occurs to me that the result editing backend could be dramatically simplified and sped up using vectorised methods.
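
For instance, the relative-frequency step could become a single vectorised call rather than a Python-level loop (a sketch with made-up data, not the current edit() code):

import pandas as pd

# results matrix: subcorpora as rows, entries as columns
df = pd.DataFrame({'daisy': [13, 1, 26], 'gatsby': [2, 0, 20]},
                  index=['chapter1', 'chapter2', 'chapter7'])

# every cell as a percentage of its row total, with no explicit loop
rel = df.div(df.sum(axis=1), axis=0) * 100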

Multiple popups possible

In many cases, it's currently possible to open a popup window (e.g. Wordlists or Coding scheme) multiple times at once. This shouldn't be possible.

Mixed Python 2 and 3 expressions

If using Python 2, importing "tkinter" fails, because the module is named "Tkinter" in Python 2 ("tkinter" is correct for Python 3).

If using Python 3, module "string" has no attribute "lower": the string.lower() function was removed in Python 3 (use the str.lower() method instead; the lowercase alphabet constant is string.ascii_lowercase).
