
Google Summer of Code 2018 Project - spaCy now speaks Greek

Welcome to the home repository of Greek language integration for spaCy.

This project is developed for Google Summer of Code 2018, under the auspices of GFOSS - Open Technologies Alliance.

Readme Contents

  1. Project links
  2. Problem Statement
  3. Results
  4. Deliverables
    • Tokenizer
    • Lemmatizer
    • Sentence splitter
    • Stop words
    • Norm exceptions
    • Named Entities dataset
    • Lexical attributes
    • Part of Speech Tagger
    • NER Tagger
    • Noun chunks
    • Sentiment Analyzer
    • Topic classifier
  5. Future work
  6. People

Project links

For this project, there is a timeline, updated on a daily basis, that keeps track of all the progress made during Google Summer of Code 2018. You can view the timeline here.

There is also a report page for the final evaluation of Google Summer of Code 2018. You can view the report page here.

Most importantly, there is the project Wiki, which holds information about every aspect of adding the Greek language to spaCy. You can view the project Wiki here.

Also, there is the NLPBuddy repository. NLPBuddy is a side project of Google Summer of Code, built on top of spaCy, which supports high-quality NLP features such as syntax analysis, emotion analysis, and topic classification, and which of course makes use of the Greek language support. You can find the repository here and the Wiki page of this demo here.

Problem statement and project goals

We live in the era of data. Every minute, 3.8 billion internet users produce content: more than 120 million emails, 500,000 Facebook comments, and 3 million Google searches. If we want to process that amount of data efficiently, we need to process natural language. Open-source projects such as spaCy, TextBlob, and NLTK contribute significantly in that direction, and thus they need to be reinforced.

This project is about improving the quality of Natural Language Processing for the Greek language.

The project goals can be categorized as follows:

  1. Addition of Greek language to spaCy. Status: Complete
  2. Production of models for Part-Of-Speech (POS) tagging, Dependency Analysis (DEP) and Named Entities Recognition (NER), with and without word vectors. Status: Complete
  3. An open-source text analysis tool (demo) in which everyone can perform common NLP tasks in eight languages. Status: Complete.
  4. Bonus goal: use of the new Greek language support for sentiment analysis and other challenging NLP tasks.

Results

Addition of Greek language to spaCy

The Greek language has been successfully integrated into spaCy, which was the most important goal of the project.

There were two pull requests for this purpose: the first was the initial addition of the language, and the second contained important optimizations that made the Greek language support probably the most feature-complete after English.

Addition of the language: You can see the first pull request here.

Optimizations to the Greek language class: You can see the second pull request here.

Each part of the process of integrating the Greek language into spaCy is discussed in detail in the project's Wiki.

Greek language models

Two models for the Greek language have been produced. There is an ongoing process of uploading them to a spaCy release.

After that, you will be able to install them with the following commands:

python3 -m spacy download el_core_web_sm
python3 -m spacy download el_core_web_lg
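Once the models are published, a script can try to load one and, until then, fall back to the bare Greek language data that already ships inside spaCy. This is a minimal sketch assuming spaCy's standard loading API; the model name comes from the commands above:

```python
import spacy

try:
    # The small Greek model announced above (upload still in progress).
    nlp = spacy.load("el_core_web_sm")
except OSError:
    # Fall back to a blank Greek pipeline: the tokenizer, stop words,
    # and lexical attributes work even without trained components.
    nlp = spacy.blank("el")

doc = nlp("Η δημοκρατία είναι το πιο ανθρώπινο πολίτευμα.")
print([token.text for token in doc])
```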

The Greek language models support most of the capabilities that you will find in the deliverables section. Sentence splitting, tokenization, part-of-speech tagging, syntax analysis using DEP tags, named entity recognition, lexical attribute extraction, norm exceptions, and stop-word lists are all included in the Greek language models. The big Greek model (el_core_web_lg) also includes word vectors, so it supports features such as similarity detection between texts. You can find out more about the production, usage, and maintenance of the models in the models page of the wiki.

Some visualizations from the models' usage:

NLPBuddy demo

An open-source text analysis tool has been developed as a demonstration of the project results.

The demo leverages spaCy's capabilities to extract as much information as possible from a raw text.

Experiment yourself with the demo: https://nlpbuddy.io

Briefly, in this demo you can perform the following tasks with your text:

  1. Language identification (performed using langid library).
  2. Text tokenization.
  3. Sentence splitting.
  4. Lemmatization.
  5. Part of Speech tags identification.
  6. Named Entity Recognition (Location, Person, Organization).
  7. Text summarization (uses Gensim's implementation of the TextRank algorithm).
  8. Keywords extraction.
  9. For the Greek language:
    • Text classification among the following categories: Sports, Science, World News, Greek News, Environment, Politics, Art, Health. The Greek classifier is built with fastText and is trained on 20,000 articles labeled with these categories. Accuracy reaches 90%.
    • Text subjectivity analysis.
    • Emotion analysis. It detects the main text emotion among the following emotions: Anger, Disgust, Fear, Happiness, Sadness, Surprise.
  10. Lexical attributes. Find numerals, URLs, and emails.
  11. Noun chunks. Get the noun phrases of the text such as "the red bicycle".

Currently, it supports the features mentioned above for text in any of the following languages: Greek, English, German, Spanish, Portuguese, French, Italian, and Dutch.

Text can either be provided directly or imported from a URL. Libraries used: python-readability, BeautifulSoup4.

Note: All the functionality that the demo supports (and some more) is implemented as modules, so anybody can use them independently. Those modules are extensively discussed in the deliverables section. The central idea is that this Google Summer of Code project should produce results that will be used later on by people all around the world. For that reason, together with my mentor, Markos Gogoulos, we have implemented an API for the demo so that anybody can access the results it provides (see more here).

Improvements in spaCy

A side goal of the project is to improve spaCy itself.

There is an open dialogue with the creators of spaCy, whom we would like to thank for their continuous support and enthusiasm.

Documentation improvements

A pull request for documentation improvements was successfully merged.

The pull request fixed a small error in the spaCy documentation, in the pseudocode provided for overriding the spaCy tokenizer.

You can see the pull request here.

Sharing awareness

I have been invited to write an article for the Explosion AI blog about the integration of the Greek language into spaCy, due to the innovative approaches followed during Google Summer of Code 2018. The article is currently being written and evaluated, and it may be published after the end of Google Summer of Code. A link to the post will be added here when it's ready.

Innovative approaches

In the process of integrating the Greek language into spaCy, some new approaches were followed. Hopefully, these approaches will inspire other languages too.

  • Greek is the second language that follows a rule-based lemmatization procedure.
  • There were no available data for training a NER classifier, so the data had to be created. A fast procedure for annotating data using the Prodigy annotation tool is proposed for future reference.

Deliverables

Deliverables are independent functionality submodules and/or useful resources that were produced either during the process of integrating the Greek language into spaCy or while experimenting with spaCy's functionality and the demo implementation.

A list of the deliverables follows, with a short description of each. You can find the functionality submodules in the res/modules folder of the project repo (here), serving as usage examples.

If you want to learn more, there is an individual page for each of them in the project wiki or the demo wiki.

Deliverables list

  1. Tokenizer

    You can use this submodule, together with one of the produced Greek models, to split your sentence(s) into tokens, independently of the other spaCy modules.

    Sample input:

    Θέλω να μου σπάσεις αυτήν την πρόταση σε κομμάτια

    Sample output:

    [Θέλω, να, μου, σπάσεις, αυτήν, την, πρόταση, σε, κομμάτια]

    Submodule link.
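The sample above can be reproduced in a few lines. The sketch below uses a blank Greek pipeline instead of one of the produced models (an assumption that keeps it runnable without any download), since tokenization needs no trained weights:

```python
import spacy

# A blank Greek pipeline carries the language's tokenization rules
# even without a trained model.
nlp = spacy.blank("el")

doc = nlp("Θέλω να μου σπάσεις αυτήν την πρόταση σε κομμάτια")
tokens = [token.text for token in doc]
print(tokens)
```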

  2. Lemmatizer

    This submodule performs sentence lemmatization.

    Sample input:

    Τα σύμβολα του αγώνα.
    

    Sample output:

    Original token: Τα , Lemma: τα
    Original token: σύμβολα , Lemma: σύμβολο
    Original token: του , Lemma: του
    Original token: αγώνα , Lemma: αγώνας
    Original token: . , Lemma: .   
    

    The Greek lemmatizer is special because it follows a rule-based approach. You can find extensive documentation about the lemmatizer in the corresponding wiki page. Submodule link.
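To illustrate the rule-based idea, here is a simplified, self-contained sketch of the algorithm: check an exceptions table, then try suffix-rewrite rules in order and keep the first candidate found in an index of known word forms. The rules, index, and exceptions below are toy examples, not the real Greek tables:

```python
# Toy suffix-rewrite rules: (old suffix, new suffix), tried in order.
NOUN_RULES = [("α", "ο"), ("α", "ας")]
# Toy index of known lemma forms used to validate rule output.
INDEX = {"σύμβολο", "αγώνας", "τα", "του"}
# Toy table of irregular forms looked up directly.
EXCEPTIONS = {"τα": "τα", "του": "του"}

def lemmatize(word, rules=NOUN_RULES, index=INDEX, exceptions=EXCEPTIONS):
    if word in exceptions:
        return exceptions[word]
    for old, new in rules:
        if word.endswith(old):
            candidate = word[: len(word) - len(old)] + new
            if candidate in index:
                return candidate
    return word  # fall back to the surface form

print(lemmatize("σύμβολα"))  # σύμβολο
print(lemmatize("αγώνα"))    # αγώνας
```

Adding support for a new inflection then amounts to appending one rule tuple and the corresponding lemma forms.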

  3. Sentence Splitter

    You can use this submodule, together with one of the produced Greek models, to split a Greek text into sentences independently of the rest of the spaCy modules.

    Sample input:

    Αυτή είναι μια πρόταση. Αυτή είναι μια δεύτερη πρόταση. Και αυτή μια τρίτη πρόταση. 
    

    Sample output:

    [ Αυτή είναι μια πρόταση., Αυτή είναι μια δεύτερη πρόταση., Και αυτή μια τρίτη πρόταση.] 
    

    Submodule link.
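A comparable split can be obtained without the produced models by adding spaCy's rule-based sentencizer to a blank Greek pipeline (a sketch assuming the spaCy v3 add_pipe API; the submodule itself relies on a trained model):

```python
import spacy

# The sentencizer splits on sentence-final punctuation; the trained
# models use the dependency parser for sentence boundaries instead.
nlp = spacy.blank("el")
nlp.add_pipe("sentencizer")  # spaCy v3 API

doc = nlp("Αυτή είναι μια πρόταση. Αυτή είναι μια δεύτερη πρόταση. Και αυτή μια τρίτη πρόταση.")
sentences = [sent.text for sent in doc.sents]
print(sentences)
```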

  4. Stop-words list.

In computing, stop words are words which are filtered out before or after processing of natural language data. Though "stop words" usually refers to the most common words in a language, there is no single universal list of stop words used by all natural language processing tools, and indeed not all tools even use such a list. Some tools specifically avoid removing these stop words to support phrase search.

The stop-words wiki page is available here. The final list with the stop words of the Greek language can be found here.
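The list is directly importable from spaCy's Greek language data, and each token exposes it through the is_stop flag. A small sketch, assuming only that spaCy is installed:

```python
import spacy
from spacy.lang.el.stop_words import STOP_WORDS

# is_stop is set from the Greek stop-word list even in a blank pipeline.
nlp = spacy.blank("el")
doc = nlp("Θέλω να διαβάσω ένα βιβλίο")
content_words = [t.text for t in doc if not t.is_stop]
print(len(STOP_WORDS), content_words)
```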

  5. Norm exceptions list.

spaCy usually tries to normalise words with different spellings to a single, common spelling. This has no effect on any other token attributes, or tokenization in general, but it ensures that equivalent tokens receive similar representations. This can improve the model's predictions on words that weren't common in the training data, but are equivalent to other words – for example, "realize" and "realise", or "thx" and "thanks".

The norm-exceptions wiki page is available here. The final list with the norm exceptions of the Greek language can be found here.
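Conceptually, norm exceptions are just a mapping from a spelling variant to a canonical form, consulted per token. A toy sketch using the English examples from the paragraph above (the real Greek table lives in spaCy's language data):

```python
# Toy norm-exceptions table: variant spelling -> normalized form.
NORM_EXCEPTIONS = {
    "realise": "realize",
    "thx": "thanks",
}

def norm(token_text):
    # Tokens not in the table keep their (lowercased) surface form.
    return NORM_EXCEPTIONS.get(token_text.lower(), token_text.lower())

print(norm("Realise"))  # realize
print(norm("thx"))      # thanks
print(norm("hello"))    # hello
```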

  6. Named Entities dataset.

    For the Greek language, there was no available dataset of named entities, so we had to create our own annotated dataset using Prodigy. The annotated dataset is available here. You can learn more about NER and Prodigy in the following links: Link 1, Link 2

  7. Lexical attributes functions.

Each token of a spaCy doc is checked against some potential attributes. In this way, URLs, numbers, and other types of special tokens can be separated from the normal tokens.

Sample input:

 Η ιστοσελίδα για το demo μας είναι: https://nlp.wordames.gr 

Sample output:

Url: https://nlp.wordames.gr 

Submodule link
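These checks rely on lexical attributes such as like_url, like_num, and like_email, which spaCy computes from the token text alone, so no trained model is required. A sketch reproducing the sample above with a blank pipeline:

```python
import spacy

# Lexical attributes are available even without trained components.
nlp = spacy.blank("el")
doc = nlp("Η ιστοσελίδα για το demo μας είναι: https://nlp.wordames.gr")
urls = [t.text for t in doc if t.like_url]
print(urls)
```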

  8. Part of Speech Tagger.

You can use this submodule, together with one of the produced Greek models, to get part-of-speech tags for your tokens independently of the other spaCy modules.

Sample input:

Η δημοκρατία είναι το πιο ανθρώπινο πολίτευμα. 

Sample output:

Token: Η Tag: DET
Token: δημοκρατία Tag: NOUN
Token: είναι Tag: AUX
Token: το Tag: DET
Token: πιο Tag: ADV
Token: ανθρώπινο Tag: ADJ
Token: πολίτευμα Tag: NOUN
Token: . Tag: PUNCT

Visualized output using displaCy:

  9. DEP Tagger.

You can use this submodule, together with one of the produced Greek models, to analyze the syntax of your text independently of the other spaCy modules.

  • Get DEP tags.

    Sample input:

    Η δημοκρατία είναι το πιο ανθρώπινο πολίτευμα.
    

    Sample output:

    Token:η, DEP tag: det
    Token:δημοκρατία, DEP tag: nsubj
    Token:είναι, DEP tag: cop
    Token:το, DEP tag: det
    Token:πιο, DEP tag: advmod
    Token:ανθρώπινο, DEP tag: amod
    Token:πολίτευμα, DEP tag: ROOT
    Token:., DEP tag: punct
    
  • Navigate/Visualize the DEP tree.

    Sample input:

    Ο Κώστας αγόρασε πατάτες και τις άφησε πάνω στο ψυγείο. 
    

    Sample output:

     				        αγόρασε
    __________________|______
    |       |    |            άφησε
    |       |    |        ______|__________
    |       |  Κώστας    |      |    |   ψυγείο
    |       |    |       |      |    |     |
    πατάτες .    Ο      και    τις  πάνω  στο
        
    

    Visualization code source.

    Submodule link.

  10. NER Tagger.

Named-entity recognition (NER) (also known as entity identification, entity chunking and entity extraction) is a subtask of information extraction that seeks to locate and classify named entities in text into pre-defined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.

The Greek language models support the following NER tags: ORG, PERSON, LOC, GPE, EVENT, PRODUCT. Having one of the Greek models installed, you can use the NER tagger:

Visualization using displacy:

For extensive documentation of the NER tagger for the Greek language, check the corresponding wiki page. Submodule link.

  11. Noun chunks.

    Noun chunks are "base noun phrases" – flat phrases that have a noun as their head. You can think of noun chunks as a noun plus the words describing the noun – for example, "the lavish green grass" or "the world's largest tech fund".

    In the latest pull request, noun chunks for the Greek language are supported.

    You can view the submodule here.

  12. Sentiment analyzer.

This submodule gives you a subjectivity score for your text, together with an emotion analysis.

Sample input:

Έχω μείνει έκπληκτος! Πώς γίνεται αυτό; Η έκπληξη είναι τόσο μεγάλη! Α, τώρα εξηγούνται όλα. 

Sample output:

Subjectivity: 16.666666666666664%
Main emotion: surprise. Emotion score: 33.333333333333336%

Currently available only for the Greek language.  Submodule link.

  13. Topic classifier.

This submodule is for text classification. It can categorize text into the following categories: Sports, Science, World News, Greek News, Environment, Politics, Art, Health. Currently available only for the Greek language.

Future Work

In this section, some suggestions for future work are listed. Difficulty labels are assigned to each task, along with some guidelines to get started. For more info on contributing, you can always have a look at the contribute page of the project wiki.

  • Add more rules to the lemmatizer (Difficulty: easy) The Greek language follows a rule-based lemmatization technique. It is highly suggested to have a look at the lemmatizer wiki page to understand the approach followed. If you do, you will find out how scalable Greek lemmatization is. Adding rules should be as easy as completing some lines in this file. For more info, check the contribute wiki page.

  • Overwrite the spaCy tokenizer (Difficulty: hard)

    Each language modifies the spaCy tokenization procedure by adding tokenizer exceptions. The tokenizer-exceptions approach is not scalable for languages such as Greek, for pretty much the same reasons as with the lemmatizer. A new approach, rule-based tokenization, is proposed. The suggested steps are the following:

    1. Rewrite the spaCy tokenizer in pure Python, following the pseudocode provided here. This is already done; you can find the code here.
    2. Write regular expressions to catch the following phenomena of the Greek language: elision ("εκθλίψεις"), aphaeresis ("αφαιρέσεις"), and apocope ("αποκοπές").
    3. Transform the tokens that match one of the phenomena mentioned above into other token(s), using transformation rules.
  • Improve models accuracy (Difficulty: medium)

  • Implement topic classifier for other languages as well (Difficulty: medium)

  • Implement sentiment analyzer for other languages as well (Difficulty: medium)

  • Implement attitude detector and integrate it to demo (Difficulty: hard)
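The regex-and-transform steps proposed for the tokenizer above can be sketched in a few lines: a regular expression flags an elided form (e.g. «απ'») and a transformation table expands it. The pattern and the table below are hypothetical toy examples, not the proposed implementation:

```python
import re

# Hypothetical transformation table: elided form -> full form.
ELISIONS = {"απ'": "από", "σ'": "σε", "μ'": "με"}

# A word that ends in an apostrophe is a candidate elision.
ELISION_RE = re.compile(r"^\w+['’]$")

def expand(token):
    # Only transform tokens the regex flags and the table knows.
    if ELISION_RE.match(token):
        return ELISIONS.get(token, token)
    return token

print([expand(t) for t in "απ' όλα τα μέρη".split()])  # ['από', 'όλα', 'τα', 'μέρη']
```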

People

  • Google Summer of Code 2018 Student: Ioannis Daras
  • Mentor: Markos Gogoulos
  • Mentor: Panos Louridas


gsoc2018-spacy's Issues

text analysis demo

A working demo (API plus an intuitive UI).
Languages: Greek (possibly English too)

Given a text, it gives you the following (at least):

  • Lemma (for each word, inline in tooltip)
  • POS (for each word, inline in tooltip)
  • Category (scikit learn classifier, 10 labels, train on 10k labeled texts from GR online newspapers - release model)
  • summarize (investigate gensim Textrank implementation, plus other options)
  • keywords (lemmatize, lower, frequency based. Checkout gensim.summarization.keywords too)
  • named entities (loc, person, org)
  • sentences (sentence1,2...x: show sentence splitting)
  • text tokenized - shows tokens
  • POS : show all POS and words, in one place
  • Language identification: should provide percentage of words found, and highest ranking amongst provided languages - even if this is only gr/en

How to use sentiment analyzer of Greek Spacy?

Hello, and thank you for your work, it is amazing. Can you please tell me how to use the sentiment analyzer that you have created? I have installed spacy-nightly and downloaded el_core_news_md, but the submodules for sentiment analysis do not exist there. :/

Comment on https://eellak.ellak.gr/2018/08/27/oloklirothikan-me-epitichia-ta-10-erga-tou-organismou-anichton-technologion-sto-google-summer-of-code-2018/

The addition of Greek support to the Spacy.io NLP library does not look very good. Greek, the Romance languages, and the Slavic languages are not like English or Japanese, where the definition of a word determines its part of speech; for example, English nouns are distinguished only by number (singular, plural), and English verbs have three single-word tenses.
Recognizing the part of speech in Greek can be done with a lookup in Wiktionary, but extracting the features of the part of speech (case, number, gender, voice, tense, person) takes a lot of work, especially for verbs. Lexigram has done good work, but it is not open source; they did not move on to NLP/WordNet/machine learning, and they probably went bankrupt, like Magenta.
... The comment

Problem installing el_core_web_sm

How to reproduce the problem

I'm trying to install spaCy on Google Colab (Google's version of Jupyter) to test its capabilities, as well as how well it handles the Greek language. I had no problems installing:

  • en_core_web_trf
  • el_core_news_sm

But it seems that there's an issue installing either of the Greek gsoc2018-spacy models:

  • el_core_web_sm
  • el_core_web_lg

The code is pretty straightforward:

# Print CUDA version
!nvcc --version

# Install necessary packages for spaCy
!pip install -U pip setuptools wheel
# Install the latest version of spaCy
!pip install "spacy[cuda110,transformers,lookups]>=3.1"

# Install Greek models for spaCy
!python -m spacy download en_core_web_trf
!python -m spacy download el_core_web_lg

Here's a copy to the Colab Notebook: Spacy_Greek_language.ipynb

2021-08-05 11:45:27.512036: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0

✘ No compatible package found for 'el_core_web_sm' (spaCy v3.1.1)

2021-08-05 11:45:31.568570: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0

✘ No compatible package found for 'el_core_web_lg' (spaCy v3.1.1)

Your Environment

  • Operating System: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
  • Python Version Used: 3.7.11
  • spaCy Version Used: 3.1.1
  • Environment Information: Build cuda_11.0_bu.TC445_37.28845127_0

Sentence splitter not working properly affecting part of speech tagger

Problem

I tried to run the sentence splitter submodule (sentence_splitter.py), but it didn't work for the Greek language. I tried loading both el_core_news_sm and el_core_news_md, and I also tried inserting and encoding the text as UTF-8 Unicode. However, it does not recognize separate sentences; it sees them as one. This also affects the part-of-speech tagger.
Do you have any idea what the problem might be?

Thanks in advance.

Environment

spaCy version: 2.1.4
Location: /home/dimitris/.local/lib/python3.6/site-packages/spacy
Platform: Linux-4.18.0-17-generic-x86_64-with-Ubuntu-18.04-bionic
Python version: 3.6.7
Models: el, en

Inconsistent Dataset/Jsonl file

The dataset provided in JSONL format has repeating values as labels for the same given spans.

When loaded into spaCy, this throws an error, as spaCy doesn't support tagging the same span with multiple entities.

  • spaCy version: 2.2.1
  • Platform: Linux-4.4.0-18362-Microsoft-x86_64-with-debian-stretch-sid
  • Python version: 3.7.3

ModuleNotFoundError: No module named 'spacy.symbols'

How to reproduce the problem

Lemmatization was not working with the latest spaCy version and 'el_core_news_md', so I turned to this repository, where:

pip install -r requirements.txt
python3 -m spacy download el_core_web_sm
Traceback (most recent call last):
  File "/home/konstantinos/.pyenv/versions/3.6.8/lib/python3.6/runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/home/konstantinos/.pyenv/versions/3.6.8/lib/python3.6/runpy.py", line 142, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "/home/konstantinos/.pyenv/versions/3.6.8/lib/python3.6/runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "/home/konstantinos/Repositories/gsoc2018-spacy/spacy/__init__.py", line 4, in <module>
    from .cli.info import info as cli_info
  File "/home/konstantinos/Repositories/gsoc2018-spacy/spacy/cli/__init__.py", line 1, in <module>
    from .download import download
  File "/home/konstantinos/Repositories/gsoc2018-spacy/spacy/cli/download.py", line 11, in <module>
    from .link import link
  File "/home/konstantinos/Repositories/gsoc2018-spacy/spacy/cli/link.py", line 9, in <module>
    from ..util import prints
  File "/home/konstantinos/Repositories/gsoc2018-spacy/spacy/util.py", line 20, in <module>
    from .symbols import ORTH
ModuleNotFoundError: No module named 'spacy.symbols'

Your Environment

Operating System:
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Python Version Used:
Python 3.6.8

spaCy Version Used:
spacy==2.0.12.dev0
spacy-langdetect==0.1.2

Environment Information:

"όνομα σου" -> "όνομας σου"

Hello,

Can you figure out why I get

nlp = spacy.load("el_core_news_lg")
doc = 'το όνομα σου'
tokens = nlp(doc)
for token in tokens:
    print(token.lemma_)

-->

το
όνομας
σου

while "το ονομα μου" returns "το ονομο μου".

Thank you,
Gerasimos

Website for evaluation

For the project to be successful, we need a URL for the project. After a quick search through GSoC 2017 projects, I noticed that it is common to create a website for the project where anybody can see the progress as plain text/demo, and sometimes there is an analysis of the approaches followed and of things that can be done to extend the project in the future. We have already discussed the demo, but it would be really good to also have a landing page that briefly describes the project and its goals, a wiki page, and a contribution page. References to GitHub repos, successful pull requests, and other resources should also be included.

README improvements

Some issues spotted in the README file of the project:

  • Greek language models are not uploaded yet to spaCy. Remember to change the corresponding section.
  • Article for Explosion AI blog is not ready yet. Remember to change the corresponding section.
  • Extend/justify the innovative approaches more.
  • Add images wherever possible.
  • Check for language/expression errors.
  • Write tests section when tests are ready.
  • Complete contribute instructions.
  • Check SPACE tag.
