
amr-eager's Introduction

DEPRECATED: amr-eager

A more updated version of this parser, supporting other languages, is available at: https://github.com/mdtux89/amr-eager-multilingual

AMR-EAGER [1] is a transition-based parser for Abstract Meaning Representation (http://amr.isi.edu/).

Installation

NOTE: THIS REPO IS NOT MAINTAINED ANYMORE. Consider using https://github.com/mdtux89/amr-eager-multilingual instead.

Run the parser with a pretrained model

Note: the input file must contain English sentences (one sentence per line); see contrib/sample-sentences.txt for an example.

Preprocessing:

./preprocessing.sh -s <sentences_file>

You should get the output files in the same directory as the input file, named <sentences_file>.out and <sentences_file>.sentences.

python preprocessing.py -f <sentences_file>

You should get the output files in the same directory as the input file, named <sentences_file>.tokens.p and <sentences_file>.dependencies.p.

Parsing:

python parser.py -f <file> -m <model_dir>

If you wish to get the list of all nodes and edges in a JAMR-like format, add the option -n. When -m is omitted, the parser uses the model provided in the directory LDC2015E86.
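The preprocess-then-parse steps above can be wrapped in a small driver script. This is a sketch with hypothetical helper names; only the entry points (preprocessing.sh, preprocessing.py, parser.py) and their flags come from the README:

```python
import subprocess

def build_pipeline_commands(sentences_file, model_dir="LDC2015E86", jamr_output=False):
    """Return the shell commands for the preprocess-then-parse pipeline.

    The default model directory mirrors the README's fallback (LDC2015E86);
    jamr_output toggles the parser's -n flag.
    """
    parse_cmd = ["python", "parser.py", "-f", sentences_file, "-m", model_dir]
    if jamr_output:
        parse_cmd.append("-n")
    return [
        ["./preprocessing.sh", "-s", sentences_file],
        ["python", "preprocessing.py", "-f", sentences_file],
        parse_cmd,
    ]

def run_pipeline(sentences_file, **kwargs):
    # Run each stage in order, aborting on the first failure.
    for cmd in build_pipeline_commands(sentences_file, **kwargs):
        subprocess.run(cmd, check=True)
```

Running the stages with check=True makes a failure in preprocessing stop the pipeline instead of silently producing an empty .parsed file, which is the symptom reported in several issues below.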

Mac users: the pretrained models seem to have compatibility errors when running on Mac OS X.

Evaluation

We provide evaluation metrics to compare AMR graphs, based on Smatch (http://amr.isi.edu/evaluation.html). The script computes a set of metrics between AMR graphs in addition to the traditional Smatch score:

  • Unlabeled: Smatch score computed on the predicted graphs after removing all edge labels
  • No WSD: Smatch score computed while ignoring Propbank senses (e.g., duck-01 vs. duck-02)
  • Named Ent.: F-score on the named entity recognition (:name roles)
  • Wikification: F-score on the wikification (:wiki roles)
  • Negations: F-score on the negation detection (:polarity roles)
  • Concepts: F-score on the concept identification task
  • Reentrancy: Smatch score computed on reentrant edges only
  • SRL: Smatch score computed on :ARG-i roles only
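Most of the role-specific metrics above are plain precision/recall/F-scores over sets of extracted triples. A minimal sketch of such an F-score (an illustration of the idea, not the actual code in amrevaluation) over, say, :polarity triples:

```python
def f_score(predicted, gold):
    """Precision, recall, and F1 between two sets of (node, role, value) triples."""
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, predicting :polarity on two nodes when the gold graph marks only one of them yields precision 0.5, recall 1.0, and F1 2/3.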

The different metrics are detailed and explained in [1], which also uses them to evaluate several AMR parsers. (Some of the metrics were recently fixed and updated.)

cd amrevaluation
./evaluation.sh <file>.parsed <gold_amr_file>

To use the evaluation script with a different parser, provide the other parser's output as the first argument.

Train a model

  • Preprocess training and validation sets:

    ./preprocessing.sh <amr_file>
    python preprocessing.py --amrs -f <amr_file>
    
  • Run the oracle to generate the training data:

    python collect.py -t <training_file> -m <model_dir>
    python create_dataset.py -t <training_file> -v <validation_file> -m <model_dir>
    
  • Train the three neural networks:

    th nnets/actions.lua --model_dir <model_dir>
    th nnets/labels.lua --model_dir <model_dir>
    th nnets/reentrancies.lua --model_dir <model_dir>
    

    (add --cuda if you want to train on GPUs).

  • Finally, move the .dat models generated by Torch to <model_dir>/actions.dat, <model_dir>/labels.dat, and <model_dir>/reentrancies.dat.

  • To evaluate the performance of the neural networks run

    th nnets/report.lua <model_dir>
    
  • Note: If you used GPUs to train the models, you will need to uncomment the line "require cunn" in nnets/classify.lua.
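Once the three networks are trained and moved, a quick sanity check can confirm the model directory is complete before parsing. This is a sketch assuming only the three file names listed above:

```python
import os

# The three model files parser.py expects inside <model_dir>.
REQUIRED_MODELS = ("actions.dat", "labels.dat", "reentrancies.dat")

def missing_models(model_dir):
    """Return the required Torch model files absent from model_dir."""
    return [name for name in REQUIRED_MODELS
            if not os.path.isfile(os.path.join(model_dir, name))]
```

An empty return value means all three .dat files are in place; anything else names the files still to be moved.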

Open-source code used:

References

[1] "An Incremental Parser for Abstract Meaning Representation", Marco Damonte, Shay B. Cohen and Giorgio Satta. Proceedings of EACL (2017). URL: https://arxiv.org/abs/1608.06111

amr-eager's People

Contributors

mdtux89

amr-eager's Issues

.parsed file is empty

@mdtux89 I am using the pretrained LDC2015E86 model, and my sentence file is named sentence.txt.
I ran the following commands:

./preprocessing.sh -s sentence.txt 
python preprocessing.py -f sentence.txt
python parser.py -f sentence.txt  -m LDC2015E86

There was no error whatsoever but the output file is empty.
These are the logs for the commands:

./preprocessing.sh: line 15: /disk/ocean/public/tools/jamr2016/scripts/config.sh: No such file or directory
panic: swash_fetch got swatch of unexpected bit width, slen=1024, needents=64 at cdec-master/corpus/support/quote-norm.pl line 149, <STDIN> line 1.
Running CoreNLP..
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.6 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [2.2 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [0.5 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [1.5 sec].
[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt
May 25, 2018 3:54:21 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 83 rules
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt
May 25, 2018 3:54:22 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 267 rules
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
May 25, 2018 3:54:22 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 25 rules
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[main] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... 
done [0.3 sec].

Processing file /home/irlab/Documents/share/shivam_iitv/amr-eager/sentence.txt.sentences ... writing to /home/irlab/Documents/share/shivam_iitv/amr-eager/sentence.txt.out
Annotating file /home/irlab/Documents/share/shivam_iitv/amr-eager/sentence.txt.sentences
done.
Annotation pipeline timing information:
TokenizerAnnotator: 0.0 sec.
WordsToSentencesAnnotator: 0.0 sec.
POSTaggerAnnotator: 0.0 sec.
MorphaAnnotator: 0.0 sec.
NERCombinerAnnotator: 0.0 sec.
ParserAnnotator: 0.0 sec.
TOTAL: 0.0 sec. for 0 tokens at 0.0 tokens/sec.
Pipeline setup: 5.8 sec.
Total time for StanfordCoreNLP pipeline: 5.9 sec.
Done!

Adapting to Portuguese

I'm trying to adapt your AMR parser to the Portuguese language.
I'm getting an error in the preprocessing.py file.

Sentence 414
(2, 'nsubj', 0)
(2, 'advmod', 1)
(2, 'ROOT', 2)
(4, 'det', 3)
(2, 'dobj', 4)
(4, 'adpmod', 5)
(5, 'adpobj', 6)
(8, 'advmod', 7)
(2, 'xcomp', 8)
(10, 'det', 9)
(8, 'nsubj', 10)
(10, 'adpmod', 11)
(13, 'det', 12)
Traceback (most recent call last):
  File "preprocessing.py", line 161, in <module>
    run(args.file, args.amrs)
  File "preprocessing.py", line 122, in run
    dependencies.append((indexes[d[0]], d[1], indexes[d[2]]))
IndexError: list index out of range

Sentence #414 (14 tokens):
Esse aí disse o principezinho para si mesmo raciocina um pouco como o bêbado
[Text=Esse CharacterOffsetBegin=18007 CharacterOffsetEnd=18011 PartOfSpeech=PROP Lemma=Esse NamedEntityTag=0]
[Text=aí CharacterOffsetBegin=18012 CharacterOffsetEnd=18014 PartOfSpeech=ADV Lemma=aí NamedEntityTag=0]
[Text=disse CharacterOffsetBegin=18015 CharacterOffsetEnd=18020 PartOfSpeech=V Lemma=dizer NamedEntityTag=0]
[Text=o CharacterOffsetBegin=18021 CharacterOffsetEnd=18022 PartOfSpeech=DET Lemma=o NamedEntityTag=0]
[Text=principezinho CharacterOffsetBegin=18023 CharacterOffsetEnd=18036 PartOfSpeech=N Lemma=principezinho NamedEntityTag=0]
[Text=para CharacterOffsetBegin=18037 CharacterOffsetEnd=18041 PartOfSpeech=PRP Lemma=para NamedEntityTag=0]
[Text=si CharacterOffsetBegin=18042 CharacterOffsetEnd=18044 PartOfSpeech=PERS Lemma=se NamedEntityTag=0]
[Text=mesmo CharacterOffsetBegin=18045 CharacterOffsetEnd=18050 PartOfSpeech=DET Lemma=mesmo NamedEntityTag=0]
[Text=raciocina CharacterOffsetBegin=18051 CharacterOffsetEnd=18060 PartOfSpeech=V Lemma=raciocinar NamedEntityTag=0]
[Text=um=pouco CharacterOffsetBegin=18061 CharacterOffsetEnd=18069 PartOfSpeech=ADV Lemma=um=pouco NamedEntityTag=0]
[Text=como CharacterOffsetBegin=18070 CharacterOffsetEnd=18074 PartOfSpeech=PRP Lemma=como NamedEntityTag=0]
[Text=o CharacterOffsetBegin=18075 CharacterOffsetEnd=18076 PartOfSpeech=DET Lemma=o NamedEntityTag=0]
[Text=bêbado CharacterOffsetBegin=18077 CharacterOffsetEnd=18083 PartOfSpeech=ADJ Lemma=bêbado NamedEntityTag=0]
(ROOT (S (NP (DEM Esse)) (VP (ADV aí) (VP (V disse) (NP (ART o) (N' (N principezinho) (PP (P para) (NP (NP (NP (PRS si)) (S (VP (ADV mesmo) (VP (V raciocina) (ADVP (ADV um) (ADV pouco)))))) (NP (CONJ como) (NP (ART o) (N bêbado)))))))))))

nsubj(disse-3, Esse-1)
advmod(disse-3, aí-2)
root(ROOT-0, disse-3)
det(principezinho-5, o-4)
dobj(disse-3, principezinho-5)
adpmod(principezinho-5, para-6)
adpobj(para-6, si-7)
advmod(raciocina-9, mesmo-8)
xcomp(disse-3, raciocina-9)
det(pouco-11, um-10)
nsubj(raciocina-9, pouco-11)
adpmod(pouco-11, como-12)
det(bêbado-14, o-13)
adpcomp(como-12, bêbado-14)

Could you help me?
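The crash happens because a dependency's token index falls outside the indexes list built for the sentence, likely due to tokenization differences in the Portuguese pipeline (e.g., the multiword token um=pouco above). A defensive variant of the failing line (a sketch, not a fix from the repo) would drop such dependencies instead of crashing:

```python
def remap_dependencies(raw_deps, indexes):
    """Map raw (head, label, dependent) token indices through `indexes`,
    silently dropping any dependency whose index is out of range."""
    remapped = []
    for head, label, dep in raw_deps:
        if 0 <= head < len(indexes) and 0 <= dep < len(indexes):
            remapped.append((indexes[head], label, indexes[dep]))
    return remapped
```

Skipping a handful of malformed dependencies lets preprocessing finish, at the cost of a slightly impoverished dependency graph for the affected sentences.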

error in downloading

While executing ./download.sh, it can't fetch 'resources_single.tar.gz'.

Error details:

--2020-04-29 14:35:58-- http://kinloch.inf.ed.ac.uk/public/direct/amreager/resources_single.tar.gz
Resolving kinloch.inf.ed.ac.uk (kinloch.inf.ed.ac.uk)... 2001:630:3c1:33:d6ae:52ff:feea:3003, 129.215.33.82
Connecting to kinloch.inf.ed.ac.uk (kinloch.inf.ed.ac.uk)|2001:630:3c1:33:d6ae:52ff:feea:3003|:80...
failed: Connection timed out.
Connecting to kinloch.inf.ed.ac.uk (kinloch.inf.ed.ac.uk)|129.215.33.82|:80... failed: Connection timed out.
Retrying.

.parsed file is empty!

$ ./preprocessing.sh -s contrib/sample-sentences.txt
Running CoreNLP..
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.6 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.2 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [0.5 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.7 sec].
[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
Exception in thread "main" edu.stanford.nlp.util.ReflectionLoading$ReflectionLoadingException: Error creating edu.stanford.nlp.time.TimeExpressionExtractorImpl
at edu.stanford.nlp.util.ReflectionLoading.loadByReflection(ReflectionLoading.java:40)
at edu.stanford.nlp.time.TimeExpressionExtractorFactory.create(TimeExpressionExtractorFactory.java:57)
at edu.stanford.nlp.time.TimeExpressionExtractorFactory.createExtractor(TimeExpressionExtractorFactory.java:38)
at edu.stanford.nlp.ie.regexp.NumberSequenceClassifier.<init>(NumberSequenceClassifier.java:82)
at edu.stanford.nlp.ie.NERClassifierCombiner.<init>(NERClassifierCombiner.java:85)
at edu.stanford.nlp.pipeline.AnnotatorImplementations.ner(AnnotatorImplementations.java:108)
at edu.stanford.nlp.pipeline.AnnotatorFactories$6.create(AnnotatorFactories.java:333)
at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:85)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:375)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:139)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:135)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.main(StanfordCoreNLP.java:1222)
Caused by: edu.stanford.nlp.util.MetaClass$ClassCreationException: MetaClass couldn't create public edu.stanford.nlp.time.TimeExpressionExtractorImpl(java.lang.String,java.util.Properties) with args [sutime, {}]
at edu.stanford.nlp.util.MetaClass$ClassFactory.createInstance(MetaClass.java:235)
at edu.stanford.nlp.util.MetaClass.createInstance(MetaClass.java:380)
at edu.stanford.nlp.util.ReflectionLoading.loadByReflection(ReflectionLoading.java:38)
... 11 more
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:488)
at edu.stanford.nlp.util.MetaClass$ClassFactory.createInstance(MetaClass.java:231)
... 13 more
Caused by: java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
at de.jollyday.util.CalendarUtil.<init>(CalendarUtil.java:42)
at de.jollyday.HolidayManager.<init>(HolidayManager.java:73)
at de.jollyday.impl.XMLManager.<init>(XMLManager.java:52)
at edu.stanford.nlp.time.JollyDayHolidays$MyXMLManager.<init>(JollyDayHolidays.java:153)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:488)
at java.base/java.lang.Class.newInstance(Class.java:560)
at de.jollyday.HolidayManager.instantiateManagerImpl(HolidayManager.java:255)
at de.jollyday.HolidayManager.createManager(HolidayManager.java:276)
at de.jollyday.HolidayManager.getInstance(HolidayManager.java:194)
at edu.stanford.nlp.time.JollyDayHolidays.init(JollyDayHolidays.java:55)
at edu.stanford.nlp.time.Options.<init>(Options.java:90)
at edu.stanford.nlp.time.TimeExpressionExtractorImpl.init(TimeExpressionExtractorImpl.java:45)
at edu.stanford.nlp.time.TimeExpressionExtractorImpl.<init>(TimeExpressionExtractorImpl.java:39)
... 18 more
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:190)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
... 34 more
Done!
$ python preprocessing.py -f contrib/sample-sentences.txt
$ python parser.py -f contrib/sample-sentences.txt
Writing file contrib/sample-sentences.txt.parsed ...

I tried, but the .parsed file is empty (0 bytes).

What should I do?

UnpicklingError while loading the model with PyTorch

import torch
import torch.autograd
import torch.nn
import torch.multiprocessing
import torch.utils
import torch.legacy.nn
import torch.legacy.optim

xp = torch.load(r"D:\SDS\1_MachineLearning\amr-eager-master\LDC2015E86\reentrancies.dat")

Traceback (most recent call last):
  File "", line 9, in <module>
    xp = torch.load(r"D:\SDS\1_MachineLearning\amr-eager-master\LDC2015E86\reentrancies.dat")
  File "D:\Anaconda3\envs\amr-eager\lib\site-packages\torch\serialization.py", line 261, in load
    return _load(f, map_location, pickle_module)
  File "D:\Anaconda3\envs\amr-eager\lib\site-packages\torch\serialization.py", line 399, in _load
    magic_number = pickle_module.load(f)
UnpicklingError: invalid load key, '�'.

reentrancies.dat model weights could be downloaded from here.
What should I change?
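The .dat files in this repo are Torch7 (Lua) serializations, not PyTorch checkpoints, so torch.load cannot unpickle them. A quick way to tell the two apart (a sketch; it relies only on the fact that modern torch.save() checkpoints are ZIP archives, which begin with the PK magic bytes) is to inspect the file header:

```python
def is_pytorch_zip_checkpoint(path):
    """True if the file starts with the ZIP magic bytes used by modern
    torch.save() checkpoints; Torch7 .dat files do not."""
    with open(path, "rb") as f:
        return f.read(4) == b"PK\x03\x04"
```

If this returns False for reentrancies.dat, the file needs to be read through the Lua/Torch stack the repo expects (or a Torch7 reader such as the third-party torchfile package), not through torch.load.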

Is this a macOS problem?

Writing file contrib/sample-sentences.txt.parsed ...
Sentence 1: Chapter 1 .
Warning: Failed to load function from bytecode: binary string: bad header in precompiled chunk
Traceback (most recent call last):
  File "parser.py", line 164, in <module>
    main(args)
  File "parser.py", line 103, in main
    t = TransitionSystem(embs, data, "PARSE", args.model)
  File "/Users/junhyun/amr-eager/transition_system.py", line 35, in __init__
    self._classify = Classify(model_dir)
  File "/Users/junhyun/.local/lib/python3.6/site-packages/PyTorch-4.1.1_SNAPSHOT-py3.6-macosx-10.7-x86_64.egg/PyTorchHelpers.py", line 20, in __init__
    PyTorchAug.LuaClass.__init__(self, splitName, *args)
  File "/Users/junhyun/.local/lib/python3.6/site-packages/PyTorch-4.1.1_SNAPSHOT-py3.6-macosx-10.7-x86_64.egg/PyTorchAug.py", line 255, in __init__
    raise Exception(errorMessage)
Exception: ...s/junhyun/torch/install/share/lua/5.1/torch/File.lua:314: bad argument #1 to 'setupvalue' (function expected, got nil)

KeyError when preprocessing for the training data

When I try to preprocess the training data using smatch_old in amrevaluation, it raises a KeyError that I can't explain. The first line below is the command I executed; the printed 'i', 'segment', and 'indexes' values are my own debug output.

(amr) /disk/ocean/yichao-liang/amr-eager$ python preprocessing.py --amrs -f LDC2020T02/train/train_amr.txt

<class 'amrevaluation.smatch_old.amr_edited.AMR'>
('i', 2)
('segment', '0.0')
('indexes', {'0.3.1': 'p', '0.1': 'd2', '0.0': 'm2', '0.3': 'a', '0.2': 't2', '0': 'm', '0.2.0': 'y2', '0.1.0': 'y', '0.3.0': 'i'})
('i', 6)
('segment', '0.0.0')
('indexes', {'0.3.1': 'p', '0.1': 'd2', '0.0': 'm2', '0.3': 'a', '0.2': 't2', '0': 'm', '0.2.0': 'y2', '0.1.0': 'y', '0.3.0': 'i'})
Traceback (most recent call last):
  File "preprocessing.py", line 157, in <module>
    run(args.file, args.amrs)
  File "preprocessing.py", line 32, in run
    data = AMRDataset(prefix, amrs)
  File "/disk/ocean/yichao-liang/amr-eager/amrdata.py", line 49, in __init__
    a = Alignments(prefix + ".alignments", allgraphs)
  File "/disk/ocean/yichao-liang/amr-eager/alignments.py", line 71, in __init__
    al[i].append(indexes[segment])
KeyError: '0.0.0'
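The KeyError means the alignment file refers to a segment address ('0.0.0') that the indexes map built from the AMR does not contain, so the aligner and the AMR reader disagree about the graph's structure. A defensive lookup (a sketch, not the repo's code) would skip unknown segments rather than crash, which at least localizes the mismatch to specific sentences:

```python
def resolve_alignments(segments, indexes):
    """Map segment addresses (e.g. '0.3.1') to AMR variable names via
    `indexes`, skipping addresses the AMR indexing step never produced."""
    resolved = []
    for segment in segments:
        var = indexes.get(segment)
        if var is not None:
            resolved.append(var)
    return resolved
```

Logging the skipped addresses instead of dropping them silently would point directly at the AMRs whose alignments need regenerating.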

Punctuation wrongly parsed

I parsed this sentence:

"Unless the Italian political system changes, Italy is condemned to political instability," said
Sergio Romano, a former diplomat and political science professor.

and got this result back:

(v13 / say-01
    :ARG0 (v12 / ”)
    :time (v9 / condemn-01
        :ARG1 (v6 / change-01
            :ARG1 (v5 / system)
            :mod (v4 / politics)
            :ARG0 (v2 / country
                :name (v3 / name
                    :op1 "Italy")
                :wiki "Italy")
            :location (v7 / country
                :name (v8 / name
                    :op1 "Italy")
                :wiki "Italy"))
        :ARG2 (v11 / instability
            :mod (v10 / politics)))
    :ARG1 (v19 / and
        :op2 (v22 / have-org-role-91
            :ARG2 (v23 / professor)
            :ARG1 (v21 / science
                :mod (v20 / politics)))
        :op2 (v17 / have-org-role-91
            :ARG2 (v18 / diplomat)
            :time (v16 / former))
        :op1 (v14 / person
            :name (v15 / name
                :op1 "Sergio"
                :op2 "Romano")
            :wiki "Sergio_Romano"))
    :ARG1 (v1 / “))

The second line of the AMR result is flagged as invalid by amr_hackathon's grammar parser:
:ARG0 (v12 / ”)
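One workaround (my own suggestion, not a documented fix) is to strip or normalize typographic quotation marks from the input before preprocessing, so quote tokens never reach the parser and cannot surface as bogus concepts like (v12 / ”):

```python
import re

# Straight and curly quote characters to remove (an illustrative set).
QUOTE_CHARS = "\u201c\u201d\u2018\u2019\"'`"

def strip_quotes(sentence):
    """Replace quotation marks with spaces, then collapse the whitespace
    left behind."""
    cleaned = re.sub("[%s]" % re.escape(QUOTE_CHARS), " ", sentence)
    return " ".join(cleaned.split())
```

Applying this to each line of the sentences file before running preprocessing.sh keeps the rest of the sentence, including attribution and apposition, intact.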
