
khmer's Introduction

(Badges: research software impact, supported Python versions, build status, test coverage, BSD 3-Clause license)

khmer

Welcome to khmer: k-mer counting, filtering, and graph traversal FTW!

The official source code repository is at https://github.com/dib-lab/khmer and project documentation is available online at http://khmer.readthedocs.io. See http://khmer.readthedocs.io/en/stable/introduction.html for an overview of the khmer project.

Getting help

See http://khmer.readthedocs.io/en/stable/user/getting-help.html for details on how to get help with khmer.

Important note: cite us!

khmer is research software, so you should cite us when you use it in scientific publications! Please see the CITATION file for citation information.

The khmer library is a project of the Lab for Data Intensive Biology at UC Davis, and includes contributions from its members, collaborators, and friends.

Quick install

pip install khmer
pytest --pyargs khmer -m 'not known_failing and not jenkins and not huge and not linux'

See https://khmer.readthedocs.io/en/stable/user/install.html for more detailed installation instructions.

Contributing

We welcome contributions to khmer from the community! If you're interested in modifying khmer or contributing to its ongoing development see https://khmer.readthedocs.io/en/stable/dev/getting-started.html.

khmer's People

Contributors

adina, aditi9783, ahaerpfer, alameldin, anotherthomas, betatim, bocajnotnef, camillescott, ctb, drtamermansour, echelon9, emcd, fishjord, jasonpell, jiarong, jlippi, kaben, kdm9, luizirber, mr-c, nbkingsley, pgarland, qingpeng, ramrs, safay, sguermond, shannonekj, standage, themangoemoji, theonehyer


khmer's Issues

Add other binsizes into hashtable than 1 bit and 8 bits.

Right now, Hashbits supports 1 bit per hashtable entry and CountingHash supports 8 bits per hashtable entry. 2 and 4 bits/entry would be great extensions that could substantially decrease memory usage in some useful circumstances; I'm not sure whether it's worth doing for arbitrary numbers of bits, or whether there would be performance penalties.
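
The numbers are easy to sketch. As a rough illustration (not khmer API), here is how counter width affects the size of a single packed table, assuming entries are packed tightly with no per-entry overhead:

def table_bytes(n_entries, bits_per_entry):
    """Approximate size in bytes of a packed counter table."""
    return (n_entries * bits_per_entry + 7) // 8

for bits in (1, 2, 4, 8):
    gb = table_bytes(int(8e9), bits) / 1e9
    print("%d bit(s) per entry -> %.1f GB for 8e9 entries" % (bits, gb))

So 2-bit or 4-bit counters would cut the memory of an 8-bit CountingHash table by 4x or 2x respectively, at the cost of counts saturating at 3 or 15.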

Integrate Sphinx Support into Setup Script

Integrating Sphinx support into the setup script is a simple task. If we do this, then we can trigger documentation builds as part of any CI testing we arrange.
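
For illustration, a minimal sketch of what the wiring could look like, assuming Sphinx is installed at build time and the documentation sources live under doc/ (the exact options would need to be checked against khmer's actual setup.py):

from setuptools import setup
from sphinx.setup_command import BuildDoc  # command class shipped with Sphinx

setup(
    name='khmer',
    cmdclass={'build_sphinx': BuildDoc},
    command_options={
        'build_sphinx': {
            'source_dir': ('setup.py', 'doc'),
            'build_dir': ('setup.py', 'doc/_build'),
        },
    },
)

With something like this in place, "python setup.py build_sphinx" builds the documentation and can be invoked from a CI job.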

Refactor C++ Test Drivers

The C++ test drivers consist of a lot of copy pasta, which violates the DRY principle. These should be cleaned up and have common functionality factored out to improve future maintainability and extensibility.

Integrate Pylint Support into Setup Script

Recently, I integrated support for 'pylint' into another project. This was very enlightening, as it both caught some bugs and provided some useful software design feedback. I would like to do the same for 'khmer'. The configuration process is fairly simple, and using 'pylint' from the setup script is very simple:

python setup.py lint

I am of the opinion that we should do this before an official release and include a pylint run as part of any CI testing we may do.
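
There are several ways to provide the lint command; one hypothetical sketch (not khmer's actual setup code) is a small custom command that shells out to pylint:

import subprocess
from setuptools import Command

class LintCommand(Command):
    """Run pylint over the Python sources via 'python setup.py lint'."""
    description = "run pylint over the Python sources"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # package and script directories here are illustrative
        subprocess.call(['pylint', 'khmer', 'scripts'])

# then in setup(): cmdclass={'lint': LintCommand}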

Occasional failures in multithreaded ReadParser / Python API (bleeding-edge branch)

Every 10-20 runs of multithreaded trial.py, I get --

Exception in thread Thread-6:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
File "trial.py", line 10, in read_names
for n, read in enumerate(rparser):
ValueError: Invalid input file format.

Occasionally, there is also a hang. Not easy to reproduce, but happens reasonably frequently.

This is on Mac OS X 10.8.3, built with gcc and g++.
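
For reference, a hypothetical reconstruction of the access pattern trial.py exercises -- several Python threads consuming one shared ReadParser -- is below; the real trial.py and the ReadParser constructor arguments may differ:

import threading
from khmer import ReadParser

def read_names(rparser):
    for n, read in enumerate(rparser):
        pass  # consume reads; this is where ValueError is occasionally raised

rparser = ReadParser('reads.fq')  # hypothetical input file
threads = [threading.Thread(target=read_names, args=(rparser,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()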

Local Copies of Dependencies

We require the 'nosetests' package. Many people do not have that installed by default. Attempting to run the test suite without it will result in errors. Currently, 'setuptools' only enforces prerequisites but cannot be told to fetch missing dependencies. There is a patch ( https://bitbucket.org/tarek/distribute/issue/323/versionconflict-on-setup_requires ), which attempts to allow missing dependencies to be fetched and used locally, but it is broken as of this writing ( https://bitbucket.org/tarek/distribute/issue/335/version-0631-broken-against-site-packages ). Once this patch is fixed, then we would do well to feed a 'setup_requires' list to the 'setup' function in 'setup.py'.
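
Once that happens, the change to setup.py would be small; a sketch, with illustrative version pins:

from setuptools import setup

setup(
    name='khmer',
    # fetched and used locally at build/test time once the distribute patch works:
    setup_requires=['nose >= 1.0'],
    tests_require=['nose >= 1.0'],
)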

Travis CI Configuration

Having tried out Travis CI with another project, I think it is something that we should be doing with 'khmer'. As 'khmer' is a large repo and some of the tests consume a fair amount of memory, any published resource limits for Travis CI need to be investigated. Writing '.travis.yml' should be almost trivial.

Callbacks into CPython VM during Multi-threaded Execution

We have various places where we wish to notify the user of progress being made during processing. In the original single-threaded model, callbacks were made from the C++ code doing the heavy-lifting to Python. That worked fine because there was never any issue with the GIL being released or acquired and the Python VM could be guaranteed to be in a consistent state. For multi-threaded operation, we cannot really guarantee that we will have the GIL when we need it without a great deal of hassle and a performance hit. Bad things will happen if C++ code attempts to callback into Python when the GIL is not acquired.

Here is a (not necessarily exhaustive) list of alternative mechanisms for user notification:
(*) Use C++ output streams. (We shouldn't have to worry about interference with Python streams since both are not in use at the same time. And, the way the reporting is structured is such that we shouldn't have to worry about multiple threads clobbering each other's output either. I'm +1 on this solution unless I think of something better.)
(*) Redesign the code to perform a certain amount of work, return to the Python wrapper for reporting, and then be resumed. (Potentially messy and would almost certainly break some existing interfaces. I'm -1 on this.)
(*) Set up a dedicated listener thread and send messages to it via an intra-process communication mechanism of some sort. (Would be a fun excuse to learn about ZeroMQ's inproc stuff, but probably not worth the hassle. So, I'm -1 on this as well.)
(*) Set up a synchronization barrier among the threads for getting the lock, reporting, and releasing the lock. (This will hurt performance, possibly quite badly. It will also ruin the threading model-agnostic scheme we currently have. I'm definitely -1 on this.)

Other ideas welcome.

Cleanup C++ Extension Module

While debugging and figuring some things out about creating a Python type's class attributes from C, I realized that much of our extension module code makes life much harder on itself than necessary and does not conform to best practices (as given by the CPython API documentation). In some cases, our existing practices result in objects not exposing all of the standard attributes that one would expect from a Python object. In other cases, we are writing code by hand for things that standard CPython API functions, in conjunction with certain type object slots (available since at least Python 2.2), handle automatically.

While I am not advocating a massive housecleaning effort all at once, we should be looking to simplify and otherwise improve the various pieces of code as we touch them. In my humble estimation, we may be able to shave 500 or more lines from the extension module code and actually gain functionality in the process....

(This should be weighed against issue #13. Yes, I still have a SWIG itch I want to scratch.)

Decide the Python/C++ Interface Issue

Maintaining one interface to a C++ back-end from Python is irritating and burdensome enough. Maintaining two (like we are currently doing) is enough to make someone not ever want to change the interface. We need to ditch either our direct CPython interface or the indirect one via Cython. (Using Cython with C++ is a rather monstrous ordeal which does not really gain us much programming efficiency. Sure, it handles a little more bookkeeping, but it also has issues, such as not being able to handle C++ references in all cases.) We should also consider SWIG as an alternative.

A firm decision needs to be made before an official release, so that we do not have people trying to develop against multiple APIs to which we are then committed.

Refactor 'Read' Class

While working on improvements to the Python wrapper, it occurred to me that we might want an 'IRead' class with inheritors, such as 'FASTARead' and 'FASTQRead'. This mimics the parser hierarchy and I think it may be appropriate for several reasons:
(*) The current interface is a fat interface - quality scores are unused in FASTA processing.
(*) We would be carrying some information on original format in the type of the read. (Currently, this information is not really tracked anywhere during processing.)

An even more radical idea, which I am not necessarily advocating at this point, would be to supply 'read' and 'write' methods for the read classes. This would relegate the reader/parser classes to more of a facilitator/manager role around the shared resources (such as input streams), and would do likewise for any writer/formatter classes. Just something to think about.... I have not seriously considered either a particular design or any performance trade-offs associated with such a design yet.
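
A minimal sketch of the proposed hierarchy, using the hypothetical class names from the discussion above (this is not existing khmer code):

class IRead(object):
    """Common interface for a single sequencing read."""
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence

class FASTARead(IRead):
    """A read parsed from FASTA input; no quality scores."""
    pass

class FASTQRead(IRead):
    """A read parsed from FASTQ input; carries quality scores."""
    def __init__(self, name, sequence, quality):
        IRead.__init__(self, name, sequence)
        self.quality = quality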

Create 'iter_read_pairs' Method for 'ReadParser' Objects

The logic to support read pairs in 'normalize-by-median.py' works, but read pairs could be guaranteed from the C++ side, which would remove much of the verification and bookkeeping logic on the Python side. In the vein of Python 2.x dictionary iterators, I think that an 'iter_read_pairs' method might be appropriate. If we do this, then we probably want to alias the existing iterator function as 'iter_reads' (it makes selection of the appropriate method with 'getattr' a little nicer, for those, such as myself, who write code that way).

Another approach would be to create a 'PairedReadParser' family of classes, but I think that this is overkill.
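
To illustrate the 'getattr' point, here is a sketch of how calling code might select the iterator at run time, assuming the hypothetical 'iter_reads'/'iter_read_pairs' names (not the current khmer API):

def process(rparser, paired):
    method_name = 'iter_read_pairs' if paired else 'iter_reads'
    reads_iter = getattr(rparser, method_name)
    for item in reads_iter():
        pass  # item is a single read, or a (read1, read2) tuple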

Improve Memory Usage Monitoring

We have encountered some fairly serious memory leaks in the software on several occasions, including one which may have been in the production code ('master' branch) for over a year. I developed some capabilities for tracking virtual memory usage on the Python side. This should probably be formalized and placed into a separate module. We may also do well to integrate it into our testing and to generate a report from the results.
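
A small sketch of the kind of helper such a module could provide, using only the standard library (the thresholds and reporting hooks would be up to us):

import resource
import sys

def peak_rss_mb():
    """Return this process's peak resident set size in megabytes."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in bytes on Mac OS X and in kilobytes on Linux.
    if sys.platform == 'darwin':
        return rss / (1024.0 * 1024.0)
    return rss / 1024.0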

distribute setuptools Bootstrap

In the course of supporting this software, we've encountered some crazy 'setuptools' installations at other sites.

The 'distribute' project provides a bootstrap script ( http://python-distribute.org/distribute_setup.py ), which pulls down a local copy of its corresponding version of 'setuptools'. Not only does this [potentially] allow a broken 'setuptools' to be ignored, but it also allows us to standardize on a certain set of supported installation features.

I've used this in another project and it works fine. It is very simple to set up and maintain.
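
For reference, the bootstrap usage documented by the distribute project is just a couple of lines at the top of setup.py, assuming distribute_setup.py is shipped alongside it:

from distribute_setup import use_setuptools
use_setuptools()  # fetches a known-good local setuptools if the installed one is missing or broken

from setuptools import setup
# ... the rest of setup.py as before ...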

'load-graph' fails on bleeding-edge branch with TooManyThreads

On athyra, in ~t/data/part/,

% python ~/dev/khmer/scripts/load-graph.py -x 8e9 -k 32 corn cat corn.list

PARAMETERS:

  • kmer size = 32 (-k)
  • n hashes = 4 (-N)
  • min hashsize = 8e+09 (-x)

Estimated memory usage is 3.2e+10 bytes (n_hashes x min_hashsize)

Saving hashtable to corn
Loading kmers from sequences in ['../iowa-corn-feb2012-fixed-renamed-300.fa.gz', '../jgi-iowa/iowa-corn/3300000890.a.fna']
We WILL build the tagset (for partitioning/traversal).
making hashtable
consuming input ../iowa-corn-feb2012-fixed-renamed-300.fa.gz
consuming input ../jgi-iowa/iowa-corn/3300000890.a.fna
terminate called after throwing an instance of 'khmer::TooManyThreads'
what(): std::exception

Any ideas?

Merge Hashtable-related Scripts

There is a tremendous amount of copypasta involved in the various scripts using the Bloom filter. (Issue #21 was just the tip of the iceberg.) Unfortunately, the various strains have evolved upon divergent paths. For example, today I noticed that 'load-into-counting' insists upon a false positive rate of 0.2 or less and 'load-graph' insists upon a false positive rate of 0.15 or less. Unless I am missing something, it seems that you want both of these to be consistent and this is a code maintenance issue.

By an eyeball estimate, I would say that 90% of the DN^H^Hcode is the same in the two scripts mentioned above. I would rather merge them (and possibly some others) and set up some symlinks. (Git groks and honors symlinks on POSIX systems.) By inspecting 'sys.argv[0]', we can determine how the script was invoked and adjust available command-line arguments and behaviors accordingly. This may be a better approach from a maintainability perspective, even if it will make the master script a bit larger than any individual script we have now.
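
A sketch of the dispatch idea, assuming the merged script is symlinked under both names; the per-script settings shown are illustrative, apart from the false positive thresholds quoted above:

import os
import sys

SCRIPT_SETTINGS = {
    'load-into-counting.py': {'max_false_positive_rate': 0.2, 'build_tagset': False},
    'load-graph.py': {'max_false_positive_rate': 0.15, 'build_tagset': True},
}

invoked_as = os.path.basename(sys.argv[0])
settings = SCRIPT_SETTINGS[invoked_as]  # adjust arguments and behavior from here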

drand/srand error in compiling test-CacheManager on Mac OS X

% make CC=gcc CXX=g++ all
cd lib &&
make CXX="g++" CXXFLAGS=" -Wall -O3 -fPIC" LIBS=" -pthread"
/Users/t/dev/khmer/lib
g++ -Wall -O3 -fPIC -c -o test-CacheManager.o test-CacheManager.cc -fopenmp
test-CacheManager.cc: In function 'int main(int, char**)':
test-CacheManager.cc:103: error: 'drand48_data' was not declared in this scope
test-CacheManager.cc:103: error: expected `;' before 'rng_state'
test-CacheManager.cc:118: error: 'rng_state' was not declared in this scope
test-CacheManager.cc:118: error: 'srand48_r' was not declared in this scope
test-CacheManager.cc:130: error: 'lrand48_r' was not declared in this scope
test-CacheManager.cc:106: warning: unused variable 'segment_cut_pos'
make[1]: *** [test-CacheManager.o] Error 1
make: *** [lib_files] Error 2

-- note, including stdlib.h didn't help. This was with gcc and g++ on Mac OS X 10.8.2.

Implement k-mer Hash Caching

ctb had voidptr work on this at one point. Based on skimming her code, I do not think that we would want to spend too much time trying to integrate her implementation into the existing code base. However, this aspect of the software's performance is very important and implementing hash caching is potentially a huge win for us. A natural point to consider implementing this is during a redesign/reimplementation of the Bloom filter.

Ideally, we should try to finish this before the POSA book is published, since we alluded to working on it.
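
Purely as a conceptual illustration of the idea (the real implementation would live in the C++ Bloom filter code, and the design there may look quite different), caching amounts to memoizing the k-mer-to-hash mapping so that repeated k-mers are not rehashed:

_hash_cache = {}

def cached_hash(kmer, hash_fn):
    """Return hash_fn(kmer), computing it at most once per distinct k-mer."""
    try:
        return _hash_cache[kmer]
    except KeyError:
        value = _hash_cache[kmer] = hash_fn(kmer)
        return value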

Sporadic Failure During Multi-threaded FASTQ Parsing

Most of the time, the part of the test suite that exercises the new multi-threaded parsing code completes successfully. However, it does occasionally fail. So, there is an unpleasant bug, dependent on thread timing, lurking in the code somewhere. Any bets on whether it is also a Heisenbug?

Enable multiple hash functions

Right now, we have only one DNA hash function, and it's a perfect hash that maps a DNA string of up to 32 bases into a 64-bit number, to which we then apply a modulus. It also maps the forward and reverse complement hashes consistently to the lower of the two, which means that AAAAA and TTTTT always hash to the same number. Changing this reverse-complement behavior requires recompilation, and in general there is no way to change the hash function being used without recompiling khmer.

We would like to be able to use Jordan's cyclic hash implementation and also enable single-stranded DNA/RNA hashing without recompilation.

One suggestion from CTB is to allow Hashtable objects to have their own hashing function. It would be interesting to explore the impact of this on the actual lib and script code. It would also be interesting to see how big a performance hit doing this incurs.
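
A conceptual Python sketch of the per-table idea (the real change would be in the C++ Hashtable classes; the function names here are made up):

def canonical_dna_hash(kmer):
    """Hash forward and reverse-complement forms to the same value."""
    complement = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
    rc = ''.join(complement[base] for base in reversed(kmer))
    return hash(min(kmer, rc))

def single_strand_hash(kmer):
    """Hash the k-mer as given, ignoring the reverse complement."""
    return hash(kmer)

class Table(object):
    def __init__(self, size, hash_fn=canonical_dna_hash):
        self.size = size
        self.hash_fn = hash_fn

    def slot(self, kmer):
        return self.hash_fn(kmer) % self.size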

Configure Setup Script for Installation

Prior to release, "python setup.py install" must work correctly. This means that decent build support needs to be in place. It also means that we must properly identify everything that needs to be installed, ensure that it is added to the packaging manifest, and ensure that it is installed into the proper locations.

Deal with hash tables being too big

Jaron pointed out that for k=20, you don't need hash tables much larger than 500 GB total (the exact number needs to account for palindromes at k=20; see the rough calculation below), and, in fact, you don't need more than one hash table because the false positive rate is zero. We should figure out how to deal with this properly -- options are

  • notify user and keep going
  • notify user and die
  • resize hash table down/modify parameters accordingly, notify user

I think I like the last the best.
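
For reference, a back-of-the-envelope version of the calculation mentioned above -- the number of canonical (strand-collapsed) k-mers, counting reverse-complement palindromes once:

def n_canonical_kmers(k):
    # reverse-complement palindromes exist only for even k
    palindromes = 4 ** (k // 2) if k % 2 == 0 else 0
    return (4 ** k + palindromes) // 2

print(n_canonical_kmers(20))        # 549756338176, about 5.5e11 distinct k-mers
print(n_canonical_kmers(20) / 1e9)  # about 550 GB at one byte per entry

which is in the same ballpark as the 500 GB figure quoted above.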

Use Read Objects instead of 'screed' Records

If we can use objects of class 'Read' instead of 'screed' records, then we can delete the dependency upon 'screed', which will make 'khmer' more readily accessible, since people will not need to install 'screed' first or add it to their 'PYTHONPATH'.

If we do not do this before putting 'khmer' up on PyPI, then we definitely need to enforce 'screed' as a prerequisite and put that package up on PyPI.

Use Regex Library in C++ Code?

Maybe we should use a regex library on the C++ side. The variant FASTQ record header formats for paired reads are one reason why we may wish to do this. Another might be if we ever need to support IUPAC characters.

Using the Boost regular expression support is probably overkill, since we would need to support Boost and currently don't. A lightweight library with full regex support would be nice. We could use something like PCRE, but might prefer a regex dialect closer to POSIX ('grep', 'sed', 'awk') or Python.

No cython build error

If you try to compile khmer without cython installed, it causes a syntax error in the generated setup.py:

CYTHON_ENABLED =
instead of
CYTHON_ENABLED = False

  • there is no error message explaining that the cause is cython being missing
  • doing a make clean + make after installing cython doesn't get rid of the generated setup.py file, so it has to be removed manually before it will compile.

Latest bleeding-edge test bug?

....................................................................................................................................................................E.E...............................................

ERROR: tests.test_read_parsers.test_read_pair_iterator_in_error_mode

Traceback (most recent call last):
File "/Users/t/dev/ipy/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/Users/t/dev/khmer/tests/test_read_parsers.py", line 189, in test_read_pair_iterator_in_error_mode
in rparser.iter_read_pairs( ReadParser.PAIR_MODE_ERROR_ON_UNPAIRED ):
AttributeError: type object 'ReadParser' has no attribute 'PAIR_MODE_ERROR_ON_UNPAIRED'

ERROR: tests.test_read_parsers.test_read_pair_iterator_in_ignore_mode

Traceback (most recent call last):
File "/Users/t/dev/ipy/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/Users/t/dev/khmer/tests/test_read_parsers.py", line 219, in test_read_pair_iterator_in_ignore_mode
in rparser.iter_read_pairs( ReadParser.PAIR_MODE_IGNORE_UNPAIRED ):
AttributeError: type object 'ReadParser' has no attribute 'PAIR_MODE_IGNORE_UNPAIRED'


Ran 214 tests in 10.756s

Casava 1.8 pair checking in scripts/normalize-by-median, scripts/abund-filter and khmer/thread-utils

Hi! The read pair checking in scripts/normalize-by-median.py and scripts/abund-filter.py, which relies on khmer/thread-utils.py, doesn't work for paired reads from Casava 1.8. I know that you can simply change the reads to old-style pair names, but that is not really a proper solution. I wanted to update the pair checking myself, but the problem is that it is done in a separate function in each file, and probably at some other locations as well. To me it would make more sense if screed provided a function to check whether a record is part of a pair, as well as functions to get a proper fasta/fastq representation of the read depending on whether the record is fasta or fastq, since this comes up in so many places in the scripts. I would suggest a function that takes a record, parses the name and annotations of the read, and reports the sequencing technology and read type in a way that is easily extensible.
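
As a hedged sketch of the kind of shared helper being suggested (the field layouts below are the commonly documented ones for old-style and Casava 1.8 headers, not taken from khmer's or screed's code):

import re

OLD_STYLE = re.compile(r'^(?P<base>\S+)/(?P<num>[12])$')
CASAVA_18 = re.compile(r'^(?P<base>\S+)\s+(?P<num>[12]):[YN]:\d+:\S*$')

def read_pair_info(name):
    """Return (base_name, read_number), or (name, None) if unrecognized."""
    for pattern in (OLD_STYLE, CASAVA_18):
        match = pattern.match(name)
        if match:
            return match.group('base'), int(match.group('num'))
    return name, None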

PEP8 Compliance

If PEP 8 is part of our Python coding standards, then someone might wish to address the following:

scripts/diginorm-correct.py:11:11: E401 multiple imports on one line
scripts/diginorm-correct.py:16:25: E225 missing whitespace around operator
scripts/diginorm-correct.py:18:1: E302 expected 2 blank lines, found 1
scripts/diginorm-correct.py:33:80: E501 line too long (129 > 79 characters)
scripts/diginorm-correct.py:38:80: E501 line too long (80 > 79 characters)
scripts/diginorm-correct.py:40:80: E501 line too long (129 > 79 characters)
scripts/diginorm-correct.py:43:6: E225 missing whitespace around operator
scripts/diginorm-correct.py:44:12: E225 missing whitespace around operator
scripts/diginorm-correct.py:45:9: E225 missing whitespace around operator
scripts/diginorm-correct.py:46:21: E225 missing whitespace around operator
scripts/diginorm-correct.py:59:5: E303 too many blank lines (2)
scripts/diginorm-correct.py:106:51: E502 the backslash is redundant between brackets
scripts/ec.py:12:2: E225 missing whitespace around operator
scripts/ec.py:12:5: E261 at least two spaces before inline comment
scripts/ec.py:25:4: E111 indentation is not a multiple of four
scripts/ec.py:26:7: E111 indentation is not a multiple of four
scripts/ec.py:28:4: E111 indentation is not a multiple of four
scripts/ec.py:29:4: E111 indentation is not a multiple of four
scripts/ec.py:31:4: E111 indentation is not a multiple of four
scripts/ec.py:33:4: E111 indentation is not a multiple of four
scripts/ec.py:35:4: E111 indentation is not a multiple of four
scripts/ec.py:36:7: E111 indentation is not a multiple of four
scripts/ec.py:37:7: E111 indentation is not a multiple of four
scripts/ec.py:38:7: E111 indentation is not a multiple of four
scripts/ec.py:39:4: E111 indentation is not a multiple of four
scripts/ec.py:40:7: E111 indentation is not a multiple of four
scripts/ec.py:41:7: E111 indentation is not a multiple of four
scripts/load-graph.py:59:1: W293 blank line contains whitespace

digiNorm output format control

Hey, Eric:
normalize-by-median can handle both fasta and fastq, but it only outputs the same format as the input.

I have a situation where the input is fastq and I want the output to be fasta. Adding an argument for the output format would be nice.

jiarong

PEP8 Compliance

Not all of the Python code is PEP8-compliant. (This is probably mostly my "fault", since my natural coding style is not PEP8-compliant. Yes, I've been developing in Python for a number of years now and, yes, I've been aware of PEP8 for most of those years. My more readable coding style predates my use of Python. If the BDFL really wants everyone to comply with PEP8, then he should cause his Python parser to enforce it.)

To "fix" this "problem" and be good conformists, a tool such as autopep8 or pep8ify should be used on the Python code.

Automagically figure out shared library extension

Right now the only part of the Makefile that needs to be changed to compile khmer on Macs is to set DYLIB_EXT=.dylib instead of .so. This should be done automatically. A cheap hack to do this is to look at the extension of /usr/lib/libc.{so,dylib}; is there a better way?
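
One possible alternative is a tiny Python helper that the Makefile shells out to; 'Darwin' is the value platform.system() reports on Macs:

import platform

def dylib_ext():
    """Return the shared library suffix for the current platform."""
    return '.dylib' if platform.system() == 'Darwin' else '.so'

print(dylib_ext())

The Makefile could then set DYLIB_EXT from the output of this helper instead of requiring a manual edit.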

Coalesce on a Single Build System

Currently, building 'khmer' involves at least 3 different build systems: a custom makefile-driven one, an autotoolized makefile-driven one (for zlib), another custom makefile-driven one (for bzip2 - on 'bleeding-edge' branch only), and a Python 'setuptools' one. Working with the various build systems across platforms has created some problems and is rather kludgy even on the same platform.

If we intend to release 'khmer' as Python software, then we should rally around 'setup.py' and drive the other builds via it. This appears to be possible via the 'distutils.ccompiler' API or by writing a custom tool subclass for 'distutils'. (There are other options, such as requiring and using Bento.) This undertaking is not likely to be as large as it may seem, because some of the files in the compression libraries do not need to be compiled and the shared libraries themselves do not need to be produced. (Various sources need to be compiled as position-independent code to be used in the shared library for the 'khmer' C++ back end.) Of course, the makefile syntax naturally lends itself to dependency graphs and 'make' can be operated in parallel (if the dependencies are properly defined). So, there is some tradeoff here. But, a purely-Python build system will likely be more portable (we might be able to seriously entertain the Windows platform, for example) and will allow an end user to simply do "python setup.py install", which is as it should be.
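
A rough sketch of what folding everything into the setuptools build might look like, assuming the bundled compression sources live under lib/ as in the source tree; the module name and file lists are illustrative, not exhaustive:

import glob
from setuptools import setup, Extension

khmer_ext = Extension(
    '_khmermodule',
    sources=(glob.glob('lib/*.cc')
             + glob.glob('lib/zlib/*.c')      # compiled as position-independent code by distutils
             + glob.glob('lib/bzip2/*.c')),
    include_dirs=['lib', 'lib/zlib', 'lib/bzip2'],
    language='c++',
)

setup(name='khmer', ext_modules=[khmer_ext])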

Refactor Hash Size Warning Logic into Common Module

While manually breaking up lines which were too long, I encountered the same chunk of code copied and pasted over and over again in various scripts in scripts/ and sandbox/. We definitely want to place this in a common module for maintainability and reusability reasons.
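
A sketch of the shared helper this refactoring could produce; the message text and the threshold below are placeholders rather than the exact logic currently copied between the scripts:

import sys

def check_space(n_hashes, min_hashsize, force=False):
    """Warn, or exit, when the requested hash tables look too large."""
    required = n_hashes * float(min_hashsize)
    sys.stderr.write("Estimated memory usage is %.3g bytes "
                     "(n_hashes x min_hashsize)\n" % required)
    if required > 1e11 and not force:  # placeholder threshold
        sys.exit("Requested hash tables look too large; aborting.")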
