molssi / cookiecutter-cms
Python-centric Cookiecutter for Molecular Computational Chemistry Packages
License: MIT License
I have been testing Azure Pipelines while trying to build openmm in conda-forge, and I have really liked the UI and the added consistency from having all OSes run on the same platform. Would you consider migrating from Travis/AppVeyor to a unified Azure CI?
Suggestion by @bas-rustenburg
the package and first submodule would have the same name. I wonder if that could be a confusing start of a package.
Also a suggestion from @jchodera
I'd choose whimsical package and submodule names
Personally: I'd like to avoid just using whimsical names; the whole point of the cookiecutter is to have it fill in things. I could add another option on init to set the first module name, defaulting to the package name.
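For illustration, a hypothetical `first_module_name` option in `cookiecutter.json` could default to the package name (the key names here are assumptions, not the template's actual schema):

```json
{
  "project_name": "ProjectName",
  "repo_name": "{{ cookiecutter.project_name.lower().replace(' ', '') }}",
  "first_module_name": "{{ cookiecutter.repo_name }}"
}
```

Cookiecutter renders later defaults using earlier answers, so accepting the default preserves today's behavior while still letting users override the module name.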
In docs/conf.py, per Sphinx's documentation, this should be the project name, not the long description. Suggest changing to cookiecutter.project_name.
There are a few tools that allow for fairly nice automatic versioning. Something to think about but certainly not necessary.
https://docs.openstack.org/pbr/latest/user/features.html#version
And I think ParmEd uses https://github.com/warner/python-versioneer
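As a rough illustration of what these tools compute, here is a hedged sketch (not versioneer's actual code) of turning `git describe --tags` output into a PEP 440-style version string:

```python
def describe_to_version(describe: str) -> str:
    """Convert `git describe --tags` output into a PEP 440-style version.

    Illustrative only -- versioneer's real logic handles many more cases
    (dirty trees, tags containing dashes, missing tags, etc.).
    """
    if "-" not in describe:
        # Exact tag, e.g. "v1.2.0" -> "1.2.0"
        return describe.lstrip("v")
    # "v1.2.0-3-gabc1234" -> tag, commits-since-tag, abbreviated SHA
    tag, distance, sha = describe.rsplit("-", 2)
    return "%s+%s.%s" % (tag.lstrip("v"), distance, sha)

print(describe_to_version("v1.2.0"))             # 1.2.0
print(describe_to_version("v1.2.0-3-gabc1234"))  # 1.2.0+3.gabc1234
```

The appeal of these tools is that the version lives only in git tags, never hand-edited in source.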
Hi there,
Just in the last few weeks, using cookiecutter with the MolSSI template has started failing for me when I decline Windows continuous integration (the last step). The specific error is below:
Traceback (most recent call last):
File "/var/folders/2n/xtzsyspd32v6vglg_pd5gmw80000gn/T/tmpvfncakrz.py", line 52, in <module>
remove_windows_ci()
File "/var/folders/2n/xtzsyspd32v6vglg_pd5gmw80000gn/T/tmpvfncakrz.py", line 49, in remove_windows_ci
os.remove(os.path.join("devtools", "conda-recipe", "bld.bat"))
FileNotFoundError: [Errno 2] No such file or directory: 'devtools/conda-recipe/bld.bat'
ERROR: Stopping generation because post_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
I can provide more info if needed, but I believe this is reproducible. If I accept Windows continuous integration everything works fine.
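I haven't inspected the actual hook, but one way `remove_windows_ci` could guard against this is to check for the file before removing it (a sketch, assuming the hook runs from the generated project root):

```python
import os

def remove_windows_ci():
    """Remove Windows CI files, tolerating files that were never generated."""
    path = os.path.join("devtools", "conda-recipe", "bld.bat")
    if os.path.exists(path):  # guard against FileNotFoundError in the post-gen hook
        os.remove(path)
```

This makes the hook idempotent: declining Windows CI no longer aborts generation when `bld.bat` was never created in the first place.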
During the setup users are greeted with the following option:
Select dependency_source:
1 - conda-forge
2 - conda
3 - pip
Choose from 1, 2, 3 [1]:
Users may become slightly confused if they need to pull from both conda and pip, for example (or if they don't know where they need to pull from). Could this either be reworded or become a multiple-choice selection?
I see the .travis.yml
includes support for testing with python 2.7.
No new projects should include support for python 2.7, and since this cookiecutter is intended to be used for new projects, we should drop this branch.
Find a way to compile the cookiecutter for all dependency options and then test the CI builds. Not sure how to chain CI builds, but it's something to ask about.
Since the cookiecutter sets up .travis.yml and codecov, it would be cool if it added the badges to the top of README.md, which will also serve as a reminder to enable those services to get the badges to work.
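For example, the generated README could start with something like the following, where `ORG/REPO` stands in for the rendered GitHub slug:

```markdown
[![Build Status](https://travis-ci.org/ORG/REPO.svg?branch=master)](https://travis-ci.org/ORG/REPO)
[![codecov](https://codecov.io/gh/ORG/REPO/branch/master/graph/badge.svg)](https://codecov.io/gh/ORG/REPO)
```

Both services serve a placeholder image until the repo is enabled, which is exactly the reminder effect described above.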
I have found Codecov quite verbose and annoying without some modification. To automatically tone down the issues, I usually set up a .codecov.yml with the following options:
coverage:
  status:
    patch: false
    project:
      default:
        threshold: 50%
comment:
  layout: "header"
  require_changes: false
  branches: null
  behavior: default
flags: null
paths: null
Happy to discuss and/or modify them.
As the cookiecutter is continually being updated, it might make sense to start cutting releases so that projects using the cookiecutter have a clear cookiecutter version they are based on, and to think about how to provide a semi- or fully automated upgrade path to update repos that use the cookiecutter to stay current with the latest best practice.
pytest is the recommended method of invoking the tests in the pytest library; see the relevant issue. All the pytest documentation uses pytest over py.test. As a template for starting a project, I don't see backwards compatibility as a problem, although this change came into effect with pytest 3.0, released in August 2016.
Hello :)
Is there any reason the template recommends manually specifying packages and subpackages instead of using setuptools.find_packages()?
Also, some people recommend against using package_data
, suggesting include_package_data=True
and MANIFEST.in
as a cleaner replacement.
Would you consider changing the current behavior?
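To illustrate the suggestion, here is a minimal sketch of `find_packages` discovering packages from a throwaway directory tree (the `mypkg` layout is invented for the demo):

```python
# Sketch: automatic package discovery instead of a hand-maintained list.
import os
import tempfile
from setuptools import find_packages

root = tempfile.mkdtemp()
for pkg in ("mypkg", os.path.join("mypkg", "tests")):
    os.makedirs(os.path.join(root, pkg))
    # A directory counts as a package only if it has an __init__.py
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

print(find_packages(where=root))                                   # includes 'mypkg' and 'mypkg.tests'
print(find_packages(where=root, exclude=["*.tests", "*.tests.*"]))  # ['mypkg']
```

In a real setup.py one would pass `packages=find_packages()` plus `include_package_data=True`, with data files listed in MANIFEST.in, as the cleaner-replacement suggestion above describes.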
I think this is a generally useful cookiecutter and should be moved to a more public home (e.g. MolSSI)
Currently in Travis we set:
- os: linux
  python: 3.7
  env: PYTHON_VER=3.7
This will both build a conda env and use a system 3.7 Python. I believe there is no need to set the Python version; we may want to set the name= variable instead so that the display is correct.
We might also want to unpin the xenial container.
create_conda_env.py
fails for an appveyor build on python 3.7.
https://ci.appveyor.com/project/Olllom/pyworkdir/builds/26263346/job/iw36fc15qq5kh5dt
Python 3.6 build is OK
https://ci.appveyor.com/project/Olllom/pyworkdir/builds/26263346/job/45wft6qedcwoao05
The issue is known
conda/conda-build#3220
and can be fixed easily
https://github.com/matplotlib/matplotlib/pull/14649/files
The AppVeyor error message is:
python devtools\\scripts\\create_conda_env.py -n=test -p=%PYTHON_VERSION% devtools\\conda-envs\\test_env.yaml
Traceback (most recent call last):
  File "C:\Miniconda37-x64\Scripts\conda-env-script.py", line 5, in <module>
    from conda_env.cli.main import main
  File "C:\Miniconda37-x64\lib\site-packages\conda_env\cli\main.py", line 39, in <module>
    from . import main_create
  File "C:\Miniconda37-x64\lib\site-packages\conda_env\cli\main_create.py", line 12, in <module>
    from conda.cli import install as cli_install
  File "C:\Miniconda37-x64\lib\site-packages\conda\cli\install.py", line 19, in <module>
    from ..core.index import calculate_channel_urls, get_index
  File "C:\Miniconda37-x64\lib\site-packages\conda\core\index.py", line 9, in <module>
    from .package_cache_data import PackageCacheData
  File "C:\Miniconda37-x64\lib\site-packages\conda\core\package_cache_data.py", line 15, in <module>
    from conda_package_handling.api import InvalidArchiveError
  File "C:\Miniconda37-x64\lib\site-packages\conda_package_handling\api.py", line 3, in <module>
    from libarchive.exception import ArchiveError as _LibarchiveArchiveError
  File "C:\Miniconda37-x64\lib\site-packages\libarchive\__init__.py", line 1, in <module>
    from .entry import ArchiveEntry
  File "C:\Miniconda37-x64\lib\site-packages\libarchive\entry.py", line 6, in <module>
    from . import ffi
  File "C:\Miniconda37-x64\lib\site-packages\libarchive\ffi.py", line 27, in <module>
    libarchive = ctypes.cdll.LoadLibrary(libarchive_path)
  File "C:\Miniconda37-x64\lib\ctypes\__init__.py", line 434, in LoadLibrary
    return self._dlltype(name)
  File "C:\Miniconda37-x64\lib\ctypes\__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
TypeError: LoadLibrary() argument 1 must be str, not None
CONDA ENV NAME test
PYTHON VERSION 3.7
CONDA FILE NAME devtools\\conda-envs\\test_env.yaml
CONDA PATH C:\Miniconda37-x64\Scripts\conda.EXE
activate test
Could not find conda environment: test
Example of cookiecutter-derived package travis build failing: https://travis-ci.org/MSchauperl/resppol/jobs/477670412
Official announcement seems to say: "Move to xenial dist" https://travis-ci.community/t/unable-to-download-python-3-7-archive-on-travis-ci/639/2?u=kacperduras
Same issue was found and solved here: https://github.com/mediascopegroup/light-rest-client/issues/2
I was able to implement the above solution for openforcefield by adding the sudo: required and dist: xenial keywords. openforcefield/openff-toolkit@47f74e8#diff-354f30a63fb0907d4ad57269548329e3
For example, here, PYTHON_VER was set to 3.6, but the resulting environment had Python 3.7. I don't understand the internals of create_conda_env.py too well, but if this behavior is inevitable, we could add a fatal check afterwards to ensure that Python is the version we requested. Alternatively, we could add a disclaimer to the create_conda_env.py docs that you're not guaranteed the Python version you requested.
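The fatal check could be as simple as comparing `sys.version_info` against the requested MAJOR.MINOR string; a sketch (the function name is mine, not from `create_conda_env.py`):

```python
import sys

def check_python_version(requested: str) -> None:
    """Fail fast if the running interpreter's MAJOR.MINOR doesn't match `requested`."""
    actual = "%d.%d" % sys.version_info[:2]
    if actual != requested:
        raise RuntimeError(
            "Requested Python %s but the environment resolved to %s" % (requested, actual)
        )
```

Run inside the freshly created environment (e.g. right after activation in the CI script), this would have turned the silent 3.6-vs-3.7 mismatch into an immediate, readable failure.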
Repost of https://github.com/openforcefield/openforcefield/pull/432/files#r331088966
I haven't verified this directly (by making a new repo using the cookiecutter), but will update this Issue when I do so.
Basically, I removed our old coverage version pin, and codecov dropped by 30%. This was due to the code lines in the tests themselves becoming a part of the denominator for our coverage % (but oddly, not the numerator).
Per the docs here, this can be avoided using the [omit] keyword in a config file. This is present both in the OFFTK repo and in the cookiecutter, in setup.cfg. However, our pytest commands (and the cookiecutter's) don't specifically point to this file (using the --cov-config command-line argument).
I suspect that new versions of coverage or pytest-cov have changed the way that config files are found, such that setup.cfg is no longer found by default. Adding --cov-config=setup.cfg to the list of pytest args should fix that, if it's also an issue in the cookiecutter.
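For reference, the relevant `setup.cfg` fragment would look roughly like this (the omit patterns are examples, not the cookiecutter's actual list), paired with an invocation like `pytest --cov=mypackage --cov-config=setup.cfg`:

```ini
# coverage.py reads its config from setup.cfg under "coverage:"-prefixed sections
[coverage:run]
omit =
    */tests/*
    versioneer.py
    */_version.py
```

Omitting the test files keeps test code out of the coverage denominator, which is exactly the 30% drop described above.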
There is some confusion about whether versioneer automatically versions based off releases; this can be fixed with a small descriptive paragraph.
While going through the setup with cookiecutter-cms, it gives me the option to name my project with dashes (e.g. new-project), but names with dashes are invalid as Python module names. Other users may run into this issue in the future.
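A pre-generation hook can catch this early with a standard-library check; a minimal sketch:

```python
import keyword

def is_valid_module_name(name: str) -> bool:
    """True if `name` can be used as an importable Python module name."""
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_module_name("new_project"))  # True
print(is_valid_module_name("new-project"))  # False
```

Failing fast in the pre_gen hook with a clear message ("use underscores, not dashes") would save users from discovering this only at import time.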
We've been converting some of our legacy projects over to the CMS cookiecutter, but I've noticed that the new travis scheme can make debugging harder because the devtools/scripts/create_conda_env.py
mechanism does not emit information on which channels each package is installed from. When tracking down issues where pulling a package from a different channel causes failures, this can be immensely valuable.
What would you think about one of these options?
- Have devtools/scripts/create_conda_env.py generate a list of which channels each package version is coming from (the same way conda install <packagename> does), or
- Add a conda list step to the .travis.yml?

I wonder if there might be some way to help automate the update process for ensuring that repos created with the cookiecutter remain up to date with the latest best practices. Right now, the process is manual and somewhat tedious, with the potential for missing updates to some files.
Any thoughts on how we might be able to help make this process simpler, or automate the update step?
We don't need to create devtools/conda-recipe/bld.bat
if windows support is not selected.
Since we provide an example {{cookiecutter.repo_name}}/data/
directory with example contents, we should also have setup.py
include this in the package when installed.
Currently, the LICENSE includes a fixed set of author names for copyright. This can be auto-populated via cookiecutter.
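For example, the copyright line of the LICENSE template could carry a Jinja placeholder instead of a fixed author list (a sketch; the exact variable name depends on what `cookiecutter.json` defines):

```jinja
Copyright (c) {{ cookiecutter.author_name }}
```

Cookiecutter renders every file in the template through Jinja, so this fills in automatically at generation time.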
@dgasmith and I encountered this error with students at the summer school. The readme in their directory was accidentally deleted, leading to failures in building. It's very hard for students to troubleshoot.
I'm migrating one of my repositories to this cookie cutter and I've run into an error that has me stymied. I've been testing the cookiecutter
branch of my repository on a few systems before merging into master; they all behave as expected, except in one case: on the head node of a large cluster. My general strategy is to clone my repository, build the conda
environment, activate it, run pip install -e .
, and then run the test suite, just like what happens on Travis. (I have made one change to the default conda
environment YAML, which is to specify python=3.6
for pytraj
.) When I do this on TSCC/SDSC, I have no problems building the environment or installing my module, but my tests fail with libstdc++.so.6
problems (see below). Does this suggest that something went wrong building the dynamically linked (?) C code in scipy
and mdtraj
? I don't modify any paths in my .bashrc
. Any suggestions for how to debug things? I can manually import pymbar
and mdtraj
despite these errors.
(base) [davids4@tscc-login1 pAPRika]$ conda env create -f devtools/conda-envs/test_env.yaml
[...]
#
# To activate this environment, use
#
# $ conda activate paprika-debug-tleap-dummy
#
(base) [davids4@tscc-login1 pAPRika]$ conda activate paprika-debug-tleap-dummy
(paprika-debug-tleap-dummy) [davids4@tscc-login1 pAPRika]$ pip install -e .
Obtaining file:///home/davids4/paprika-debug-tleap-dummy/pAPRika
Installing collected packages: paprika
Running setup.py develop for paprika
Successfully installed paprika
(paprika-debug-tleap-dummy) [davids4@tscc-login1 pAPRika]$ pytest -v paprika/tests/
=========================================================== test session starts ============================================================
platform linux -- Python 3.6.7, pytest-4.2.1, py-1.7.0, pluggy-0.8.1 -- /home/davids4/anaconda3/envs/paprika-debug-tleap-dummy/bin/python
cachedir: .pytest_cache
rootdir: /home/davids4/paprika-debug-tleap-dummy/pAPRika, inifile:
plugins: cov-2.6.1
collected 18 items / 2 errors / 16 selected
================================================================== ERRORS ==================================================================
_____________________________________________ ERROR collecting paprika/tests/test_analysis.py ______________________________________________
ImportError while importing test module '/home/davids4/paprika-debug-tleap-dummy/pAPRika/paprika/tests/test_analysis.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
paprika/tests/test_analysis.py:7: in <module>
from paprika import analysis
paprika/analysis.py:6: in <module>
import pymbar
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/pymbar/__init__.py:31: in <module>
from pymbar import timeseries, testsystems, confidenceintervals, version
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/pymbar/confidenceintervals.py:25: in <module>
import scipy.stats
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/__init__.py:367: in <module>
from .stats import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/stats.py:173: in <module>
from . import distributions
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/distributions.py:10: in <module>
from ._distn_infrastructure import (entropy, rv_discrete, rv_continuous,
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:16: in <module>
from scipy.misc import doccer
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/misc/__init__.py:68: in <module>
from scipy.interpolate._pade import pade as _pade
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/interpolate/__init__.py:175: in <module>
from .interpolate import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/interpolate/interpolate.py:32: in <module>
from .interpnd import _ndim_coords_from_arrays
interpnd.pyx:1: in init scipy.interpolate.interpnd
???
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/spatial/__init__.py:98: in <module>
from .kdtree import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/spatial/kdtree.py:8: in <module>
import scipy.sparse
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/sparse/__init__.py:231: in <module>
from .csr import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/sparse/csr.py:15: in <module>
from ._sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \
E ImportError: /opt/gnu/gcc/lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/davids4/anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/sparse/_sparsetools.cpython-36m-x86_64-linux-gnu.so)
______________________________________________ ERROR collecting paprika/tests/test_openmm.py _______________________________________________
ImportError while importing test module '/home/davids4/paprika-debug-tleap-dummy/pAPRika/paprika/tests/test_openmm.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
paprika/tests/test_openmm.py:7: in <module>
from paprika.openmm_simulate import *
paprika/openmm_simulate.py:9: in <module>
from mdtraj.reporters import NetCDFReporter
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/mdtraj/__init__.py:29: in <module>
from .formats.registry import FormatRegistry
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/mdtraj/formats/__init__.py:15: in <module>
from .dtr import DTRTrajectoryFile
E ImportError: /opt/gnu/gcc/lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/davids4/anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/mdtraj/formats/dtr.cpython-36m-x86_64-linux-gnu.so)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
========================================================= 2 error in 4.71 seconds ==========================================================
I'm trying to make a new repo where the repo name has hyphens, but not the module name. This is raising the following error:
(openforcefield) jwagner@MBP-S$ cookiecutter gh:molssi/cookiecutter-cms
You've downloaded /Users/jwagner/.cookiecutters/cookiecutter-cms before. Is it okay to delete and re-download it? [yes]:
project_name [ProjectName]: NistDataSelection
repo_name [nistdataselection]: nist-data-selection
first_module_name [nist-data-selection]: NistDataSelection
author_name [Your name (or your organization/company/team)]: Open Force Field Consortium
author_email [Your email (or your organization/company/team)]: [email protected]
description [A short description of the project.]: Records the tools and decisions used to select NIST data for curation.
Select open_source_license:
1 - MIT
2 - BSD-3-Clause
3 - LGPLv3
4 - Not Open Source
Choose from 1, 2, 3, 4 (1, 2, 3, 4) [1]:
Select dependency_source:
1 - Prefer conda-forge over the default anaconda channel with pip fallback
2 - Prefer default anaconda channel with pip fallback
3 - Dependencies from pip only (no conda)
Choose from 1, 2, 3 (1, 2, 3) [1]:
Select Include_Windows_continuous_integration:
1 - y
2 - n
Choose from 1, 2 (1, 2) [1]:
nist-data-selection None
ERROR: "nist-data-selection" is not a valid Python module name!
ERROR: Stopping generation because pre_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
As the title says
#56 introduced changes to the Cookiecutter to move away from having the repo handle deployment tasks (i.e. conda-build). I tried to make some changes to the documentation on deployment and what we recommend, but we still need a better "here is how we recommend to deploy AND here is where you can go to get help."
What are people's opinions on adding a CLA to both the cookiecutter repo and the output project?
The primary Cookiecutter site has links to many cookiecutters, we should add this repo to that.
Related to #1 as we want the name first
Related to #3 in that the link would change if we add it before moving it.
We should stamp the output directory somewhere with the version of the CMS cookiecutter that was used to make it; either a formal version system or the git hash. This may take some additional engineering.
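A sketch of what such a stamp could look like, written by the post-generation hook (the file name and schema here are assumptions, not an agreed format):

```python
import json

def write_cookiecutter_stamp(path, version, git_sha):
    """Record which cookiecutter-cms version/commit generated this project."""
    stamp = {"cookiecutter-cms": {"version": version, "commit": git_sha}}
    with open(path, "w") as fh:
        json.dump(stamp, fh, indent=2)
```

A machine-readable stamp like this is what would make the semi-automated upgrade path discussed elsewhere in this tracker feasible: an updater could read it and compute the diff against the current template.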
Based on suggestion from @loriab
It's useful to have users enable Travis's cron job support for daily builds when initially configuring Travis, to catch upstream changes more quickly.
Just a reminder issue to double check and test the LGTM file.
We should consider adding a note to the versioneer description that it is not really updated anymore, but does still just work. Also double check the LGTM issue (#52) and Flake8 to ensure the Versioneer files are correctly ignored.
Possible alternate in the future: https://github.com/pypa/setuptools_scm/ (a la @dgasmith)
Notes on when Versioneer might fail in the future:
- Python removes %-style string formatting (e.g. "%d" % 10). No noted plans to do so at the moment.
- git changes the syntax which Versioneer reads. Very unlikely for the foreseeable future.
- setuptools changes the way it accepts additional modules. No sense of the likelihood of this.

This is something that we discuss a bit internally but could use some additional discussion. In general, conda-build is fairly complicated and errors can be quite opaque. With the rise of host, run, requires, import, build, etc., the meta.yaml files are becoming increasingly complex and not particularly sustainable for the average user, even though they make good sense for CD integrity.
In addition, there seems to be a continuous movement towards deploying with conda-forge rather than custom channels. This has the benefit that meta.yaml's can be templated off the continuously updated c-f templates, a large community of reviewers can examine the meta.yaml's before merging, and c-f's bots can update out-of-date meta.yaml's automatically. In addition, c-f's build, deployment, and auto-update-on-release technology will provide a far easier experience than someone trying to deploy on their own.
I would propose using conda install for Travis/AppVeyor instead, where we can either have users list dependencies in the travis.yaml or use environment files with a script similar to this. This has the benefit that builds are often much quicker (3-4x) due to less redundancy in the build cycles, and the overall cognitive overhead is much lower as cookiecutter users can use canonical conda commands.
Using straight conda install will increase the dependency duplication slightly across setup.py, appveyor.yaml, and travis.yaml. The environment files have the benefit of keeping duplication the same and providing developers and users a clean and reusable development/execution environment, with the downside that the complexity is slightly increased, I would think.
Thoughts?
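For concreteness, the environment-file route might look roughly like this (the package list is illustrative only, not a recommendation):

```yaml
# devtools/conda-envs/test_env.yaml -- sketch of a test environment file
name: test
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.7
  - numpy
  - pytest
  - pytest-cov
  # pip fallback for packages not available on conda
  - pip:
    - codecov
```

A single `conda env create -f devtools/conda-envs/test_env.yaml` then replaces the whole conda-build cycle in CI, and the same file doubles as a reusable development environment.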
Suggestion from @bas-rustenburg
I think in general, this is looking good. It might need some documentation for people who haven't ever used travis, appveyor, conda, pip, et cetera. Otherwise, you can install the cookie cutter, and the next question is "so now what?"
Good idea, may just need to be link-outs. And probably some guide on "what now?" which may just link to the software dev
With the recent migration from choderalab/cookiecutter-compchem to molssi/cookiecutter-cms, several links are likely to have broken and the docs will need to be updated.
I'll work through this
It is very common for people to want to use the cookie cutter to package existing code or scripts. However, the fact that it initializes a new repository becomes confusing for those who are not as well versed with git (if their project is already using version control) and causes a lot of problems through conflicting git histories, etc.
It seems to me that the Cookiecutter should be able to init
if not in a repo, but otherwise only add and commit the created files.
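A sketch of the detection step the hook would need (top-level `.git` check only; a full solution would probably shell out to `git rev-parse --is-inside-work-tree` to also catch generation inside a subdirectory of a repo):

```python
import os

def is_existing_repo(path: str) -> bool:
    """Detect whether `path` is already the top level of a git working tree."""
    return os.path.isdir(os.path.join(path, ".git"))
```

The post-gen hook could then branch: run `git init` plus the initial commit only when this returns False, and otherwise restrict itself to staging and committing the newly created files, leaving the existing history untouched.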
When I generate a new repo, it contains the following on line 3:
[comment]: <> (Badges)
I think this was meant to abuse the link
capability to hide a comment in an empty link annotation, but GitHub is not fooled by our deception.
Instead, we should use the platform independent form:
[//]: # (This may be the most platform independent comment)
@robertodr hit on a simple, yet effective way to cache conda envs. I normally push back against caching due to the chance of issues, which can be substantial and come with confusing errors, but this way we use all of conda's native caching to handle it, which should reduce the chance of problems. Setting the timeout to something reasonably short, as shown below, will help during peak access on a repository. Something that we should try out to make sure it is robust before deploying here.
before_cache:
  - conda deactivate
  - conda remove --name test --all
cache:
  timeout: 1000
  directories:
    - $HOME/miniconda
Apparently codecov is on conda-forge and I missed it. Looking at the time stamps I think I started using codecov over two years ago and never checked again!
It would be good to leave pip stubs commented out in the conda env for demonstration however.
It might be good to also add support for YAPF or similar. This can be added in a .style.yapf
file in the base folder. I typically use:
[style]
COLUMN_LIMIT = 119
INDENT_WIDTH = 4
USE_TABS = False
However, this might conflict with the usage of pyflakes.
Including a code of conduct in the cookiecutter would be a good way to introduce new developers to the community practices in open source software development.
Example: https://github.com/MolSSI/QCFractal/blob/master/CODE_OF_CONDUCT.md
Currently the Sphinx theme is Alabaster, which I have always found... difficult. Any objection to changing this to the RTD theme?
Currently, the sphinx extensions do not include numpydoc
to render numpy style docstrings, which are both human- and sphinx-readable.
What do folks think about including numpydoc
by default and encouraging users to follow the numpy docstring convention?
This cookie cutter is great for pure Python projects, but a concern we will likely run into is how to integrate hybrid Python/C/C++ projects. Perhaps not recommended for this cookie cutter (we might consider another cookie cutter), but I think this is a reasonable place to open discussions.
Personally I try to adhere to the following rules:
- ctypes and the wonderful numpy.ctypeslib.
- A setup.py that calls CMake. Example here.

We have a number of solutions depending on the requirements, and things get quite messy depending on whether CMake is in the mix or not. I have tried to follow scikit-build without too much success. Opinions and discussion are most welcome.
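As a self-contained illustration of the `ctypes` route (using the system C math library rather than any project code):

```python
import ctypes
import ctypes.util

# Load the C math library; find_library may return None on minimal systems,
# so fall back to the common Linux soname.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the return and argument types explicitly -- ctypes defaults to int,
# which would silently corrupt double results.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

The same pattern applies to a project's own shared library built by CMake, and `numpy.ctypeslib` extends it with array-aware argument declarations.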
The versioneer repository appears to be dead; however, this isn't necessarily a bad thing, since versioneer works for all use cases and examples that we can find. In addition, versioneer is static and installed with the package, so there are no dependency issues. However, this likely will not be the case forever, and watching for replacements like setuptools_scm is something that we should continue to evaluate.