barretenberg's Introduction

Barretenberg, an optimized elliptic curve library for the bn128 curve, and PLONK SNARK prover

Barretenberg aims to be a stand-alone and well-specified library, but please see https://github.com/AztecProtocol/aztec-packages/tree/master/barretenberg for the authoritative source of this code. The separate repository https://github.com/AztecProtocol/barretenberg is available for working on barretenberg independently of Aztec; however, developing in the context of Aztec is encouraged, so you can see whether a change will cause issues for the Aztec end-to-end tests. Currently, merging only occurs in aztec-packages: this is a mirror-only repository until it matures, and legacy release merges need an admin. As the spec solidifies, this should become less of an issue. Aztec and Barretenberg are currently under heavy development.

This code is highly experimental, use at your own risk!

Benchmarks!

Each table shows the time in ms to build the circuit and proof for each test; the numeric column headers give the number of threads. Proving key construction is not included.

x86_64

+--------------------------+------------+---------------+-----------+-----------+-----------+-----------+-----------+
| Test                     | Gate Count | Subgroup Size |         1 |         4 |        16 |        32 |        64 |
+--------------------------+------------+---------------+-----------+-----------+-----------+-----------+-----------+
| sha256                   | 38799      | 65536         |      5947 |      1653 |       729 |       476 |       388 |
| ecdsa_secp256k1          | 41049      | 65536         |      6005 |      2060 |       963 |       693 |       583 |
| ecdsa_secp256r1          | 67331      | 131072        |     12186 |      3807 |      1612 |      1351 |      1137 |
| schnorr                  | 33740      | 65536         |      5817 |      1696 |       688 |       532 |       432 |
| double_verify_proof      | 505513     | 524288        |     47841 |     15824 |      7970 |      6784 |      6082 |
+--------------------------+------------+---------------+-----------+-----------+-----------+-----------+-----------+

WASM

+--------------------------+------------+---------------+-----------+-----------+-----------+-----------+-----------+
| Test                     | Gate Count | Subgroup Size |         1 |         4 |        16 |        32 |        64 |
+--------------------------+------------+---------------+-----------+-----------+-----------+-----------+-----------+
| sha256                   | 38799      | 65536         |     18764 |      5116 |      1854 |      1524 |      1635 |
| ecdsa_secp256k1          | 41049      | 65536         |     19129 |      5595 |      2255 |      2097 |      2166 |
| ecdsa_secp256r1          | 67331      | 131072        |     38815 |     11257 |      4744 |      3633 |      3702 |
| schnorr                  | 33740      | 65536         |     18649 |      5244 |      2019 |      1498 |      1702 |
| double_verify_proof      | 505513     | 524288        |    149652 |     45702 |     20811 |     16979 |     15679 |
+--------------------------+------------+---------------+-----------+-----------+-----------+-----------+-----------+

Dependencies

  • cmake >= 3.24
  • Ninja (used by the presets as the default generator)
  • clang >= 16 or gcc >= 10
  • clang-format
  • libstdc++ >= 12
  • libomp (if multithreading is required; multithreading can be disabled with the CMake flag -DMULTITHREADING=OFF, as shown below)
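
For example, a minimal configure-and-build with multithreading disabled (a sketch, assuming the default preset used elsewhere in this README):

cmake --preset default -DMULTITHREADING=OFF
cmake --build --preset default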

Ubuntu

To install on Ubuntu, run:

sudo apt-get install cmake clang clang-format ninja-build libstdc++-12-dev

The default cmake version on Ubuntu 22.04 is 3.22.1, so it must be updated. You can get the latest version from the Kitware APT repository or from cmake.org.
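
One way to do this, sketched here assuming Ubuntu 22.04 (jammy) and Kitware's APT repository:

wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /usr/share/keyrings/kitware-archive-keyring.gpg >/dev/null
echo 'deb [signed-by=/usr/share/keyrings/kitware-archive-keyring.gpg] https://apt.kitware.com/ubuntu/ jammy main' | sudo tee /etc/apt/sources.list.d/kitware.list >/dev/null
sudo apt-get update && sudo apt-get install cmake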

MacOS

On macOS Sonoma 14.2.1, the following steps are necessary:

  • update bash with brew install bash
  • update cmake
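
A minimal sketch, assuming Homebrew is installed (use brew upgrade instead if the packages are already present):

brew install bash cmake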

Installing openMP (Linux)

Install from source:

git clone -b release/10.x --depth 1 https://github.com/llvm/llvm-project.git \
  && cd llvm-project && mkdir build-openmp && cd build-openmp \
  && cmake ../openmp -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DLIBOMP_ENABLE_SHARED=OFF \
  && cmake --build . --parallel \
  && cmake --build . --parallel --target install \
  && cd ../.. && rm -rf llvm-project

Or install from a package manager, on Ubuntu:

sudo apt-get install libomp-dev

Note: on a fresh Ubuntu Kinetic installation, installing OpenMP from source yields a Could NOT find OpenMP_C (missing: OpenMP_omp_LIBRARY) (found version "5.0") error when trying to build Barretenberg. Installing from apt worked fine.

Getting started

Run the bootstrap script. (The bootstrap script will build both the native and wasm versions of barretenberg.)

cd cpp
./bootstrap.sh

Installing

After the project has been built, such as with ./bootstrap.sh, you can install the library on your system:

cmake --install build
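
To install to a custom location, cmake's standard --prefix option can be used, for example:

cmake --install build --prefix /usr/local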

Formatting

Code is formatted using clang-format and the ./cpp/format.sh script, which is called via a git pre-commit hook. If you've installed the C++ VS Code extension, you should configure it to format on save.
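
The script can also be run by hand; a sketch, assuming format.sh formats the working tree when given no arguments:

cd cpp
./format.sh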

Testing

Each module has its own tests. e.g. To build and run ecc tests:

# Replace the `default` preset with whichever preset you want to use
cmake --build --preset default --target ecc_tests
cd build
./bin/ecc_tests

A shorthand for the above is:

# Replace the `default` preset with whichever preset you want to use
cmake --build --preset default --target run_ecc_tests

Running the entire suite of tests using ctest:

cmake --build --preset default --target test

You can run specific tests, e.g.

./bin/ecc_tests --gtest_filter=scalar_multiplication.*
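
To discover which test names are available for filtering, gtest's standard listing flag can be used:

./bin/ecc_tests --gtest_list_tests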

Benchmarks

Some modules have benchmarks. The build targets are named <module_name>_bench. To build and run, for example, the ecc benchmarks:

# Replace the `default` preset with whichever preset you want to use
cmake --build --preset default --target ecc_bench
cd build
./bin/ecc_bench

A shorthand for the above is:

# Replace the `default` preset with whichever preset you want to use
cmake --build --preset default --target run_ecc_bench

CMake Build Options

CMake can be passed various build options on its command line:

  • -DCMAKE_BUILD_TYPE=Debug | Release | RelWithAssert: Build types.
  • -DDISABLE_ASM=ON | OFF: Enable/disable x86 assembly.
  • -DDISABLE_ADX=ON | OFF: Enable/disable ADX assembly instructions (for older CPU support).
  • -DMULTITHREADING=ON | OFF: Enable/disable multithreading.
  • -DOMP_MULTITHREADING=ON | OFF: Enable/disable multithreading that uses OpenMP.
  • -DTESTING=ON | OFF: Enable/disable building of tests.
  • -DBENCHMARK=ON | OFF: Enable/disable building of benchmarks.
  • -DFUZZING=ON | OFF: Enable building various fuzzers.

If you are cross-compiling, you can use a preconfigured toolchain file:

  • -DCMAKE_TOOLCHAIN_FILE=<filename in ./cmake/toolchains>: Use one of the preconfigured toolchains.
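
For example, a possible configure invocation combining several of these options (a sketch only; adjust the preset and flags to your needs):

cmake --preset default -DCMAKE_BUILD_TYPE=RelWithAssert -DDISABLE_ADX=ON -DBENCHMARK=ON
cmake --build --preset default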

WASM build

To build:

cmake --preset wasm
cmake --build --preset wasm --target barretenberg.wasm

The resulting wasm binary will be at ./build-wasm/bin/barretenberg.wasm.

To run the tests, you'll need to install wasmtime.

curl https://wasmtime.dev/install.sh -sSf | bash

Tests can be built and run like:

cmake --build --preset wasm --target ecc_tests
wasmtime --dir=.. ./bin/ecc_tests

To add gtest filter parameters in a wasm context:

wasmtime --dir=.. ./bin/ecc_tests run --gtest_filter=filtertext

Fuzzing build

For detailed instructions look in cpp/docs/Fuzzing.md

To build:

cmake --preset fuzzing
cmake --build --preset fuzzing

The fuzzing build turns off building tests and benchmarks, since they are incompatible with the libFuzzer interface.

To turn on the address sanitizer, add -DADDRESS_SANITIZER=ON. Note that the address sanitizer can be used to explore crashes. Sometimes you might have to specify the path to llvm-symbolizer; you do this with export ASAN_SYMBOLIZER_PATH=<PATH_TO_SYMBOLIZER>. For the undefined behaviour sanitizer, add -DUNDEFINED_BEHAVIOUR_SANITIZER=ON. Note that the fuzzer can be noticeably slower (roughly 2-3x) with ASan or UBSan on, so it is best to run a non-sanitized build first, minimize the testcase, and then run it for a while with sanitizers.
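
Putting this together, a sketch of a sanitized fuzzing build (assuming llvm-symbolizer is on your PATH):

cmake --preset fuzzing -DADDRESS_SANITIZER=ON -DUNDEFINED_BEHAVIOUR_SANITIZER=ON
cmake --build --preset fuzzing
export ASAN_SYMBOLIZER_PATH=$(which llvm-symbolizer)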

Test coverage build

To build:

cmake --preset coverage
cmake --build --preset coverage

Then run the tests (on the mainframe, always use taskset and nice to limit your influence on the server; profiling instrumentation is very heavy):

taskset 0xffffff nice -n10 make test

And generate report:

make create_full_coverage_report

The report will land in the build directory in the all_test_coverage_report directory.

Alternatively, you can build separate test binaries, e.g. honk_tests or numeric_tests, and run make test just for them, or even just for a single test. The report will then show coverage only for those binaries.

VS Code configuration

A default configuration for VS Code is provided by the file barretenberg.code-workspace. These settings can be overridden by placing configuration files in .vscode/.

Integration tests with Aztec in Monorepo

CI will automatically run integration tests against Aztec; barretenberg is located in the barretenberg folder of the monorepo.

Integration tests with Aztec in Barretenberg Standalone Repo

When working on a PR, you may want to point this file to a different Aztec branch or commit, but then it should probably be pointed back to master before merging.

Testing locally in docker

A common issue that arises is that our CI system has a different compiler version, e.g. for GCC. If you need to mimic the CI operating system locally, you can use bootstrap_docker.sh or run the dockerfiles directly. However, there is a more efficient workflow for iterative development:

cd barretenberg/cpp
./scripts/docker_interactive.sh
mv build build-native # native build folders are mounted, but will not work inside the container; move them out of the way
cmake --preset gcc && cmake --build build

This will allow you to rebuild as efficiently as if you were running native code, and not have to see a full compile cycle.

Building docs

If doxygen is installed on the system, you can use the build_docs target to build the documentation. This can be configured in the VS Code CMake extension, or run using

cmake --build . --target build_docs

in the cpp/build directory. The documentation will be generated in the cpp/docs/build folder. You can then run a python http server in that folder:

python3 -m http.server <port>

and tunnel the port through ssh.
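
For example, if the documentation server is started on the default port 8000 on a remote machine, a sketch of the tunnel (user@host is a placeholder):

ssh -L 8000:localhost:8000 user@host

The docs are then browsable locally at http://localhost:8000.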


barretenberg's Issues

Integrate transcript into Sumcheck

  • add methods for serialization/deserialization of univariate to transcript and corresponding tests
  • integrate existing transcript into Sumcheck

Get rid of linearization trick in Plonk.

Getting rid of it, and of all the extra structure that supports it, is a nice goal, but a useful first step would be to remove whatever is necessary for Honk not to have any notion of "unrolled".

New manifest paradigm for Honk

The manifests are currently a primary source of loose coupling in Barretenberg. Resolving this issue would give a more flexible transcript (e.g., simplifying the tests that currently require a manifest, sometimes a partial one) and would follow the paradigm of having a single source of information: the prover's algorithm and the verifier's algorithm will be specified in code, and a manifest recording their interactions will be documented in a test. In-order execution of the algorithms will be enforced via the properties of the hash function used to generate the challenges; out-of-order execution will lead to differing challenges.

Improve BarycentricData class

These comments have been left in Barretenberg:

/* IMPROVEMENT(Cody): This could or should be improved in various ways. In no particular order:
   1) Edge cases are not considered. One non-use case situation (I forget which) leads to a segfault.
   2) This could all be constexpr.
   3) Precomputing for all possible size pairs is probably feasible and might be a better solution than instantiating
   many instances separately. Then perhaps we could infer input type to `extend`.
   4) There should be more thorough testing of this class in isolation. */

This goes in the nice-to-have bin for now.

Remove or change <filesystem> usage in project

As noted in WebAssembly/wasi-sdk#125, the <filesystem> library cannot be used in wasm/wasi. This project seems stuck on wasi-sdk 12 because that version contained a bug that failed to disable loading of the filesystem header; if that code path is ever taken, it'd crash.

@adr1anh noted this in an internal issue, and had fixes to use mkdir instead; this seems to have been blocked as "not portable".

We should either #ifndef around all code using <filesystem> or switch to mkdir. It seems this can be made portable by using _mkdir on Windows.

plookup_auxiliary_widget.hpp: Missing description of non native field arithmetic gate

The following lines only contain a description of "Non native field arithmetic gate 2".

limb_subproduct *= LIMB_SIZE;
limb_subproduct += (w_1_omega * w_2_omega);
Field non_native_field_gate_1 = limb_subproduct;
non_native_field_gate_1 -= (w_3 + w_4);
non_native_field_gate_1 *= q_3;
Field non_native_field_gate_3 = limb_subproduct;
non_native_field_gate_3 += w_4;
non_native_field_gate_3 -= (w_3_omega + w_4_omega);
non_native_field_gate_3 *= q_m;
Field non_native_field_identity = non_native_field_gate_1 + non_native_field_gate_2 + non_native_field_gate_3;
non_native_field_identity *= q_2;
Field limb_accumulator_1 = w_2_omega;
limb_accumulator_1 *= SUBLIMB_SHIFT;
limb_accumulator_1 += w_1_omega;
limb_accumulator_1 *= SUBLIMB_SHIFT;
limb_accumulator_1 += w_3;
limb_accumulator_1 *= SUBLIMB_SHIFT;
limb_accumulator_1 += w_2;
limb_accumulator_1 *= SUBLIMB_SHIFT;
limb_accumulator_1 += w_1;
limb_accumulator_1 -= w_4;
limb_accumulator_1 *= q_4;
Field limb_accumulator_2 = w_3_omega;
limb_accumulator_2 *= SUBLIMB_SHIFT;
limb_accumulator_2 += w_2_omega;
limb_accumulator_2 *= SUBLIMB_SHIFT;
limb_accumulator_2 += w_1_omega;
limb_accumulator_2 *= SUBLIMB_SHIFT;
limb_accumulator_2 += w_4;
limb_accumulator_2 *= SUBLIMB_SHIFT;
limb_accumulator_2 += w_3;
limb_accumulator_2 -= w_4_omega;
limb_accumulator_2 *= q_m;
Field limb_accumulator_identity = limb_accumulator_1 + limb_accumulator_2;
limb_accumulator_identity *= q_3;

Deprecate `waffle`...

Maybe also move some shared stuff into bonk namespace. Maybe also give some better names to things (e.g., waffle::StandardComposer can become plonk::StandardComposer and honk::StandardHonkComposer can become honk::StandardComposer).

Formatting git hook breaks `git add -p` workflows

Since the git commit hook that formats code runs on, and commits, all files that are changed, if you try to use git add -p to stage chunks of a file, the entire file gets committed.

This is really bad for workflows that make a bunch of changes and then chop them up into digestible commits.

Consumable via pkg-config

In #185, I added minimal install targets for barretenberg; however, we should also provide a file that pkg-config can use to find out any flags, etc that are required when linking against the library (such as omp).

Make testing faster.

Fake SRS in existing Plonk tests?

Split tests into more CCI jobs?

Assess whether we really need all of the biggroup tests?

Add `pow_zeta`.

The sumcheck argument requires that the sum to be checked has summands weighted by powers of a random element $\zeta$ so that, when the sum is shown to be zero, one can conclude with high probability that each of the summands is zero. This is not implemented in the sumcheck work, at present.
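
Concretely, writing $S_i$ for the summands (notation introduced here only for illustration), the batched check has the form

$$\sum_i \zeta^i \cdot S_i = 0,$$

and since $\zeta$ is random, the batched sum vanishing implies with high probability that every individual $S_i$ vanishes.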

Integrate Gemini

After #23 is merged, this will handle the work of replacing mocked calls to the multivariate PCS by real calls.

Remove witness from PK for Plonk systems

The proving key for the Plonk based proving systems contains the witnesses which breaks the idea that a pkey should be "shareable" between provers. This work makes that possible (as is already the case for Honk).

Commitment key modifications

It would be nice if we could initialise the commitment key with file/memory reference strings. It would also help to have the option of initialising the pippenger runtime state separately, once we know the size of the circuit.

Use Fiat-Shamir in IPA

From: #31 (review): the Fiat-Shamir heuristic must be used to generate the round challenges as part of the ipa_prove and ipa_verify algorithms. The InnerProductArgument class should have a Hash template parameter that defines which hash algorithm is used.

Improve testing

This could be split into three tasks:

  1. Reinstate tests that were replaced by tests of Honk during the building of the PoC. This was done for expediency.
  2. Assess test coverage.
  3. Finally complete the overhaul of the standard library tests, most likely along the lines of #24. This would include using typed tests.

Another goal that is worth achieving is to avoid having any skipped tests. Currently we skip some tests (e.g., in biggroup) that are not intended to run, but it might be better to simply not build these tests and use "skipping" as a temporary measure only.

barretenberg doesn't work on mac m1

Hi,

Barretenberg doesn't seem to work for me at the moment on a mac m1.

I don't know much about C++, but this is possibly an issue related to tbb in the ultra composer file, as I'm running into this error

 Could not find a package configuration file provided by "TBB" with any of
  the following names:
 
    TBBConfig.cmake
    tbb-config.cmake

after running

./bootstrap.sh

After running,

brew install tbb

the error turned into this

fatal error: 'tbb/atomic.h' file not found
#include <tbb/atomic.h>

Use logarithmic derivatives?

Investigate the possibility of modifying our product equality check arguments (copy constraints and plookup) to use logarithmic derivatives, as in the Cached Quotients paper. This would allow us to remove commitments to the quotient polynomials.

Share `pippenger_runtime_state`.

From #31 (review): "The SRS object contains a single pippenger_runtime_state object that is used to perform all pippenger computations (constructing pippenger_runtime_state is expensive)."

The objective of this issue is to reduce the cost of using pippenger by using only a single pippenger_runtime_state.

Should have install targets

This project currently only builds to a "binary dir" (as cmake calls them) but it should also have install targets so you can do cmake --install . --prefix "/usr/local"

Formal Proof for UltraPlonk Pedersen Hash

A formal collision resistance proof for the UltraPlonk Pedersen hash (based on the Merkle-Damgård construction) is pending. Opening this issue so that we don't forget it.

Code:

point<C> pedersen_plookup<C>::merkle_damgard_compress(const std::vector<field_t>& inputs, const field_t& iv)

Spec: https://hackmd.io/@aztec-network/ryDVeaT6d?type=view#The-UltraPlonk-pedersen-hash

Account required should be checked when skipping signature verification in join_splits

The join_split circuit now skips signature verification for merge join_splits, since it checks that the spender and total amounts are going to remain the same and that only note aggregation is happening.

However, using the account_required flag, an attacker who controls an account key but no spending keys could still drain the notes that have account_required set to true:

  • Decode the notes using the account key
  • Do regular merge join_splits with them, with a fake signature (since it's going to be skipped), to the same owner, BUT creating the output notes with account_required = false
  • Now the attacker can spend the notes with the account key, since the output notes have account_required = false

Efficient Polynomials For Sumcheck

Motivation

We currently store too many preprocessed polynomials in the proving key, like ID_1, ID_2, ID_3, and L_FIRST and L_LAST.
All of these can be efficiently evaluated by the verifier, so they do not need to be committed. Moreover, it may be more efficient to derive their "edge extension" directly by exploiting their structure.

BN254 MSM does not crash with `(0, 0)` for Standard and Turbo Composers

In the recursive verification circuit, we aggregate the proof verification by computing $P_0, P_1$. To compute $P_0$, we perform the BN254 MSM operation on a vector of group elements and scalars inside the circuit, using the function bn254_endo_batch_mul_with_generator.

auto opening_result = g1_ct::template bn254_endo_batch_mul_with_generator(
    big_opening_elements, big_opening_scalars, opening_elements, opening_scalars, batch_opening_scalar, 128);
opening_result = opening_result + double_opening_result;
for (const auto& to_add : elements_to_add) {
    opening_result = opening_result + to_add;
}
opening_result = opening_result.normalize();

Similarly, the MSM to compute $P_1$ uses the function wnaf_batch_mul.

g1_ct rhs = g1_ct::template wnaf_batch_mul<128>(rhs_elements, rhs_scalars);
rhs = rhs + PI_Z;
rhs = (-rhs).normalize();

But the problem is: if any of the elements in rhs_elements contains $(0,0)$ as its $(x,y)$-coordinates (note that $(0,0)$ isn't a point on the curve), then the composer should catch that and throw an error. Ideally, there should be an assertion failure in the division function msub_div in the bigfield module. This doesn't happen when using the Standard or Turbo composers; it does fail at the correct assertion when using UltraPlonk, though.

I wrote a quick test to confirm this is a problem with the MSM operation in the circuit. This only tests wnaf_batch_mul function.

HEAVY_TYPED_TEST(stdlib_biggroup, wnaf_batch_mul_bug)
{
    // This test should fail for turbo and standard but it doesn't.
    if constexpr (TypeParam::use_bigfield) {
        GTEST_SKIP();
    } else {
        TestFixture::test_wnaf_batch_mul_bug();
    }
}

Questions:

  1. Is this the expected behaviour or are we missing some checks?
  2. Is this a problem because we don't allow overflows while constructing fq_ct points?

Fuzz dynamic arrays

We should extend our field fuzzer to a dynamic array fuzzer. The dynamic array implementation is complicated and must be audited thoroughly before being used in production.

Update grand product relation

Assuming we adopt the form where 'polynomials' have 0 in the first and last index, we only need one grand product relation, of the form $(Z_{\mathrm{shift},i} + L_{n-1,i}) \cdot G_i - (Z_i + L_{0,i}) \cdot F_i$.

  • Update the current grand product computation relation
  • Remove the no longer needed grand product initialization relation

Move witness from proving key to prover.

This lets the proving key be something shareable between provers (the correct abstraction). This is a step toward getting rid of the polynomial manifest in Honk.

Get rid of stdlib/types.hpp (?)

SYSTEM_COMPOSER is an Aztec Connect-specific concept. Unless someone has a good reason to keep it, it should go away and tests such as those of stdlib Schnorr should be rewritten and tested with Honk.

IPA uses SRS

From #31 (review): the functions commit, ipa_prove and ipa_verify should take in an SRS object instead of a vector of generators (and have this SRS class load the required points from disk, like our KZG SRS).

The function `assert_equal` is confusing

From @kevaundray in #171

We could use the assert_equal which this method calls; this helper function just makes it explicit that we are not actually asserting anything and instead doing a copy/replace.

It's confusing that assert_equal is just a copy (and only works in one direction). Ideally this function would be renamed for clarity.

Remove `SYSTEM_COMPOSER`

This SYSTEM_COMPOSER variable was introduced by @suyash67 to ease the process of changing the underlying proof system of the Aztec Connect rollup (pre-root verifier circuit). It seems to me this feature is now just noise, so I propose to remove it from Barretenberg. It's possible that we should reinstate it in some other form in Aztec 3, though there we will be blending various proof systems and curves, so something more robust will be needed. Thoughts @suyash67 @iAmMichaelConnor?

Test coverage

  • Assess it.
  • Improve it (e.g., some stdlib tests are only being run for StandardHonkComposer now).

Benchmarking

Add basic benchmarking suite for Honk (using google benchmark).
