based's Introduction

Simple linear attention language models balance the recall-throughput tradeoff.

arXiv | GitHub | Model on HF | Dataset on HF

Based is an efficient architecture inspired by the goal of recovering attention-like capabilities (i.e., recall). It combines two simple ideas:

  1. Short sliding window attention (e.g., window size 64), to model fine-grained local dependencies
  2. "Dense" and global linear attention, to model long-range dependencies

In this way, we aim to capture the same dependencies as Transformers in a 100% subquadratic model, with exact softmax attention locally and a softmax-approximating linear attention for all other tokens.
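
For intuition, here is a minimal, self-contained sketch (in plain PyTorch, not the repository's optimized implementation) of the second idea: a causal linear attention whose feature map is the second-order Taylor expansion of exp(q·k). The helper names and tensor shapes below are illustrative assumptions; in the full architecture this global linear attention is combined with exact softmax attention over a short sliding window, as described above.

import torch

def taylor_feature(x):
    # phi(x) = [1, x, vec(x x^T)/sqrt(2)], so that
    # phi(q) . phi(k) = 1 + q.k + (q.k)^2 / 2, the 2nd-order Taylor series of exp(q.k)
    x2 = torch.einsum("...i,...j->...ij", x, x).flatten(-2) / (2 ** 0.5)
    return torch.cat([torch.ones_like(x[..., :1]), x, x2], dim=-1)

def linear_attention(q, k, v):
    # causal linear attention over (batch, seqlen, head_dim) tensors:
    # running sums of k_i v_i^T keep the cost linear in sequence length
    q, k = taylor_feature(q), taylor_feature(k)
    kv = torch.einsum("bnf,bnd->bnfd", k, v).cumsum(dim=1)
    z = k.cumsum(dim=1)
    num = torch.einsum("bnf,bnfd->bnd", q, kv)
    den = torch.einsum("bnf,bnf->bn", q, z).unsqueeze(-1).clamp(min=1e-6)
    return num / den

q, k, v = torch.randn(3, 2, 64, 16).unbind(0)  # toy shapes: batch=2, seqlen=64, dim=16
print(linear_attention(q, k, v).shape)  # torch.Size([2, 64, 16])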

We find this helps close many of the performance gaps between Transformers and recent subquadratic alternatives (matching perplexity is not all you need? [1, 2, 3]).

In this repo, please find code to (1) train new models and (2) evaluate existing checkpoints on downstream tasks.

Installation

Note. The code in this repository is tested on python=3.8.18 and torch=2.1.2. We recommend using these versions in a clean environment.

# clone the repository
git clone git@github.com:HazyResearch/based.git
cd based

# install torch
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118 # due to observed causal-conv1d dependency

# install based package
pip install -e .

# Note: the causal-conv1d interface occasionally changes (https://github.com/state-spaces/mamba/pull/168); check there if you run into an installation error.

Pretrained Checkpoints

We are releasing the following checkpoints for research, trained at the 360M and 1.3B parameter scales. Each checkpoint is trained on the same 10B to 50B tokens (specified below) of the Pile corpus, using the same data order, code, and infrastructure. A quick-start notebook is provided at notebooks/03-24-quick-start.ipynb, and further details are below:

Use the code below to load the Based checkpoints:

import torch
from transformers import AutoTokenizer
from based.models.gpt import GPTLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/based-360m").to("cuda")

| Architecture | Size | Tokens | WandB | HuggingFace | Config |
|---|---|---|---|---|---|
| Based | 360m | 10b | 02-20-based-360m | hazyresearch/based-360m | reference/based-360m.yaml |
| Based | 1.4b | 10b | 02-21-based-1b | hazyresearch/based-1b | reference/based-1b.yaml |
| Based | 1.4b | 50b | 03-31-based-1b-50b | hazyresearch/based-1b-50b | reference/based_1.3b_50b_tok.yaml |
| Attention | 360m | 10b | 02-21-attn-360m | hazyresearch/attn-360m | reference/attn-360m.yaml |
| Attention | 1.4b | 10b | 02-25-attn-1b | hazyresearch/attn-1b | reference/attn-360m.yaml |
| Mamba | 360m | 10b | 02-21-mamba-360m | hazyresearch/mamba-360m | reference/mamba-360m.yaml |
| Mamba | 1.4b | 10b | 02-22-mamba-1b | hazyresearch/mamba-1b | reference/mamba-1b.yaml |
| Mamba | 1.4b | 50b | 03-31-mamba-1b-50b | hazyresearch/mamba-1b-50b | reference/mamba-1.3b_50b_tok.yaml |

Warning. We are releasing these models for the purpose of efficient architecture research. Because they have not been instruction fine-tuned or audited, they are not intended for use in any downstream applications.

The following code will run text generation for a prompt and print out the response.

input = tokenizer.encode("If I take one more step, it will be", return_tensors="pt").to("cuda")
output = model.generate(input, max_length=20)
print(tokenizer.decode(output[0]))

Note. For the checkpoints from other models, you will need to install other dependencies and use slightly different code.

To load the Attention models, use the following code:

import torch
from transformers import AutoTokenizer
from based.models.transformer.gpt import GPTLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/attn-360m").to("cuda")

To use the Mamba checkpoints, first run pip install mamba-ssm and then use the following code:

import torch
from transformers import AutoTokenizer
from based.models.mamba import MambaLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = MambaLMHeadModel.from_pretrained_hf("hazyresearch/mamba-360m").to("cuda")

Train

In order to train a new model with our code, you'll need to do a bit of additional setup:

# install train extra dependencies
pip install -e .[train]

# install apex (if you run into issues, likely candidates are torch or pip version issues; if using torch 2.0.1, this may help https://github.com/NVIDIA/apex/issues/1735)
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
cd ..

We break this section into three parts: (1) how to set up a training config and launch training, (2) how to set up fast training kernels, and (3) how to install extra optimizations for training.

Launching Training

To train a new model, construct a config.yaml file at train/configs/experiment/. We include the configs used to produce the pretrained checkpoints for the paper (released on HF above) at train/configs/experiment/reference/.

You can launch a training job using the following command from the train/ directory, where you can modify the config name and number of GPUs (trainer.devices):

cd train/
python run.py experiment=reference/based-1b trainer.devices=8

In our paper, we evaluated on the Pile corpus, which is no longer available online, so the train/configs/experiment/reference/ configs are unfortunately not directly runnable. For convenience, we include an example config that trains on the WikiText103 language modeling dataset. You can launch it with:

cd train/
python run.py experiment=example/based-360m trainer.devices=8

You can adapt the training dataset by adding a new dataset config file under train/configs/datamodule/, following the example in wikitext103.yaml. Once you've constructed the yaml file for your new dataset, go to the experiment config (e.g., train/configs/experiment/example/based-360m.yaml) and update the datamodule name under override datamodule to the filename of your new dataset yaml.

Be sure to update the checkpointing directory in the config prior to launching training.

Fast Training

We support a few different implementations of the linear attention computation during training. The parallel_implementation setting in your training config determines which one is used:

parallel_implementation: str = "quadratic"

The default requires no extra kernels and simply retains a quadratic O(n^2) view of the linear attention during training. We currently recommend Option 2 below for drastically faster training; these implementations will eventually be replaced by our new custom kernels (from the Based paper), to be released soon. A short sketch contrasting the quadratic and linear views appears after the option list below.

  • Option 1 (parallel_implementation = "quadratic"): default, quadratic PyTorch view; no extra kernels required.
  • Option 2 (parallel_implementation = "fla_parallel"): Flash Linear Attention kernel. Install with:

    pip install triton==2.2.0
    pip install -U git+https://github.com/sustcsonglin/flash-linear-attention

  • Option 3 (parallel_implementation = "linear"): Fast Transformers linear attention kernel. Install with:

    cd train/csrc/causal_dot_prod/
    python setup.py install
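
As referenced above, here is a minimal pure-PyTorch sketch (illustrative only, not the repository's kernels) of why these options are interchangeable: for a fixed feature map, the quadratic view that materializes the full n x n matrix of feature dot products and the linear view based on causal prefix sums compute the same output; they differ only in memory and compute cost.

import torch
import torch.nn.functional as F

def quadratic_view(q, k, v):
    # materializes the full n x n matrix of feature dot products: O(n^2) in sequence length
    A = torch.einsum("bnf,bmf->bnm", q, k).tril()  # lower triangle = causal mask
    return (A @ v) / A.sum(-1, keepdim=True).clamp(min=1e-6)

def linear_view(q, k, v):
    # same result via running sums of k_i v_i^T: linear in sequence length
    kv = torch.einsum("bnf,bnd->bnfd", k, v).cumsum(dim=1)
    z = k.cumsum(dim=1)
    num = torch.einsum("bnf,bnfd->bnd", q, kv)
    den = torch.einsum("bnf,bnf->bn", q, z).unsqueeze(-1).clamp(min=1e-6)
    return num / den

# sanity check with an arbitrary positive feature map as a stand-in
q = F.elu(torch.randn(2, 32, 8)) + 1
k = F.elu(torch.randn(2, 32, 8)) + 1
v = torch.randn(2, 32, 8)
print(torch.allclose(quadratic_view(q, k, v), linear_view(q, k, v), atol=1e-5))  # expected: True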

We provide benchmarking plots for the different kernels in the benchmark/examples/linear_attention_forward/ folder, along with WandB training curves showing that training in fla_parallel mode allows Based to train quickly with strong quality.

Additional notes:

  • Kernels for other fused operations: The config defaults use fused kernels from the Flash Attention repo -- in particular the fused_dense_lib, layer_norm, rotary, and xentropy kernels -- which can be installed by cloning the Flash Attention repo and running python setup.py install in each kernel's directory. Alternatively, you can change the codepaths to avoid these kernels, for instance by setting fused_dense to False in the experiment config, or by replacing the RMSNorm import in based/models/gpt.py with the import from based/ops/triton/layer_norm.
  • Decay: If you want to explore the optional decay strategy discussed in the Based paper, check out the notebooks/03-31-decay.ipynb notebook.
  • References: This training code is adapted from https://github.com/Dao-AILab/flash-attention/tree/main/training, the Flash Linear Attention kernel is from https://github.com/sustcsonglin/flash-linear-attention, and the Fast Transformers kernel is from https://github.com/idiap/fast-transformers. Please cite them if you use their work!

Evaluate

In our paper, we evaluate pretrained language models on a standard suite of benchmarks from the LM Evaluation Harness, as well as a suite of three recall-intensive tasks:

  • SWDE (Info. extraction). A popular information extraction benchmark for semi-structured data. SWDE includes raw HTML documents from 8 Movie and 5 University websites (e.g., IMDB, US News) and annotations for 8-274 attributes per website (e.g., Movie runtime). HuggingFace: hazyresearch/based-swde
  • FDA (Info. extraction). A popular information extraction benchmark for unstructured data. The FDA setting contains 16 gold attributes and 100 PDF documents, which are up to 20 pages long, randomly sampled from FDA 510(k). HuggingFace: hazyresearch/based-fda
  • SQUAD-Completion (Document-QA). We find that the original SQuAD dataset is challenging for our models without instruction fine-tuning, so we introduce a modified version of SQuAD in which questions are reworded as next-token prediction tasks. For example, "What is the capital of France?" becomes "The capital of France is". HuggingFace: hazyresearch/based-squad. (A short snippet for loading these datasets follows below.)
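
For a quick look at these tasks, the snippet below loads them with the Hugging Face datasets library. This is an illustrative sketch: it only prints the available splits and columns, since the exact schema is documented on the dataset cards rather than here.

from datasets import load_dataset

# dataset names are taken from the task list above; printing each DatasetDict
# shows its splits and columns without assuming their names
for name in ["hazyresearch/based-swde", "hazyresearch/based-fda", "hazyresearch/based-squad"]:
    ds = load_dataset(name)
    print(name, ds)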

Under evaluate, we have a clone of EleutherAI's lm-evaluation-harness that includes these new tasks and provides scripts for running all the evaluations from the paper. The following instructions can be used to reproduce our results on the LM-Eval harness using the pretrained checkpoints.

Setup.

cd evaluate 

# init the submodule and install
git submodule init
git submodule update
pip install -e . 

Running Evaluations.

We provide a script, evaluate/launch.py, that launches evaluations on the checkpoints we've released.

For example, running the following command from the evaluate folder will evaluate the 360M and 1.3B Based, Mamba, and Attention models on the SWDE, FDA, and SQUAD-Completion tasks.

Make sure to set your Hugging Face cache directory to a location with sufficient space (export TRANSFORMERS_CACHE, export HF_HOME).

python launch.py \
    --task swde  --task fda --task squad_completion \
    --model "hazyresearch/based-360m" \
    --model "hazyresearch/mamba-360m" \
    --model "hazyresearch/attn-360m" \
    --model "hazyresearch/based-1b" \
    --model "hazyresearch/mamba-1b" \
    --model "hazyresearch/attn-1b"

Optionally, if you have access to multiple GPUs, you can pass the -p flag to run each evaluation on a different GPU. To run a limited number of samples for each task (e.g. 100), use the --limit=100 option.

Below we include the results produced by running the command above. Note: these results are for the new models trained and evaluated with the cleaned-up code in this repository. As a result, they differ slightly from the results reported in our paper; however, the trends and conclusions remain the same.

| Architecture | Size | HuggingFace | SWDE | FDA | SQUAD |
|---|---|---|---|---|---|
| Based | 360m | hazyresearch/based-360m | 25.65 | 14.34 | 24.23 |
| Mamba | 360m | hazyresearch/mamba-360m | 17.28 | 5.90 | 24.83 |
| Attention | 360m | hazyresearch/attn-360m | 56.26 | 57.89 | 27.85 |
| Based | 1.4b | hazyresearch/based-1b | 37.71 | 19.06 | 29.49 |
| Mamba | 1.4b | hazyresearch/mamba-1b | 28.35 | 11.07 | 29.42 |
| Attention | 1.4b | hazyresearch/attn-1b | 69.04 | 68.87 | 35.89 |

Note that the results shown may differ slightly if the Flash-Attention kernels are not used during inference.

Experiments on Synthetic Data

In our paper, we demonstrate the recall-throughput tradeoff using a synthetic associative recall task (see Figure 3 in the paper).

The code for reproducing these figures is provided in a separate repository: HazyResearch/zoology. Follow the setup instructions in the Zoology README. Instructions for reproducing the experiments are provided in zoology/experiments. For example, you can reproduce the recall-throughput figure with:

python -m zoology.launch zoology/experiments/arxiv24_based_figure2/configs.py -p

Benchmarking and Efficiency

We include the kernels evaluated in the Based paper under based/benchmarking/. We provide additional details on the CUDA releases in the README in this folder. Stay tuned!

Citation and Acknowledgements

This repo contains work based on the following papers. Please consider citing if you found the work or code useful:

# Based
@article{arora2024simple,
  title={Simple linear attention language models balance the recall-throughput tradeoff},
  author={Arora, Simran and Eyuboglu, Sabri and Zhang, Michael and Timalsina, Aman and Alberti, Silas and Zinsley, Dylan and Zou, James and Rudra, Atri and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2402.18668},
  year={2024}
}

# Hedgehog (Linear attention)
@article{zhang2024hedgehog,
  title={The Hedgehog \& the Porcupine: Expressive Linear Attentions with Softmax Mimicry},
  author={Zhang, Michael and Bhatia, Kush and Kumbong, Hermann and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2402.04347},
  year={2024}
}

# Zoology (BaseConv, Synthetics, Recall Problem)
@article{arora2023zoology,
  title={Zoology: Measuring and Improving Recall in Efficient Language Models},
  author={Arora, Simran and Eyuboglu, Sabri and Timalsina, Aman and Johnson, Isys and Poli, Michael and Zou, James and Rudra, Atri and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2312.04927},
  year={2023}
}

This project was made possible by a number of other open source projects; please cite them if you use their work! Notably, the training code and kernels build on Flash Attention, Flash Linear Attention, and Fast Transformers (linked above).

Please reach out with feedback and questions!

based's People

Contributors: simran-arora, seyuboglu, axelmagn, benjaminfspector, eltociear

based's Issues

License

Hi,
Thanks for releasing this! Would you mind adding a license to the code and larger weights?
Thanks!

How to run prefill phase of inference benchmark?

Hi, I want to test how much time the prefill phase takes.
I want to test it with the Llama2-7B model and compare Based against FlashAttention.
How can I do this?
I want to test it with different sequence lengths and numbers of batches.
It seems that I need to modify based_inference.py. However, I am not even able to install it because there is no "test_build_utils" library.

simple implementation

This is very interesting work! Sadly, right now the code is extremely complicated and bloated. Are there plans to make a nano-gpt like version or a simple jupyter notebook doing step-by-step building the model and training? This would be very useful to people who want to try your approach. The main issue right now is that extracting the model from this code is very difficult. This will likely limit the impact of this work.

Type Error in GPTLMHeadModel

I am having a go at running inference and evaluation for this model, and running into a TypeError in GPTLMHeadModel:

In [1]: import torch
   ...: from transformers import AutoTokenizer
   ...: from based.models.gpt import GPTLMHeadModel
   ...: 
   ...: tokenizer = AutoTokenizer.from_pretrained("gpt2")
   ...: model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/based-360m").to("cuda", dtype=torch.float16)
tokenizer_config.json: 100%|███████████████████████████████████████████| 26.0/26.0 [00:00<00:00, 260kB/s]
config.json: 100%|██████████████████████████████████████████████████████| 665/665 [00:00<00:00, 8.64MB/s]
vocab.json: 100%|███████████████████████████████████████████████████| 1.04M/1.04M [00:00<00:00, 12.1MB/s]
merges.txt: 100%|█████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 8.99MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████| 1.36M/1.36M [00:00<00:00, 17.8MB/s]
config.json: 100%|██████████████████████████████████████████████████| 2.86k/2.86k [00:00<00:00, 36.7MB/s]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[1], line 6
      3 from based.models.gpt import GPTLMHeadModel
      5 tokenizer = AutoTokenizer.from_pretrained("gpt2")
----> 6 model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/based-360m").to("cuda", dtype=torch.float16)

File /based/models/gpt.py:468, in GPTPreTrainedModel.from_pretrained_hf(cls, pretrained_model_name, device, **kwargs)
    466 config_data = load_config_hf(pretrained_model_name)
    467 config = GPT2Config(**config_data)
--> 468 model = cls(config, device=device, **kwargs)
    469 state_dict = load_state_dict_hf(pretrained_model_name, device=device)
    471 # remove the 'model.' prefix from the keys

File /based/models/gpt.py:741, in GPTLMHeadModel.__init__(self, config, process_group, device, dtype)
    739 super().__init__(config)
    740 self.process_group = process_group
--> 741 self.transformer = GPTModel(config, process_group=process_group, **factory_kwargs)
    742 self.tie_word_embeddings = getattr(config, "tie_word_embeddings", True)
    743 lm_head_bias = getattr(config, "lm_head_bias", False)

File /based/models/gpt.py:585, in GPTModel.__init__(self, config, process_group, device, dtype)
    569     self.embeddings = ParallelGPT2Embeddings(
    570         config.hidden_size,
    571         vocab_size,
   (...)
    575         **factory_kwargs,
    576     )
    578 # We change the order of dropout, residual and layer norm:
    579 # Instead of LN -> Attn / MLP -> Dropout -> Add, we do:
    580 # Dropout -> Add -> LN -> Attn / MLP, returning both the residual branch (output of Add) and
    581 # the main branch (output of MLP). The model definition is unchanged, but the mapping of the
    582 # nn.Dropout probabilities are changed.
    583 # This is for performance reason: we can fuse dropout + add + layer_norm.
    584 self.layers = nn.ModuleList(
--> 585     [
    586         create_block(config, layer_idx=i, process_group=process_group, **factory_kwargs)
    587         for i in range(config.num_hidden_layers)
    588     ]
    589 )
    590 self.fused_dropout_add_ln = getattr(config, "fused_dropout_add_ln", False)
    591 if self.fused_dropout_add_ln:

File /based/models/gpt.py:586, in <listcomp>(.0)
    569     self.embeddings = ParallelGPT2Embeddings(
    570         config.hidden_size,
    571         vocab_size,
   (...)
    575         **factory_kwargs,
    576     )
    578 # We change the order of dropout, residual and layer norm:
    579 # Instead of LN -> Attn / MLP -> Dropout -> Add, we do:
    580 # Dropout -> Add -> LN -> Attn / MLP, returning both the residual branch (output of Add) and
    581 # the main branch (output of MLP). The model definition is unchanged, but the mapping of the
    582 # nn.Dropout probabilities are changed.
    583 # This is for performance reason: we can fuse dropout + add + layer_norm.
    584 self.layers = nn.ModuleList(
    585     [
--> 586         create_block(config, layer_idx=i, process_group=process_group, **factory_kwargs)
    587         for i in range(config.num_hidden_layers)
    588     ]
    589 )
    590 self.fused_dropout_add_ln = getattr(config, "fused_dropout_add_ln", False)
    591 if self.fused_dropout_add_ln:

File /based/models/gpt.py:371, in create_block(config, layer_idx, process_group, device, dtype, **kwargs)
    369 mlp_cls = create_mlp_cls(config, layer_idx, process_group=process_group, **factory_kwargs)
    370 use_rms_norm = getattr(config, "rms_norm", False)
--> 371 norm_cls = partial(
    372     nn.LayerNorm if not use_rms_norm else RMSNorm,
    373     eps=config.layer_norm_epsilon,
    374     **factory_kwargs,
    375 )
    376 # TD [2022-07-30]: Force residual in fp32, seems to make fp16 training more stable
    377 residual_in_fp32 = getattr(config, "residual_in_fp32", False)

TypeError: the first argument must be callable

For reproducibility, I have been running this in a docker container:

FROM nvidia/cuda:11.8.0-devel-ubuntu22.04

RUN apt-get update && apt-get install -y \
    apt-utils \
    python3.10 \
    python3-pip \
    git \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --upgrade pip
RUN pip install \
    torch==2.1.2 \
    torchvision==0.16.2 \
    torchaudio==2.1.2 \
    --index-url https://download.pytorch.org/whl/cu118 # due to observed causal-conv1d dependency

RUN pip install \
    jupyter==1.0.0 \
    hydra-core==1.3.2

RUN pip install jupyter
COPY . .
RUN pip install .

Any idea what could be going wrong here?

Inquiry on 'params' Interpretation and Request for DNA Modeling Code and Scripts

Hello,

I would like to extend my sincere appreciation for the outstanding work you have done. While going through the paper, I came across a parameter labeled 'params' in Table 3.

Based on my understanding, these values appear to be denoted in millions (M). Could you please confirm whether this interpretation is correct?

Additionally, I am curious to know if there is any possibility of gaining access to the code and scripts used for pre-training the model on the hg38 dataset and fine-tuning it on genomic benchmarks?

Thanks.

Taylor approximation is not equal to the math definition

Thank you for sharing your code along with your paper. This makes things reproducible and is extremely appreciated.

The Taylor approximation of the exponential should be 1 + qk + (qk)^2/2.
However, in my understanding this code actually computes 1 + qk + q^2k^2/2, which is not equivalent. Am I correct? If not, could you point me to the part of the code that computes the Taylor expansion?
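
Concretely, one standard feature map that makes the inner product of features reproduce the truncated series exactly is

\phi(x) = \Big[\,1,\; x,\; \tfrac{1}{\sqrt{2}}\,\mathrm{vec}\big(x x^{\top}\big)\Big],
\qquad
\phi(q)^{\top}\phi(k) = 1 + q^{\top}k + \tfrac{1}{2}\big(q^{\top}k\big)^{2}.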

FYI: HuggingFace Transformers Request

Hi all, just a heads up: I filed an issue with huggingface/transformers requesting model support for BASED via their library.

My engagement over the past few days has been part of an exploratory analysis of BASED for my employer. I hope it isn't too much of an intrusion, and please feel free to reach out if you have any questions or would like to coordinate.

a question, thank you for your reply

Hi, thank you for your nice work. I have a question about training. If I want to train your model on the pile-uncopyrighted dataset (just uncopyrighted pile), how should I prepare or pre-process the dataset?

Upstreaming SWDE, FDA, and Squad-completion to Eval Harness

Hi!

Congrats on the really great work. I'll definitely be trying Based out and referencing your work here in future :)

Was really happy to see you found the Eval Harness useful! I wanted to see if you were interested in or needed any help upstreaming the custom evals you created to the main harness--it'd be great to have these more easily reproducible so future work can compare to the evaluations you report! I'd be happy to help on this front.
