llama-2-onnx's Introduction

Llama 2 Powered By ONNX

This is an optimized version of the Llama 2 model, available from Meta under the Llama Community License Agreement found on this repository. Microsoft permits you to use, modify, redistribute and create derivatives of Microsoft's contributions to the optimized version subject to the restrictions and disclaimers of warranty and liability in the Llama Community License agreement.

Before You Start

The sub-modules that contain the ONNX files in this repository are access controlled. To get access permissions to the Llama 2 model, please fill out the Llama 2 ONNX sign-up page. If approved, you will receive GitHub access within 48 hours, usually much sooner.

Cloning This Repository And The Submodules

Before you begin, ensure you have Git LFS installed. Git LFS (Large File Storage) is used to handle large files efficiently. You can find out how to install Git LFS for your operating system at https://git-lfs.com/.

Next, you can choose which version of the Llama 2 model you would like to use by selecting the appropriate submodule.

Choose from the following sub-modules:

  • 7B_FT_float16
  • 7B_FT_float32
  • 7B_float16
  • 7B_float32
  • 13B_FT_float16
  • 13B_FT_float32
  • 13B_float16
  • 13B_float32
git clone https://github.com/microsoft/Llama-2-Onnx.git
cd Llama-2-Onnx
git submodule init <chosen_submodule> 
git submodule update

You can repeat the init command with a different submodule name to initialize additional submodules. Be careful: the contained files are very large! (The 7B float16 models are about 10 GB.)

What is Llama 2?

Llama 2 is a collection of pretrained and fine-tuned generative text models. To learn more about Llama 2, review the Llama 2 model card.

What Is The Structure Of Llama 2?

The Llama 2 model consists of a stack of decoder layers. Each decoder layer (or transformer block) is built from one self-attention layer and one feed-forward multi-layer perceptron. Llama models use a different projection size in the feed-forward layer than classic transformers: both Llama 1 and Llama 2 use an intermediate size of roughly 2.7x the hidden size rather than the standard 4x. A key difference between Llama 1 and Llama 2 is the architecture of the attention layer, where Llama 2 takes advantage of the Grouped Query Attention (GQA) mechanism to improve efficiency.
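
As an illustration only (this is not the implementation used to produce the ONNX files in this repository), the following minimal PyTorch-style sketch shows one Llama-style decoder block. The dimensions, the ~2.7x feed-forward multiplier, and the number of key/value heads are hypothetical placeholders, and rotary position embeddings and KV caching are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    # RMS normalization as used by Llama-family models.
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)


class DecoderBlock(nn.Module):
    # Hypothetical sketch of a Llama-style decoder block (illustrative sizes).
    def __init__(self, hidden=4096, n_heads=32, n_kv_heads=8, ffn_mult=2.7):
        super().__init__()
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = hidden // n_heads
        # Grouped Query Attention: fewer key/value heads than query heads.
        self.wq = nn.Linear(hidden, n_heads * self.head_dim, bias=False)
        self.wk = nn.Linear(hidden, n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(hidden, n_kv_heads * self.head_dim, bias=False)
        self.wo = nn.Linear(n_heads * self.head_dim, hidden, bias=False)
        # SwiGLU feed-forward with a ~2.7x (rather than 4x) intermediate size.
        inner = int(hidden * ffn_mult)
        self.w_gate = nn.Linear(hidden, inner, bias=False)
        self.w_up = nn.Linear(hidden, inner, bias=False)
        self.w_down = nn.Linear(inner, hidden, bias=False)
        self.attn_norm = RMSNorm(hidden)
        self.ffn_norm = RMSNorm(hidden)

    def forward(self, x):
        b, t, _ = x.shape
        h = self.attn_norm(x)
        q = self.wq(h).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(h).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.wv(h).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Each key/value head is shared by n_heads // n_kv_heads query heads.
        k = k.repeat_interleave(self.n_heads // self.n_kv_heads, dim=1)
        v = v.repeat_interleave(self.n_heads // self.n_kv_heads, dim=1)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.wo(attn.transpose(1, 2).reshape(b, t, -1))
        h = self.ffn_norm(x)
        return x + self.w_down(F.silu(self.w_gate(h)) * self.w_up(h))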

Llama 2 Model

FAQ

Is There A Simple Code Example Running Llama 2 With ONNX?

There are two examples provided in this repository. There is a minimal working example in Llama-2-Onnx/MinimumExample: a simple command-line program that completes some text with the chosen version of Llama 2.

Given the following input:

python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx --embedding_file 7B_FT_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"

Output:

The lightest element is hydrogen. Hydrogen is the lightest element on the periodic table, with an atomic mass of 1.00794 u (unified atomic mass units).

Is There A More Complete Code Example Running Llama 2 With ONNX?

There is a more complete chat bot interface available in Llama-2-Onnx/ChatApp. This is a Python program based on the popular Gradio web interface. It allows you to interact with the chosen version of Llama 2 through a chat bot interface.

An example interaction can be seen here:

Chat App

How Do I Use The Fine-tuned Models?

The fine-tuned models were trained for dialogue applications.

To get the expected features and performance from them, specific formatting must be followed, including the INST tag, the BOS and EOS tokens, and the whitespace and line breaks in between (we recommend calling strip() on inputs to avoid double spaces).

This enables models in chat mode as well as additional safeguards to reduce potentially undesirable output.
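
As a minimal sketch of that format, based on Meta's published Llama 2 reference code (the helper name below is hypothetical, and the exact way BOS/EOS tokens are added by this repository's tokenizer may differ):

# Hedged sketch of the Llama 2 chat prompt format (per Meta's reference code).
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"


def format_chat_prompt(system_prompt: str, user_message: str) -> str:
    # Hypothetical helper; calling strip() avoids double spaces around the tags.
    return f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_message.strip()} {E_INST}"


prompt = format_chat_prompt(
    "You are a helpful, concise assistant.",
    "What is the lightest element?",
)
# The tokenizer then prepends the BOS token to the prompt, and the model's
# completion is expected to end with the EOS token.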

Why Is The First Inference Session Slow?

The ONNX Runtime execution provider may need to generate JIT binaries for the underlying hardware on the first run. Typically these binaries are cached and loaded directly in subsequent runs, reducing the overhead.
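
Because of this one-time setup cost, it is common to issue a warm-up inference before measuring latency. A minimal sketch is below; the model path, input name, and shape are placeholders rather than this model's actual signature.

import time

import numpy as np
import onnxruntime

# Placeholders: "model.onnx" and the input name/shape are illustrative only.
session = onnxruntime.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
feed = {"input": np.zeros((1, 1, 4096), dtype=np.float32)}

session.run(None, feed)  # first run pays the one-time setup / JIT cost
start = time.perf_counter()
session.run(None, feed)  # subsequent runs reflect steady-state latency
print(f"steady-state latency: {time.perf_counter() - start:.3f} s")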

Why Is FP16 ONNX Slower Than ONNX FP32 On My Device?

It is possible that your device does not support native FP16 math, in which case weights are cast to FP32 at runtime. Using the FP32 version of the model avoids this cast overhead.

How Do I Get Better Inference Speed?

It is recommended to place inputs and outputs on the target device to avoid expensive data copies. Please refer to the following document for details.

I/O Binding | onnxruntime
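
A minimal sketch of I/O binding with the ONNX Runtime Python API is shown below; the model path, provider, and input/output tensor names are assumptions for illustration.

import numpy as np
import onnxruntime

# Placeholders: the model path, provider, and tensor names are illustrative only.
session = onnxruntime.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
binding = session.io_binding()

x = np.zeros((1, 1, 4096), dtype=np.float16)
x_device = onnxruntime.OrtValue.ortvalue_from_numpy(x, "cuda", 0)  # copy the input to the GPU once
binding.bind_ortvalue_input("input", x_device)
binding.bind_output("output", "cuda", 0)  # let the output stay on the GPU

session.run_with_iobinding(binding)  # no per-run host/device copies
output = binding.get_outputs()[0]    # an OrtValue resident on the device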

What Parameters Should I Test With?

Users can perform temperature and top-p sampling using the model's output logits. Please refer to Meta's guidance for the best parameter combination; an example is located here.
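
As a rough sketch of how temperature and top-p (nucleus) sampling can be applied to the next-token logits (the default values below are illustrative, not Meta's recommended settings):

import numpy as np


def sample_next_token(logits: np.ndarray, temperature: float = 0.7, top_p: float = 0.9) -> int:
    # logits: 1-D array of next-token logits; returns a sampled token id.
    scaled = logits.astype(np.float64) / max(temperature, 1e-5)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    # Nucleus filtering: keep the smallest set of tokens whose cumulative
    # probability exceeds top_p, then renormalize and sample from that set.
    order = np.argsort(-probs)
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
    keep = order[:cutoff]
    return int(np.random.choice(keep, p=probs[keep] / probs[keep].sum()))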

How Can I Develop With Llama 2 Responsibly?

In order to help developers innovate responsibly, Meta encourages you to review the Responsible Use Guide for the Llama 2 models.

Microsoft encourages you to learn more about its Responsible AI approach, including many publicly available resources and tools for developers.

llama-2-onnx's People

Contributors

adarshxs, joshuaelsdon, microsoft-github-operations[bot], microsoft-github-policy-service[bot], microsoftopensource, tammanygrantmsft, vriveras


llama-2-onnx's Issues

Access permissions to the Llama 2 model.

@vriveras Thanks for the invite!

Any idea why I am not getting the access permissions email for Llama 2 ONNX from the sign-up page?

When I click on the link, I get this message :

Thank you for requesting access to the ONNX optimized Llama 2 models
We are processing your request, we'll contact you once your access is granted. Just make sure your [primary email address](https://github.com/settings/emails) is up to date.

But I did not receive any emails yet.

LlamaV2_7B_float32 failing onnx checker

I created a script to run onnx checker functions on the LlamaV2_7B_float32.onnx model. Below is the script:

import onnx


def check_model(model_path):
    model = onnx.load(model_path)

    for node in model.graph.node:
        # Check each node for consistency
        onnx.checker.check_node(node)

        # Check the attributes of each node
        for attr in node.attribute:
            onnx.checker.check_attribute(attr)

    # Check tensor information in the model
    for init_tensor in model.graph.initializer:
        onnx.checker.check_tensor(init_tensor)

    for value_info in model.graph.value_info:
        onnx.checker.check_value_info(value_info)

    # Check overall graph consistency
    onnx.checker.check_model(model_path)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Check ONNX model for validation errors.")
    parser.add_argument("model", type=str, help="Path to the ONNX model file.")
    args = parser.parse_args()

    check_model(args.model)

Running

python3 check_onnx_model.py LlamaV2_7B_float32.onnx

returns

onnx.onnx_cpp2py_export.checker.ValidationError: Field 'shape' of type is required but missing.

Access TAT

Hello,
First off, great project!
How long does it take to get access? I applied a week ago.

Thank you.

git submodule update fails

I'm wondering if there is a problem with the way submodules are hooked up.

I'm trying to run the commands at the start, so specifically this:
git submodule init 7B_FT_float16
git submodule update

But when I go to update, a dialog opens up "Connect to Github". I have tried to "sign in with code" and pass a personal access token with all attributes checked. I get a "Congratulations all set" on the webpage, but in the console, it continues as follows:

D:\github\llama2\ms_onnx>git submodule update
Cloning into 'D:/github/llama2/ms_onnx/7B_FT_float16'...
remote: Repository not found.
fatal: repository 'https://github.com/microsoft/Llama-2-Onnx-7-FT-16/' not found
fatal: clone of 'https://github.com/microsoft/Llama-2-Onnx-7-FT-16' into submodule path 'D:/github/llama2/ms_onnx/7B_FT_float16' failed
Failed to clone '7B_FT_float16'. Retry scheduled

I have tried this with 7B_FT_float32, 7B_float16 and 7B_float32, and they all have the same error.

Am I doing something wrong or is there something wrong with the submodules?

Cannot access new optimized model variants despite having permissions for existing variants

I'm attempting to access the newly optimized models created on the main-CUDA-CPU branch; however, I'm getting a 404 error (indicating a lack of permissions). I have been granted access to the model weights for the unoptimized versions of the models on the main branch. Are there plans to grandfather those permissions into the newly optimized models, or is there some new process to request access to the optimized models?

I/O binding to speed up inference

Hello everyone, I want to try I/O binding as suggested in the readme to speed up inference. I've reviewed the documents and tried it, but the inference time did not change. Is there anyone who can share their experience with this?

Submodules are bad and defaulting to https

Thanks for the invite to join this repo!

I use SSH, but when I clone this repo the submodules are listed as https, so I get a request for a username and password. For some reason (I assume 2FA), I'm unable to sign in, so I'm unable to update the submodule.

Maybe I'm doing something wrong, but I've never been a fan of submodules, and while I can see the use case here, I think this could lead to more issues and reduced usage of this repo.

Ian

What is the difference between 7B_float16 and 7B_FT_float16

When I use 7B_FT_float16
python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx --embedding_file 7B_FT_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"
The response is as expected:

The lightest element is hydrogen. Hydrogen is the lightest element on the periodic table, with an atomic mass of 1.00794 u (unified atomic mass units).

When I use 7B_float16
python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx --embedding_file 7B_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"
The response is strange:

What is the lightest element on the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is

fatal: repository 'https://github.com/microsoft/Llama-2-Onnx-7-16/' not found

log:

Cloning into 'D:/github/Llama-2-Onnx/7B_float16'...
remote: Repository not found.
fatal: repository 'https://github.com/microsoft/Llama-2-Onnx-7-16/' not found
fatal: clone of 'https://github.com/microsoft/Llama-2-Onnx-7-16' into submodule path 'D:/github/Llama-2-Onnx/7B_float16' failed
Failed to clone '7B_float16'. Retry scheduled
Cloning into 'D:/github/Llama-2-Onnx/7B_float16'...
fatal: unable to access 'https://github.com/microsoft/Llama-2-Onnx-7-16/': Recv failure: Connection was reset
fatal: clone of 'https://github.com/microsoft/Llama-2-Onnx-7-16' into submodule path 'D:/github/Llama-2-Onnx/7B_float16' failed
Failed to clone '7B_float16' a second time, aborting

7B_FT_float16 model size

Hi,
I am downloading the 7B_FT_float16 model, and it is still downloading after 10 GB. Can you please tell me its exact size?
Thanks.

No module named 'ChatApp'

When trying to execute ChatApp/app.py from Windows PowerShell

$ python ChatApp/app.py

I got the following error.

Traceback (most recent call last):
  File "C:\Llama-2-Onnx\ChatApp\app.py", line 6, in <module>
    from interface.hddr_llama_onnx_interface import LlamaOnnxInterface
  File "C:\Llama-2-Onnx\ChatApp\interface\hddr_llama_onnx_interface.py", line 12, in <module>
    from ChatApp.app_modules.utils import (
ModuleNotFoundError: No module named 'ChatApp'

Dependencies have been installed

cd .\ChatApp
pip install -r requirements.txt
$ python -V
Python 3.10.6

Not able to run Llama 7B float16 on my system or Google Colab

I have been testing the repo inside my laptop and Google Colab. Here is the system information for both environments.

My local system:

Memory: 16GB
CPU: AMD Ryzen 9 5900HX with Radeon Graphics
GPU: NVIDIA GeForce RTX 3060 Mobile / Max-Q 

Google colab

CPU: Intel Xeon (2) @ 2.199GHz 
GPU: NVIDIA Tesla T4 

Command to reproduce

!python MinimumExample/Example_ONNX_LlamaV2.py \
--onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx \
--embedding_file 7B_float16/embeddings.pth \
--tokenizer_path tokenizer.model \
--prompt "What is the lightest element?"

Output in my local system

python3 MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx --embedding_file 7B_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "hello"
/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
  warnings.warn(
2023-08-27 12:25:33.996863660 [E:onnxruntime:, inference_session.cc:1644 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432

Traceback (most recent call last):
  File "/home/anindyadeep/workspace/llama2-onnx/Llama-2-Onnx/MinimumExample/Example_ONNX_LlamaV2.py", line 166, in <module>
    response = run_onnx_llamav2(
  File "/home/anindyadeep/workspace/llama2-onnx/Llama-2-Onnx/MinimumExample/Example_ONNX_LlamaV2.py", line 47, in run_onnx_llamav2
    llm_session = onnxruntime.InferenceSession(
  File "/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432

Output in Google colab

/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'
  warnings.warn(
^C

This probably means the process is automatically getting killed.

So now I have two questions here:

  1. What might be the root cause here? Even though CUDA and everything is installed, it is trying to use DmlExecutionProvider and giving an error.
  2. The execution time is large here. Although I am getting an error or the process is getting killed, reaching that state takes around 52-60 seconds in Google Colab (after which it uses ^C to kill the process) and 10-15 seconds on my local machine (after which it gives the error).

Update:

I made some changes in the example code just to provide the CPU execution provider.

options = onnxruntime.SessionOptions()
llm_session = onnxruntime.InferenceSession(
    onnx_file,
    sess_options=options,
    providers=[
        "CPUExecutionProvider",
    ],
)

I then ran the same command; it took more than 2.5 minutes and finally the process got killed. It seems I might not have a compatible CUDA/ONNX combination, which could be generating the error.

cuda version: 12.2
onnx version: 1.15.1

Model inference accuracy

Running this on an RTX 4090:

C:\AI\Llama2-Onnx\Llama-2-Onnx>python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx --embedding_file 7B_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"
C:\Users\Real\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
warnings.warn(

What is the lightest element on the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is

C:\AI\Llama2-Onnx\Llama-2-Onnx>
C:\AI\Llama2-Onnx\Llama-2-Onnx>python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx --embedding_file 7B_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"
C:\Users\Real\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
warnings.warn(

What is the lightest element on the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is the lightest element in the periodic table?
What is the lightest element in the universe?
What is

C:\AI\Llama2-Onnx\Llama-2-Onnx>python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx --embedding_file 7B_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element on the periodic table"
C:\Users\Real\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
warnings.warn(
?
What is the lightest element on the periodic table? Hydrogen is the lightest element on the periodic table. It is also the most abundant element in the universe. Hydrogen is a gas at room temperature and pressure.
What is the lightest element in the periodic table?
What is the lightest element in the periodic table? Hydrogen is the lightest element in the periodic table. It is also the most abundant element in the universe. Hydrogen is a gas at room temperature and pressure.
What is the lightest element in the periodic table? Hydrogen is the lightest element in the periodic table. It is also the most abundant element in the universe. Hydrogen is a gas at room temperature and pressure.
What is the lightest element in the periodic table? Hydrogen is the lightest element in the periodic table. It is also the most abundant element in the universe. Hydrogen is a gas at room temperature and pressure. Hydrogen is the lightest element in the periodic table. It is also the most abundant element in the universe. Hydrogen is a gas at room temperature and pressure.
What is the lightest element in the periodic

C:\AI\Llama2-Onnx\Llama-2-Onnx>

Cannot download the model. Is it due to an access problem?

I filled in the Llama 2 ONNX GitHub Request Form yesterday. Today, I tried the tutorial in the readme.md, but I still get the following error when trying to download the Llama model. How can I solve it?

Llama-2-Onnx % git submodule update
Cloning into '~/Desktop/forked/Llama-2-Onnx/7B_float16'...
info: please complete authentication in your browser...
remote: Repository not found.
fatal: repository 'https://github.com/microsoft/Llama-2-Onnx-7-16/' not found
fatal: clone of 'https://github.com/microsoft/Llama-2-Onnx-7-16' into submodule path '~/Desktop/forked/Llama-2-Onnx/7B_float16' failed
Failed to clone '7B_float16'

failed:Protobuf parsing failed

When I try to run python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx --embedding_file 7B_FT_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?", it returns:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx failed:Protobuf parsing failed.

And when I try onnx.checker.check_model(), it returns:

onnx.onnx_cpp2py_export.checker.ValidationError: Unable to parse proto from file: /data/renzhen/Llama-2-Onnx/7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx. Please check if it is a valid protobuf file of proto.
