
ppl.llm.serving's People

Contributors

alcanderian, hijdk, openppl-public, ouonline, vincent-syr, xupinjie

ppl.llm.serving's Issues

test

What are the problems?(screenshots or detailed error messages)

What are the types of GPU/CPU you are using?

What's the operating system ppl.llm.serving runs on?

What's the compiler and its version?

Which version(commit id or tag) of ppl.llm.serving is used?

What are the commands used to build ppl.llm.serving?

What are the execution commands?

minimal code snippets for reproducing these problems(if necessary)

models and inputs for reproducing these problems (send them to [email protected] if necessary)

A question about performance profiling

What are the problems?(screenshots or detailed error messages)

Is there a profiling tool available (something profiler-related), or can we only use tools like Nsight to inspect individual operator performance ourselves?

What are the types of GPU/CPU you are using?

GPU:A100-80G-SXM4

What's the operating system ppl.llm.serving runs on?

Ubuntu 20.04.4
cuda:12.3
cudnn:8904
trt:9.2.0

What's the compiler and its version?

gcc 11.4
cmake version 3.27.9
Cuda compilation tools, release 12.3, V12.3.107

Which version(commit id or tag) of ppl.llm.serving is used?

commit id:51c3b3d5c5eba25c276a84388f04a2c9e198699f

ppl llm dependency grpc checkout error

Cloning into '/workspace/dev/inference/ppl.llm.serving/deps/grpc/third_party/xds'...
Cloning into '/workspace/dev/inference/ppl.llm.serving/deps/grpc/third_party/zlib'...
Submodule path 'third_party/abseil-cpp': checked out 'c2435f8342c2d0ed8101cb43adfd605fdc52dca2'
Submodule path 'third_party/benchmark': checked out '015d1a091af6937488242b70121858bce8fd40e9'
error: Server does not allow request for unadvertised object 60209eb1ccc34d5deefb002d1b7f37545204f7f2
fatal: Fetched in submodule path 'third_party/bloaty', but it did not contain 60209eb1ccc34d5deefb002d1b7f37545204f7f2. Direct fetching of that commit failed.
fatal:
CMake Error at grpc-subbuild/grpc-populate-prefix/tmp/grpc-populate-gitclone.cmake:62 (message):
Failed to update submodules in:
'/workspace/dev/inference/ppl.llm.serving/deps/grpc'

gmake[2]: *** [CMakeFiles/grpc-populate.dir/build.make:102: grpc-populate-prefix/src/grpc-populate-stamp/grpc-populate-download] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/grpc-populate.dir/all] Error 2
gmake: *** [Makefile:91: all] Error 2

link error for ppl_llama_server

What are the problems?(screenshots or detailed error messages)

I have installed libnccl, but still get the error.

/usr/bin/ld: CMakeFiles/ppl_llama_server.dir/src/models/llama/llama_server.cc.o: in function `ResourceManager::~ResourceManager()': llama_server.cc:(.text._ZN15ResourceManagerD2Ev[_ZN15ResourceManagerD5Ev]+0xa8): undefined reference to `ncclCommDestroy'
/usr/bin/ld: CMakeFiles/ppl_llama_server.dir/src/models/llama/llama_server.cc.o: in function `ResourceManager::Init(unsigned int, unsigned long, unsigned long, float, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)': llama_server.cc:(.text._ZN15ResourceManager4InitEjmmfRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE[_ZN15ResourceManager4InitEjmmfRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE]+0x13e): undefined reference to `ncclCommInitAll'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/ppl_llama_server.dir/build.make:195: ppl_llama_server] Error 1
make[1]: *** [CMakeFiles/Makefile2:1738: CMakeFiles/ppl_llama_server.dir/all] Error 2
make: *** [Makefile:156: all] Error 2

What are the types of GPU/CPU you are using?

GPU

What's the operating system ppl.llm.serving runs on?

Ubuntu 20.04

What's the compiler and its version?

gcc/g++ 9.4

Which version(commit id or tag) of ppl.llm.serving is used?

master

What are the commands used to build ppl.llm.serving?

./build.sh -DPPLNN_USE_LLM_CUDA=ON -DPPLNN_CUDA_ENABLE_NCCL=OFF -DPPLNN_ENABLE_CUDA_JIT=OFF -DPPLNN_CUDA_ARCHITECTURES="'80;'" -DPPLCOMMON_CUDA_ARCHITECTURES="'80'"

Error for llama-13B on V100

An error was encountered while executing client_qps_measure.

Platform: llama-13B on 2 V100 GPUs

[INFO][2023-09-13 03:35:21.764][llama_server.cc:539] max_tokens: 75630
[INFO][2023-09-13 03:35:21.827][llama_server.cc:484] VOCAB_SIZE: 32000; BOS ID: 1; EOS ID: 2; PAD ID: -1
[INFO][2023-09-13 03:35:21.827][llama_server.cc:606] End init nccl, cuda engine, kv cache, kv scale manager
[INFO][2023-09-13 03:35:21.827][llama_server.cc:626] Init llama worker successed
[INFO][2023-09-13 03:35:21.827][llama_worker.cc:1043] waiting for request ...
[INFO][2023-09-13 03:35:21.829][llama_server.cc:639] listening on [0.0.0.0:23333]
[ERROR][2023-09-13 03:35:34.009][gemm.cu:113] cublasLt failed: the requested functionality is not supported
[ERROR][2023-09-13 03:35:34.009][kernel.cc:176] DoExecute kernel [/layers.0/wqkv/ColumnParallelLinear] failed: device runtime error
[ERROR][2023-09-13 03:35:34.009][gemm.cu:113] cublasLt failed: the requested functionality is not supported
[ERROR][2023-09-13 03:35:34.009][sequential_scheduler.cc:130] exec kernel[/layers.0/wqkv/ColumnParallelLinear] of type[pmx:ColumnParallelLinear:1] failed: device runtime error
[ERROR][2023-09-13 03:35:34.009][kernel.cc:176] DoExecute kernel [/layers.0/wqkv/ColumnParallelLinear] failed: device runtime error
[ERROR][2023-09-13 03:35:34.009][runtime_impl.cc:333] Run() failed: device runtime error
[ERROR][2023-09-13 03:35:34.009][sequential_scheduler.cc:130] exec kernel[/layers.0/wqkv/ColumnParallelLinear] of type[pmx:ColumnParallelLinear:1] failed: device runtime error
[ERROR][2023-09-13 03:35:34.009][llm_cuda_device.cc:276] [ERROR][2023-09-13 03:35:34.009][runtime_impl.cc:333] Run() failed: device runtime error
cudaStreamSynchronize failed: 700, an illegal memory access was encountered
[ERROR][2023-09-13 03:35:34.009][llm_cuda_device.cc:276] cudaStreamSynchronize failed: 700, an illegal memory access was encountered
[ERROR][2023-09-13 03:35:34.009][runtime_impl.cc:316] sync device[llm_cuda] failed: internal error
[ERROR][2023-09-13 03:35:34.009][runtime_impl.cc:316] sync device[llm_cuda] failed: internal error
[ERROR][2023-09-13 03:35:34.009][llama_worker.cc:922] ParallelExecute(RunModelTask) failed.
[INFO][2023-09-13 03:35:34.009][llama_worker.cc:1043] waiting for request ...
[ERROR][2023-09-13 03:35:34.010][llm_cuda_device.cc:112] cudaMemcpyAsync failed: 700, an illegal memory access was encountered
[ERROR][2023-09-13 03:35:34.010][llm_cuda_device.cc:112] [ERROR][2023-09-13 03:35:34.010][llama_worker.cc:724] cudaMemcpyAsync failed: 700, an illegal memory access was encountered
set token_ids [token_ids] failed: other error
[ERROR][2023-09-13 03:35:34.010][llama_worker.cc:724] set token_ids [token_ids] failed: other error
[ERROR][2023-09-13 03:35:34.010][llama_worker.cc:910] ParallelExecute(SetInputTask) failed.
[INFO][2023-09-13 03:35:34.010][llama_worker.cc:1043] waiting for request ...
[INFO][2023-09-13 03:35:34.010][llama_worker.cc:1043] waiting for request ...
[INFO][2023-09-13 03:35:34.011][llama_worker.cc:1043] waiting for request ...
[INFO][2023-09-13 03:35:34.011][llama_worker.cc:1043] waiting for request ...
[INFO][2023-09-13 03:35:34.011][llama_worker.cc:1043] waiting for request ...

nccl error

On an A30, it works fine.

On V100, running llama-13B with two GPUs fails as follows.

NVIDIA-SMI 510.85.02 Driver Version: 510.85.02 CUDA Version: 11.7

[ERROR][2023-09-12 05:21:16.518][nccl_utils.h:110] NCCL error(code:1) on ncclGroupEnd
[ERROR][2023-09-12 05:21:16.518][nccl_utils.h:110] NCCL error(code:1) on ncclGroupEnd
[ERROR][2023-09-12 05:21:16.518][kernel.cc:176] [ERROR][2023-09-12 05:21:16.518][kernel.cc:176] DoExecute kernel [/tok_embeddings/ParallelEmbedding] failed: other error
DoExecute kernel [/tok_embeddings/ParallelEmbedding] failed: other error
[ERROR][2023-09-12 05:21:16.518][sequential_scheduler.cc:130] [ERROR][2023-09-12 05:21:16.518][sequential_scheduler.cc:130] exec kernel[/tok_embeddings/ParallelEmbedding] of type[pmx:ParallelEmbedding:1] failed:
other error
exec kernel[/tok_embeddings/ParallelEmbedding] of type[pmx:ParallelEmbedding:1] failed: other error
[ERROR][2023-09-12 05:21:16.518][runtime_impl.cc:333] Run() failed: other error
[ERROR][2023-09-12 05:21:16.518][runtime_impl.cc:333] Run() failed: other error
[ERROR][2023-09-12 05:21:16.519][llama_worker.cc:922] ParallelExecute(RunModelTask) failed.
[INFO][2023-09-12 05:21:16.519][llama_worker.cc:1043] waiting for request ...
[ERROR][2023-09-12 05:21:16.520][nccl_utils.h:110] NCCL error(code:1) on ncclGroupEnd
[ERROR][2023-09-12 05:21:16.520][kernel.cc:176] DoExecute kernel [/tok_embeddings/ParallelEmbedding] failed: other error
[ERROR][2023-09-12 05:21:16.520][sequential_scheduler.cc:130] exec kernel[/tok_embeddings/ParallelEmbedding] of type[pmx:ParallelEmbedding:1] failed: other error
[ERROR][2023-09-12 05:21:16.520][runtime_impl.cc:333] Run() failed: other error
[ERROR][2023-09-12 05:21:16.520][nccl_utils.h:110] NCCL error(code:1) on ncclGroupEnd

Here is my config.json:

{
    "model_dir":  "/data/codes/ppl/llama-13b",
    "model_param_path": "/data/codes/ppl/llama-13b/params.json",

    "tokenizer_path": "/data/LLaMA-7B/tokenizer.model",

    "tensor_parallel_size": 2,

    "top_p": 0.0,
    "top_k": 1,

    "max_tokens_scale": 0.94,
    "max_tokens_per_request": 4096,
    "max_running_batch": 1024,

    "host": "0.0.0.0",
    "port": 10086
}

Onnx Support

What are the problems?

Support ONNX format.

What are the types of GPU/CPU you are using?

Intel / Nvidia

What's the operating system ppl.llm.serving runs on?

Linux / Debian

What's the compiler and its version?

gcc 9.0

Which version(commit id or tag) of ppl.llm.serving is used?

latest

What are the commands used to build ppl.llm.serving?

What are the execution commands?

minimal code snippets for reproducing these problems(if necessary)

models and inputs for reproducing these problems (send them to [email protected] if necessary)

compile error: ppl.llm.serving/tools/client_pressure.cc:339:105: error: no matching function for call to ‘std::unique_ptr<grpc::ClientReader<ppl::llm::proto::Response> >::unique_ptr(std::unique_ptr<grpc::ClientReader<ppl::llm::proto::BatchedResponse> >)’

What are the problems?(screenshots or detailed error messages)

compile error (screenshot omitted; the error message is quoted in the issue title)

What are the types of GPU/CPU you are using?

GPU:A100-80G-SXM4

What's the operating system ppl.llm.serving runs on?

Ubuntu 20.04.4
cuda:12.3
cudnn:8904
trt:9.2.0

What's the compiler and its version?

gcc 11.4
cmake version 3.27.9
Cuda compilation tools, release 12.3, V12.3.107

Which version(commit id or tag) of ppl.llm.serving is used?

commit id:c2bf8614ea7bce0cb9838255fb3cd6ab9d75039b

What are the commands used to build ppl.llm.serving?

./build.sh -DPPLNN_USE_LLM_CUDA=ON -DPPLNN_CUDA_ENABLE_NCCL=ON -DPPLNN_ENABLE_CUDA_JIT=OFF -DPPLNN_CUDA_ARCHITECTURES="'80;86;87'" -DPPLCOMMON_CUDA_ARCHITECTURES="'80;86;87'"

What are the execution commands?

None

minimal code snippets for reproducing these problems(if necessary)

None

models and inputs for reproducing these problems (send them to [email protected] if necessary)

None

test

(Empty issue: the template questions were left unanswered.)

Example demo stuck (ppl_llm_server/client_sample)

When I run the provided example by:

[server]
./ppl-build/ppl_llm_server llama_7b_config_example.json

[client]
./ppl-build/client_sample 127.0.0.1:23333

The program gets stuck and does not proceed.

After two hours of debugging, I found that this may be caused by misuse of StaticThreadPool. The worker thread is implemented as:

void* StaticThreadPool::ThreadWorker(void* arg) {
    auto info = (ThreadInfo*)arg;
    auto pool = info->pool;

    while (true) {
        pool->barrier_.Wait();
        if (!pool->func_) {
            break;
        }
        pool->func_(pool->threads_.size(), info->thread_idx);
        pool->barrier_.Wait();
    }

    return nullptr;
}

There are two barriers: one before func_ runs and one after. However, an asynchronous method such as StaticThreadPool::RunAsync only passes the barrier (Wait()) once, so it is left to the caller to hit the second barrier (Wait()) before the next run, as sketched below. This is also noted by the author in a comment.
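
For reference, a minimal sketch (not taken from the repository) of the calling pattern this design appears to require: every RunAsync() must be paired by the caller with a pool->Wait() before the next run is scheduled. StaticThreadPool, RunAsync() and Wait() are the names used in this issue; the surrounding function name and the include path are assumptions.

#include <cstdint>
#include "ppl/common/threadpool.h" // assumed header for ppl::common::StaticThreadPool

// Hypothetical illustration: RunAsync() releases the workers through the first
// barrier in ThreadWorker() only, so the caller has to hit the second barrier
// via pool->Wait() before issuing another run.
static void RunTwoBatches(ppl::common::StaticThreadPool* pool) {
    pool->RunAsync([](uint32_t /*nthr*/, uint32_t /*ithr*/) {
        // per-thread work for run #1
    });
    pool->Wait(); // second barrier; without it the workers stay blocked and the next run hangs

    pool->RunAsync([](uint32_t /*nthr*/, uint32_t /*ithr*/) {
        // per-thread work for run #2
    });
    pool->Wait(); // keep every run paired with a Wait()
}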

Strangely, the SECOND barrier still seems to be missing. When I add a pool->Wait() in two places, the program starts to make progress, and the client and server appear to interact normally.

The two modifications are:

[utils.h]

template <typename TaskType, typename... TaskArgType>
ppl::common::RetCode ParallelExecute(ppl::common::StaticThreadPool* pool, TaskArgType&&... rest_args) {
    auto n = pool->GetNumThreads();
    ppl::common::Barrier finish_barrier;
    ppl::common::RetCode thr_rc[n];
    finish_barrier.Reset(n + 1);

    pool->RunAsync([&](uint32_t nthr, uint32_t ithr) {
        auto task = TaskType(ithr, std::forward<TaskArgType>(rest_args)...);
        thr_rc[ithr] = task.Process();
        finish_barrier.Wait();
    });

    finish_barrier.Wait();

    ppl::common::RetCode retcode = ppl::common::RC_SUCCESS;
    for (uint32_t i = 0; i < n; ++i) {
        if (thr_rc[i] != ppl::common::RC_SUCCESS) {
            LOG(ERROR) << "ParallelExecute task[" << i << "] failed";
            retcode = thr_rc[i];
            break;
        }
    }

    pool->Wait();

    return retcode;
}

and

[llama_worker.cc]

if (is_first_run_) {
    is_first_run_ = false;
} else {
    decoder_barrier_.Wait();
    decoder_thread_pool_.Wait();
}

So I'd like to know whether this modification is correct and reasonable with respect to the design.

Current output of the provided example: (screenshot omitted)

Compilation error: [ 17%] Built target crypto; as: symbol lookup error: as: undefined symbol: deflate

What are the problems?(screenshots or detailed error messages)

What are the types of GPU/CPU you are using?

What's the operating system ppl.llm.serving runs on?

What's the compiler and its version?

Which version(commit id or tag) of ppl.llm.serving is used?

What are the commands used to build ppl.llm.serving?

What are the execution commands?

minimal code snippets for reproducing these problems(if necessary)

models and inputs for reproducing these problems (send them to [email protected] if necessary)

Error when running serving

/data/openppl/ppl.llm.serving$ ./ppl-build/ppl_llama_server src/models/llama/conf/llama_13b_config_example.json
[INFO][2023-09-19 16:51:43.346][llama_server.cc:149] server_config.host: 0.0.0.0
[INFO][2023-09-19 16:51:43.346][llama_server.cc:150] server_config.port: 23333
[INFO][2023-09-19 16:51:43.346][llama_server.cc:152] server_config.model_dir: /data/openppl/ppl.pmx/model_zoo/llama/huggingface/llama_chinese_13b_ppl
[INFO][2023-09-19 16:51:43.346][llama_server.cc:153] server_config.model_param_path: /data/openppl/ppl.pmx/model_zoo/llama/huggingface/llama_chinese_13b_ppl/pmx_params.json
[INFO][2023-09-19 16:51:43.346][llama_server.cc:154] server_config.tokenizer_path: /data/wenda_llama/wenda-main/model/Chinese-LlaMA2-chat-7B-sft-v0.3
[INFO][2023-09-19 16:51:43.346][llama_server.cc:156] server_config.top_k: 1
[INFO][2023-09-19 16:51:43.346][llama_server.cc:157] server_config.top_p: 0
[INFO][2023-09-19 16:51:43.346][llama_server.cc:159] server_config.tensor_parallel_size: 2
[INFO][2023-09-19 16:51:43.347][llama_server.cc:160] server_config.max_tokens_scale: 0.93
[INFO][2023-09-19 16:51:43.347][llama_server.cc:161] server_config.max_tokens_per_request: 4096
[INFO][2023-09-19 16:51:43.347][llama_server.cc:162] server_config.max_running_batch: 1024
[ERROR][2023-09-19 16:51:43.347][llama_server.cc:221] find key [cache_quant_bit] failed
[ERROR][2023-09-19 16:51:43.347][llama_server.cc:561] PaseModelConfig failed, model_param_path: /data/openppl/ppl.pmx/model_zoo/llama/huggingface/llama_chinese_13b_ppl/pmx_params.json

The model is from Hugging Face; it was converted successfully with pmx and the demo.py test passes, but it fails here in serving.
May I make a suggestion: could the documentation be more detailed? Thanks. For example, what is the difference between the conversion and export steps in pmx, and do both need to be run?

Compilation error

What are the problems?(screenshots or detailed error messages)

In file included from /home/liuxiandong/workspace/ppl/ppl.llm.serving/src/models/llama/llama_worker.h:25:0,
from /home/liuxiandong/workspace/ppl/ppl.llm.serving/src/models/llama/llama_worker.cc:18:
/home/liuxiandong/workspace/ppl/ppl.llm.serving/src/models/llama/../../utils/mpsc_request_scheduler.h:21:10: fatal error: ppl/common/event_count.h: No such file or directory
#include "ppl/common/event_count.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/liuxiandong/workspace/ppl/ppl.llm.serving/src/models/llama/llama_worker.h:25:0,
from /home/liuxiandong/workspace/ppl/ppl.llm.serving/src/models/factory.cc:19:
/home/liuxiandong/workspace/ppl/ppl.llm.serving/src/models/llama/../../utils/mpsc_request_scheduler.h:21:10: fatal error: ppl/common/event_count.h: No such file or directory
#include "ppl/common/event_count.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
compilation terminated.
CMakeFiles/ppl_llm_static.dir/build.make:75: recipe for target 'CMakeFiles/ppl_llm_static.dir/src/models/factory.cc.o' failed

What are the types of GPU/CPU you are using?

GPU

test

(Empty issue: the template questions were left unanswered.)

CMakeLists.txt:24 (hpcc_populate_dep) Configuring incomplete, errors occurred!

When I run this command:

./build.sh -DPPLNN_USE_LLM_CUDA=ON -DPPLNN_CUDA_ENABLE_NCCL=ON -DPPLNN_ENABLE_CUDA_JIT=OFF -DPPLNN_CUDA_ARCHITECTURES="'80;86;87'" -DPPLCOMMON_CUDA_ARCHITECTURES="'80;86;87'"

then I get:

mkdir: cannot create directory '/home/limenxin/openPPL/ppl.llm.serving/ppl-build': File exists
cmd -> cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=install -DPPLNN_USE_LLM_CUDA=ON -DPPLNN_CUDA_ENABLE_NCCL=ON -DPPLNN_ENABLE_CUDA_JIT=OFF -DPPLNN_CUDA_ARCHITECTURES='80;86;87' -DPPLCOMMON_CUDA_ARCHITECTURES='80;86;87' .. && cmake --build . -j 128 --config Release && cmake --build . --target install -j 128 --config Release
-- Populating hpcc
-- Configuring done
-- Generating done
-- Build files have been written to: /home/limenxin/openPPL/ppl.llm.serving/deps/hpcc-subbuild
[ 11%] Creating directories for 'hpcc-populate'
[ 22%] Performing download step (git clone) for 'hpcc-populate'
Cloning into 'hpcc'...
Already on 'master'
Your branch is up to date with 'origin/master'.
[ 33%] Performing update step for 'hpcc-populate'
HEAD is now at e674b69 use c++11 by default
[ 44%] No patch step for 'hpcc-populate'
[ 55%] No configure step for 'hpcc-populate'
[ 66%] No build step for 'hpcc-populate'
[ 77%] No install step for 'hpcc-populate'
[ 88%] No test step for 'hpcc-populate'
[100%] Completed 'hpcc-populate'
[100%] Built target hpcc-populate
CMake Error at /usr/local/share/cmake-3.23/Modules/FetchContent.cmake:754 (message):
  No content details recorded for grpc
Call Stack (most recent call first):
  /usr/local/share/cmake-3.23/Modules/FetchContent.cmake:1140 (__FetchContent_getSavedDetails)
  /usr/local/share/cmake-3.23/Modules/FetchContent.cmake:1260 (FetchContent_Populate)
  deps/hpcc/cmake/hpcc-common.cmake:92 (FetchContent_MakeAvailable)
  CMakeLists.txt:24 (hpcc_populate_dep)

Environment: Linux, A800 GPU.

How to disable kv cache 8-bit quantization?

I tried to disable this feature by setting --quantized_cache 0 when exporting the model, but it coredumps when loading the model.

I see this check in the code:

if (model_config_.cache_quant_bit != 8 && model_config_.cache_quant_group != 8) {
    LOG(ERROR) << "only support cache_quant_bit == 8 and cache_quant_group == 8";
    return RC_INVALID_VALUE;
}
