Introduction

RECIPE : Converting Concurrent DRAM Indexes to Persistent-Memory Indexes (SOSP 2019)

RECIPE proposes a principled approach for converting concurrent indexes built for DRAM into crash-consistent indexes for persistent memory. This repository includes persistent-memory index structures converted from existing concurrent DRAM indexes by following RECIPE. For performance evaluation, it also provides YCSB-based microbenchmarks for the index structures. This repository contains all the information needed to reproduce the main results from our paper.

Please cite the following paper if you use the RECIPE approach or RECIPE-converted indexes:

RECIPE : Converting Concurrent DRAM Indexes to Persistent-Memory Indexes. Se Kwon Lee, Jayashree Mohan, Sanidhya Kashyap, Taesoo Kim, Vijay Chidambaram. Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP '19). Paper PDF. Extended version (arXiv). Bibtex.

@InProceedings{LeeEtAl19-Recipe,
  title =        "{RECIPE: Converting Concurrent DRAM Indexes to Persistent-Memory Indexes}",
  author =       "Se Kwon Lee and  Jayashree Mohan and  Sanidhya Kashyap and  Taesoo Kim and  Vijay Chidambaram",
  booktitle =    "Proceedings of the 27th ACM Symposium on Operating
                  Systems Principles (SOSP '19)",
  month =        "October",
  year =         "2019",
  address =      "Ontario, Canada",
}

News

RECIPE Applications

  • P-CLHT has been used to build DINOMO, a key-value store for disaggregated persistent memory.

Improvements made after the SOSP paper

The following improvements were made to the codebase after the SOSP paper.

  • Resolved problems where readers could return uncommitted values (Issue #13 and pull requests #11, #12)

Contents

  1. P-CLHT/ contains the source code for P-CLHT. It is converted from Cache-Line Hash Table to be persistent. The original source code and paper can be found in code and paper.
  2. P-HOT/ contains the source code for P-HOT. It is converted from Height Optimized Trie to be persistent. The original source code and paper can be found in code and paper.
  3. P-BwTree/ contains the source code for P-BwTree. It is converted to be persistent from an open-source implementation of BwTree. The original source code and paper can be found in code and paper.
  4. P-ART/ contains the source code for P-ART. It is converted for persistent memory from Adaptive Radix Tree using ROWEX for concurrency. The original source code and paper can be found in code and paper.
  5. P-Masstree/ contains the source code for P-Masstree. It is converted from Masstree to be persistent and is customized for the compact version. The original source code and paper can be found in code and paper.
  6. index-microbench/ contains the benchmark framework to generate YCSB workloads. The original source code can be found in code.

Recommended use cases for RECIPE indexes

  1. P-CLHT is a good fit for applications requiring high-performance point queries.
  2. P-HOT is a good fit for applications with read-dominated workloads.
  3. P-BwTree provides well-balanced performance for insertion, lookup, and range scan operations for applications using integer keys.
  4. P-ART is suitable for applications with insertion-dominated workloads and a small number of range queries.
  5. P-Masstree provides well-balanced performance for insertion, lookup, and range scan operations for applications using either integer or string keys.

Integrating RECIPE indexes into your own project

Apart from the benchmark code in ycsb.cpp, we provide simple example programs (P-*/example.cpp for each RECIPE index) to help developers who want to use RECIPE indexes in their own projects quickly see how each index's APIs are used. These examples run insert and lookup operations with custom integer keys. For more details on the usage of each index, please refer to the P-*/README.md in each index's directory, as well as ycsb.cpp.

Important Limitation

Persistent memory allocator

The RECIPE data structures in the master branch use a volatile memory allocator (libvmmalloc) so that RECIPE can be compared apples-to-apples with prior work like FAST&FAIR and CCEH, which also use volatile allocators (and thereby do not provide crash consistency). Thus, if you use RECIPE data structures from the master branch on PM, the allocator metadata will not be crash-consistent.

The current volatile allocator must be replaced with a persistent-memory allocator to make the allocator metadata crash-consistent and to prevent permanent memory leaks. In particular, we recommend post-crash garbage collection over logging-based approaches for preventing permanent leaks, since logging-based approaches pay a constant cost for recording logs during normal runtime (as discussed in our SOSP paper). We are currently exploring various post-crash garbage collection techniques ([1], [2], [3], [4], [5]) to apply to the RECIPE data structures.

As a first step, we are working on replacing the current volatile allocator with the PMDK library and on solving permanent memory leaks using the functions it provides [5]. Please check out the pmdk branch for updates on this work and for details.

Running RECIPE Indexes on Persistent Memory and DRAM

Desired system configurations (for DRAM environment)

  • Ubuntu 18.04.1 LTS
  • At least 32GB DRAM
  • x86-64 CPU supporting at least 16 threads
  • P-HOT: x86-64 CPU supporting at least the AVX-2 and BMI-2 instruction sets (Haswell and newer)
  • Compile: cmake, g++-7, gcc-7, c++17

Dependencies

Install build packages

$ sudo apt-get install build-essential cmake libboost-all-dev libpapi-dev default-jdk

Install jemalloc and tbb

$ sudo apt-get install libtbb-dev libjemalloc-dev

Generating YCSB workloads

Download YCSB source code

$ cd ./index-microbench
$ curl -O --location https://github.com/brianfrankcooper/YCSB/releases/download/0.11.0/ycsb-0.11.0.tar.gz
$ tar xfvz ycsb-0.11.0.tar.gz
$ mv ycsb-0.11.0 YCSB

How to configure and generate workloads

Configure the options of each workload (a, b, c, e); you would typically only need to change $recordcount and $operationcount.

$ vi ./index-microbench/workload_spec/<workloada or workloadb or workloadc or workloade>

Select which workloads to generate. The default configuration generates all workloads (a, b, c, e). Change the line for WORKLOAD_TYPE in <a b c e>; do, depending on which workloads you want to generate.

$ vi ./index-microbench/generate_all_workloads.sh

Generate the workloads. This will generate both random integer keys and string YCSB keys with the specified key distribution.

$ cd ./index-microbench/
$ mkdir workloads
$ bash generate_all_workloads.sh

Checklists

Configuration for workload size.

Change the LOAD_SIZE and RUN_SIZE variables, which are hard-coded in ycsb.cpp (default: 64000000), to match the generated workload size.

$ vi ycsb.cpp

Configurations for Persistent Memory

For running the indexes on Intel Optane DC Persistent Memory, we use libvmmalloc to transparently convert all dynamic memory allocations into persistent-memory allocations mapped by pmem.

Ext4-DAX mount

$ sudo mkfs.ext4 -b 4096 -E stride=512 -F /dev/pmem0
$ sudo mount -o dax /dev/pmem0 /mnt/pmem

Install PMDK

$ git clone https://github.com/pmem/pmdk.git
$ cd pmdk
$ git checkout tags/1.6
$ make -j
$ cd ..

Configuration for libvmmalloc

  • LD_PRELOAD=path

Specifies the path to libvmmalloc.so.1. By default, this is the path to the libvmmalloc.so.1 built by the PMDK installation instructions above.

  • VMMALLOC_POOL_DIR=path

Specifies a path to the directory where the memory pool file should be created. The directory must exist and be writable.

  • VMMALLOC_POOL_SIZE=len

Defines the desired size (in bytes) of the memory pool file.

$ vi ./scripts/set_vmmalloc.sh

Please change the configurations below to fit your environment.

export VMMALLOC_POOL_SIZE=$((64*1024*1024*1024))
export VMMALLOC_POOL_DIR="/mnt/pmem"

Building & Running on Persistent Memory and DRAM

Build all

$ mkdir build
$ cd build
$ cmake ..
$ make

DRAM environment

Run

$ cd ${project root directory}
$ ./build/ycsb art a randint uniform 4
Usage: ./ycsb [index type] [ycsb workload type] [key distribution] [access pattern] [number of threads]
       1. index type: art hot bwtree masstree clht
                      fastfair levelhash cceh
       2. ycsb workload type: a, b, c, e
       3. key distribution: randint, string
       4. access pattern: uniform, zipfian
       5. number of threads (integer)

Persistent Memory environment

Run

$ cd ${project root directory}
$ sudo su
# source ./scripts/set_vmmalloc.sh
# LD_PRELOAD="./pmdk/src/nondebug/libvmmalloc.so.1" ./build/ycsb art a randint uniform 4
# source ./scripts/unset_vmmalloc.sh

Artifact Evaluation

For artifact evaluation, we re-evaluate the performance of the index structures presented in the paper using the YCSB benchmark. The index structures tested include P-CLHT, P-ART, P-HOT, P-Masstree, P-BwTree, FAST&FAIR, WOART, CCEH, and Level hashing. The evaluation results are stored as CSV files in the ./results directory. Before beginning artifact evaluation, please make sure to read at least through the Checklists subsection in the Benchmark details section below. Note that the evaluations re-generated for artifact evaluation are based on DRAM, because the Optane DC persistent memory machine used for the evaluations presented in the paper is not accessible to external users. For more detail, please refer to experiments.md.

RECIPE has been awarded three badges: Artifact Available, Artifact Functional, and Results Reproduced.

References

[1] Kumud Bhandari, et al. Makalu: Fast Recoverable Allocation of Non-volatile Memory, OOPSLA'16.

[2] Nachshon Cohen, et al. Object-Oriented Recovery for Non-volatile Memory, OOPSLA'18.

[3] Tudor David, et al. Log-Free Concurrent Data Structures, ATC'18.

[4] Wentao Cai, et al. Understanding and optimizing persistent memory allocation, ISMM'20.

[5] Eduardo B., Code Sample: Find Your Leaked Persistent Memory Objects Using the Persistent Memory Development Kit (PMDK).

License

The license for most of the P-* family of persistent indexes is the Apache License (https://www.apache.org/licenses/LICENSE-2.0). This is consistent with most of the indexes we build on, with the exception of CLHT and HOT, which use the MIT and ISC licenses respectively. Accordingly, P-CLHT is under the MIT license (https://opensource.org/licenses/MIT), and P-HOT is under the ISC license (https://opensource.org/licenses/ISC).

Copyright for RECIPE indexes is held by the University of Texas at Austin. Please contact us if you would like to obtain a license to use RECIPE indexes in your commercial product.

Acknowledgements

We thank the National Science Foundation, VMware, Google, and Facebook for partially funding this project. We thank Intel and ETRI IITP/KEIT[2014-3-00035] for providing access to Optane DC Persistent Memory to perform our experiments.

Contact

Please contact us at [email protected] and [email protected] with any questions.

Contributors

g4197, pyrito, sekwonlee, vijay03

Issues

some questions about CLFLUSH_OPT/CLWB

Excuse me, I have read your paper and code, very interesting. But I have a question about CLFLUSH_OPT/CLWB.
Below is your implementation:

inline void clflush(char *data, int len, bool front, bool back)
{
    volatile char *ptr = (char *)((unsigned long)data & ~(cache_line_size - 1));
    if (front)
        mfence();
    for (; ptr < data + len; ptr += cache_line_size) {
        unsigned long etsc = read_tsc() +
            (unsigned long)(write_latency_in_ns * cpu_freq_mhz / 1000);
#ifdef CLFLUSH
        asm volatile("clflush %0" : "+m" (*(volatile char *)ptr));
#elif CLFLUSH_OPT
        asm volatile(".byte 0x66; clflush %0" : "+m" (*(volatile char *)ptr));
#elif CLWB
        asm volatile(".byte 0x66; xsaveopt %0" : "+m" (*(volatile char *)ptr));
#endif
        while (read_tsc() < etsc)
            cpu_pause();
    }
    if (back)
        mfence();
}

In fact, CLFLUSH_OPT/CLWB can be reordered. In some indexes, should mfence() be added between cache lines instead of only at the end? For example, the FAST&FAIR paper clearly mentions the need to add mfence() at cache-line boundaries. The original implementation of FAST&FAIR used only CLFLUSH, which does not cause problems, because CLFLUSH is not reordered.
So, will using CLFLUSH_OPT/CLWB cause a bug?

Search non-existing keys in P-Masstree

Hi, when I invoke a get for non-existing keys in P-Masstree, "should not enter here....." is printed.

code:

  masstree::masstree *tree = new masstree::masstree();
  auto t = tree->getThreadInfo();
  char str[] = "helloworld";
  tree->get(str, t);

printed information:

should not enter here
fkey = rowolleh, key = 7522537965574647666, searched key = 0, key index = -1

some questions about the usage of Optane DC

Excuse me, I have read your paper, and I have some questions about your test:

  1. Did you configure App Direct mode with interleaved modules?
  2. How did you measure high throughput (77.64 Mops/s for CCEH) on Optane DC?
  3. Did you test the scalability of your P-CLHT? The scalability of the other common data structures I tested is not good; I wonder if it is limited by the Optane DC.
    I hope that you can take the time to answer during your busy schedule. Thank you!

P-Masstree not Working Correctly with String Keys

I've read your code a bit and tried to modify example.cpp to use string keys.
I use the function void masstree::put(char *key, uint64_t value) to insert string keys into P-Masstree.
However, it does not work correctly if a key overlaps with a previously inserted key.
For example, if we first insert key1 abcdefghijklmnopqrstuvwxyz and then key2 abcdefghijklmnopqrstuvwxy, then trying to get key1 using void *masstree::get(char *key) returns an empty value.

Read committed

Current implementations only ensure the lowest isolation level (Read Uncommitted) for some read operations such as scans, negative lookups, and checks for value existence, since these are based on normal CASes or temporal stores coupled with cache-line flush instructions. However, this is not a fundamental limitation of RECIPE conversions. You can extend them, following the RECIPE conversions, to guarantee a higher isolation level (Read Committed). For lock-based implementations, including P-CLHT, P-HOT, P-ART, and P-Masstree, replace each final commit store (such as a pointer swap) coupled with cache-line flushes with a non-temporal store coupled with a memory fence. For lock-free implementations such as P-BwTree, either add additional flushes after loads of final commit stores, or replace volatile CASes coupled with cache-line flush instructions with software-based atomic-persistent primitives such as Link-and-Persist (paper, code) or PSwCAS (paper, code).

Port RECIPE data structures to libpmem

This issue involves porting the RECIPE data structures to work on libpmem. For example, converting P-CLHT to a form that uses the libpmem pointers and allocation routines.

Crash-consistency bug in P-CLHT `clht_gc_collect_cond`

The value of hashtable->ht_oldest is not persisted after the free, meaning that a post-crash execution can read the previous value and perform a double-free.

RECIPE/P-CLHT/src/clht_gc.c

Lines 183 to 196 in 05a49d7

clht_hashtable_t* cur = hashtable->ht_oldest;
while (cur != NULL && cur->version < version_min)
{
    gced_num++;
    clht_hashtable_t* nxt = cur->table_new;
    /* printf("[GCOLLE-%02d] gc_free version: %6zu | current version: %6zu\n", GET_ID(collect_not_referenced_only), */
    /*        cur->version, hashtable->ht->version); */
    nxt->table_prev = NULL;
    clht_gc_free(cur);
    cur = nxt;
}
hashtable->version_min = cur->version;
hashtable->ht_oldest = cur;

Segmentation fault in ycsb

It looks like there is a segmentation fault caused by `./build/ycsb art a randint uniform 4`.
I followed the build and config procedure as described in README.md, up to 'Persistent Memory environment'.

Machine config:
CPU: AMD Ryzen Threadripper 2990WX 32-Core Processor
DRAM: 8*16G DDR4
DRAM emulated persistent memory: 50G, mounted with ext4-dax file system
OS: Ubuntu 18.04.3 LTS, with linux-5.1.0+ kernel

root@RECIPE# cat ./scripts/set_vmmalloc.sh
export VMMALLOC_POOL_SIZE=$((16*1024*1024*1024))
export VMMALLOC_POOL_DIR="/mnt/pmem"

root@RECIPE# source ./scripts/set_vmmalloc.sh
root@RECIPE# LD_PRELOAD="../pmdk/src/nondebug/libvmmalloc.so.1" ./build/ycsb art a randint uniform 4
art, workloada, randint, uniform, threads 4
Loaded 0 keys
Segmentation fault (core dumped)

root@RECIPE# gdb ./build/ycsb core
warning: Error reading shared library list entry at 0x7f2ecdc39b00
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./build/ycsb art a randint uniform 4'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00005629a6dc8967 in ART_ROWEX::N4::change(unsigned char, ART_ROWEX::N*) ()
[Current thread is 1 (Thread 0x7f32d0800800 (LWP 108852))]
(gdb) bt
#0  0x00005629a6dc8967 in ART_ROWEX::N4::change(unsigned char, ART_ROWEX::N*) ()
#1  0x00005629a6dcd717 in ART_ROWEX::Tree::insert(Key const*, ART::ThreadInfo&) ()
#2  0x00005629a6d73cc0 in tbb::interface9::internal::start_for<tbb::blocked_range<unsigned long>, ycsb_load_run_randint(int, int, int, int, int, std::vector<unsigned long, std::allocator<unsigned long> >&, std::vector<unsigned long, std::allocator<unsigned long> >&, std::vector<int, std::allocator<int> >&, std::vector<int, std::allocator<int> >&)::{lambda(tbb::blocked_range<unsigned long> const&)#1}, tbb::auto_partitioner const>::execute() ()
#3  0x00007f32cff9bb46 in ?? () from /usr/lib/x86_64-linux-gnu/libtbb.so.2
#4  0x00007f32cff98790 in ?? () from /usr/lib/x86_64-linux-gnu/libtbb.so.2
#5  0x00005629a6d82db6 in ycsb_load_run_randint(int, int, int, int, int, std::vector<unsigned long, std::allocator<unsigned long> >&, std::vector<unsigned long, std::allocator<unsigned long> >&, std::vector<int, std::allocator<int> >&, std::vector<int, std::allocator<int> >&) ()
#6  0x00005629a6d700c6 in main ()

FAST_FAIR Range Bug

In linear_search_range of /third-party/FAST_FAIR/btree.h, count() should be current->count().

Crash consistency bug in clht_gc_free

Bug

Exposed by crashing after freeing the hash table in clht_gc_free.

RECIPE/P-CLHT/src/clht_gc.c

Lines 239 to 242 in fc508dd

PMEMoid table_oid = {pool_uuid, hashtable->table_off};
pmemobj_free(&table_oid);
PMEMoid ht_oid = pmemobj_oid((void *)hashtable);
pmemobj_free(&ht_oid);

  • pmemobj_free sets the PMEMoid object to NULL when freeing objects.
  • With the current design of storing the offset in hashtable->table_off, the offset is never set to null, and so a crash can cause a double-free to occur.

Steps to reproduce

gdb --args ./example 20 20
> break clht_gc.c:241
> run
> quit
# Then, re-run
./example 20 0

Will output something like:

Simple Example of P-CLHT
operation,n,ops/s
Throughput: load, inf ,ops/us
Throughput: run, inf ,ops/us
<libpmemobj>: <1> [palloc.c:295 palloc_heap_action_exec] assertion failure: 0

Segmentation fault on YCSB with CLHT, after using libvmmalloc

I first tried running YCSB CLHT on DRAM, which worked. Then, I ran it with libvmmalloc as LD_PRELOAD, and observed a segmentation fault occasionally happening (in a nondeterministic way), both with 16 and 32 threads. I ran YCSB workloads with 2 input configurations, recordcount=operationcount=64000000, and recordcount=operationcount=1000000, both seeing the segfault:

RECIPE# LD_PRELOAD="../pmdk/src/nondebug/libvmmalloc.so.1" ./build/ycsb clht a randint uniform 32
Loaded 1000001 keys
Segmentation fault (core dumped)

Machine config:
CPU: AMD Ryzen Threadripper 2990WX 32-Core Processor
DRAM: 8*16G DDR4
DRAM emulated persistent memory: 64G, mounted with ext4-dax file system
OS: Ubuntu 18.04.3 LTS, with linux-5.1.0+ kernel

scripts/set_vmmalloc.sh:

export VMMALLOC_POOL_SIZE=$((60*1024*1024*1024))
export VMMALLOC_POOL_DIR="/mnt/pmem/test"

GDB backtrace on the core file

warning: Error reading shared library list entry at 0x7f7154c39b00
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./build/ycsb clht a randint uniform 32'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000055a0f68ab202 in ssmem_mem_reclaim ()
[Current thread is 1 (Thread 0x7f71545ff700 (LWP 20200))]
(gdb) bt
#0  0x000055a0f68ab202 in ssmem_mem_reclaim ()
#1  0x000055a0f68aa72e in clht_gc_release ()
#2  0x000055a0f68a9ba0 in ht_resize_pes ()
#3  0x000055a0f68a94dc in ht_status ()
#4  0x000055a0f68a98de in clht_put ()
#5  0x000055a0f6840dc8 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<ycsb_load_run_randint(int, int, int, int, int, std::vector<unsigned long, std::allocator<unsigned long> >&, std::vector<unsigned long, std::allocator<unsigned long> >&, std::vector<int, std::allocator<int> >&, std::vector<int, std::allocator<int> >&)::{lambda()#9}> > >::_M_run() ()
#6  0x00007f7d5662166f in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x00007f7d568f46db in start_thread (arg=0x7f71545ff700) at pthread_create.c:463
#8  0x00007f7d55cde88f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Scalability Issues in Optane

Hello, we are trying to reproduce your results. Even though we have successfully plugged your work into the Intel Optane of our system by following exactly the procedure that you indicate, we do not observe significant scalability as we increase the number of threads, in contrast to DRAM execution, where the scalability is clear. Is this a known issue? Do these indexes scale on both DRAM and Intel Optane? Is there any known reason why scalability fails on Optane?
Thank you :)

Crash consistency issue after acquiring bucket locks

Bug

Exposed by crashing after acquiring a lock from clht_put.

static inline int
lock_acq_chk_resize(clht_lock_t* lock, clht_hashtable_t* h)
{
    char once = 1;
    clht_lock_t l;
    while ((l = CAS_U8(lock, LOCK_FREE, LOCK_UPDATE)) == LOCK_UPDATE)
    {

  • Crashing after line 311 here causes the lock never to be released, so the restarted example waits indefinitely

Steps to reproduce

gdb --args ./example 20 1
> break clht_lb_res.h:311
> run
> next
> p *lock
# should print "$1 = 1 '\001'"
> quit
# Then, re-run
./example 20 1

The second execution should run indefinitely, waiting on acquiring the lock.

Comments

I see your comments here about locking assumptions:

// Although our current implementation does not provide post-crash mechanism,
// the locks should be released after a crash (Please refer to the function clht_lock_initialization())
clht_lock_t lock;

Does this mean this is a known issue, or does clht_lock_initialization just need to be added to clht_create? I ask because it seems that clht_lock_initialization is called in other places, just not in the recovery procedure.
