
Latte

Lightweight Benchmarking Tool for Apache Cassandra

Runs custom CQL workloads against a Cassandra cluster and measures throughput and response times


Why Yet Another Benchmarking Program?

  • Latte outperforms other benchmarking tools for Apache Cassandra by a wide margin. See benchmarks.
  • Latte aims to offer the most flexible way of defining workloads.

Performance

Unlike NoSQLBench, Cassandra Stress, and tlp-stress, Latte is written in Rust and uses the native Cassandra driver from ScyllaDB. It features a fully asynchronous, thread-per-core execution engine capable of running thousands of requests per second from a single thread.

Latte has the following unique performance characteristics:

  • Great scalability on multi-core machines.
  • About 10x better CPU efficiency than NoSQLBench. This means you can test large clusters with a small number of clients.
  • About 50x-100x lower memory footprint than Java-based tools.
  • Very low impact on operating system resources – low number of syscalls, context switches and page faults.
  • No client code warmup needed. The client code works with maximum performance from the first benchmark cycle. Even runs as short as 30 seconds give accurate results.
  • No GC pauses nor HotSpot recompilation happening in the middle of the test. You want to measure hiccups of the server, not the benchmarking tool.

Its excellent performance makes it a perfect tool for exploratory benchmarking, when you want to quickly experiment with different workloads.

Flexibility

Other benchmarking tools often use configuration files to specify workload recipes. Although that makes it easy to define simple workloads, it quickly becomes cumbersome when you want to script more realistic scenarios that issue multiple queries or need to generate data in different ways than the ones directly built into the tool.

Instead of trying to bend a popular configuration file format into a Turing-complete scripting language, Latte simply embeds a real, fully-featured, modern scripting language. We chose Rune for its painless integration with Rust, first-class async support, satisfactory performance, and great support from its maintainers.

Rune offers syntax and features similar to Rust, albeit with dynamic typing and easy, automatic memory management. Hence, you can not only issue custom CQL queries but program anything you wish. There are variables, conditional statements, loops, pattern matching, functions, lambdas, user-defined data structures, objects, enums, constants, macros, and more.
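For a taste of the syntax, here is a small standalone Rune snippet (illustrative only, not a Latte workload):

```rune
// A pure Rune function using pattern matching:
fn label(n) {
    match n % 2 {
        0 => "even",
        _ => "odd",
    }
}

pub fn main() {
    // Vectors, loops, and template strings work much like in Rust,
    // but without type annotations or explicit memory management.
    let labels = [];
    for i in 0..4 {
        labels.push(`${i} is ${label(i)}`);
    }
    labels
}
```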

Features

  • Compatible with Apache Cassandra 3.x, 4.x, DataStax Enterprise 6.x and ScyllaDB
  • Custom workloads with a powerful scripting engine
  • Asynchronous queries
  • Prepared queries
  • Programmable data generation
  • Workload parameterization
  • Accurate measurement of throughput and response times with error margins
  • No coordinated omission
  • Configurable number of connections and threads
  • Rate and concurrency limiters
  • Progress bars
  • Beautiful text reports
  • Can dump report in JSON
  • Side-by-side comparison of two runs
  • Statistical significance analysis of differences corrected for auto-correlation

Limitations

Latte is still early stage software under intensive development.

  • Binding some CQL data types is not yet supported, e.g. user-defined types, maps, or integer types smaller than 64 bits.
  • Query result sets are not exposed yet.
  • The set of data generating functions is tiny and will be extended soon.
  • Backwards compatibility may be broken frequently.

Installation

From deb package

dpkg -i latte-<version>.deb

From source

  1. Install Rust toolchain
  2. Run cargo install latte-cli

Usage

Start a Cassandra cluster somewhere (can be a local node). Then run:

latte schema <workload.rn> [<node address>] # create the database schema 
latte load <workload.rn> [<node address>]   # populate the database with data
latte run <workload.rn> [-f <function>] [<node address>]  # execute the workload and measure the performance 

You can find a few example workload files in the workloads folder. For convenience, you can place workload files under /usr/share/latte/workloads or .local/share/latte/workloads, so latte can find them regardless of the current working directory. You can also set up custom workload locations by setting the LATTE_WORKLOAD_PATH environment variable.

Latte prints text reports to stdout and also saves all data to a JSON file in the working directory. The name of the file is derived automatically from the parameters of the run and a timestamp.

You can display the results of a previous run with latte show:

latte show <report.json>
latte show <report.json> -b <previous report.json>  # to compare against baseline performance

Run latte --help to display help with the available options.

Workloads

Workloads for Latte are fully customizable with the embedded scripting language Rune.

A workload script defines a set of public functions that Latte calls automatically. A minimum viable workload script must define at least a single public async function named run, taking two arguments:

  • ctx – session context that provides the access to Cassandra
  • i – the current cycle number, a unique 64-bit integer starting at 0

The following script would benchmark querying the system.local table:

pub async fn run(ctx, i) {
  ctx.execute("SELECT cluster_name FROM system.local LIMIT 1").await
}

Instance functions on ctx are asynchronous, so you must await them.

The workload script can provide more than one function for running the benchmark. In this case you can name those functions whatever you like and select one of them with the -f / --function parameter.
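For example, a script might expose separate read and write entry points (a sketch; it assumes SELECT and INSERT statements were registered in the prepare function, as described in the Prepared statements section):

```rune
pub async fn write(ctx, i) {
    // Insert a row keyed by the cycle number.
    ctx.execute_prepared(INSERT, [i, "payload"]).await
}

pub async fn read(ctx, i) {
    // Read back the row keyed by the cycle number.
    ctx.execute_prepared(SELECT, [i]).await
}
```

Then `latte run workload.rn -f read` would run the read function instead of the default run.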

Schema creation

You can (re)create the keyspaces and tables needed by the benchmark in the schema function, which is executed by running the latte schema command. The schema function should also drop the old schema if present.

pub async fn schema(ctx) {
  ctx.execute("CREATE KEYSPACE IF NOT EXISTS test \
                 WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }").await?;
  ctx.execute("DROP TABLE IF EXISTS test.test").await?;
  ctx.execute("CREATE TABLE test.test(id bigint PRIMARY KEY, data varchar)").await?;
}

Prepared statements

Calling ctx.execute is not optimal, because it doesn't use prepared statements. You can prepare statements and register them on the context object in the prepare function:

const INSERT = "my_insert";
const SELECT = "my_select";

pub async fn prepare(ctx) {
  ctx.prepare(INSERT, "INSERT INTO test.test(id, data) VALUES (?, ?)").await?;
  ctx.prepare(SELECT, "SELECT * FROM test.test WHERE id = ?").await?;
}

pub async fn run(ctx, i) {
  ctx.execute_prepared(SELECT, [i]).await
}

Query parameters can also be bound and passed by name:

const INSERT = "my_insert";

pub async fn prepare(ctx) {
  ctx.prepare(INSERT, "INSERT INTO test.test(id, data) VALUES (:id, :data)").await?;
}

pub async fn run(ctx, i) {
  ctx.execute_prepared(INSERT, #{id: 5, data: "foo"}).await
}

Populating the database

Read queries are more interesting when they return non-empty result sets.

To be able to load data into tables with latte load, you need to set the number of load cycles on the context object and define the load function:

pub async fn prepare(ctx) {
  ctx.load_cycle_count = 1000000;
}

pub async fn load(ctx, i) {
  ctx.execute_prepared(INSERT, [i, "Lorem ipsum dolor sit amet"]).await
}

We also recommend defining the erase function to erase the data before loading, so that you always start from the same dataset regardless of what was in the database before:

pub async fn erase(ctx) {
  ctx.execute("TRUNCATE TABLE test.test").await
}

Generating data

Latte comes with a library of data-generating functions, accessible in the latte crate. Typically these functions accept the cycle number i, so the data they generate is consistent between runs. The data-generating functions are pure, i.e. invoking them multiple times with the same parameters always yields the same result.

  • latte::uuid(i) – generates a random (type 4) UUID
  • latte::hash(i) – generates a non-negative integer hash value
  • latte::hash2(a, b) – generates a non-negative integer hash value of two integers
  • latte::hash_range(i, max) – generates an integer value in range 0..max
  • latte::hash_select(i, vector) – selects an item from a vector based on a hash
  • latte::blob(i, len) – generates a random binary blob of length len
  • latte::normal(i, mean, std_dev) – generates a floating point number from a normal distribution
  • latte::uniform(i, min, max) – generates a floating point number from a uniform distribution

Type conversions

Rune uses a 64-bit representation for integers and floats. Since version 0.28, Rune numbers are automatically converted to the proper target query parameter type, so you don't need to do explicit conversions. E.g. you can pass an integer as a parameter of Cassandra type smallint. If the number is too big to fit in the range allowed by the target type, a runtime error is signalled.

The following methods are available:

  • x.to_integer() – converts a float to an integer
  • x.to_float() – converts an integer to a float
  • x.to_string() – converts a float or integer to a string
  • x.clamp(min, max) – restricts an integer or float value to the given range

You can also convert between floats and integers by calling to_integer or to_float instance functions.
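For example (a sketch; it assumes a prepared INSERT whose second parameter is a bigint column), a value drawn from a normal distribution can be clamped and converted before binding:

```rune
pub async fn run(ctx, i) {
    // normal() returns a float; clamp it to a sane range and
    // convert it to an integer before binding it to a bigint column.
    let size = latte::normal(i, 100.0, 15.0).clamp(0.0, 1000.0).to_integer();
    ctx.execute_prepared(INSERT, [i, size]).await
}
```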

Text resources

Text data can be loaded from files or resources with functions in the fs module:

  • fs::read_to_string(file_path) – returns file contents as a string
  • fs::read_lines(file_path) – reads file lines into a vector of strings
  • fs::read_resource_to_string(resource_name) – returns builtin resource contents as a string
  • fs::read_resource_lines(resource_name) – returns builtin resource lines as a vector of strings

The resources are embedded in the program binary. You can find them under the resources folder in the source tree.

To reduce the cost of memory allocation, it is best to load resources in the prepare function only once and store them in the data field of the context for future use in load and run:

pub async fn prepare(ctx) {
  ctx.data.last_names = fs::read_lines("lastnames.txt")?;
  // ... prepare queries
}

pub async fn run(ctx, i) {
  let random_last_name = latte::hash_select(i, ctx.data.last_names);
  // ... use random_last_name in queries
}

Parameterizing workloads

Workloads can be parameterized with values given on the command line. Use the latte::param!(param_name, default_value) macro to initialize script constants from command-line parameters:

const ROW_COUNT = latte::param!("row_count", 1000000);

pub async fn prepare(ctx) {
  ctx.load_cycle_count = ROW_COUNT;
} 

Then you can set the parameter by using -P:

latte run <workload> -P row_count=200

Error handling

Errors during execution of a workload script are divided into three classes:

  • compile errors – errors detected when the script is loaded, e.g. syntax errors or references to undefined variables. These are signalled immediately and terminate the benchmark before it even connects to the database.
  • runtime errors / panics – e.g. division by zero or out-of-bounds array access. They terminate the benchmark immediately.
  • error return values – e.g. when query execution returns an error result. These take effect only when actually returned from the function (use ? to propagate them up the call chain). All errors except Cassandra overload errors terminate the benchmark immediately. Overload errors (e.g. timeouts) that happen during the main run phase are counted and reported in the benchmark report.
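As a sketch of how this looks in a workload (assuming INSERT and SELECT were prepared earlier), the ? operator propagates error results, and the value of the last expression is what gets returned to Latte:

```rune
pub async fn run(ctx, i) {
    // On error, ? aborts this function early and propagates the error result.
    ctx.execute_prepared(INSERT, [i, "payload"]).await?;
    // The result of the final call is returned to Latte; overload errors
    // during the main run phase are counted instead of aborting the benchmark.
    ctx.execute_prepared(SELECT, [i]).await
}
```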

Contributors

jakubzytka, pkolaczk, rukai, vponomaryov, vrischmann


Issues

Can't go faster than ~2 million calls / s on a 2x12 core Xeon

There seems to be congestion somewhere that limits the performance of the empty benchmark to about 1.5-2 million calls per second.
This is probably not an issue in most real benchmarks, where Cassandra is a lot slower anyway; however, the performance bar for this tool is set very high, so this needs to be solved.

The issue doesn't seem to be visible on single-core processors, hence I guess it could be caused by false sharing / shared atomic updates, which are inherently costly on multiprocessor machines.

Don't erase data by default

The erase stage is executed by default. The idea was that each benchmark run should be repeatable, so it should start from the same clean state. However, I've learned this is also a bit dangerous: if someone has a big data set for read benchmarking, forgetting the --no-load option will erase all the data, and you can lose hours of loading work if you didn't save a backup.

Throughput ceiling at 200k-300k req/s

There seems to be a throughput ceiling caused by the fact that spawning asynchronous queries is essentially single-threaded.
The following operations take a surprisingly large amount of time:

  • spawning a new tokio task
  • binding a statement and submitting it asynchronously to the driver (the major cost)

With the current design, these operations are serial and don't scale on multicore.

Proposed solution:

  • refactor the main loop to use asynchronous streams (using the Stream abstraction)
  • don't spawn each query as a separate task; make each stream concurrent but single-threaded (this should decrease scheduling costs; async/await is cheaper than spawn)
  • create many independent query streams and spawn them on separate threads; merge them using an mpsc channel

To be decided later: should we share a single Session, or should each stream have its own Session?

fs::read_lines doesn't make a vector of strings

I have prepared this workload

const READ = "read";

const KEYSPACE = "keyspace";
const TABLE = "table";

pub async fn prepare(ctx) {
    ctx.data.ids = fs::read_lines("ids.txt")?;
    ctx.prepare(READ, `SELECT sum(table_column) FROM ${KEYSPACE}.${TABLE} WHERE id = ? AND event_time >= '2021-11-01'`).await?;
}

pub async fn run(ctx, i) {
    let random_id = latte::hash_select(i, ctx.data.ids);
    ctx.execute_prepared(READ, [random_id]).await?
}

The ids.txt file contains ids (one id per line). But it looks like the fs::read_lines() function doesn't work as expected, because the workload fails with this error:

LOG ════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════
    Time  ───── Throughput ─────  ────────────────────────────────── Response times [ms] ───────────────────────────────────
     [s]      [op/s]     [req/s]         Min        25        50        75        90        95        99      99.9       Max
error: Cassandra error: Failed to execute query "SELECT sum(table_column) FROM keyspace.table WHERE id = ? AND event_time >= '2021-11-01'" with params [Text("k42qjlyix0lulp3m\njl36ct77wor07183\nr7hpjf012xdw87jn\n ... all ids from the ids.txt file
\n")]: Database returned an error: The query is syntatically correct but invalid, Error message: Key length of 152685 is longer than maximum of 65535

Express workloads as lists of queries to execute

Currently Workload is responsible for both defining and running the queries.
Because running is an async operation, and we want more than one workload, this requires #[async_trait]. Using async functions in traits in Rust is not perfect yet - it requires boxing the returned futures (boxing = additional heap allocation).

Additionally, the workload code still has far too many redundant elements, e.g. preparing the statements, inspecting the result, and counting the rows/partitions.

The goal of this task is to move the responsibility for running queries into the main loop. Workloads would only be responsible for providing the queries as CQL text plus some minimal binding guidance. This should simplify workload definitions and also make it easy to add user-defined workloads in the future.

Ability to prime BoundedCycleCounter with a different starting position

When we want to split some load across multiple loader machines (i.e. the machines that generate traffic in our tests), we have lots of cases where we need to split the ranges each machine works on. For example, if we want to write a big amount of data (a few terabytes), doing so with multiple machines that all run through exactly the same cycles with the same counter would cause excessive rewrites and slow the whole thing down.

It would be nice if we could pass to latte something like:

latte run -d 10000000 --start-counter 0 -- workload.ns 127.0.0.1
latte run -d 10000000 --start-counter 10000000 -- workload.ns 127.0.0.1
latte run -d 10000000 --start-counter 20000000 -- workload.ns 127.0.0.1
latte run -d 10000000 --start-counter 30000000 -- workload.ns 127.0.0.1
latte run -d 10000000 --counter-range 0-10000000 -- workload.ns 127.0.0.1
latte run -d 10000000 --counter-range 10000000-20000000 -- workload.ns 127.0.0.1
latte run -d 10000000 --counter-range 20000001-30000000 -- workload.ns 127.0.0.1
latte run -d 10000000 --counter-range 30000001-40000000 -- workload.ns 127.0.0.1

Support for user-defined types

The Rust driver can read/write UDTs:
https://rust-driver.docs.scylladb.com/stable/data-types/udt.html

but it seems that using them from a Latte workload is not currently possible.

Passing in a Rune object didn't work:

❯ latte load docker/latte/workloads/custom_d1.rn -- 172.17.0.2
info: Loading workload script /home/fruch/projects/scylla-cluster-tests/docker/latte/workloads/custom_d1.rn...
info: Connecting to ["172.17.0.2"]...
info: Connected to  running Cassandra version 3.0.8
info: Preparing...
info: Erasing data...
info: Loading data...
error: Cassandra error: Unsupported type: Object

I'm guessing some constructs need to be exposed to Rune for building the needed UDT objects.

Plot is saved as .png, but should be .svg

The plot.rs code explicitly creates an SVG file, but the extension is coded as png. This confuses the operating system, and all that's required is to change the extension to svg.

Response time percentiles should be given with higher precision

RESPONSE TIMES [ms] ════════════════════════════════════════════════════════════════════════════════════════════════════════
          Min                       0 ± 0
           25                       1 ± 0
           50                       1 ± 0
           75                       2 ± 0
           90                       2 ± 0
           95                       2 ± 0
           98                       3 ± 0
           99                       3 ± 0
           99.9                    17 ± 9
           99.99                   97 ± 19
          Max                     135 ± 26

Somehow the decimal places are missing.

Add PostgreSQL as database target

Hi, have you considered adding other databases to this tool?
I like your work and detail on making sure a thread can run without being blocked.
One of the main databases I am working with is PostgreSQL.

have output that is friendly to being run without terminal

We might run this tool inside a Docker image as part of a bigger setup/test while still wanting to follow the log and parse it in real time, so we can send metrics with stats to our Grafana.

for example, here's the output without a terminal running under docker:

LOG ════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════
    Time  ───── Throughput ─────  ────────────────────────────────── Response times [ms] ───────────────────────────────────
     [s]      [op/s]     [req/s]         Min        25        50        75        90        95        99      99.9       Max
Running...           [                                                            ]   0.0%             0.0/120s            0
Running...           [                                                            ]   0.8%             1.0/120s       290323
   1.000      290314      290314       0.015     0.234     0.308     0.409     0.615     0.952     1.650     2.078     4.037
Running...           [                                                            ]   0.8%             1.0/120s       290323
Running...           [▪                                                           ]   1.7%             2.0/120s       576337
Running...           [▪                                                           ]   1.7%             2.0/120s       576337
   2.000      285969      285969       0.019     0.238     0.303     0.417     0.806     0.964     1.115     1.264     1.598
Running...           [▪                                                           ]   2.5%             3.0/120s       853499
   3.000      277207      277207       0.017     0.234     0.292     0.451     0.906     0.985     1.124     1.377     2.408
Running...           [▪                                                           ]   2.5%             3.0/120s       853499
Running...           [▪▪                                                          ]   3.3%             4.0/120s      1116203
Running...           [▪▪                                                          ]   3.3%             4.0/120s      1116203
   4.000      262662      262662       0.016     0.239     0.307     0.640     0.936     0.982     1.100     1.289     1.804
Running...           [▪▪                                                          ]   4.2%             5.0/120s      1326661
Running...           [▪▪                                                          ]   4.2%             5.0/120s      1326789

Histograms stored in samples take too much memory during long runs

[Screenshot from 2024-03-29 14-19-56: memory utilization of the two loader nodes]
On the screenshot above we see the memory utilization of 2 nodes used for running latte.
Memory utilization grew to 10 GB over 3 hours of uptime on each of the nodes.

Debugged a bit locally and observed that the memory leak happens on each sampling event.
My observation is that memory utilization is directly related to the number of operations performed during a sampling period.

execute functions to return responses, so return values can get validated by the workload

While trying to play around with latte, I was looking at how I can add data validation to a workload,
and I found that the execute functions always return empty/nil/unit:

    /// Executes an ad-hoc CQL statement with no parameters. Does not prepare.
    pub async fn execute(&self, cql: &str) -> Result<(), CassError> {
        let start_time = self.stats.try_lock().unwrap().start_request();
        let rs = self.session.query(cql, ()).await;
        let duration = Instant::now() - start_time;
        self.stats
            .try_lock()
            .unwrap()
            .complete_request(duration, &rs);
        rs.map_err(|e| CassError::query_execution_error(cql, &[], e))?;
        Ok(())
    }

    /// Executes a statement prepared and registered earlier by a call to `prepare`.
    pub async fn execute_prepared(&self, key: &str, params: Value) -> Result<(), CassError> {
        let statement = self
            .statements
            .get(key)
            .ok_or_else(|| CassError(CassErrorKind::PreparedStatementNotFound(key.to_string())))?;
        let params = bind::to_scylla_query_params(&params)?;
        let start_time = self.stats.try_lock().unwrap().start_request();
        let rs = self.session.execute(statement, params.clone()).await;
        let duration = Instant::now() - start_time;
        self.stats
            .try_lock()
            .unwrap()
            .complete_request(duration, &rs);
        rs.map_err(|e| CassError::query_execution_error(statement.get_statement(), &params, e))?;
        Ok(())
    }

So in cases where the query works but comes back empty, latte considers it correct, while I would like to be able to treat it as an error, and maybe even stop the workload because of that.

Pass elapsed time to workload

Perhaps there's another way of doing so, but I would like to be able to benchmark workloads that change over time. It seems like the easiest way to do this would be to have the run function also accept the elapsed time so that whatever logic executed on each step can adapt to the time the workload has been running. This is different from using the cycle number since different operations may take a different amount of time and using the cycle number could cause different threads to switch at very different times.

Support for Credentials

It would be useful to have Latte support Credentials on a Cassandra cluster to allow for use on clusters that have the Authorizer and Authenticators enabled.

Provide a separate load subcommand

In serious benchmarking, the data is big enough that we want to load it only once per series of benchmarks.
Currently there is no way to just load the data without running the benchmark.

Unable to build latte on Ubuntu 22.04

Hello!

I tried to build latte from source using cargo and the latest stable Rust (1.73.0); however, I received a build error:

$ cargo build
error[E0512]: cannot transmute between types of different sizes, or dependently-sized types
  --> /home/dr/.asdf/installs/rust/1.73.0/registry/src/index.crates.io-6f17d22bba15001f/rune-0.12.3/src/hash.rs:68:18
   |
68 |         unsafe { mem::transmute(type_id) }
   |                  ^^^^^^^^^^^^^^
   |
   = note: source type: `std::any::TypeId` (128 bits)
   = note: target type: `hash::Hash` (64 bits)

error[E0512]: cannot transmute between types of different sizes, or dependently-sized types
  --> /home/dr/.asdf/installs/rust/1.73.0/registry/src/index.crates.io-6f17d22bba15001f/rune-0.12.3/src/modules/any.rs:15:14
   |
15 |     unsafe { std::mem::transmute(item.type_hash().expect("no type known for item!")) }
   |              ^^^^^^^^^^^^^^^^^^^
   |
   = note: source type: `hash::Hash` (64 bits)
   = note: target type: `modules::any::TypeId` (128 bits)

For more information about this error, try `rustc --explain E0512`.
error: could not compile `rune` (lib) due to 2 previous errors
warning: build failed, waiting for other jobs to finish...

Build env:

╰─$ rustc --version                                                                                                                                                                                                                   
rustc 1.73.0 (cc66ad468 2023-10-03)
╰─$ uname -a
Linux eniac-4 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
╰─$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04 LTS
Release:        22.04
Codename:       jammy

Could you fix the issue or suggest a workaround?

Support retry control

We are looking at using latte in some of our tests; those tests are chaos-oriented and can take a node down, or grow/shrink/upgrade the cluster while a tool like latte (or cassandra-stress) is running.

Because of that, we'll need some way to control request retries:

  • either in the script itself
  • or via some command-line option, similar to cassandra-stress -error retries=20

Latte stopped reporting per-line stats at some point

 418.000       17146      171419       0.279     1.179     1.422     1.690     1.959     2.144     2.802     5.194     6.554
 419.000       16963      169686       0.181     1.163     1.411     1.692     1.993     2.214     4.178     5.726    13.074
 420.000       17032      170217       0.238     1.158     1.410     1.694     1.989     2.206     3.994     5.636     6.849
 421.000       17194      171957       0.232     1.169     1.415     1.689     1.965     2.154     2.968     5.136     7.905
 422.000       17276      172785       0.236     1.177     1.415     1.683     1.953     2.128     2.560     4.624     6.205
 423.008       15905      159015       0.255     1.154     1.405     1.687     1.971     2.179     3.492     5.493    77.136
 424.001       16995      169992       0.220     1.171     1.414     1.691     1.982     2.195     3.557    79.364    85.393
 631.218       16129      161285       0.163     1.196     1.449     1.734     2.033     2.251     3.549     5.923   675.807

SUMMARY STATS ══════════════════════════════════════════════════════════════════════════════════════════════════════════════
    Elapsed time       [s]    631.223
        CPU time       [s]   1204.266
 CPU utilisation       [%]       95.4
