apache / incubator-resilientdb

Global-Scale Sustainable Blockchain Fabric

Home Page: https://resilientdb.com/

License: Apache License 2.0

C++ 84.20% Shell 2.84% Python 1.20% Starlark 11.00% QMake 0.10% Solidity 0.08% JavaScript 0.46% Dockerfile 0.11% CSS 0.01%
blockchain crypto distributed-database distributed-ledger key-value-database smart-contracts utxo solidity blockchain-platform

incubator-resilientdb's Introduction


ResilientDB: Global-Scale Sustainable Blockchain Fabric

ResilientDB is a High Throughput Yielding Permissioned Blockchain Fabric founded by ExpoLab at UC Davis in 2018. ResilientDB advocates a system-centric design by adopting a multi-threaded architecture that encompasses deep pipelines. Further, ResilientDB separates the ordering of client transactions from their execution, which allows it to process messages out-of-order.

Downloads:

Ready-to-run software packages can be downloaded from: https://downloads.apache.org/incubator/resilientdb/

Quick Facts on ResilientDB

  1. ResilientDB orders client transactions through a highly optimized implementation of the PBFT [Castro and Liskov, 1998] protocol, which helps to achieve consensus among its replicas. ResilientDB also supports deploying other state-of-the-art consensus protocols [releases are planned] such as GeoBFT [blog, released], PoE, RCC, RingBFT, PoC, SpotLess, HotStuff, and DAG.
  2. ResilientDB requires deploying at least 3f+1 replicas, where f (f > 0) is the maximum number of arbitrary (or malicious) replicas.
  3. ResilientDB supports primary-backup architecture, which designates one of the replicas as the primary (replica with identifier 0). The primary replica initiates consensus on a client transaction, while backups agree to follow a non-malicious primary.
  4. ResilientDB exposes a wide range of interfaces such as a Key-Value store, Smart Contracts, UTXO, and Python SDK. Following are some of the decentralized applications (DApps) built on top of ResilientDB: NFT Marketplace and Debitable.
  5. To persist blockchain, chain state, and metadata, ResilientDB provides durability through LevelDB.
  6. ResilientDB provides access to a seamless GUI display for deployment and maintenance, and supports Grafana for plotting monitoring data.
  7. [Historical Facts] The ResilientDB project was founded by Mohammad Sadoghi along with his students (Suyash Gupta as the lead Architect, Sajjad Rahnama as the lead System Designer, and Jelle Hellings) at UC Davis in 2018 and was open-sourced in late 2019. On September 30, 2021, we released ResilientDB v-3.0. In 2022, ResilientDB was completely re-written and re-architected (Junchao Chen as the lead Architect, Dakai Kang as the lead Recovery Architect, along with the entire NexRes Team), paving the way for a new sustainable foundation, referred to as NexRes (Next Generation ResilientDB). Thus, on September 30, 2022, NexRes-v1.0.0 was born, marking a new beginning for ResilientDB. On October 21, 2023, ResilientDB was officially accepted into Apache Incubation.
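The 3f+1 bound from fact 2 can be sanity-checked with a line of shell arithmetic. This snippet is only an illustration, not part of the ResilientDB tooling:

```shell
# For a deployment of n replicas, the maximum number of arbitrary
# (malicious) replicas tolerated under the 3f+1 bound is f = floor((n - 1) / 3).
n=4
f=$(( (n - 1) / 3 ))
echo "n=$n tolerates f=$f faulty replica(s)"   # n=4 tolerates f=1 faulty replica(s)
```

So the minimal local deployment of 4 replicas shown later in this README tolerates exactly one faulty replica.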

Online Documentation:

The latest ResilientDB documentation, including a programming guide, is available on our blog repository. This README file provides basic setup instructions.

Table of Contents

  1. Software Stack Architecture
    • SDK, Interface/API, Platform, Execution, and Chain Layers
    • Detailed API Documentation: Core and SDK
  2. SDK Layer: Python SDK and Wallet - ResVault
  3. Interface Layer: Key-Value, Solidity Smart Contract, Unspent Transaction Output (UTXO) Model, ResilientDB Database Connectivity (RDBC) API
  4. Platform Layer: Consensus Manager Architecture (ordering, recovery, network, chain management)
  5. Execution Layer: Transaction Manager Design (Runtime)
  6. Chain Layer: Chain State & Storage Manager Design (durability)
  7. Installing & Deploying ResilientDB

OS Requirements

Ubuntu 20+


Build and Deploy ResilientDB

Next, we show how to quickly build ResilientDB and deploy 4 replicas and 1 client proxy on your local machine. The proxy acts as an interface for all the clients. It batches client requests and forwards these batches to the replica designated as the leader. The 4 replicas participate in the PBFT consensus to order and execute these batches. Post execution, they return the response to the leader.

Install dependencies:

./INSTALL.sh

Run ResilientDB (Providing a Key-Value Service):

./service/tools/kv/server_tools/start_kv_service.sh
  • This script starts 4 replicas and 1 client. Each replica instantiates a key-value store.

Build Interactive Tools:

bazel build service/tools/kv/api_tools/kv_service_tools

Functions

ResilientDB supports two types of functions: version-based and non-version-based. Version-based functions use versions to guard each update: a key's current version must be obtained before the key can be updated.

Note: Version-based functions are not compatible with non-version-based functions. Do not use both in your applications.

We list the functions below and show how to use kv_service_tools to test each one.

Version-Based Functions

Get

Obtain the value of key with a specific version v.

  kv_service_tools --config config_file --cmd get_with_version --key key --version v
parameters descriptions
config the path of the client config which points to the db entrance
cmd get_with_version
key the key you want to obtain
version the version you want to obtain (if v is 0, the latest version is returned)

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd get_with_version --key key1 --version 0

Results:

get key = key1, value = value: "v2" version: 2

Set

Set value to the key key based on version v.

  kv_service_tools --config config_file --cmd set_with_version --key key --version v --value value
parameters descriptions
config the path of the client config which points to the db entrance
cmd set_with_version
key the key you want to set
version the version you obtained (if the version has changed during the update, the transaction is ignored)
value the new value

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd set_with_version --key key1 --version 0 --value v1

Results:

set key = key1, value = v3, version = 2 done, ret = 0

current value = value: "v3" version: 3
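The Get and Set calls above combine into a typical read-modify-write cycle. The sketch below only prints the commands (drop the echo in `run` to execute them for real); the paths are taken from the examples above, and the observed version 2 is an assumption borrowed from the example output:

```shell
# Sketch of a read-modify-write cycle with the version-based functions.
# 'run' only echoes each command here; remove the echo to execute for real.
TOOL=bazel-bin/service/tools/kv/api_tools/kv_service_tools
CONFIG=service/tools/config/interface/service.config
run() { echo "$@"; }

# 1. Read key1 at version 0, i.e., the latest version.
run "$TOOL" --config "$CONFIG" --cmd get_with_version --key key1 --version 0
# 2. Write back using the version observed in step 1 (assumed to be 2 here);
#    if another client changed key1 in the meantime, the transaction is ignored.
run "$TOOL" --config "$CONFIG" --cmd set_with_version --key key1 --version 2 --value v3
```

This obtain-then-update pattern is what makes version-based updates safe under concurrent clients: a stale version causes the write to be dropped rather than silently overwriting newer data.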

Get Key History

Obtain the update history of the key key within the version range [v1, v2].

  kv_service_tools --config config_file --cmd get_history --key key --min_version v1 --max_version v2
parameters descriptions
config the path of the client config which points to the db entrance
cmd get_history
key the key you want to obtain
min_version the minimum version you want to obtain
max_version the maximum version you want to obtain

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd get_history --key key1 --min_version 1 --max_version 2

Results:

get history key = key1, min version = 1, max version = 2
value =
item {
  key: "key1"
  value_info {
   value: "v1"
   version: 2
 }
}
item {
  key: "key1"
  value_info {
   value: "v0"
   version: 1
 }
}

Get Top

Obtain the top_number most recent updates of the key key.

  kv_service_tools --config config_path --cmd get_top --key key --top top_number
parameters descriptions
config the path of the client config which points to the db entrance
cmd get_top
key the key you want to obtain
top the number of the recent updates

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd get_top --key key1 --top 1

Results:

key = key1, top 1
value =
item {
 key: "key1"
 value_info {
   value: "v2"
   version: 3
 }
}

Get Key Range

Obtain the values of the keys in the range [key1, key2]. Do not use this function in production code.

  kv_service_tools --config config_file --cmd get_key_range_with_version --min_key key1 --max_key key2
parameters descriptions
config the path of the client config which points to the db entrance
cmd get_key_range_with_version
min_key the minimum key
max_key the maximum key

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd get_key_range_with_version --min_key key1 --max_key key3

Results:

min key = key1 max key = key2
getrange value =
item {
  key: "key1"
  value_info {
   value: "v0"
   version: 1
  }
}
item {
  key: "key2"
  value_info {
   value: "v1"
   version: 1
  }
}

Non-Version-Based Function

Set

Set value to the key key.

  kv_service_tools --config config_file --cmd set --key key --value value
parameters descriptions
config the path of the client config which points to the db entrance
cmd set
key the key you want to set
value the new value

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd set --key key1 --value value1

Results:

set key = key1, value = v1, done, ret = 0

Get

Obtain the value of key.

  kv_service_tools --config config_file --cmd get --key key
parameters descriptions
config the path of the client config which points to the db entrance
cmd get
key the key you want to obtain

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd get --key key1

Results:

get key = key1, value = "v2"

Get Key Range

Obtain the values of the keys in the range [key1, key2]. Do not use this function in production code.

  kv_service_tools --config config_path --cmd get_key_range --min_key key1 --max_key key2
parameters descriptions
config the path of the client config which points to the db entrance
cmd get_key_range
min_key the minimum key
max_key the maximum key

Example:

  bazel-bin/service/tools/kv/api_tools/kv_service_tools --config service/tools/config/interface/service.config --cmd get_key_range --min_key key1 --max_key key3

Results:

getrange min key = key1, max key = key3
value = [v3,v2,v1]

Deployment Script

We also provide access to a deployment script that allows deployment on distinct machines.

Deploy via Docker

  1. Install Docker
    Before getting started, make sure you have Docker installed on your system. If you don't have Docker already, you can download and install it from the official Docker website.

  2. Pull the Latest ResilientDB Image
    Choose the appropriate ResilientDB image for your machine's architecture:

    • For amd64 (x86-64) architecture, run:

      docker pull expolab/resdb:amd64
    • For Apple Silicon (M1/M2) architecture, run:

      docker pull expolab/resdb:arm64
  3. Run a Container with the Pulled Image
    Launch a Docker container using the ResilientDB image you just pulled:

    • For amd64 (x86-64) architecture, run:

      docker run -d --name myserver expolab/resdb:amd64
    • For Apple Silicon (M1/M2) architecture, run:

      docker run -d --name myserver expolab/resdb:arm64
  4. Test with Set and Get Commands
    Exec into the running server:

    docker exec -it myserver bash

    Verify the functionality of the service by performing the set and get operations described above.
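A minimal smoke test inside the container might look like the following sketch. The `run` wrapper only prints the commands (drop the echo to execute them); the tool name and the fact that no extra flags are shown are assumptions, since the in-image paths are image-specific — in particular, a --config flag pointing at the in-image config will likely be needed, as in the examples above:

```shell
# Sketch: set a key and read it back inside the running container.
# 'run' only echoes each command; remove the echo to execute for real.
# Tool and config locations inside the image are assumptions; adjust to the image layout.
run() { echo "$@"; }

run docker exec myserver kv_service_tools --cmd set --key smoke --value ok
run docker exec myserver kv_service_tools --cmd get --key smoke
```

If the get returns the value just set, the replicas reached consensus on the write and the service is healthy.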

incubator-resilientdb's People

Contributors

apricity001, ataridreams, calvinkirs, cjcchen, dakaikang, glenn-chen, gopuman, gupta-suyash, ic4y, juduarte00, kamaci, msadoghi, nobuginmycode, omahs, resilientdb, rohansogani, saipranav-kotamreddy, sajjadrahnama, xyhlinx


incubator-resilientdb's Issues

Client Memory

Clients don't release received responses from the nodes and memory is getting leaked.

memcpy error when trying to run script without docker

Tried copy-initialization instead of memcpy, but it does not work because one variable is a char and the other is a RemReqType. Should this memcpy error just be ignored?
transport/message.cpp: In member function ‘virtual void YCSBClientQueryMessage::copy_from_buf(char*)’:
./system/helper.h:206:29: error: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of non-trivially copyable type ‘class ycsb_request’; use copy-assignment or copy-initialization instead [-Werror=class-memaccess]
  memcpy(&v, &d[p], sizeof(v)); \

ResilientDB Fails to run when clients > 1

@gupta-suyash @sajjadrahnama
./resilientDB-docker --clients=2 --replicas=4
The ./resilientDB-docker script fails because the ifconfig file is not created properly: multiple clients are added on the same line, separated by \n. There is a small bug in the docker-ifconfig.sh file.

I've fixed this small bug in my local copy.

Bug: View Change Stopping Liveness

When forcefully inducing view change on the Resilient DB system, there are 2 issues which can happen currently:

  1. After the view change occurs, a replica does not continue the transaction it was performing when the view change occurred
  2. After the view change occurs, a replica is unable to receive/recognize transaction messages being sent towards it

From testing, issue 1 is most likely related to when the view-change messages are received: they interrupt a process that was midway, leaving the program unsure where to continue. The problem disappears if start_kv_service.sh is re-run, so it is likely a runtime timing issue.

For issue 2, this is most likely a problem with ports or memory: once a view change runs into issue 2, every subsequent view change in that computer session also results in a type 2 error. Meanwhile, if the view change works fine the first time, all subsequent start_kv_service.sh runs in that computer session (with the code unchanged) avoid the type 2 issue.

There is also an issue where, sometimes after a view change completes, 30+ prepare and commit messages are collected for the next transaction and transactions that were never sent are logged, resulting in higher executed counts and prepare-message counts than there should be.

New Feature: Support Light-weight Read-only Transaction Design

In the current version of ResilientDB, "get" transactions (i.e., read-only transactions) retrieve the values of some keys. To guarantee data consistency, these transactions are forced through the consensus layer to obtain the correct results. Due to the nature of consensus design, each transaction will be written to the chain and written to the disk to support durability and recovery.

To improve read-only transactions, we can explore alternative designs without the need to participate in consensus yet retrieve consistent results.

Here are a set of important considerations:

  1. How to identify read-only transactions?
  2. Should a fee be paid to process read-only transactions? If so, read-only transactions require payment, which must pass through consensus.
  3. How to ensure data consistency without engaging in consensus? Without consensus, each replica may return different data, so how do we verify that the data is consistent? Could we introduce read-only caching services (e.g., Coordination-Free Byzantine Replication with Minimal Communication Costs, ICDT'20)? Could we relax the need to provide the latest data and instead support snapshot reads, where each query from the same client returns the same version of the data, i.e., repeatable reads?

New feature: Support Dynamic Network Re-configuration

The current release version or the version from the master branch does not support reconfiguration. The number of replicas to be deployed is determined by the configuration when deploying to the cluster.

We need to support adding and removing replicas while ResilientDB is running in the cluster after deployment.

./config/resdb_config_utils.h:43:5: error: 'std::optional' has not been declared

When running example/start_kv_server.sh after ./INSTALL.sh, the compiler complains:

In file included from config/resdb_config_utils.cpp:26:
./config/resdb_config_utils.h:43:5: error: 'std::optional' has not been declared
   43 |     std::optional<ReplicaInfo> self_info = std::nullopt,
      |     ^~~
./config/resdb_config_utils.h:43:18: error: expected ',' or '...' before '<' token
   43 |     std::optional<ReplicaInfo> self_info = std::nullopt,
      |                  ^
config/resdb_config_utils.cpp:133:35: error: 'std::optional' has not been declared
  133 |     const std::string& cert_file, std::optional<ReplicaInfo> self_info,
      |                                   ^~~
config/resdb_config_utils.cpp:133:48: error: expected ',' or '...' before '<' token
  133 |     const std::string& cert_file, std::optional<ReplicaInfo> self_info,
      |                                                ^
config/resdb_config_utils.cpp: In function 'std::unique_ptr<resdb::ResDBConfig> resdb::GenerateResDBConfig(const string&, const string&, const string&, int)':
config/resdb_config_utils.cpp:141:8: error: 'self_info' was not declared in this scope; did you mean 'cert_info'?
  141 |   if (!self_info.has_value()) {
      |        ^~~~~~~~~
      |        cert_info
config/resdb_config_utils.cpp:145:5: error: 'self_info' was not declared in this scope; did you mean 'cert_info'?
  145 |   (*self_info).set_id(cert_info.public_key().public_key_info().node_id());
      |     ^~~~~~~~~
      |     cert_info
config/resdb_config_utils.cpp:151:7: error: 'gen_func' was not declared in this scope
  151 |   if (gen_func.has_value()) {
      |       ^~~~~~~~
Target //kv_server:kv_server failed to build

Cannot start service X: Ports are not available: unable to list exposed ports

Hi @sajjadrahnama I was having issues with Docker. I am using Docker on WSL2 on Windows Build 21343.

I get errors when I invoke:
./resilientDB-docker -d

Output:

Number of Replicas:     4
Number of Clients:      1
Stopping previous containers...
Removing c4 ... done
Removing c2 ... done
Removing c1 ... done
Removing s3 ... done
Removing s2 ... done
Removing c3 ... done
Removing s5 ... done
Removing s1 ... done
Removing s4 ... done
Removing s6 ... done
Removing network resilientdb_default
Successfully stopped
Creating docker compose file ...
Docker compose file created --> docker-compose.yml
Starting the containers...
Creating network "resilientdb_default" with the default driver
Creating s1 ...
Creating s4 ... error
Creating s3 ...
Creating c1 ...
Creating s2 ...

ERROR: for s4  Cannot start service s4: Ports are not available: unable to list exposed ports: Get "http://unix/forwCreating s2 ... error

Creating s1 ... error
ards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for c1  Cannot start service c1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused
Creating s3 ... error                                                                                               ERROR: for s1  Cannot start service s1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s3  Cannot start service s3: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s4  Cannot start service s4: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s2  Cannot start service s2: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for c1  Cannot start service c1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s1  Cannot start service s1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s3  Cannot start service s3: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused
ERROR: Encountered errors while bringing up the project.
ifconfig file exists... Deleting File
Deleted
Server sequence --> IP
Put Client IP at the bottom
ifconfig.txt Created!

Checking Dependencies...
/mnt/d/programs/ExpoLab/ResilientDB
Dependencies has been installed

Creating config file...
Config file has been created

Compiling ResilientDB...
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
ResilientDB is compiled successfully

Running ResilientDB replicas...
rep 1
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
rep 2
Error response from daemon: Container b3f1a934d0d9c17373d099a2653513a5689a3cfa7dbe98bb17e33a436c186665 is not running
rep 3
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
Error response from daemon: Container 2cc0570c22d740cfb3a0576ee3c2e9b2b12efe6f36c22abe2ed4f0bb39f5598a is not running
rep 4
Error response from daemon: Container b3f1a934d0d9c17373d099a2653513a5689a3cfa7dbe98bb17e33a436c186665 is not running
Error response from daemon: Container 2cc0570c22d740cfb3a0576ee3c2e9b2b12efe6f36c22abe2ed4f0bb39f5598a is not running
Error response from daemon: Container ec780557852d406bab008d1df771d6b3a0ea2a4a896995bf3e6f8f6bf8f9e8ee is not running
Replicas started successfully

Running ResilientDB clients...
cl 1
Error response from daemon: Container ec780557852d406bab008d1df771d6b3a0ea2a4a896995bf3e6f8f6bf8f9e8ee is not running
Error response from daemon: Container 857462a1b32fb27589536499c7ed6838d6e11b231ba82d04ff8e5217be8b8861 is not running
Clients started successfully
Error response from daemon: Container 857462a1b32fb27589536499c7ed6838d6e11b231ba82d04ff8e5217be8b8861 is not running
scripts/result.sh: line 48: 0 + : syntax error: operand expected (error token is "+ ")
(standard_in) 2: syntax error
expr: division by zero
(standard_in) 1: syntax error
Throughputs:
0:
1:
2:
3:
scripts/result_colorized.sh: line 51: 0 + : syntax error: operand expected (error token is "+ ")
Latencies:
(standard_in) 2: syntax error
latency 4:

idle times:
Idleness of node: 0
Idleness of node: 1
Idleness of node: 2
Idleness of node: 3
Memory:
0: 0 MB
1: 0 MB
2: 0 MB
3: 0 MB
4: 0 MB

expr: division by zero
avg thp: 0:
(standard_in) 1: syntax error
avg lt : 1:
Code Ran successfully ---> res.out

I'm not sure whether this is an issue with Docker specifically or with WSL2 itself. I'm new to using Docker, so if you could point me to a good resource, that would be great! Let me know if I'm missing any information I should add to my issue.

Thanks.

Best,
Alejandro

Bug: Core dump when long connect is killed

When a long-lived connection is killed, the Boost library does not return an error but instead returns success with a random number.
Fix: close the connection when an invalid packet is read.

'NN:: Exception' when trying to run ResilientDB-Docker with large number of clients/replicas

Dear,
I tried to run the resilientDB blockchain using Docker. It works fine with a small number of clients/servers. However, it fails with an 'NN::Exception' error when the number of clients/replicas is >= 10.
For your information, the following three commands give me the same error:
./resilientDB-docker --clients=10 --replicas=4
./resilientDB-docker --clients=1 --replicas=10
./resilientDB-docker --clients=10 --replicas=10

However, it works fine if I run it as follows:
./resilientDB-docker --clients=9 --replicas=4
./resilientDB-docker --clients=9 --replicas=9
./resilientDB-docker --clients=1 --replicas=4

Docker setup script very limiting

First of all, it seems the app is only compiled on one of the servers:

docker exec s1 mkdir -p obj
docker exec s1 make clean
docker exec s1 make

Instead of on all of them, which I do not understand.

Second, a lot of the code connects to the containers and executes commands, instead of putting a start script in the Dockerfile that is executed after startup. This makes a multi-machine setup with Swarm very difficult.

In that sense, the repo is also missing the actual Dockerfile that would allow people to set up this kind of thing easily for themselves.

Web Dashboard - UI

Add a web interface for interacting with resilientdb on google cloud machines.

What's the DB type (in-memory or SQLite) used in the experiments of VLDB publication

Dear authors of ResilientDB,

Your VLDB publication ResilientDB: Global Scale Resilient Blockchain Fabric is excellent, and I really appreciate that your team kindly open-sourced this great work.

Sorry about spamming the issues of this repo, but may I know which DB (in_memory_db or Sqlite) was used in the experiments of the paper?

Thanks!

Run in a geo-distributed way

Hi authors,

How can I run your code on more than one cluster? For now, if I use ./resilientDB-docker --clients=1 --replicas=4, I get 1 cluster with 4 replicas. But I want 2 clusters with 4 replicas each. I think some configuration must be needed, because ./resilientDB-docker --clients=1 --replicas=8 won't split the replicas into 2 clusters.

Appreciate your help!
