oasisprotocol / oasis-core
Performant and Confidentiality-Preserving Smart Contracts + Blockchains
Home Page: https://oasisprotocol.org
License: Apache License 2.0
We need some way to pay consensus nodes
Currently, compute nodes pass in a signed enclave program as an argument.
Instead, contract developers should be able to register their contracts with our coordinator.
When the developer builds and deploys their contract, the signed enclave program should be hosted in storage.
Discovery can happen out-of-band or through another contract that acts as a registry [or something else]. Compute nodes could then subscribe to the registry and pull down any new contracts which they wish to run.
Compute nodes should automatically download any contract enclave programs that they are assigned to.
All workers should register with the registry and be assigned work from the scheduler. We should no longer have a dedicated notion of a compute node running a specific contract.
Using the ECALL/OCALL backend
Ongoing work on the voting contract has discovered a case where running cargo make in ekiden would attempt to use git rev-parse --show-toplevel.
Sure, we use it to determine the PROJECT_ROOT:
https://github.com/sunblaze-ucb/ekiden/blob/3788275c0cd3a72028b6daf6233dd5213b06d45e/Makefile.toml#L2
The reason for this is mostly a deficiency of the current build system.
"After all the above, one of the biggest overheads that will remain is going to be the entry and exits to and from the enclave for each ecall and ocall. You can check Scone, Eleos and HotCalls papers for the mechanism details. For a faster turn around, you should directly check the code samples in project repositories for Eleos and HotCalls. Both of these projects have faster ocall libraries. "
High-level idea: have an enclave thread that doesn't exit and an untrusted thread that doesn't enter. Then share memory between them.
I've checked out HotCalls; it seems to use a very simple shared-memory approach to ECALLs. There is a simple shared data structure and one thread which is constantly in the enclave waiting for requests and processing them, so there is only one ECALL for starting the request processor.
https://github.com/oweisse/hot-calls/blob/master/include/hot_calls.h
We could implement something similar (probably in ekiden-enclave) and instead of using EDLs to define the interfaces we would do this ourselves.
How do we coordinate between trusted/untrusted portions when reading/writing shared memory? Are we going to have a lock? What if the untrusted portion is malicious?
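Rough sketch of what the shared slot could look like in plain Rust (names and layout are made up for illustration, not the eventual ekiden-enclave API). Coordination here is via atomics with acquire/release ordering rather than a lock; a malicious untrusted side can still stall or garble requests, so the trusted side must treat the slot contents as untrusted input:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// One request/response slot shared between the untrusted caller and the
// in-enclave worker. `ready` is set by the caller, `done` by the worker.
struct HotCallSlot {
    ready: AtomicBool,
    done: AtomicBool,
    data: AtomicU64,
    result: AtomicU64,
}

fn main() {
    let slot = Arc::new(HotCallSlot {
        ready: AtomicBool::new(false),
        done: AtomicBool::new(false),
        data: AtomicU64::new(0),
        result: AtomicU64::new(0),
    });

    // Stand-in for the single ECALL that parks a worker thread in the enclave.
    let worker = {
        let slot = Arc::clone(&slot);
        thread::spawn(move || {
            while !slot.ready.load(Ordering::Acquire) {
                std::hint::spin_loop(); // wait for a request without exiting
            }
            let request = slot.data.load(Ordering::Acquire);
            slot.result.store(request + 1, Ordering::Release); // "process" it
            slot.done.store(true, Ordering::Release);
        })
    };

    // Untrusted side: publish a request with no ECALL, then spin for the result.
    slot.data.store(41, Ordering::Release);
    slot.ready.store(true, Ordering::Release);
    while !slot.done.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    assert_eq!(slot.result.load(Ordering::Acquire), 42);
    worker.join().unwrap();
}
```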
We should use a proper Rust logging library everywhere instead of manually printing stuff. Using log is an easy choice since it's going to be part of rust core. Of the options listed in the README, fern seemed to be the most actively maintained.
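For example, a minimal log + fern setup could look like this (the format and level here are just placeholders):

```rust
use log::info;

// Route all log macros through fern with a simple line format.
fn setup_logger() -> Result<(), fern::InitError> {
    fern::Dispatch::new()
        .format(|out, message, record| {
            out.finish(format_args!(
                "[{}][{}] {}",
                record.level(),
                record.target(),
                message
            ))
        })
        .level(log::LevelFilter::Debug)
        .chain(std::io::stdout())
        .apply()?;
    Ok(())
}

fn main() {
    setup_logger().expect("failed to initialize logger");
    info!("compute node starting");
}
```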
Does prometheus do logging? Might be good to use this since we already use it for tracing
Tracking this PR tokio-rs/tokio-proto#135
Blocks our ability to spawn/kill TcpServers dynamically for our integration tests.
Currently serving from TcpServer will block the thread:
https://docs.rs/tokio-proto/0.1.1/tokio_proto/struct.TcpServer.html
FYI, testing in a #![no_std] environment is currently impossible because the rustc --test harness pulls in std.
I decided to ask in the rust repo: rust-lang/rust#48045
In consensus/src/ekidenmint.rs, Tendermint expects that we respond to commit events with the Merkle tree root hash of the application state.
We should track this in storage/src/state.rs and respond with it.
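A sketch of the shape of that commit handler (all names are placeholders, and the "root" here is a stand-in hash over the state, not our real Merkle tree):

```rust
use std::collections::BTreeMap;
use sha2::{Digest, Sha256};

struct State {
    entries: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl State {
    // Stand-in root: hash the sorted key/value pairs. A real implementation
    // would maintain an incremental Merkle tree in storage/src/state.rs.
    fn root_hash(&self) -> Vec<u8> {
        let mut h = Sha256::new();
        for (k, v) in &self.entries {
            h.update(k);
            h.update(v);
        }
        h.finalize().to_vec()
    }
}

struct Ekidenmint {
    state: State,
}

impl Ekidenmint {
    // Called on the ABCI Commit event; Tendermint records this hash in the
    // next block header, so it must be deterministic across all replicas.
    fn commit(&mut self) -> Vec<u8> {
        self.state.root_hash()
    }
}
```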
Whether it's called /tls or /socket or something else, this should be designed with a trusted and untrusted portion, similar to /db. This should then expose an API to contracts for interacting with sockets/TLS.
Please check the sample code from Baidu Rust SGX SDK:
https://github.com/baidu/rust-sgx-sdk/tree/master/samplecode/tls/tlsserver
https://github.com/baidu/rust-sgx-sdk/tree/master/samplecode/tls/tlsclient
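The contract-facing side might be as small as this trait (purely a sketch; names are placeholders). The untrusted portion would own the raw TCP socket and shuttle ciphertext, while the trusted portion runs the TLS state machine (e.g. rustls, as in the Baidu samples) so plaintext never leaves the enclave:

```rust
use std::io;

// Hypothetical contract-facing socket API; the TLS handshake and record
// layer live in the trusted portion, raw byte transport in the untrusted one.
pub trait ContractSocket {
    fn connect(&mut self, addr: &str) -> io::Result<()>;
    fn write(&mut self, buf: &[u8]) -> io::Result<usize>;
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
}
```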
Seal and check integrity of a full snapshot of contract state
Currently keys are fixed. We ought to have the key manager rotate keys. We can use similar techniques from secure messaging. Here's a decent summary:
https://signal.org/blog/advanced-ratcheting/
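The simplest building block from that post is the symmetric hash ratchet. A toy sketch (this is not the full Signal protocol, and the domain-separation bytes are arbitrary):

```rust
use sha2::{Digest, Sha256};

// Each advance derives a fresh message key and replaces the chain key, so
// old keys can be deleted and compromising the current chain key does not
// reveal previously derived message keys.
struct Ratchet {
    chain_key: [u8; 32],
}

impl Ratchet {
    fn advance(&mut self) -> [u8; 32] {
        let message_key: [u8; 32] =
            Sha256::digest([&self.chain_key[..], &[0x01]].concat()).into();
        self.chain_key =
            Sha256::digest([&self.chain_key[..], &[0x02]].concat()).into();
        message_key
    }
}
```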
The rust-sgx-sdk image is now tagged on Docker. We should make sure our versioning is consistent.
The convention is to turn each binary, e.g. {consensus, client, compute}, into a library, with a thin wrapper around that library to parse CLI args/flags.
https://doc.rust-lang.org/stable/book/second-edition/ch12-03-improving-error-handling-and-modularity.html
See the consensus node for an example:
https://github.com/sunblaze-ucb/ekiden/tree/master/consensus
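Following the pattern from the linked book chapter, the split looks roughly like this (Config and run are placeholder names, shown in one file for brevity):

```rust
// Would live in src/lib.rs: all real logic, testable without the binary.
pub struct Config {
    pub port: u16,
}

impl Config {
    pub fn from_args(mut args: impl Iterator<Item = String>) -> Config {
        let _bin = args.next(); // skip argv[0]
        let port = args.next().and_then(|p| p.parse().ok()).unwrap_or(9001);
        Config { port }
    }
}

pub fn run(config: Config) -> Result<(), String> {
    println!("consensus node listening on port {}", config.port);
    Ok(())
}

// Would live in src/main.rs: only parse args and report errors.
fn main() {
    let config = Config::from_args(std::env::args());
    if let Err(e) = run(config) {
        eprintln!("error: {}", e);
        std::process::exit(1);
    }
}
```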
https://software.intel.com/en-us/get-started-with-vtune
https://software.intel.com/en-us/node/708952
https://software.intel.com/en-us/blogs/2016/01/07/intel-sgx-debug-production-prelease-whats-the-difference
Make sure the VTune and Intel SGX SDK versions are compatible; apparently this can lead to issues.
Also keep in mind that variable-length printf may not work.
If we know Tendermint is already installed, it would be nice to have a way to skip that step in the Makefile. Maybe something like touch install-tendermint.stamp could work.
Rider: sha256sum isn't present on Mac. An uncited source suggests shasum -a 256 as an alternative.
Instead of making a bunch of boilerplate benchmarking scripts, we should have a relatively generic harness.
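Something along these lines (a deliberately minimal sketch; the scenario name and iteration count are made up):

```rust
use std::time::Instant;

// Generic harness: a scenario is just a closure, so adding a benchmark
// means adding one line instead of a new script.
fn bench<F: FnMut()>(name: &str, iterations: u32, mut f: F) {
    let start = Instant::now();
    for _ in 0..iterations {
        f();
    }
    let per_iter = start.elapsed() / iterations;
    println!("{}: {:?}/iteration over {} iterations", name, per_iter, iterations);
}

fn main() {
    bench("token_transfer", 1_000, || {
        // issue a contract call via the client here
    });
}
```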
Currently using Ethereum wallets for compatibility. In the future when we have our own smart contract language/runtime we'll need to think through this.
From undergrads' experience, we seem to be missing some details in the README. Can someone debug these issues and update accordingly? We want any new developer to be able to get started easily.
- -x in consensus node for development (no Tendermint)
- keys/ when developing, as an option
- ./ path prefixes (e.g. "./api"), which cause cargo make to fail when copying docs; it should be "api/" instead
- [workspace] declaration
For the TLSNotary application, we'll need to define an interface for supporting network I/O to and from the contract. One proposal is to specify certain methods to act as callback functions for network data, treating the network data as an input to the method call.
Because Ekiden contracts must be deterministic (RSM model), we want to treat these as inputs.
One potential reference is how we'll handle randomness:
#182
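The interface could be as simple as this (purely hypothetical names): network data arrives as an ordinary method input, so replaying the request log deterministically reproduces the same state:

```rust
// Hypothetical callback interface for network I/O. Because `data` is recorded
// as a request input, replicas replaying the log stay deterministic (RSM model).
pub trait NetworkHandler {
    fn on_network_data(&mut self, state: &mut Vec<u8>, data: &[u8]);
}
```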
Randomly sample members to form committees.
We should consider when we need to dynamically schedule. (e.g. based on load?)
Add cargo build as a task to CI to ensure that Ekiden builds under no-SGX.
sodalite::box_keypair_seed seems to be extremely slow in some (unknown) cases.
If a particular contract function/method requires randomness from SGX, libcontract should pass that in as an input parameter, similar to how we pass in state right now. Contracts should not call sgx_rand directly. We want to do something using a common I/O interface.
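Roughly (placeholder names; the point is that the randomness is drawn once by libcontract, outside the contract, and recorded with the other inputs):

```rust
// Sketch: libcontract fills in `randomness` before dispatching the call, so
// the contract itself stays deterministic and never touches sgx_rand.
pub struct ContractInput<'a> {
    pub state: &'a [u8],
    pub args: &'a [u8],
    pub randomness: [u8; 32], // supplied by libcontract, like state
}

pub fn flip_coin(input: &ContractInput) -> bool {
    input.randomness[0] & 1 == 1
}
```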
Ekiden, as a library, doesn't fit neatly into Cargo, because it needs to provide some non-Rust resources to dependent contracts, and Cargo does not naturally manage non-Rust resources. These non-Rust dependencies include, for example, Makefile.toml files, EDL files from the Rust SGX SDK, and miscellaneous enclave signing configuration files.
We have currently been delivering these non-Rust dependencies through a Git submodule. However, this solution of using both Cargo and Git submodules has some undesirable properties, compared to a hypothetical ideal build system (HIBS):
This issue identifies several ways our build system falls short of HIBS and asks that we bring our build system closer to HIBS. We have collected some proposals for achieving this, which for example involve getting rid of the submodules and further extending our Makefile.toml's and adding additional scripts to be run by Cargo.
(previous title and description below)
Currently there's a possibility that the ekiden git submodule in the contract repo is out of sync with Cargo.toml.
The only reason we need this is to extend the Makefile; the proposal is to replace the git submodule with a script that fetches the correct Makefile based on the ekiden version in Cargo.toml.
I acknowledge that our build system can pose issues to beginners and that it would be nice to minimize possible complications in onboarding and that doing so may involve changes to our build system. As we move forward with this revamp, I want us to keep in mind some things that our build system does well and that an ideal build system would do well too.
If everything could be provided in crates, I think Cargo can handle these use cases because it supports dependency overrides.
It is easy to make changes to Ekiden while working on a contract and test changes together. This is important when, in the course of developing a contract, you find a bug in Ekiden and want to fix it and verify that the fix resolves the problem.
The canonical way to handle these is the [patch] section of Cargo.toml, e.g. a [patch] entry pointing the ekiden dependencies at a local checkout or fork. Also see the documentation on overriding dependencies, which specifically mentions this scenario (you find a bug in an upstream crate, or you even need to add a feature to the upstream crate).
It is possible to have CI build a contract with a modified version of Ekiden without having that modified version be approved and merged into master.
As long as the CI can fetch the modified version of Ekiden, this can also be achieved with the patch method above. You just need to specify the repository containing your patched version of Ekiden and it will be used.
The build process is satisfactorily decoupled from the conceptually separate systems of version control and code repository hosting. It is reasonably easy to make a fork of Ekiden, even if that fork were never to be distributed publicly or never to be managed with Git or to be distributed as a source archive.
The same applies to the patch method as you can specify local paths in case you are not using Git.
Another question is how to distribute binaries like the compute and consensus nodes. Should we just cargo install them?
Potential options:
Use on-premise CI (or hybrid) that allows us to use our own machines with trusted hardware
See if we can support Hyperledger chaincode, similar to how we support the EVM
Sodalite is slow
One proposal:
https://github.com/baidu/rust-sgx-sdk/tree/master/third_party/ring
Random tip:
"The version of SDK did not support CPUID, so our applications using OpenSSL defaulted to using software implementation of crypto operations. I think using the IPP libraries will partially solve this issue, but if you are using any other custom crypto libraries, please check if there is a slow down there because of CPUID failure. "
cc @noahj
https://sunblazecollaborators.slack.com/archives/G8234P6RM/p1520219658000132
error[E0433]: failed to resolve. Could not find `backtrace` in `std`
--> src/lib.rs:173:10
|
173 | std::backtrace::enable_backtrace("/contracts/evm/target/enclave/evm.signed.so", std::backtrace::PrintFormat::Short).expect("Failed to enable backtrace");
| ^^^^^^^^^ Could not find `backtrace` in `std`
Find out how, and update https://github.com/oasislabs/ekiden/blob/master/docs/enclave-backtrace.md
Let's call it the StateRootHash contract. This contract should implement the consensus layer interface and handle the commit/reveals from the leader of the execution group.
Thus, it'll record its commitments on Ethereum.
See the design doc for the state contract.
When compute nodes push the next block to the blockchain, they must verify that:
Also, we need proof-of-publication.
The idea here is that the client needs to keep track of the current hashchain and check that the compute node is working on a state at least as new as that. From what I can tell from the PR, we're not yet verifying the state to know just how new it is; we're just assuming the storage node isn't lying.
Find out where the enclave is loaded at runtime.
Get the callgrind annotator to apply the mapping.
Currently callgrind cannot find the enclave in simulation mode.
This is used for no-SGX profiling.
An enclave can get its own location with &__ImageBase: https://github.com/intel/linux-sgx/blob/master/sdk/trts/trts.cpp#L72
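From Rust inside the enclave, that would look something like this (a sketch; __ImageBase is provided by the SGX enclave image, so this won't link in an ordinary host binary):

```rust
// `__ImageBase` marks the enclave's load address, per the linked trts.cpp.
extern "C" {
    #[allow(non_upper_case_globals)]
    static __ImageBase: u8;
}

pub fn enclave_base() -> usize {
    // Taking the symbol's address yields where the image was loaded.
    unsafe { &__ImageBase as *const u8 as usize }
}
```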
When contracts are written outside this repo, we should make it easy to write an integration test
Allow storage nodes to handle multiple objects from various contracts.
Our current gRPC library reimplements gRPC, so it misses lots of features.
Our current protobuf library requires a lot of boilerplate.
Proposal:
The current gRPC library is currently a blocker in making full use of futures during benchmarks. To summarize, the missing feature is stepancheg/grpc-rust#108 (without this, each gRPC client will process requests in its own event loop in its own thread).
Should include some best practices and code review guidelines, including but not limited to
Find ways to multi-thread in the compute node.
Currently, request decryption, request processing, and response/state encryption happen in a single thread. We should be able to multi-thread aspects of the core trusted Ekiden library to speed some of these parts up.
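One possible shape is a channel-connected pipeline so the stages overlap across requests (a toy sketch with stand-in stage bodies):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (to_process, process_rx) = mpsc::channel::<Vec<u8>>();
    let (to_encrypt, encrypt_rx) = mpsc::channel::<Vec<u8>>();

    // Stage 2: request processing on its own thread.
    let processor = thread::spawn(move || {
        for request in process_rx {
            let response = request; // stand-in for contract execution
            to_encrypt.send(response).unwrap();
        }
    });

    // Stage 3: response/state encryption on its own thread.
    let encryptor = thread::spawn(move || {
        for response in encrypt_rx {
            let _ciphertext = response; // stand-in for encryption
        }
    });

    // Stage 1: request decryption feeds the pipeline.
    for i in 0..3u8 {
        let plaintext = vec![i]; // stand-in for decrypting request i
        to_process.send(plaintext).unwrap();
    }
    drop(to_process); // close the pipeline so the workers drain and exit
    processor.join().unwrap();
    encryptor.join().unwrap();
}
```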
If our build no longer uses it, let's skip that step.
Currently takes ~1hr to run an experiment. Warren says he sees these nodes being provisioned in series, each downloading the image remotely.
We should use something like config to handle configuration as the number of different options increases.
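For instance, assuming the config crate's builder API, layered file/env configuration could look like this (the key names and EKIDEN prefix are placeholders):

```rust
use config::{Config, Environment, File};

fn main() -> Result<(), config::ConfigError> {
    let settings = Config::builder()
        // Defaults from a file, overridden by EKIDEN_* environment variables.
        .add_source(File::with_name("config/default"))
        .add_source(Environment::with_prefix("EKIDEN"))
        .build()?;
    let port: u16 = settings.get("grpc.port")?;
    println!("gRPC port: {}", port);
    Ok(())
}
```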
A couple of things we need for enclave identity:
Just found out that SHA-512/256 uses different initialization values; it's not just a truncated SHA-512. We don't have a SHA-512/256 implementation.
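The difference is easy to see with the sha2 crate (recent versions expose SHA-512/256 directly):

```rust
use sha2::{Digest, Sha512, Sha512_256};

fn main() {
    // SHA-512/256 per FIPS 180-4 has its own IV, so it differs from simply
    // taking the first 32 bytes of a SHA-512 digest.
    let truncated = &Sha512::digest(b"abc")[..32];
    let proper = Sha512_256::digest(b"abc");
    assert_ne!(truncated, &proper[..]);
}
```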
Actually, it seems that our Protocol Buffers implementation does support zero-copy using the bytes feature. https://github.com/stepancheg/rust-protobuf#copy-on-write
There is of course also the problem of security: when crossing enclave boundaries, things should be copied at some point to avoid untrusted memory mutations while the enclave is executing, which could violate security.
We need a way to measure gas consumption for a request and reward compute nodes for it.
Compute nodes may specify policies
Only the key manager should stay in this repo. Everything else should be spun out into separate repos.
To avoid developing something like RLP all over again, perhaps we could just reuse it and replace our current serializer implementation with it? See the specification; there is also a Rust implementation.