
freenet-core's Introduction

Freenet

The Internet has grown increasingly centralized over the past two decades, such that a handful of companies now effectively control the Internet infrastructure. The public square is privately owned, threatening freedom of speech and democracy.

Freenet is a software platform that makes it easy to create decentralized alternatives to today's centralized tech companies. These decentralized apps will be easy to use, scalable, and secured through cryptography.

To learn more about Freenet as a developer read The User Manual. For an introduction to Freenet watch Ian's talk and Q&A - YouTube / Vimeo.

Status

Freenet is currently under development. Using our development guide, developers can experiment with building decentralized applications using our SDK and testing them locally.

Applications

Examples of what can be built on Freenet include:

  • Decentralized email (with a gateway to legacy email via the @freenet.org domain)
  • Decentralized microblogging (think Twitter or Facebook)
  • Instant Messaging (Whatsapp, Signal)
  • Online Store (Amazon)
  • Discussion (Reddit, HN)
  • Video discovery (Youtube, TikTok)
  • Search (Google, Bing)

All will be completely decentralized, scalable, and cryptographically secure. We want Freenet to be useful out-of-the-box, so we plan to provide reference implementations for some or all of these.

How does it work?

Freenet is a decentralized key-value database. It uses the same small-world routing algorithm as the original Freenet design, but each key is a cryptographic contract implemented in WebAssembly, and the value associated with each contract is called its state. The role of the cryptographic contract is to specify what state is allowed for the contract, and how the state may be modified.

A very simple contract might require that the state is a list of messages, each signed with a specific cryptographic keypair. The state can be updated to add new messages if appropriately signed. Something like this could serve as the basis for a blog or Twitter feed.
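As a rough illustration (this is not the actual Freenet contract API, and a keyed hash stands in for a real cryptographic signature scheme such as Ed25519), the validation rule of such a contract might look like this:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder "signature": a keyed hash standing in for a real
// cryptographic signature, for illustration only.
fn sign(key: u64, content: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    content.hash(&mut h);
    h.finish()
}

struct Message {
    content: String,
    signature: u64,
}

/// The contract's rule: the state is valid only if every message
/// is signed with the keypair fixed in the contract's parameters.
fn validate_state(key: u64, state: &[Message]) -> bool {
    state.iter().all(|m| m.signature == sign(key, &m.content))
}
```

An update appending a correctly signed message passes validation; anything else is rejected, so only the holder of the keypair can extend the feed.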

Freenet is implemented in Rust and will be available across all major operating systems, desktop and mobile.

What is Locutus?

Locutus was the working title for this successor to the original Freenet. In March 2023 it was renamed to "Freenet", and in September 2023 this repository was renamed from locutus to freenet-core.

What is Hyphanet?

The original Freenet codebase is now called Hyphanet. It is still actively developed by the same maintainers as before, and is available here.

Stay up to date

Follow us on Twitter.

Chat with us

We're in #freenet-locutus on Matrix. If you have questions you can also ask here.

Many developers are active in r/freenet, but remember that Reddit engages in political and ideological censorship so don't make this your only point of contact with us.

Acknowledgements and Funding

Protocol Labs

In addition to creating the excellent libp2p which we use for low-level transport, Protocol Labs has generously supported our work with a grant.

FUTO

FUTO has generously awarded Freenet two Legendary Grants to support Freenet development.

Supporting Freenet

If you are in a position to fund our continued efforts, please contact us on Twitter or by email at ian at freenet dot org.

License

This project is licensed under either of:

freenet-core's People

Contributors

9i5bcrucnx5nmt, al8n, alexisbatyk, arnebab, cyborg42, dantinkakkar, dependabot[bot], digitaldoggo, download13, electronicsarchiver, elimisteve, gogo2464, iduartgomez, kakoc, kernelkind, kristijanzic, netsirius, sanity, sapristi, snyk-bot, stevenj


freenet-core's Issues

Mitigating Sybil attacks

Discussed in #147

Originally posted by sanity May 29, 2022

Problem

Every node in the Locutus network has a location, a floating-point value between 0.0 and 1.0 representing its position in the small-world network. These are arranged in a ring so positions 0.0 and 1.0 are the same. Each contract also has a location that is deterministically derived from the contract's code and parameters through a hash function.

The network's goal is to ensure that nodes close together are much more likely to be connected than distant nodes; specifically, the probability of two nodes being connected should be proportional to 1/distance.
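Treating locations as points on the unit ring, the distance metric can be computed as follows (an illustrative sketch, not the project's actual implementation):

```rust
/// Distance between two locations on the [0.0, 1.0) ring,
/// where 0.0 and 1.0 are the same point.
fn ring_distance(a: f64, b: f64) -> f64 {
    let d = (a - b).abs();
    d.min(1.0 - d) // wrap around the ring, take the shorter arc
}
```

Because 0.0 and 1.0 coincide, the largest possible distance is 0.5.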

A Sybil attack is where an attacker creates a large number of identities in a system and uses them to gain a disproportionately large influence, which they then use for nefarious purposes.

In Locutus, such an attack might involve trying to control all or most peers close to a specific location. This could then be used to drop or ignore get requests or updates for contract states close to that location.

Solutions

I think there are at least three categories of solution to this:

  1. Increase the cost of creating a large number of nodes close to a specific chosen location
  2. Make it more difficult to target specific contracts
  3. Increase the cost of bad behavior by making other nodes in the network monitor for it and react accordingly

1. Identity Creation Cost

1.1 Gateway assignment

When a node joins through a gateway, it must first negotiate its location with the gateway. This could be done by both node and gateway generating a random nonce, hashing it, and sending the hash to the other. After exchanging hashes, they exchange their actual nonces, which are combined to create a new nonce, and a location is derived from that. This prevents either the gateway or the joiner from choosing the location.
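A minimal sketch of this commit-reveal negotiation; Rust's non-cryptographic DefaultHasher stands in for a real hash function, and the names are illustrative only:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_u64(x: u64) -> u64 {
    let mut h = DefaultHasher::new();
    x.hash(&mut h);
    h.finish()
}

/// Commit phase: each side sends hash(nonce) before seeing the other's nonce.
fn commit(nonce: u64) -> u64 {
    hash_u64(nonce)
}

/// Reveal phase: verify the other side's commitment, then derive the
/// location from the combined nonces. After committing, neither side
/// can steer the result.
fn derive_location(my_nonce: u64, their_nonce: u64, their_commitment: u64) -> Option<f64> {
    if hash_u64(their_nonce) != their_commitment {
        return None; // revealed nonce doesn't match the commitment
    }
    let combined = hash_u64(my_nonce ^ their_nonce);
    Some(combined as f64 / u64::MAX as f64) // map into [0.0, 1.0]
}
```

Both sides compute the same location, and a peer that reveals a nonce inconsistent with its commitment is detected.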

1.1.1 Attacks

  • Gateway and joiner could collude to choose the location

1.2 IP-derived location

  • The location could be derived deterministically from the node's IP address. It could then be verified by any node that communicates directly with it.

1.2.1 Attacks

  • Attackers could limit connections to peers they also control, which could then ignore a mismatch between their location and their IP address.

2. Location Hopping

2.1 Replication

A contract has multiple copies, each indicated by a contract parameter - the location of each copy will be pseudorandom. A user could query a random subset of the copies to ensure that they receive any updates. If any copy has an old version of the state then the user can update them by reinserting the latest version obtained from a different copy.

2.2 Date hopping

A contract contains a parameter for the current date, which will mean that the contract has a different location every day. If today's contract is found to be missing it can be reinserted using an older copy.
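A sketch of how such a date-dependent location could be derived (illustrative only; DefaultHasher stands in for the real contract-hashing function):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a contract's ring location from its code plus a date parameter.
/// With the date baked into the parameters, the location changes daily,
/// so an attacker cannot camp on one location ahead of time.
fn daily_location(contract_code: &[u8], date: &str) -> f64 {
    let mut h = DefaultHasher::new();
    contract_code.hash(&mut h);
    date.hash(&mut h);
    h.finish() as f64 / u64::MAX as f64 // map hash into [0.0, 1.0]
}
```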

3. Peer Pressure

temp_dir content preserved

While developing locally, temp_dir() is used. I ran into an issue where I changed my HTML files but the changes were not applied on the next run, because the files in the temp dir were preserved (in my case: /var/...locutus/webs/...; M1 Monterey platform).
I had to explicitly rm -rf /var/.../locutus.
So should we perhaps do some cleanup on exit?
I also ran into the issue that when I change a file I need to run a bunch of commands (recompile the web/data contract/data and move them into examples).
Since this is under active development, does it make sense to introduce a Makefile/bash script as a quick solution? Something like this:

#!/usr/bin/bash

# clean locutus temp_dir; could be evaluated dynamically
rm -rf /var/folders/4g/x2ybv6lj4s794_j2w41d6j_r0000gn/T/locutus/

cd crates/http-gw/examples
rm freenet_microblogging_data
rm freenet_microblogging_data.wasm
rm freenet_microblogging_web
rm freenet_microblogging_web.wasm
cd ../../..

cargo build	

cd contracts/freenet-microblogging-web
bash compile_contract.sh 	
mv ./freenet_microblogging_web.wasm ../../crates/http-gw/examples/
cd ../..

cd contracts/freenet-microblogging-data
bash compile_contract.sh 	
mv freenet_microblogging_data.wasm ../../crates/http-gw/examples/
cd ../..

cd crates/locutus-dev
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-web/web --output-file ../http-gw/examples/freenet_microblogging_web --contract-type web
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-data --output-file ../http-gw/examples/freenet_microblogging_data --contract-type data
cd ../..

We could also place the temp-dir cleanup command there.

Support multi-stage requests by including wasm in get messages

I was thinking about what would become possible if get and put requests could contain WebAssembly code.

One thing it would open up is the possibility of multi-stage requests. This would be useful in situations where multiple requests must be made, each dependent on the state retrieved by the last. An example might be navigating an inverted-index data structure as part of a search engine.

Normally the client would need to make several get requests in sequence to navigate the data structure, but if the request itself knows how to navigate the data structure, it could avoid the round trip back to the client at each step, which could be 10-20 times faster.

Subscriptions should refresh when there are more able peers

Currently, when you subscribe, you are stuck with that peer. Logic is still missing to recover subscriptions when the node you are subscribed to drops off the network. Additionally, whenever you gain new peer connections that are closer to the contract's location, they should replace your current subscriptions, to be more efficient in the number of messages sent around when relaying updates etc.

contract_browsing doesn't work on m1

System: m1 Monterey
Commit hash: a882515
Command: cargo run --example contract_browsing --features local

Output:

[2022-07-17T21:23:18Z INFO  contract_browsing] loading web contract DPJ3sM1MDCuPFxEayRsvEtJcD4ySGpCeFfRgCNLWtpfT in local node
[2022-07-17T21:23:18Z INFO  contract_browsing] loading data contract 5rwkvFpKoj9r7i6kL9BdQurAjnGhYfbi8uZY4Q6PAyrv in local node
[2022-07-17T21:23:18Z INFO  locutus_node::contract::handler::sqlite] loading contract store from "/var/folders/4g/x2ybv6lj4s794_j2w41d6j_r0000gn/T/locutus/db/locutus.db"
[2022-07-17T21:23:19Z ERROR locutus_dev::local_node] other error: Compilation error: unknown relocation Relocation { kind: Elf(264), encoding: Generic, size: 0, target: Symbol(SymbolIndex(6)), addend: 0, implicit_addend: false } 
^^^ the problem is here
[2022-07-17T21:23:19Z ERROR locutus_dev::local_node] other error: Compilation error: unknown relocation Relocation { kind: Elf(264), encoding: Generic, size: 0, target: Symbol(SymbolIndex(5)), addend: 0, implicit_addend: false } 
^^^ the problem is here
[2022-07-17T21:23:19Z INFO  warp::server] Server::run; addr=127.0.0.1:50509
[2022-07-17T21:23:19Z INFO  warp::server] listening on http://127.0.0.1:50509

Unfortunately I wasn't able to check whether it compiles on x86, since there is a bug in Docker that prevents x86 from being emulated correctly (due to problems with qemu).

As far as I can see, it crashes here:
https://github.com/freenet/locutus/blob/a8825159906ff81f3fd8398f9d7be9c507631ae9/crates/locutus-runtime/src/runtime.rs#L173

With this store replacement, it works fine:
let module = Module::new(&Default::default(), contract.code().data())?;

So the problem is something with the store. I tried switching the backend from LLVM to Cranelift, and it works now:
https://github.com/freenet/locutus/blob/a8825159906ff81f3fd8398f9d7be9c507631ae9/crates/locutus-runtime/src/runtime.rs#L238
-> Store::new(&Universal::new(Cranelift::new()).engine())

The root cause isn't obvious to me yet. As a workaround, I can propose introducing a cfg(target_arch = "aarch64") branch inside instance_store.

Social credit

A decentralized cryptocurrency that departs from the typical "wallet-based" architecture for a "coin-based" architecture, solving many of the shortcomings of conventional cryptocurrencies.

Features

  • No transaction costs
  • Instant transactions (< 1 second confirmations)
  • Unlimited scalability by avoiding a global ledger
  • Coin scarcity ensured by tying to Freenet donations

Design

  • Coin: Contract-key representing a specific quantity of currency
  • Wallet: Public/private key pair that can own a coin.
  • The method of issuance for a coin is configurable, e.g. coins may be issued in return for a monetary donation to Freenet development
  • A coin's contract validates a list of transactions for this coin, including:
    • (First transaction) proof of issuance to a particular wallet
    • Transfer of custody from one wallet to another
    • Splitting of coin into multiple other coins
    • Merging of multiple coins into a single coin

Threat mitigation

Double-spending

Most of the complexity in conventional cryptocurrencies is due to the need to prevent double-spending. This is where A transfers funds to B but then executes another transaction to transfer the same funds to C, creating two contradictory versions of the transaction history.

The conventional solution is to broadcast all transactions to everyone and then have network participants commit to one version of the transaction history. They commit by performing a complex computational task ("proof of work" aka "mining") or by staking currency ("proof of stake").

Social credit takes a different approach which relies on real-time observability of data in Locutus. Interested parties - particularly the coin recipient - can subscribe to a coin's transactions and be notified within milliseconds when ownership is transferred.

<<<work in progress>>>

Components: A secret management system

Note: Previously, the term "component" referred to imported JavaScript libraries distributed like applications - this proposal changes that definition.

The role of the Component system is to manage the use of secret information such as private keys. A Component is WebAssembly code that implements the Component interface described below. These Components execute in the node itself rather than in the web browser.

Components can:

  • Store data securely in the peer (data that only the Component itself can read), suitable for private keys.
    • This is similar to the browser's cookie or local storage mechanisms but with better encryption and backup support.
    • As with local storage - a component can only access its own private data
  • Communicate with the peer to retrieve, insert, and modify contracts and their state.
  • Communicate with in-browser applications via a messaging mechanism that ties an application's messages to its key in Locutus
  • Communicate with the user through a mechanism outside the web browser, e.g. to approve a request from an application to sign something with the component's private key
    • The specifics of this will depend on the OS, perhaps using the OS's existing notification system, or a system tray icon on Windows

A Component implements the following interface:

trait Component {
    /// Process inbound messages, producing zero or more outbound messages in response.
    /// Note that all state for the component must be stored using the secret mechanism.
    fn process(messages: Vec<InboundComponentMsg>) -> Vec<OutboundComponentMsg>;
}

enum InboundComponentMsg {
    GetSecretResponse {
        key: Vec<u8>,
        value: Option<Vec<u8>>,
    },

    ApplicationMessage {
        from_app: Hash,
        payload: Vec<u8>,
    },

    GetContractResponse {
        contract_id: Vec<u8>,
        update_data: UpdateData, // See #167
    },

    UserResponse {
        request_id: u32,
        response: String,
    },
}

enum OutboundComponentMsg {
    GetSecretRequest {
        key: Vec<u8>,
    },

    SetSecretRequest {
        key: Vec<u8>,
        // Option::None will delete the value associated with key
        value: Option<Vec<u8>>,
    },

    ApplicationMessage {
        to_app: Hash,
        payload: Vec<u8>,
    },

    GetContractRequest {
        mode: RelatedMode, // See #167
        contract_id: Vec<u8>,
    },

    UserRequest {
        request_id: u32,

        /// An HTML fragment supporting a limited set of HTML tags, including hyperlinks
        message: String,

        /// If a response is required from the user, it can be chosen from this list
        responses: Vec<String>,
    },
}

Distribution of Components

Components are distributed via contracts: a Component's WebAssembly code and parameters are distributed as the state of a contract, similar to applications. Components are identified by the keys of the contracts through which they're distributed.

An application can request the installation of a Component; this may require some interaction from the user, for example donating in order to validate an antiflood token generator.

Outstanding questions / concerns

  • How to handle malicious Components, e.g. ones that try to waste resources
  • How to handle internationalization for UserRequests since they contain human-readable text
  • Current user interaction is limited to questions with multiple-choice answers; this is probably too restrictive
  • Potentially add an "alarm clock" mechanism so the components can request to be woken up at a specific time
  • Can this be generalized in any way?
  • Components need a way to generate secure random data, e.g. for keypair generation
  • Should #245 be implemented as a Component? What additional functionality would this require?

Allow contracts to subscribe to other contracts

Allow contracts to follow or subscribe to other contracts' states; a contract's state can then be modified in response to the state and state changes of other contracts.

This should open up a wide variety of use cases and will bring Locutus closer to being a general-purpose decentralized computation platform.

NAT traversal

This is a general tracking issue to monitor the status of the different techniques that will help clients behind NAT when it comes to establishing direct connections between peers.

Prelude

One of the problems of NAT traversal is the variance in network characteristics and configurations when trying to establish a connection between two peers, at every layer of the network, from the point of view of both hardware (router configuration, LAN, ISP, etc.) and software (for example, is a browser handling the connection for you, or do you have direct access to the interface?). A good overview and taxonomy of the different scenarios can be read here.

Some important/interesting resources re. NAT traversal status within libp2p:

rust-libp2p status

Overall tracking issue in their repo: libp2p/rust-libp2p#2052

  • Automatic router configuration (e.g. UPnP). Not implemented, only an issue: libp2p/rust-libp2p#558
  • STUN-like methods (hole-punching)
    • Port reuse in TCP. Implemented. We should enable this, and it should help.
    • In theory the Identify protocol (which we will be using) helps with this since it provides the observed addresses to anyone querying, so they can communicate through those addresses back (this is handled automatically by libp2p AFAIK).
    • AutoNAT. Lets peers request dial-backs. (libp2p/rust-libp2p#2262)
  • TURN-like methods

For more detail check their tracking issue.

Notes

  • Also worth mentioning is QUIC support, which we could use instead of TCP. QUIC builds on top of UDP and will make NAT traversal easier too. On top of that, I believe it is a straight upgrade in our case: our usage will trend toward a high number of connections that are relatively short-lived and small in size on average, where QUIC is a good improvement over TCP, and I don't really see any downsides to using it. QUIC support within rust-libp2p seems to be finally near completion: libp2p/rust-libp2p#2159
    • Hole-punching success rates over QUIC are higher than over TCP and equivalent to those of UDP, without having to deal with all the quirks of UDP, since those are handled by QUIC (and it is encrypted by default too). 80% according to (MIT paper on Hole Punching, 2006).
  • Lack of concurrent attempts is a hindrance to successful hole punching, and there is an outstanding issue to address this: libp2p/rust-libp2p#1896 (comment)
  • Multistream select is important for hole punching, since both peers are attempting to initiate the connection, and this is at odds with how it is currently implemented (there must be one initiator). There is an ongoing effort to extend the protocol to allow for multiple initiators:

Solutions

Working on top of UDP to add our own solutions disqualifies a big chunk of libp2p, as it is not possible to use UDP directly as a transport protocol in libp2p (support is under development, but only in the Go implementation, and it is just a prototype). In theory there may be an option to build our own protocol and plug it into libp2p (it is all based around traits/interfaces, so I believe it is doable, but we would have to look into it), reusing the other components we find useful (e.g. the identity protocol).

However, this would mean rebuilding a significant part of the plumbing and packet handling libp2p does for you, and it would be a large effort, so I would say this should be a last resort (unless we decide we don't want to use libp2p at all). Then there is the obvious caveat: if a larger number of developers with more resources have taken a long time to tackle this and haven't succeeded (yet), why would we move faster? This would be a major distraction from building our core logic, so if we have to do it, it would at least be nice to have the support of other developers. Instead, I would favor another approach:

  • Identify what additional functionalities/methods would be beneficial to the efforts to add NAT traversal capabilities.
  • Monitor their implementation and, if necessary, contribute upstream and help the libp2p folks push them over the finish line (this would have the added benefit of gaining familiarity with libp2p internals). Some of the work has not been started at all, but a lot of it is near completion, so I expect more improvements within a reasonable time. I believe their plan is to extend protocol/web3-dev-team#21 (e.g. the Go impl: libp2p/go-libp2p#1039) to all implementations (including Rust), so that is what we can use as a guideline and as the target end state for NAT traversal support in libp2p.
    • If necessary, fork and replace/modify whatever parts we deem necessary; however, that would be a last resort too. Ideally we want to contribute any work upstream, have synergies with the libp2p team, and benefit from their expertise.

So we can continue with the current efforts, focus on other areas that are core to our project, and make the reasonable assumption that this will improve within a reasonable time, helping upstream to improve the NAT situation if we run into practical problems as we test.

Write an anti "write amplification" layer to avoid running the same ops repeatedly

Due to the broadcasting and forwarding mechanisms, it is quite possible for nodes to be hit several times, from different peers, with requests to replicate the exact same operation, e.g. in the case of update notifications.

To avoid duplicate costly work (like executing the same contract with the same value over and over), and to be sure this applies to any ops, the solution is to keep a commit log of transactions: since the history of an initial operation is covered by the span of the same id, we can double-check that an operation has not already been committed and resolved.

The commit log can be cleaned up when garbage from old ops that have timed out is collected, based on the same timeout duration.
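A minimal sketch of such a commit log (hypothetical names; the real implementation would key on the network's transaction ids):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Remembers which transaction ids have already been executed, so a
/// repeated broadcast of the same op becomes a no-op.
struct CommitLog {
    committed: HashMap<u64, Instant>, // tx id -> commit time
    timeout: Duration,                // same duration as the op timeout
}

impl CommitLog {
    fn new(timeout: Duration) -> Self {
        Self { committed: HashMap::new(), timeout }
    }

    /// Returns true if the op should run (first time seen),
    /// false if it was already committed and can be skipped.
    fn try_commit(&mut self, tx_id: u64) -> bool {
        self.committed.insert(tx_id, Instant::now()).is_none()
    }

    /// Garbage-collect entries older than the op timeout.
    fn gc(&mut self) {
        let now = Instant::now();
        self.committed.retain(|_, t| now.duration_since(*t) < self.timeout);
    }
}
```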

send update doesn't work

Commit: f363ec1
Platform: m1 Monterey
Steps to reproduce:
Copy index and state HTML from the root into contracts/freenet_microblogging_web/web.
Then run the following script:

#!/usr/bin/bash

# clean locutus temp_dir
rm -rf /var/folders/4g/x2ybv6lj4s794_j2w41d6j_r0000gn/T/locutus/

cd crates/http-gw/examples
rm freenet_microblogging_data
rm freenet_microblogging_data.wasm
rm freenet_microblogging_web
rm freenet_microblogging_web.wasm
cd ../../..

cargo build	

cd contracts/freenet-microblogging-web
bash compile_contract.sh 	
mv ./freenet_microblogging_web.wasm ../../crates/http-gw/examples/
cd ../..

cd contracts/freenet-microblogging-data
bash compile_contract.sh 	
mv freenet_microblogging_data.wasm ../../crates/http-gw/examples/
cd ../..

cd crates/locutus-dev
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-web/web --output-file ../http-gw/examples/freenet_microblogging_web --contract-type web
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-data --output-file ../http-gw/examples/freenet_microblogging_data --contract-type data
cd ../..

cargo run --example contract_browsing --features local

Then replace DATA_CONTRACT with the new key (state.html file).
Then open 127.0.0.1/<key>/state.html.
The state page is displayed and the websocket is established.
Click the Send Update button.
Browser dev tools show:

Sending: [{"author":"IDG","date":"2022-06-15T00:00:00Z","title":"New msg","content":"..."}]
state.html:74 Update response: {"err":"client error: unhandled error: RuntimeError: unreachable","result":"error"}

I tried to figure out where it crashes, and it appears to be here:
https://github.com/freenet/locutus/blob/f363ec1d26486dac959e05d99e393cd63ee8b104/crates/locutus-runtime/src/runtime.rs#L386

I verified that summary_func is valid (summary_func.is_ok()), so it crashes on the call.
I tried printing the error instead of unwrapping and got this:

    RuntimeError {
        source: Trap(
            UnreachableCodeReached,
        ),
        wasm_trace: [],
        native_trace:    0:        0x100de36d4 - <unknown>
           1:        0x100de3b50 - <unknown>
           2:        0x100a3ebbc - <unknown>
           3:        0x100a3ea08 - <unknown>
           4:        0x100dd3d9c - <unknown>
           5:        0x18e5e44e4 - <unknown>
        ,
    },
)

RUSTSEC-2022-0040: Multiple soundness issues in `owning_ref`

Multiple soundness issues in owning_ref

Details
Package owning_ref
Version 0.4.1
URL https://github.com/noamtashma/owning-ref-unsoundness
Date 2022-01-26
  • OwningRef::map_with_owner is unsound and may result in a use-after-free.
  • OwningRef::map is unsound and may result in a use-after-free.
  • OwningRefMut::as_owner and OwningRefMut::as_owner_mut are unsound and may result in a use-after-free.
  • The crate violates Rust's aliasing rules, which may cause miscompilations on recent compilers that emit the LLVM noalias attribute.

No patched versions are available at this time. While a pull request with some fixes is outstanding, the maintainer appears to be unresponsive.

See advisory page for additional details.

possibility to replace a key in the address with its alias

While running a contract through a browser, in order to execute it I must specify its key, like: 127.0.0.1/Eu4ByDpJ7mFdyeMHZ8z9WcDQ4wP2i5hTXSac1ZKQ2R9Q/state.html. For the end user it's not clear what this means (arbitrary symbols). I think it would be more comfortable for the end user to specify something more meaningful.
Curious: can we introduce aliases, so that 127.0.0.1/Eu4ByDpJ7mFdyeMHZ8z9WcDQ4wP2i5hTXSac1ZKQ2R9Q/state.html could be replaced with, for example, 127.0.0.1/my-app-name/state.html? As far as I can see, eth and near have such functionality, so perhaps it could work the same way for us?

Contract versioning

The contract API will evolve over time, and we should support a contract-versioning mechanism to allow these changes in a backward-compatible way.

Here is a suggested approach for representing the versions.

enum ContractContainer {
    Wasm(WasmAPIVersion),
    // Non-Wasm contracts supported here
}

enum WasmAPIVersion {
    V0_0_1 { wasm_code: Wasm, parameters: Vec<u8> },
    V0_0_2 { wasm_code: Wasm, parameters: Vec<u8> },
}

For serializing these enums I suggest using a variable length integer library such as varuint, for maximum flexibility.
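For reference, this is the kind of variable-length unsigned integer (LEB128 "varint") encoding such a library provides; a minimal self-contained sketch:

```rust
/// Encode an unsigned integer as LEB128: 7 payload bits per byte,
/// high bit set on every byte except the last.
fn encode_varint(mut n: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte);
            return;
        }
        out.push(byte | 0x80); // high bit set: more bytes follow
    }
}

/// Decode a LEB128 varint, returning the value and the number of
/// bytes consumed, or None if the input ends mid-varint.
fn decode_varint(bytes: &[u8]) -> Option<(u64, usize)> {
    let mut n = 0u64;
    for (i, &b) in bytes.iter().enumerate() {
        n |= ((b & 0x7f) as u64) << (7 * i);
        if b & 0x80 == 0 {
            return Some((n, i + 1));
        }
    }
    None
}
```

Small version tags occupy a single byte, while the format can still grow to arbitrarily large values later, which is the flexibility argument above.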

Add observability tools to the node application

To get better observability and enable performance analysis (especially in async contexts), swap "log" for "tracing" (including a compatible sink for log dependencies, to get debug output from them) and use the OpenTelemetry standard so we can plug data into external systems (like Prometheus or Jaeger):

Improve join ring algorithm efficiency

The current implementation is a bit naive: when propagating a join request, it backtracks and waits for an answer from the request chain of all nodes it has been forwarded to (according to the hops-to-live configuration). This should be changed to be more efficient.

The suggested algorithm is as follows: in parallel with the gateway accepting/rejecting the connection from the new peer, forward a message in "fire and forget" fashion (sending the peer's connection information along with it); each node it is forwarded to will then attempt a direct connection to the new peer, if it is willing to accept it (i.e. has enough capacity), while continuing to propagate the chain.

The original requester will keep track of how many nodes have responded so far and will keep the operation alive until the op timeout is reached or it has gotten all the expected new connections (a configurable parameter). If the maximum has not been reached, a different mechanism will later attempt to establish more connections as the node receives new peer information.

When implementing this, take into consideration the connectivity between peers at the p2p level and the passing of connection info; it cannot be isolated to the abstract ring layer.
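The requester-side bookkeeping described above could be sketched like this (hypothetical names, illustrative only):

```rust
use std::time::{Duration, Instant};

/// Tracks an in-flight join: the requester keeps the operation alive
/// until the op timeout is reached or the expected number of new
/// connections (a configurable parameter) has been acquired.
struct JoinTracker {
    expected: usize,
    acquired: usize,
    deadline: Instant,
}

impl JoinTracker {
    fn new(expected: usize, timeout: Duration) -> Self {
        Self { expected, acquired: 0, deadline: Instant::now() + timeout }
    }

    /// Record one new direct connection obtained from the forwarded chain.
    fn record_connection(&mut self) {
        self.acquired += 1;
    }

    /// The operation stays alive while below the target and before the deadline.
    fn is_alive(&self) -> bool {
        self.acquired < self.expected && Instant::now() < self.deadline
    }
}
```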

Karma: decentralized reputation and trust

Trust is the ability to predict whether an identity will do what they say they will do in the future. Karma's purpose is to allow identities to establish trust from their first interaction.

History

F2's Karma mechanism is inspired by Freenet's Web of Trust plugin.

Uses

  • Prevent resource leeching

Design

#[derive(Serialize, Deserialize)]
pub struct TrustLog {
    pub public_key: EcKey<Public>,
    /// Trust entries signed by public_key
    pub log: Vec<(TrustEntry, EcSignature)>,
}

#[derive(Serialize, Deserialize)]
pub struct TrustEntry {
    /// Entry creation time in milliseconds since epoch
    pub timestamp: u64,
    /// What this entity promised to do, in a standardized but extensible format
    pub promise: String,
    /// Did the entity follow through on its promise?
    pub kept: bool,
}

Contract Runtime Environment

A number of utilities are made available to WebAssembly contracts via extern functions, including:

  • Symmetric and asymmetric encryption (sign, blind sign, verify, etc.)
  • Hashing, including popular algorithms like SHA256
  • Data manipulation utilities (e.g. to read and write [u8]s)

Contract interaction

Goal

Allow contracts to inspect the state of other contracts when deciding whether to validate their state or update it in response to a delta.

Approach

This approach treats update_state() like a consumer of UpdateData events, which can contain the state or delta for the current contract or related contracts as specified. The function may return when the state has been updated or request additional related contracts.

fn validate_state(
        _parameters: Parameters<'static>, 
        state: State<'static>,
	    related: Map<ContractInstanceId, Option<State<'static>>>,
) -> ValidateResult

pub enum ValidateResult {
   Valid,
   Invalid,
   /// The peer will attempt to retrieve the requested contract states
   /// and will call validate_state() again when it retrieves them.
   RequestRelated(Vec<ContractInstanceId>),
}

// Delta validation is a simple spam prevention mechanism, supporting
// related contracts for this would be overkill
fn validate_delta(
        _parameters: Parameters<'static>, 
        delta: Delta<'static>, 
) -> bool

/// Called every time one or more UpdateDatas are received,
/// can update state and/or volatile memory in response
fn update_state(
    data: Vec<UpdateData>,
    parameters: Parameters<'static>,
    state: State<'static>,
) -> UpdateResult

pub enum UpdateData {
  State(Vec<u8>),
  Delta(Vec<u8>),
  StateAndDelta { state : Vec<u8>, delta : Vec<u8> },
  RelatedState { related_to : ContractInstanceId, state : Vec<u8> },
  RelatedDelta { related_to : ContractInstanceId, delta : Vec<u8> },
  RelatedStateAndDelta {
     related_to : ContractInstanceId,
     state : Vec<u8>,
     delta : Vec<u8>,
   },
}

pub struct UpdateResult {
    new_state : Option<State<'static>>,
    related : Vec<Related>,
}

pub struct Related {
    contract_instance_id : ContractInstanceId,
    mode : RelatedMode,
}

pub enum RelatedMode {
  /// Retrieve the state once, not concerned with
  /// subsequent changes
  StateOnce,

  /// Retrieve the state once, and then supply
  /// deltas from then on. More efficient.
  StateOnceThenDeltas,

  /// Retrieve the state and then provide new states
  /// every time it updates.
  StateEvery,

  StateThenStateAndDeltas,
}

Web and websocket interface

Purpose

A web browser can be used as a general-purpose user interface for decentralized applications on Locutus. These applications can be distributed via Locutus and interface with Locutus through an efficient WebSocket connection to a user's Locutus node.

This is analogous to FProxy in Fred, but much more powerful because it supports interactive web applications.

HTTP gateway

  • Listens for HTTP connections which include a contract hash, eg. http://127.0.0.1:8608/3M3fbA7RDYdvYeaoR69cDCtVJqEodo9vth
  • Contract state consists of metadata and a "payload" - each prefixed by a u32 length.
    • The payload is a tar.xz file containing a website, including index.html
    • The metadata may include things like a digital signature to be verified by the contract.
  • All files in payload will be made available on the requested URL, eg. http://127.0.0.1:8608/3M3fbA7RDYdvYeaoR69cDCtVJqEodo9vth/index.html

todo: Also needs to support a redirect mechanism, so that instead of a payload the address of another contract is given - the gateway will transparently redirect to that. This redirect will be updatable, which will allow contract versioning.
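The state layout above (metadata and payload, each prefixed by a u32 length) can be sketched as an encoder/decoder pair. The byte order of the length prefixes is not specified in the design, so little-endian is assumed here; these functions are illustrative, not taken from the gateway implementation.

```rust
/// Encode: [len(metadata) as u32][metadata][len(payload) as u32][payload].
/// Little-endian length prefixes are an assumption of this sketch.
pub fn encode_state(metadata: &[u8], payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(8 + metadata.len() + payload.len());
    out.extend_from_slice(&(metadata.len() as u32).to_le_bytes());
    out.extend_from_slice(metadata);
    out.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    out.extend_from_slice(payload);
    out
}

/// Decode the state back into (metadata, payload); None if malformed.
pub fn decode_state(state: &[u8]) -> Option<(Vec<u8>, Vec<u8>)> {
    // Read a u32 length prefix at the given offset.
    let read_len = |buf: &[u8], at: usize| -> Option<usize> {
        let bytes: [u8; 4] = buf.get(at..at + 4)?.try_into().ok()?;
        Some(u32::from_le_bytes(bytes) as usize)
    };
    let meta_len = read_len(state, 0)?;
    let metadata = state.get(4..4 + meta_len)?.to_vec();
    let payload_len = read_len(state, 4 + meta_len)?;
    let payload = state.get(8 + meta_len..8 + meta_len + payload_len)?.to_vec();
    Some((metadata, payload))
}
```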

WebSocket

A connection to the gateway may be upgraded to a websocket that can be used to get, put, and modify contract state.

WebSocket commands

WebSocket messages are encoded in a binary serialization format such as MessagePack (TBD).

The following commands and responses are supported:

mod get {
    /// Client -> Node
    struct Request {
        key : Vec<u8>,
        request_contract : bool,
        subscribe : bool,
    }

    /// Node -> Client
    struct Response {
        contract : Option<Vec<u8>>,
        subscription_id : Option<u64>,
        state : Vec<u8>,
    }

    // Node -> Client
    struct Notify {
        subscription_id : u64,
        state : Vec<u8>,
    }
}

mod put {
    /// Client -> Node
    struct Request {
        contract : Vec<u8>,
        state : Vec<u8>,
    }

    /// Node -> Client
    struct Response {

    }
}

mod update {
    /// Client -> Node
    struct Request {
        contract : Vec<u8>,
        delta : Vec<u8>,
    }
    
    /// Node -> Client
    struct Response {

    }
}

Proxy

This HTTP interface can also serve as an HTTP proxy, which will only permit connections through this interface. This can be used to prevent applications from connecting to non-Locutus websites.

Create a test contract for testing purposes

The goal of this issue is to have a test contract that allows interaction on the state in the tests that require it. Changes required:

  • Generate a simple compiled contract available for modules where interaction with the contract is required.

Currently these tests are ignored and marked with the comment:
// FIXME: Generate required test contract

Possible spam resistance system using the hashcash proof of work system

Since Locutus will be processing messages, it will be necessary to prevent spam and DoS attacks. I think that hashcash could serve this purpose. It is proven, being used in Bitcoin as well as in email spam filters. If spam and DoS attacks are not accounted for, these attacks can push aside legitimate users who wish to use the network. By adding a small cost to sending a message, it becomes much harder to send large amounts of spam: a small amount of computing power is needed per message, which is not much of a burden on legitimate users, but it makes spam operations and DoS attacks far more computationally expensive.

All the best,
Destroyer
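The hashcash idea proposed above can be sketched as a mint/verify pair: minting searches for a nonce whose hash has enough leading zero bits, verification is a single hash. Real hashcash uses a cryptographic hash such as SHA-1; std's DefaultHasher (not cryptographic) is used here only to keep the example dependency-free.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a message together with a candidate nonce.
/// NOTE: DefaultHasher is NOT cryptographic; a real system would use SHA-256.
pub fn hash_with_nonce(message: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    message.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

/// Minting is expensive: search for a nonce whose hash has at least
/// `difficulty` leading zero bits (expected ~2^difficulty attempts).
pub fn mint_stamp(message: &str, difficulty: u32) -> u64 {
    (0u64..)
        .find(|&n| hash_with_nonce(message, n).leading_zeros() >= difficulty)
        .expect("search space exhausted")
}

/// Verification is cheap: one hash, regardless of difficulty. This asymmetry
/// is what makes bulk spamming expensive while legitimate use stays cheap.
pub fn verify_stamp(message: &str, nonce: u64, difficulty: u32) -> bool {
    hash_with_nonce(message, nonce).leading_zeros() >= difficulty
}
```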

RUSTSEC-2021-0139: ansi_term is Unmaintained

ansi_term is Unmaintained

Details
Status unmaintained
Package ansi_term
Version 0.12.1
URL ogham/rust-ansi-term#72
Date 2021-08-18

The maintainer has advised that this crate is deprecated and will not
receive any maintenance.

The crate does not seem to have many dependencies and may or may not be OK to use as-is.

The last release appears to have been three years ago.

Possible Alternative(s)

The list below has not been vetted in any way and may or may not contain suitable alternatives.

See advisory page for additional details.

Distributed computation with "work contracts"

A contract-key can specify a time-consuming task to perform, the output of which will be the value of the key, based on the contract parameters. This value will be cached and distributed in the usual way. A contract becomes like a pure function call.

Open questions?

  • Can a contract do get or puts on other contracts to calculate a value?
    • Would need to track and attribute resource usage

Services

Services such as an HTTP request service could be made available using a contract as a conduit between service provider and consumer. The service provider may ask for compensation in Karma.

Service Marketplace

Decentralized data processing service

Overview

The proposal describes a system for allowing distributed processing on the Freenet network using WebAssembly functions to process input contracts and update the state of other contracts. The system includes mechanisms for validating the accuracy and timeliness of the computed state updates, as well as a reputation system to penalize workers who do not execute jobs correctly.

Jobs

A "job" takes the state from one or more input contracts, processes it using a provided WebAssembly function, and then sets the state of another contract to the result.

Workers

Workers execute jobs. There is also a validation mechanism to ensure workers execute jobs in a timely and accurate manner, as well as a reputation system to punish and exclude workers that don't do their job correctly.

Job Lifecycle

Create a job

Job is created in WebAssembly, implementing this job function:

trait Job {
  /// Execute a job or determine additional input contracts.
  ///
  /// `parameters`: the parameters that form the job, along with the
  /// WebAssembly that implements this function.
  ///
  /// `dependencies`: any requested dependent contracts; this function
  /// requests them by returning JobOutput::Dependencies() containing
  /// the desired dependencies.
  fn job(
      parameters : &[u8],
      dependencies : HashMap<ContractInstanceId, Dependency>,
  ) -> JobOutput {
    // ...
  }
}

pub enum JobOutput {
    /// This job requires additional dependencies as specified
    Dependencies(HashMap<ContractInstanceId, DependencyCfg>),

    /// This job produced its output
    Output(Vec<u8>),
}

/// Specify how the contract wishes to receive dependency updates
pub struct DependencyCfg {
    /// Should the update include the initial dependency state we retrieved
    pub initial_state : bool,

    /// How many recent deltas should be included?
    pub recent_deltas : u32,

    /// How many recent states should be included?
    pub recent_states : u32,
}

pub enum Dependency {
    NotFound,
    Found {
        initial_state : Option<State>,
        recent_states : Vec<State>,
        recent_deltas : Vec<Delta>,
    },
}

Notes

  • The recent_states and recent_deltas mechanism are designed to allow the job to update state incrementally rather than needing to recompute everything every time a dependency updates

Wrapping by Job Contract

The job WebAssembly is wrapped into a standard job contract as a parameter; the job contract's purpose is to advertise the job and to collect the job output from workers.

Job Ring

Jobs and workers are assigned positions on the job ring based on a hash of the job's WebAssembly code or the worker's public key, respectively.

Priority Ring Tree

To allow peers to find jobs close to them efficiently, we use a system of job discovery contracts arranged in a hierarchy. Each contract is responsible for a segment of the ring, half of the parent contract's segment.

The job ring is broken up into a hierarchy of these contracts, each bisecting the parent's section of the ring.

Job list

Each discovery contract in the tree contains a list of the top $N$ jobs in its segment of the ring, ranked by:

rank = tokens / (1 + majority - minority)
  • tokens - number of antiflood tokens spent on this job in the past 24 hours

  • majority - estimated number of workers whose output is the same as the majority's

  • minority - estimated number of workers whose output differs from the majority's
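The ranking formula above transcribes directly into code; this is an illustration, not repository code. The denominator is at least 1, since by definition the majority count is at least the minority count.

```rust
/// rank = tokens / (1 + majority - minority), as defined above.
/// `tokens`: antiflood tokens spent on the job in the past 24 hours.
/// `majority` / `minority`: estimated worker counts agreeing /
/// disagreeing with the majority output (majority >= minority).
pub fn job_rank(tokens: u64, majority: i64, minority: i64) -> f64 {
    tokens as f64 / (1 + majority - minority) as f64
}
```

Note that a job with more disagreement among workers (minority closer to majority) ranks higher only if it also attracts more tokens, since the denominator shrinks toward 1 as consensus weakens.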

Datastructures

mod DiscoveryContract {
    pub struct Parameters {
        /// The depth of this discovery contract in the tree, the root contract has
        /// a depth of 0.
        pub depth: u8,

        /// The index of the segment running counter-clockwise
        /// If depth=0 then there is only one segment so segment_ix=0. The
        /// number of segments = 2^depth.
        pub segment_ix : u32,
    }

    pub struct State {
        pub jobs : Vec<Job>,
    }

    pub struct Job {
        // The location of the job request
        pub location : Location,

        // The request itself
        pub request : JobRequest,

        /// The JobResult ranks together with the job results themselves
        pub results : Vec<(Location, JobResult)>,
    }

    pub struct JobRequest {
        pub code : Wasm,
        pub parameters : Vec<u8>,
    }

    pub struct JobResult {
        /// The results; keys are either the output itself or the
        /// hash of the output. The JobCertificates are those issued
        /// by workers closest to the Job::location.
        pub results : HashMap<ResultOutput, Vec<JobCertificate>>,
    }

    pub enum ResultOutput {
        /// If the output is smaller than a specified threshold, then
        /// embed it directly for efficiency
        Embedded(Vec<u8>),

        /// If the output is larger than the specified threshold then
        /// reference the hash of the output which allows the output
        /// itself to be retrieved from a simple content-hash contract
        Referenced(Hash),
    }

    pub struct JobCertificate {
        pub worker_location : Location,
        pub worker_pubkey : PublicKey,
        pub result_signature : Signature,
        pub tokens : Vec<TokenAllocation>,
    }

    pub struct TokenAssignment {
        // See https://github.com/freenet/locutus/issues/246
        // TokenAssignment::assigned_to must be the hash of the
        // job result hash.
    }
}

🚧🚧🚧 To be completed 🚧🚧🚧

Antiflood Tokens

Overview

Antiflood tokens increase the cost of abusive behavior like spam and denial of service attacks within Locutus.

To create antiflood tokens, the user must give up something of value to obtain a token generator: a public/private keypair that meets some cryptographically provable criterion, such as a signed certificate of donation to Freenet. This generator releases new tokens at regular time intervals that depend on the token tier. The lowest tier, 30-second tokens, are released every 30 seconds. Higher tiers release less frequently: 1 minute, 10 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, 12 hours, or 1 day.

Other systems can require that a token of some minimum tier be issued as a condition of some action like adding a message to an inbox. In the event of bad behavior by the generator owner, the recipient can create a complaint, which will be visible to anyone interacting with the generator.

Initially, this will be a centralized solution. The user must make a cryptographically blinded donation to Freenet to obtain a token generator. In the future, we will provide a decentralized on-network way to create new token generators. We're still at the ideation stage with that, but it will not be based on proof-of-work.

Token Generator

Creating a Token Generator

  1. The user creates generator_key_pair and blinds generator_public_key to get blind(generator_public_key) before sending it to https://freenet.org/donations
  2. Freenet's donation system selects a donation_key_pair based on the donation amount
  3. The donation system signs blind(generator_public_key) with donation_private_key to produce signed(donation_public_key, blind(generator_public_key))
  4. signed(donation_public_key, blind(generator_public_key)) is sent back to the user
  5. The user unblinds it to obtain signed(donation_public_key, generator_public_key); this is the initialization certificate and can be used to prove that the generator has been initialized with a donation

Token Generator Contract

The Token Generator Contract keeps track of tokens that have been assigned.

Contract State

The token contract state is a set of token assignments organized by tier (frequency of token release), and then the time the token is released.

The contract verifies that the release times for a tier match the tier. For example, a 15:30 UTC release time isn't permitted for hour_1 tier, but 15:00 UTC is permitted.

Note: Conflicting assignments for the same time slot are not permitted and indicate that the generator is broken or malicious.
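The release-time check described above (15:30 UTC invalid for hour_1, 15:00 valid) can be sketched as a modulus test. This is an assumption-laden illustration: it represents release times as minutes since 00:00 UTC, uses CamelCase variant names rather than the design's lowercase ones, and omits the 30-second tier from the overview because minutes are the unit here.

```rust
#[derive(Clone, Copy)]
pub enum Tier {
    Minute1, Minute10, Minute30, Hour1,
    Hour2, Hour4, Hour12, Day1,
}

impl Tier {
    /// Release interval in minutes for each tier.
    fn period_minutes(self) -> u64 {
        match self {
            Tier::Minute1 => 1,
            Tier::Minute10 => 10,
            Tier::Minute30 => 30,
            Tier::Hour1 => 60,
            Tier::Hour2 => 120,
            Tier::Hour4 => 240,
            Tier::Hour12 => 720,
            Tier::Day1 => 1440,
        }
    }
}

/// A release time is valid for a tier iff it falls exactly on the tier's
/// period. E.g. 15:30 UTC (930 min) is invalid for Hour1; 15:00 (900 min)
/// is valid.
pub fn release_time_valid(tier: Tier, minutes_since_midnight_utc: u64) -> bool {
    minutes_since_midnight_utc % tier.period_minutes() == 0
}
```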

struct TokenContractParameters {
    generator_public_key : PublicKey,
    current_date_utc : Date,
}

struct TokenContractState {
   /// A list of issued tokens
   tokens_by_tier : HashMap<Tier, HashMap<DateTime, TokenAssignment>>,
}

struct TokenAssignment {
   tier : Tier,

   issue_time : DateTime,

   /// The assignment, the recipient decides whether this assignment
   /// is valid based on this field. This will often be a PublicKey.
   assigned_to: [u8],

   /// `(tier, issue_time, assigned_to)` must be signed 
   /// by `generator_public_key`
   signature: Signature,
}

enum Tier {
  minute_1, minute_10, minute_30, hour_1, 
  hour_2, hour_4, hour_12, day_1
}

Requirements

For a TokenAssignment to be valid, only one TokenAssignment must exist for a given issue_time and tier.

Token assignment and verification

For example, consider a contract representing a message inbox, where senders must spend a token to add a message to the inbox. The inbox decides what tier of token is required; this should be made known to senders.

struct InboxContractState {
  messages : Vec<InboxMessage>,
}

struct InboxMessage {
  message: String,

  assignment: TokenAssignment,

  /// `(message, assignment)` signed by generator's public key
  signature: Signature,
}
  1. The sender creates an InboxMessage including a valid TokenAssignment
  2. The sender attempts to modify the inbox to add the new message
  3. The InboxContract verifies that the tier is high enough and the assignment signature matches
  4. The InboxContract verifies that the assignment is valid by checking the relevant TokenContractState, ignoring it if it isn't

Complaints

Generator Complaint Contract

A contract that maintains a list of valid complaints for a generator - complaints can be created by a token recipient. Tokens from a generator with an excessive number of complaints may be rejected. The recipient is responsible for adding complaints to this contract.

   struct GeneratorComplaintsParameters {
       generator_public_key: PublicKey,
   }
   
   struct GeneratorComplaintsState { 
      complaints : HashMap<DateTime, HashMap<Tier, Complaint>>,
   }
   
   struct Complaint {
    /// A valid assignment is required for a complaint
    assignment: TokenAssignment,

    /// Reason for the complaint
    reason: String,

    /// `(assignment, reason)` must be signed by the token
    /// recipient to be a valid complaint
    recipient_signature : Signature,
}

Enhancements

  • We should randomize the daily changeover time for current_date_utc based on the generator_public_key to avoid a synchronized network change at 0:00 UTC.

  • The assigned_to field in TokenAssignment is redundant in an InboxMessage because we already know the assigned_to here. We should avoid wasting these bytes.

  • Token recipients can specify a maximum token age, preventing passive accumulation of a large number of tokens over time

  • A duplicate token assignment is evidence of malicious behavior by the generator owner and will disable the token generator

Conclusion

We've now described a mechanism for generating scarce tokens that can be spent to gain access to potentially floodable resources like a message inbox.

Document app/component SDK

  • Change the node distributable crate to include the HTTP gateway and local development node
  • Add a publish-local option to the dev tool so users can check out their contracts easily.
  • Publish a new version of locutus with the executable and standalone CLI tool
  • Publish the locutus stdlib crate
  • Publish the locutus TypeScript stdlib on npm
  • Make a guide using mdbook and ping @sanity so he can check it.

Contract resource usage management

Goal

Peers attempt to relay contracts that they're likely to receive requests for. Relaying means that the peer subscribes to contract updates and can then respond to requests for that contract immediately rather than forwarding the request.

Due to the small world network topology, peers are generally more likely to receive requests for contracts close to them.

Overview

  • Peers track resource usage per-contract
  • Resources include storage, bandwidth, CPU usage, memory
  • Cost computed from resources according to cost function
  • Peers advertise contracts they are relaying to neighbors, who may decide to relay them too

Cost function

A simple cost function that looks at each resource as a proportion of the total available to the peer and takes the maximum of these proportions:

$cost = max(storage_c /storage_t, bandwidth_c/bandwidth_t, cpu_c/cpu_t, memory_c/memory_t)$

  • $resource_c$ = contract resource usage
  • $resource_t$ = total resource availability
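The cost function above transcribes directly: take each resource's usage as a proportion of the peer's total, and keep the maximum. The struct and function names here are illustrative, not from the codebase.

```rust
/// Resource usage (per contract) or availability (per peer), in any
/// consistent units.
pub struct Resources {
    pub storage: f64,
    pub bandwidth: f64,
    pub cpu: f64,
    pub memory: f64,
}

/// cost = max(storage_c/storage_t, bandwidth_c/bandwidth_t,
///            cpu_c/cpu_t, memory_c/memory_t)
pub fn cost(contract: &Resources, total: &Resources) -> f64 {
    [
        contract.storage / total.storage,
        contract.bandwidth / total.bandwidth,
        contract.cpu / total.cpu,
        contract.memory / total.memory,
    ]
    .into_iter()
    .fold(f64::NEG_INFINITY, f64::max)
}
```

Taking the maximum rather than, say, the sum means a contract is priced by its most contended resource: a contract that is cheap on CPU but storage-heavy is still expensive.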

Relay decision

The peer's goal is to relay the contracts that maximize $requestRate / cost$, where $requestRate$ is the average number of requests per time interval.

New contracts will have an unknown $requestRate$, for these we use an isotonic regression to estimate the $requestRate$ for a given distance between the peer and the contract, assuming that the rate will be higher for closer contracts.

If the peer is exposed to a contract with an estimated $requestRate / cost$ higher than that of the worst currently relayed contract, the new contract replaces the existing one.

Related Issues

  • While this mechanism tracks and manages resource usage per-contract, #4 describes a parallel mechanism that tracks and manages resource usage per-peer

Locutus over Tor/Arti or I2P

It would massively improve the privacy of Locutus if users could opt in to running Locutus over Tor/I2P. Once the Tor project finishes Arti, Tor could be integrated directly into Locutus, since Arti is written in Rust. I2P features a protocol called SAM to facilitate communication between I2P, which is written in Java, and applications written in other languages. There is a SAM client library written in Rust at https://github.com/i2p/i2p-rs. Arti is currently on version 0.6.0, and they are aiming for version 1.0 in the future (https://blog.torproject.org/arti_060_released/).

Contract API

/// Verify that the state is valid, given the parameters. This will be used before a peer
/// caches a new State.
fn validate_state(parameters : &[u8], state : &[u8]) -> bool;

/// Verify that a delta is valid - at least as much as possible. The goal is to prevent DDoS of
/// a contract by sending a large number of invalid delta updates. This allows peers
/// to verify a delta before forwarding it.
fn validate_delta(parameters : &[u8], delta : &[u8]) -> bool;

enum UpdateResult {
  VALID_UPDATED, VALID_NO_CHANGE, INVALID,
}
/// Update the state to account for the state_delta, assuming it is valid
fn update_state(parameters : &[u8], state : &mut Vec<u8>, state_delta : &[u8]) -> UpdateResult;

/// Generate a concise summary of a state that can be used to create deltas
/// relative to this state. This allows flexible and efficient state synchronization between peers.
fn summarize_state(parameters : &[u8], state : &[u8]) -> Vec<u8>; // Returns state summary

/// Generate a state_delta using a state_summary from the current state. This along with 
/// summarize_state() allows flexible and efficient state synchronization between peers.
fn get_state_delta(parameters : &[u8], state : &[u8], state_summary : &[u8]) -> Vec<u8>; // Returns state delta

fn update_state_summary(parameters : &[u8], state_summary : &mut Vec<u8>) -> UpdateResult;

struct Related {
  contract_hash : Vec<u8>,
  parameters : Vec<u8>,
}
/// Get other contracts that should also be updated with this state_delta
fn get_related_contracts(parameters : &[u8], state : &[u8], state_delta : &[u8])
    -> Vec<Related>;

Note that the Related struct implies the following approach to generating a contract hash:

contract_hash = hash(hash(contract_wasm) + parameters)

The reason is so that a contract_hash can be created with only a hash of the contract_wasm, not the entire contract_wasm. This should reduce contract sizes.
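The two-level hashing scheme above can be sketched as follows. A real implementation would use a cryptographic hash such as SHA-256; std's DefaultHasher (not cryptographic, and not stable across processes) is used here only to keep the example dependency-free, and all function names are illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for a cryptographic hash over a byte slice.
pub fn hash_bytes(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// The key property: the contract hash is computable from the hash of the
/// wasm alone, without the full wasm bytes.
pub fn contract_hash_from_wasm_hash(wasm_hash: u64, parameters: &[u8]) -> u64 {
    // "+" in the formula is concatenation: hash(contract_wasm) || parameters.
    let mut input = wasm_hash.to_le_bytes().to_vec();
    input.extend_from_slice(parameters);
    hash_bytes(&input)
}

/// contract_hash = hash(hash(contract_wasm) + parameters)
pub fn contract_hash(contract_wasm: &[u8], parameters: &[u8]) -> u64 {
    contract_hash_from_wasm_hash(hash_bytes(contract_wasm), parameters)
}
```

The test below demonstrates the size-reduction argument: both paths, with and without the full wasm, yield the same contract hash.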

Implement join ring op

Add and finalize the ring protocol related ops, including the following functionality:

  • complete net join op between 2 nodes with 0 hops (direct between gateway and connecting peer)
  • complete net join op after randomly traveling through the network to fetch more nodes to avoid local disconnected clusters
  • retry joining other gateway(s) upon initial failure
  • pruning dead connections (this is a responsibility of the connection manager now and we will propagate that dropping through the application somehow else)

Include tests and pass them all:

  • Happy path unit test state transitioning.
  • Broadcast state transition unit test.
  • Connect one node and a gateway unit test.
  • Connect several nodes in a network unit test.
  • Detect unavailability of any network gateways unit test. (I don't think this makes much sense to cover at the ring level either, more at the connection manager layer)

Switch to RocksDB store back end as default storage

In the future, build with both SQLite and RocksDB as possible backends (via a feature flag) in order to compare behavior and performance characteristics. We will probably want to stick with one eventually. RocksDB can be used as a building-block KV store for more complex databases (it is already used as such for distributed databases à la TiKV, for example), but it does not have SQL semantics on top of it, unlike SQLite, which could prove useful.

In any case, demonstrating that we can quickly swap between store backends is beneficial, so we are not stuck with and coupled to one (and since right now we only need to maintain simple key-value semantics, it is rather easy to keep the backend implementation decoupled from the rest of the application, something we would like to continue in the future).

Contract-key API

A WebAssembly contract-key will implement the following functions:

trait ContractKey {
  /// Determine whether this value is valid for this contract
  fn validate_value(value : &Vec<u8>) -> bool;

  /// Determine whether this value is a valid update for this contract. If it is, modify
  /// the value and return true, else return false.
  fn update_value(value: &mut Vec<u8>, value_update: &Vec<u8>) -> bool;

  /// Obtain any other related contracts for this value update. Typically used to ensure 
  /// an update has been fully propagated.
  fn related_contracts(value_update : &Vec<u8>) -> Vec<ContractKey>;

  /// Extract some data from the value and return it.
  ///
  /// eg. `extractor` might contain a byte range, which will be extracted
  /// from the value and returned.
  fn extract(extractor : &Vec<u8>, value : &Vec<u8>) -> Vec<u8>;
}

Proposal for a possible incentive system without launching a new cryptocurrency.

I have noticed that Locutus has no incentive to help the network. I think an incentive system is possible without launching a new cryptocurrency, since as far as I understand Locutus is not a cryptocurrency, but feel free to correct me if I am wrong about that. This incentive system would be based on a simple subtraction problem:

messages you have processed for others - messages others have processed for you

  • in this case, you refers to your local node

If the result is less than zero, then all messages you request after that you owe to the people who processed your messages anyway, and your number will not be positive again until you have paid back those who loaned their computing power to you.
Nodes that do not help the network will receive slower service: in this system, higher trust balances would be processed first, rewarding those who contribute to the network with faster speeds than those who don't.
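The balance and prioritization described above can be sketched in a few lines; names are illustrative, and this is only the local bookkeeping, not the trustchain or DHT storage discussed below.

```rust
/// Balance = messages this node processed for others, minus messages
/// others processed for this node. Negative means the node owes work.
pub fn trust_balance(processed_for_others: i64, others_processed_for_us: i64) -> i64 {
    processed_for_others - others_processed_for_us
}

/// Serve higher-balance peers first: sort pending requests by the
/// requesting peer's balance, descending. (peer id, balance) pairs.
pub fn prioritize(mut requests: Vec<(String, i64)>) -> Vec<(String, i64)> {
    requests.sort_by(|a, b| b.1.cmp(&a.1));
    requests
}
```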

There are multiple methods to achieve this: one is a trustchain, and another is a distributed hash table, which would sacrifice reliability for lower resource use. However, if the trustchain is chosen as the store of trust, it will be necessary to use an opaque trustchain similar to Monero; luckily, ring signatures, stealth addresses, and RingCT are well researched and can be replicated, even if they are not being used to create a cryptocurrency.

If the trustchain is used to store the equation I mentioned, then measures should be implemented to protect the privacy of the trustchain, shielding node operators from repercussions of participating in the protocol, since the trustchain will probably last much longer than the messages themselves. It would be smart to flush older blocks once the trustchain grows beyond a certain size; unspent trust tokens in these "wallets" could then be added to the new genesis block. If this design choice is made, I would recommend flushing the trustchain every 90 days.

Links:
https://www.getmonero.org/resources/moneropedia/ringsignatures.html
https://www.getmonero.org/resources/moneropedia/stealthaddress.html
https://www.getmonero.org/resources/moneropedia/ringCT.html

Best Wishes,
Destroyer

Clean up CI pipeline and testing

The goal of this issue is to have a functional CI pipeline working again, where we can be confident in its results, and to re-enable the tag at the repo level. Changes required:

  • Disable tests in the locutus-node subcrate and incrementally re-enable them as we work on it again; also remove lint warnings from there, since that crate is now out of sync with the state of the repo.
  • All the tests outside of the locutus-node subcrate should be sensible and working; also clean up any linter warnings (including clippy).

Once that is done, add back the tag/label to the README.

Document the app development process

We need to document the process of writing and compiling contracts, and writing web apps that communicate with the node via websocket.

We can keep this low-level and general so that others can use it to build convenience tools like an assemblyscript wrapper for contracts, or a friendly API for websocket communication.

Peer resource usage balancing

  • Peers track CPU and bandwidth usage, attributing it either to the peer's user, or to other peers
    • eg. Peer A sends request to peer B, peer B keeps track of all CPU/bw usage as a result of this request and attributes it to peer A
  • This includes "downstream" resource usage, anything this peer asks other peers to do
  • This means that every response to a request must include an accounting of downstream resources used to answer the request
  • A peer has a user-configurable maximum CPU and upstream/downstream usage limit
  • Peers can request that other peers throttle requests by requiring a minimum time between requests
    • Another peer that violates this request will be disconnected
  • A peer maintains a global minimum request interval, which is determined using an isotonic regression to avoid exceeding the user-configurable CPU/bw limits
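The per-peer throttle described in the bullets above can be sketched as a minimum-interval check; the type name and structure are assumptions for illustration.

```rust
use std::time::{Duration, Instant};

/// Enforces a requested minimum time between requests from one peer.
pub struct Throttle {
    min_interval: Duration,
    last_request: Option<Instant>,
}

impl Throttle {
    pub fn new(min_interval: Duration) -> Self {
        Self { min_interval, last_request: None }
    }

    /// Returns true if the request respects the minimum interval and records
    /// it; per the design above, a peer that keeps violating the throttle
    /// would be disconnected.
    pub fn allow(&mut self, now: Instant) -> bool {
        let ok = match self.last_request {
            Some(prev) => now.duration_since(prev) >= self.min_interval,
            None => true,
        };
        if ok {
            self.last_request = Some(now);
        }
        ok
    }
}
```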

Related

  • While this mechanism tracks and managed resource usage per-peer, #244 describes a parallel mechanism that tracks and manages resource usage per-contract

Prune dead connections

Add functionality at the ring level to prune dead connections when we start to implement the connection manager layer.

Add tags to the repo

Right now, the repo lacks tags describing what Locutus is or does. It would be nice to have tags such as locutus, p2p, and decentralized so the repo is more discoverable (currently, a JS project shows up when searching for projects tagged with "locutus").

Intelligent routing

  • Greedy routing will always pick the closest peer to the desired location.
  • But optimal strategy is to pick the peer that will complete the request fastest
  • Use isotonic regression to estimate response time from a peer based on its ring distance from the target location of the request
  • For each peer, track the average delta between iso estimated response time and actual response time
  • Intelligent routing, for a given request, uses the iso to estimate response time for each peer, then adjusts by the average delta
  • Pick the peer with the lowest estimated response time
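The selection step above can be sketched as follows. The `estimate` closure stands in for the isotonic-regression model (mapping ring distance to predicted response time), and all names are illustrative rather than from the codebase.

```rust
/// A candidate peer: its ring distance from the request's target location
/// and the tracked average delta between estimated and actual response time.
pub struct Peer {
    pub name: String,
    pub ring_distance: f64,
    pub avg_delta_ms: f64,
}

/// Pick the peer with the lowest adjusted response-time estimate:
/// estimate(distance) + average delta for that peer.
pub fn route<F: Fn(f64) -> f64>(peers: &[Peer], estimate: F) -> Option<&Peer> {
    peers.iter().min_by(|a, b| {
        let ta = estimate(a.ring_distance) + a.avg_delta_ms;
        let tb = estimate(b.ring_distance) + b.avg_delta_ms;
        ta.partial_cmp(&tb).expect("estimates must not be NaN")
    })
}
```

Note how this differs from greedy routing: peer B below is farther on the ring, but its history of beating its estimates makes it the better choice.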

Building requires llvm

LLVM must be installed prior to cargo build:

$ sudo apt install llvm

Can this dependency be made explicit in Cargo.toml so that the user gets a useful warning?
