
sia's Introduction

Sia Logo v1.3.3 (Capricorn)

The Sia project has moved to GitLab.

sia's People

Contributors

aiorla, annalissac, avahowell, bitspill, chrisschinnerl, davidvorick, dnetguru, eddiewang, ericflo, glendc, huetsch, joshvorick, lukechampine, marcinja, mharkus, mikkeljuhl, mingling94, mnsl, msevey, mtlynch, nielscastien, petabytestorage, rnabel, rudibs, sanasol, seveibar, sgraves66, tbenz9, triazo, voidingwarranties


sia's Issues

Unsafe Integers

Right now all of the integers that we use are unsafe, meaning they'll overflow and Go won't warn us or return an error.

So you could have an input of 50, one output of uint64 max, and another output of 51, and the program will think that the total is 50 and accept the transaction.

So we need to add checks that prevent integer overflows in all situations. Maybe the 'Currency' type can be extended to return an error if there's an overflow.

Committing to state for debugging reasons.

As far as consensus is concerned, there are only a handful of variables that have significance. Namely, the utxo set and the OpenContract set.

For testing reasons, there needs to be a way to turn these structures deterministically into a single hash. We can implement sort.Interface (Len(), Less(), Swap()) on OutputID and ContractID. Alternatively, we can convert each type to a string before sorting, which might be less work but equally functional.

If, at each block, everyone has the same UnspentOutputs set and the same OpenContracts set, then the network can be said to be in consensus. That will be extremely handy when trying to figure out whether AcceptBlock(), rewindABlock(), etc. are actually behaving consistently.
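A sketch of the string-sorting approach; the OutputID type and the map layout here are illustrative stand-ins for the real structures:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// OutputID stands in for the real ID type; sorting by string form avoids
// implementing sort.Interface on the ID types directly.
type OutputID string

// stateHash deterministically hashes a utxo set regardless of map order.
// A real implementation would length-prefix each field before hashing.
func stateHash(utxos map[OutputID]uint64) [32]byte {
	ids := make([]string, 0, len(utxos))
	for id := range utxos {
		ids = append(ids, string(id))
	}
	sort.Strings(ids)
	h := sha256.New()
	for _, id := range ids {
		h.Write([]byte(id))
		h.Write([]byte(fmt.Sprintf("%d", utxos[OutputID(id)])))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	a := map[OutputID]uint64{"out1": 5, "out2": 10}
	b := map[OutputID]uint64{"out2": 10, "out1": 5}
	fmt.Println(stateHash(a) == stateHash(b)) // true: map order doesn't matter
}
```

Two nodes with the same utxo set then produce the same hash, which is exactly the consensus check described above.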

Code Reorganization

Trying to figure out the best way to organize the code. I think I have a decent solution:


  • types.go --> a purely definitional file containing some types, and a handful of very simple functions that derive things like IDs. This should all be implementation-independent stuff (i.e. defined at the protocol level)
  • encoding.go --> contains methods for encoding and decoding structs and types according to the definitions in the protocol.
  • signatures.go --> crypto stuff, public keys and such
  • hash.go --> crypto stuff, hashing and merkle trees
  • erasure.go --> erasure coding stuff. (not in beta)

  • network.go --> contains functions for communicating with the network
  • state.go --> contains the state, and a bunch of helper structs that are implementation dependent. (eg could vary per implementation). Also contains basic helper functions.
  • synchronize.go --> has everything needed to synchronize to the network, including code for creating the genesis state and for handling reorgs.

  • transactions.go --> everything related to accepting, verifying, and integrating transactions.
  • blocks.go --> everything related to accepting, verifying, integrating, and forking blocks.
  • contracts.go --> contracts are complex enough that they get their own file. Rewinding transactions with contracts, proofs of storage, and rewinding blocks all make calls to contracts.go

  • wallet.go --> a more low-level management of wallets and transactions. Each wallet contains one password from which it knows how to derive some set of signatures. It can look at the blockchain or the state and figure out which unspent outputs it is able to spend. Wallets should be saveable, relocatable, and constant size. A wallet should be able to spend all of its potential outputs without needing any context

  • host.go --> all of the functions and context necessary for being a host on the network. Announcing yourself, submitting storage proofs, responding to client requests for files, etc.
  • client.go --> all of the functions and context necessary for being a client on the network. Creating a host database, finding hosts, creating contracts, uploading files, etc.
  • (consumer.go) --> some functions that allow you to send money from one place to another. Maybe to negotiate with merchants, maybe to set up things like micropayment channels, I'm not fully sure. Maybe all of this stuff should be in wallet.go. We definitely need some function that sends coins from one wallet to another, but I don't feel like wallet.go is the right place to put it. wallet.go is supposed to be more low-level than that. I think I'd be okay with a consumer.go or an agent.go or some name like that which has only a single function that's used for sending money from one place to another.
  • (merchant.go) --> functions that allow basic merchant behaviors. For example, every customer should pay into a different address, which allows the merchant to know who has paid their bills and who hasn't. Not really sure that we need this at all. Merchants can use other currencies. Certainly not going to be implemented by beta or 1.0

Large transactions can fail

Sometimes we'll send something like 25,000,000 coins and it'll produce an invalid signature, but the wallet won't realize it and won't try again.

I think it's a problem with either one of the inputs in particular, or just a problem when you have 30+ inputs to your transaction. Needs more investigation.

State lock to RW mutex

In addition to speeding up a few things, it should also make the code as a whole cleaner. Mutexes are used all over siad, but it's only for writing in two places (the Accept* functions).
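A sketch of the proposed change; the State fields here are illustrative, not the actual struct:

```go
package main

import (
	"fmt"
	"sync"
)

// State guards itself with a sync.RWMutex, so read-only queries can proceed
// concurrently while the two Accept* paths take the exclusive write lock.
type State struct {
	mu     sync.RWMutex
	height int
}

// Height is a reader: many goroutines may hold RLock at once.
func (s *State) Height() int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.height
}

// AcceptBlock is a writer: it needs the exclusive lock.
func (s *State) AcceptBlock() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.height++
}

func main() {
	s := &State{}
	s.AcceptBlock()
	fmt.Println(s.Height()) // 1
}
```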

RPC Registration Interferes with Cobra

When you register an RPC, it disrupts cobra somehow. Uncomment the code in sia-cli/main.go and try running 'sia-cli' - you'll get an error message.

Not sure what's causing the problem.

Please fix this bug after merging #50 to minimize merge conflicts.

Split Network.go into two files

It seems like network.go has a pretty clear distinction between the top half and the bottom half. TCPServer stuff should probably all be in its own file.

Move network package to be a part of siad?

If we move CatchUp() to siad instead of being in siacore, then we should also be able to move the network package over right?

Just something I was thinking about. Seems to make sense to me that the networking code would be in the same library as the daemon. Maybe not though.

Update transaction pool when accepting a block.

If a block is released that has unexpected transactions, some transactions that are already in the transaction pool may become invalid.

Need to remove all conflicts within the transaction pool when integrating a block.
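A sketch of the conflict removal; the Transaction shape and the pool representation are illustrative only:

```go
package main

import "fmt"

// Illustrative types: a transaction spends a set of output IDs.
type OutputID string

type Transaction struct {
	ID     string
	Spends []OutputID
}

// purgeConflicts drops every pooled transaction that spends an output also
// spent by the newly accepted block.
func purgeConflicts(pool, block []Transaction) []Transaction {
	spent := make(map[OutputID]bool)
	for _, txn := range block {
		for _, id := range txn.Spends {
			spent[id] = true
		}
	}
	var kept []Transaction
	for _, txn := range pool {
		conflict := false
		for _, id := range txn.Spends {
			if spent[id] {
				conflict = true
				break
			}
		}
		if !conflict {
			kept = append(kept, txn)
		}
	}
	return kept
}

func main() {
	pool := []Transaction{{"a", []OutputID{"o1"}}, {"b", []OutputID{"o2"}}}
	block := []Transaction{{"c", []OutputID{"o1"}}}
	for _, txn := range purgeConflicts(pool, block) {
		fmt.Println(txn.ID) // only "b" survives
	}
}
```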

Networking, Host, Client, Wallet: Plan of Action

Networking should be split into its own package. The cli can initialize any RPCs as it already does currently. The network object will be removed from the state, making the state completely isolated from the internet.

Calling AcceptTransaction() and AcceptBlock() will return an error. If the error is nil, then the client knows to broadcast the block and do any necessary requests.

Renters rely on seeing the arbitrary data. If they need to re-read arbitrary data in a block that's already gone through, they can use the state API: Block(height) or Block(ID), each of which will return the block at the given height or ID. Hosts need to be able to pair OutputIDs with the amount of coins in them (that information is not included in the txn data). To do this, they make a state API call Output(OutputID), which will return the output with the given ID, containing both the value and the CoinAddress of the output.

For the time being, renter, cli, host, are all going to go in the same package. This may change in the future but such fragmentation seems silly at the current point in time.

Logger

Some bare-bones logging would be nice. Nothing sophisticated. Even the standard logger would probably suffice for now.

Block subsidy spending limit

Bitcoin enforces that the block subsidy should not be spent for 100 blocks. I think that this is a good precedent to follow, but it's not currently implemented.
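A sketch of what the check might look like; the constant and function names are assumptions modeled on Bitcoin's 100-block rule, not actual Sia code:

```go
package main

import (
	"errors"
	"fmt"
)

// SubsidyMaturity mirrors Bitcoin's 100-block precedent.
const SubsidyMaturity = 100

// checkSubsidySpend rejects spending a block subsidy before it matures.
func checkSubsidySpend(subsidyHeight, currentHeight int) error {
	if currentHeight < subsidyHeight+SubsidyMaturity {
		return errors.New("block subsidy is not yet mature")
	}
	return nil
}

func main() {
	fmt.Println(checkSubsidySpend(1000, 1050)) // rejected: only 50 blocks old
	fmt.Println(checkSubsidySpend(1000, 1100)) // allowed: 100 blocks old
}
```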

Package Safety

We should enforce safety by separating the code into multiple packages.

For example, nothing should ever call any function in verify.go except for AcceptBlock and AcceptTransaction. But they're all available to everyone. If verification was in its own package, there would be additional safety.

This is pretty low priority.

Remove pointers from StorageProofs

There's no reason to have the pointers; knowing the size of the file is enough to reconstruct the tree and figure out where the tree is imbalanced.

I'm pretty strongly against having pointers of any kind in the in-band structs. The fact that the signatures are pointers is a fluke and one that we can pretty easily dance around, and we should be much better off once we're using a crypto library where the signatures are byte arrays.

Peer list maintenance

Need to establish connections to a bunch of peers and maintain health in the peer list.

I'm not fully sure what the current algorithm is, but a good algorithm for the beta is probably to connect to a bunch of hardcoded peers, then ask for more peers, and then set a timer that asks for more peers every 2-5 minutes.

It's pretty easy to sybil attack peers, so we'll want something that's robust to that. I don't have a ton of ideas at the moment, but perhaps when requesting peers, they could be batched based on who told you about them?

I'm struggling to think of something that wouldn't be easily exploitable. Perhaps Bitcoin has better solutions, but iirc finding peers without getting DDoS'd is a very hard problem and potentially the weakest link in the Bitcoin protocol.

Travis CI

We should figure out some way to set up Travis CI.

Actually, since this is going to be public, we can probably wait until we publish the code to the rest of the world.

Codebase

I think we should really just have andromeda be the whole deal, including all of the code. We will probably need to pull in some of the code that we already wrote (like the merkle proofs), but I'd like to keep stuff here for the time being.

I don't know what we'll do when we decide it's time to move all the code over to a new repository.

Initial Peer

Need to add a flag for starting a new network, i.e. don't try to bootstrap, catchup, etc. Otherwise you get errors and panics.

State, Consensus State

So I'm trying to figure out how to build this, and this is the idea that I've got:

Have 2 structures, one is the state, which tracks all the transactions and blocks, and the other is the consensus state, which tracks what the state of consensus is.

So the state keeps track of all the verified blocks, and all the not verified blocks, and puts them in a tree that somehow knows which is which. Then, as new transactions come in it keeps them in organized piles that are useful for figuring out consensus state.

Consensus state tracks two things: unspent outputs, and open contracts. From those two things you can arrive at consensus over the validity of every single transaction. We only keep one consensus state, and we keep it on the longest fork and that's it. We have a way to rewind it and build it up again, but we only do this if some other fork gets more work. If the fork ends up being invalid, (because some of the blocks included are invalid), we give up and go back to the original fork.

This means every block and transaction in the consensus state needs to be reversible. I don't think that this will be a difficult thing. It also means we're vulnerable to DoS attacks where constant forks force us to keep reversing long sets of blocks, figuring out a block is invalid, and then reversing the whole way back to the longest valid chain. Each time we do this though is expensive to the attacker, because they are doing enough work to create a block that is invalid. For the time being, I don't think that we need to worry about this attack vector.

The state needs to be in charge of reorganizing the ConsensusState, because only the state will be able to know which transactions are valid but haven't yet appeared in the current state. The state will need to keep track of all transactions, and will need to know which ones are potentially valid, and which ones won't be valid unless there's a reorg. But, it does need to keep track of transactions that become valid as the result of a reorg.

In short, no transaction is ever thrown away, no block is ever thrown away, and at any point the state knows what the current consensus is, and it knows from the list of every single transaction that it's ever seen what the complete list of potentially valid transactions are (potentially valid means they could go into a block and be confirmed without breaking any of the rules of consensus).

Environment Redesign

For now, I'm just going to leave things as-is, but it seems like the design of Environment needs a second look.

Right now it's essentially one monolith, which could either be a good or a bad thing. It definitely needs some mutexes, which is going to be a pain.

Originally I designed states and wallets and hosts with the idea that a single environment could house multiple of each, but that has since transformed into an Environment which is pretty rigid and can't really support multiple of each. This could be for the better, but it happened accidentally and I haven't really had time to consider the consequences.

For the closed beta, the plan is to keep things as steady as possible, but then to do a bit of a rework immediately following.

Rename 'Frequency' Variables to 'Window'

It's confusing because frequency can be inverted: a low number for frequency indicates a high number of proofs.

It'd be less confusing if everything were named in terms of the block window size.

Saving/Loading the blockchain

Once we stop pretending the blockchain can exist in RAM forever, things are going to get tricky. We're going to want to store the blockchain in a database of some sort, and I don't trust us to write one, at least not an efficient one.

It looks like the official Bitcoin client uses LevelDB. There's a detailed write-up of the Bitcoin database here. It's probably in our best interest to stick to this as close as possible (at least wherever our needs are aligned).

LevelDB uses Snappy, a compression library designed by Google. It optimizes for speed over compression ratio (anywhere from 20% to 100% larger than zlib), so it may not be the best choice. Depends on how many reads/writes you're doing, I suppose.

Broadly speaking, my opinion is that our initial client should make design decisions that benefit the average, "non-enthusiast" user. In this case, that would mean choosing a database scheme that minimizes the size of the blockchain on disk, at the expense of slower reads/writes.

Signatures and Marshalling

I moved signatures into their own package but we ran into a problem. Sia marshalling means that the MarshalSignatures function needs to be in the signatures package. I'm not sure if this is the best place for it.

Thinking about it more, I guess it is, although we may want to come up with a different name from MarshalSia(), and give our encoding scheme a formal name or something.

License

We need a license that either makes all of this stuff open source or makes it all property of Nebulous Inc.

I think we're okay picking an open source license for this repo, because our ultimate goal seems to be open sourcing it.

Whitepaper Things

I'm basically using this as a place for notes.

We should mention in the whitepaper that using the blockchain to find hosts is completely optional. A centralized service (like a coin exchange, a forum, a private tracker) is a perfectly acceptable way to find hosts that you are interested in.

Generally though we like having all of the hosts listed in one place because then the random file placement works better.

I am worried about a few things. Simplicity is nice but also scary. With the new system, there's nothing forcing hosts to be available to everyone. It felt more rigorously powerful to force hosts to accept random files, and it felt more rigorously powerful to force files upon random hosts.

It's more powerful because the big players can't leave the system. Super reliable hosts couldn't go off to the side and collect more expensive fees, scaring less reliable hosts away. Also, hosts had no incentive to be "super reliable" if it was significantly more expensive than being "mostly reliable". I'm afraid that people are going to be suboptimally trusting more expensive "super reliable" nodes when in fact the safest behavior is to buy 2 or 3 slots on less reliable nodes than a single larger slot on a super reliable node.

The old Sia didn't have any room for people changing that.

Basically we're trusting that our control of the client will be sufficient to prevent people from doing largely suboptimal things.

I guess that's also an advantage to us though. If our client can produce statistics that are substantially more reliable, cheaper, and faster than other clients', nobody is going to use a competing service. We have a way to stand out without losing customer share to other coins, because our super awesome client won't store on other coins.

Or something. Maybe it will, but only after we take a fee for ourselves again. lol.

It's just nerves though. I think that by being really good we'll be able to stay well ahead of the competition. But that requires actually being really good.

The way IDs are determined

The way that IDs are determined seems to be independent of the actual transaction itself. The information we use guarantees that there won't be a collision within a single chain, but if you don't also use some extra information when deriving the id, then potentially transactions in different forks could have the same id, which means that signatures would still be valid even though the history has changed.

I'm not sure if that's a good thing or a bad thing. It potentially means that the original owner of the coins can be changed without disrupting the future signatures that spend the coins. That leaves room for some interesting attack vectors, though generally not terribly significant. Might be prudent to add the signature or CoinAddress of the previous output or something. Have to think about it.

Transaction Form

Right now we have an ability to sign the miner fee, I want to add a flag that suggests stack/no-stack.

If you don't sign the miner fee, then it doesn't matter and the miner fee can be whatever.
If you sign the miner fee with 'no-stack', then the miner fee must be exactly what you signed.
If you sign the miner fee with 'stack', then the miner fee must be at least what you signed, summed with every miner fee that was signed 'stack'.

Example:

Alice signs 'stack' for a fee of 2 coins.
Bob signs 'stack' for a fee of 2 coins.
The minimum miner fee must then be 4 coins, because at least 2 have come from Bob, and at least 2 have come from Alice, and they must stack.
Charlie signs 'no-stack' for 3 coins. This transaction will now never be valid, because Charlie has stated that the fee must be exactly 3 coins, which is in conflict with what Alice and Bob have mandated.

Alternatively, Charlie could sign 'no-stack' for 5 coins, and the transaction will be valid as long as there are exactly 5 coins in the miner fee. This fits with what Alice and Bob have mandated.

And now that I think about it, we may want to add the 'stack' flag to all outputs. This makes sense for something like charity, where you want to be sure that if you and other parties are adding arbitrary inputs, nobody is going to be setting extra outputs where you don't want them, etc.

We also want a flag for 'all inputs', which means 100% of inputs are signed.
We also want a flag for 'all outputs', which means 100% of outputs are signed.
This prevents unwanted malleability.
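The fee-stacking rules above could be sketched like this; FeeSig and minimumFee are hypothetical names, not part of any actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// FeeSig is illustrative: each signer optionally commits to a fee, with or
// without the proposed 'stack' flag.
type FeeSig struct {
	Fee   uint64
	Stack bool
}

// minimumFee computes the constraint implied by the rules above: stacked
// commitments sum; a no-stack commitment fixes the fee exactly, and must
// not undercut the stacked sum.
func minimumFee(sigs []FeeSig) (fee uint64, exact bool, err error) {
	var stacked, exactFee uint64
	haveExact := false
	for _, s := range sigs {
		if s.Stack {
			stacked += s.Fee
		} else {
			if haveExact && exactFee != s.Fee {
				return 0, false, errors.New("conflicting exact fees")
			}
			exactFee, haveExact = s.Fee, true
		}
	}
	if haveExact {
		if exactFee < stacked {
			return 0, false, errors.New("exact fee below stacked minimum")
		}
		return exactFee, true, nil
	}
	return stacked, false, nil
}

func main() {
	// Alice and Bob stack 2 each; Charlie demands exactly 3: never valid.
	_, _, err := minimumFee([]FeeSig{{2, true}, {2, true}, {3, false}})
	fmt.Println(err != nil) // true
	// Charlie demands exactly 5 instead: valid, fee must be exactly 5.
	fee, exact, _ := minimumFee([]FeeSig{{2, true}, {2, true}, {5, false}})
	fmt.Println(fee, exact) // 5 true
}
```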

Live Bootstrapping

So, let's say that I'm a node, and you announce a block whose parent I don't have. Well, now I need to do the bootstrapping process again and find all of the blocks that I'm missing. So somewhere in AcceptBlock() there needs to be code that deals with orphan blocks.

This is low priority.

A plan for orphans

Assumption: If the orphan is an orphan on the longest chain, calling CatchUp() will be sufficient to download the longest chain.

  1. Do not broadcast orphan blocks until they have been fully connected to the genesis state
  2. When presented with an orphan, ask for the 6 prior blocks, sent one at a time. Nothing more intense than that. (So I make one request for the 6 parents, and then you send me 6 blocks, starting with the parent, then grandparent, etc.)
  3. If the 6-grandparent of the block is still an orphan, call CatchUp()

That's it. If CatchUp() says that you are on the longest chain, you just accept it as true and tell yourself that you are on the longest chain.

The problem here is if you and your peer are both stuck in the same situation, which can happen if the network temporarily forks with ~half on each side of the fork (happens to Bitcoin every once in a while - usually resolves within a block or two). So CatchUp() should really verify with the whole peerlist (or some large randomly selected set) which block is the most recent in the longest fork.

Testing for MerkleFile

No need to do it yet because we're not actually dealing with files, but MerkleFile also needs some tests.

Create target math library

Something that can be used to multiply targets, divide, add, etc. I ended up doing more of that than I initially expected.

Finite precision big.Rats

Right now we use infinite precision floats for calculating depth, which is fine in theory but in practice you end up adding a significant number of bits every time you recalculate the depth. At depth 500 I had a full terminal (on my 3K screen with tiny font!) of digits representing the block depth.

Depth should probably have some sort of 256-bit representation the same way that target does, to prevent the big.Rats from getting out of control. It can't be CPU-friendly to be multiplying a 16-bit number by a 1kb number.

Future Block Handling

Right now if a block is given to us that is in the future, we just ignore it. Instead we should put it in some sort of queue that will attempt AcceptBlock() again once the block is no longer in the future. Maybe we can use threading or something like go func() { sleep(); AcceptBlock() }.

Mutation testing

https://en.wikipedia.org/wiki/Mutation_testing

It seems like there's at least one toolset for helping with this in go.

This is something we don't need to do right away, but should probably do before 1.0. The basic idea is that we intentionally introduce mutations into the code (changing '>' to '<', '+= 1' to '-= 1', adding 15 instead of 16, etc.), and that each mutation should cause a test to fail. If it doesn't, it means your test coverage isn't sufficient and doesn't catch simple programmer errors.

`Catch Up` Doesn't Work

Simply asking for everything after a certain height won't work.

Required process if you are just looking for the longest chain:

  1. Find a common parent
  2. Get everything after that parent.

Required process if you are looking for the parents to an orphan block:

  1. present an orphan block
  2. find a common parent
  3. get everything between common parent and orphan.

The hard part in each of these is communicating back and forth until a common parent is found. A brute-forcy solution would be to present every block ID that you have, and then let the server figure out the common parent from the list of IDs and fill you in.

More elegant things could involve binary searching and multiple messages being sent.

Finding the parents of an orphan isn't exactly a critical problem, but finding the longest chain certainly is. Right now 'catchUp' is only guaranteed to work if you call catchUp(1). If you call catchUp(2) and your block for 2 is different from the other node's block for 2, you will get a bunch of orphan blocks and you won't be able to integrate them into the state, which is why you need to find a common parent.

Also, calling catchUp(1) is likely to result in the sending of potentially gigabytes of data, so there should probably be some mechanism to only request a few blocks at a time.
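The brute-forcy scheme above could be sketched like this; commonParent and the string IDs are illustrative only:

```go
package main

import "fmt"

// commonParent: the caller presents every block ID it has (tip first), and
// the responder returns the first one that appears in its own chain.
func commonParent(myIDs []string, theirChain map[string]bool) (string, bool) {
	for _, id := range myIDs {
		if theirChain[id] {
			return id, true
		}
	}
	return "", false
}

func main() {
	mine := []string{"b5", "b4", "b3", "b2", "genesis"} // newest first
	theirs := map[string]bool{"genesis": true, "b2": true, "b3": true, "c4": true}
	parent, ok := commonParent(mine, theirs)
	fmt.Println(parent, ok) // b3 true: the chains diverge after b3
}
```

Once the common parent is known, the responder can send blocks from that parent forward, a few at a time.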

RPC errors

Do we need them? It'd be nice if the answer was "no." Right now they aren't implemented.
The real question here is: will there be a scenario in which the codepath is dependent on an RPC error? For RPCs like AcceptBlock and AcceptTransaction, no error checking is needed; you just transmit the value, and if the receiver can't process it, too bad. There's no resending. SendBlocks is in a similar boat: if it fails, it'll return 0 blocks, and CatchUp will just try again.

Ultimately, though, I think the answer is going to be "yes." There are a few cases where the requester needs to know if the RPC failed so that it can ask a different peer. So I'll make this low priority for now.

Size Requirements and Rejecting Blocks

We should set the size limit at 1mb per block and 16kb of arbitrary data per block.

Functions need to be written that can gauge each of these numbers and reject any blocks that are in violation.

Exported Fields in Packages

Unfortunately, this is a rather small but many-line issue.

Packages other than siacore should not be able to modify anything within the State, Blocks, or BlockNodes, but need to be able to query all of it... which means unexporting all of the fields and writing a ton of getters.

Kind of gross but ultimately an important move.

We'll hold off for the time being.

Blocks and Block Headers, Transactions

We're going to need to revamp blocks entirely once we implement merge mining, which is something I'm intentionally putting off for a while because I'll need a few days to actually learn everything that goes into merge mining.

For the beta, I think it's just best to have one struct, which is the block and nothing else. Since we won't be creating an actual header struct for a while (after beta release), we can just leave it at hashing the whole block.

Transactions are different, though. We can't sign the whole thing because that would include signing the signature you're creating. Plus we're supposed to be following the covered field stuff anyway. But I don't trust my implementation of covered fields, so we're going to leave that out completely for the beta.

So: leave blocks the way they are, but adjust transactions so you get back the encoded transaction minus the slice of signatures. Except, you append at the tail the one signature being created, containing all the extra signature fields like InputID, PubKeyIndex, etc. I don't know if that makes sense.

Example:

Transaction {
stuff
[]signatures
}

What gets signed:

stuff+signature[i].stuff

where i = whatever signature you are manipulating. The signature doesn't need to include the value of i, just the stuff inside the signature type (except for the signature itself of course).

In my PR I'll comment out all the covered field stuff because it's not going to be implemented for the time being.

Marshalling Stuff

Marshalling and unmarshalling sometimes cause unexpected panics, related to using pointers.

I'm not sure that taking the address of the object is the best solution.

Low priority.

Block & Transaction Version numbers

I was thinking about the best way to do blocks and transactions that allow for updates & soft forks to the protocol.

I think that the best approach is the simple one, where blocks are versioned.

Then, if the client sees a block of a version it doesn't understand, it just includes it and doesn't process any of the transactions, assuming that the work is valid. This allows old clients to keep up with recent blocks, though their utxo set is going to be incomplete.

A lower version block can't spend any outputs that were created in a higher version block. That's all you need though. I'm not sure if lower version clients should be mining on higher version blocks or not. If they do mine on higher version blocks, they might be mining on invalid garbage, which would be an issue.

But if they don't, then they'll be completely useless because their forks will never be accepted by the majority.

All ints as 8 bytes

Right now there's some inconsistency, or at least I'm worried that there is.

To encourage as much consistency as possible, I want all integers to be unsigned, and I want all integers to be encoded as 8 bytes. Trying to marshal an int or any intX type should cause a panic. Marshalling a uintX should be the same as typecasting to a uint64 first.

I know this introduces some unneeded inefficiencies, especially in the places where we use uint8. But I've already made a few mistakes in this regard, and I don't want it to cause problems. We can relax a little bit about the inefficiency, though, because we don't need to send the marshalled types over a wire; we just need to enforce 8-byte-ness while hashing things. Type errors over the wire are less significant.

net.Conn outside of the network package

Just noticed that RPCs use "net.Conn", which I think is not best practice. We should probably wrap it as a type within the network package, so that developers working with the sia network package only need to import one thing. Also modularity, etc.

Last Minute Whitepaper Change

Decided that we need flexible contracts after all. You'll have to work this into the whitepaper somewhere. Concept is pretty simple.

Contracts also have spend conditions which, instead of indicating what signatures and timelock are needed to spend money, indicate which signatures and timelock are necessary to update the contract. Presumably, they'll contain both a host signature and a client signature.

If both the host and the client agree to replace the existing contract with a new contract, the action is executed. Whatever is remaining in the current ContractFund rolls over into the new contract. Funds can be added. The contract should keep its prior ID.

The simplest way to implement this is to just add another transaction field, ContractChanges.

CLI and siad layout

The CLI has an environment object, which contains the state, a network object, a miner object, a renter object, a wallet, and a host object.

Most of those objects need to be able to reference the state. I am split between passing the state as an argument and having a pointer to it in each of the structs, which seems redundant. I am leaning, though, towards having the state pointer be in each struct.

Creating functions for determining the IDs

We should have a function that we can call each time an ID is created. Right now I think I'm just redefining the derivation every time, which is redundant and prone to causing errors.

=/.
