
Livepeer Protocol

Ethereum smart contracts used for the Livepeer protocol. These contracts govern the logic for:

  • Livepeer Token (LPT) ownership
  • Bonding and delegating LPT to elect active workers
  • Distributing inflationary rewards and fees to active participants
  • Time progression in the protocol
  • ETH escrow and ticket validation for a probabilistic micropayment protocol used to pay for transcoding work

Documentation

For a general overview of the protocol refer to the wiki resources.

Development

All contributions and bug fixes are welcome as pull requests back into the repo.

A note on branches as of LIP-73: Confluence - Arbitrum One Migration:

  • The confluence branch contains the latest contract code deployed on the Arbitrum One rollup. Since the core protocol operates on Arbitrum One going forward, all contract code changes pertaining to the core protocol should be made on this branch.
  • The streamflow branch contains the latest contract code deployed on Ethereum. Since the only operational (unpaused) contracts on Ethereum, excluding the Controller, are the LivepeerToken and the BridgeMinter, the only contract code changes on this branch should be for those contracts.

The Arbitrum bridge contracts can be found in the arbitrum-lpt-bridge repository.

ERC20 Note

The Livepeer token is implemented as an ERC20 token in token/LivepeerToken.sol, which inherits from the OpenZeppelin ERC20 token contract; all of its implemented ERC20 functions revert if an operation is not successful. However, the ERC20 spec does not require functions to revert: it instead requires them to return true if an operation succeeds and false if it fails. The contracts bonding/BondingManager.sol and token/Minter.sol do not check the return value of ERC20 functions and instead assume that they will revert on failure. The Livepeer token contract is already deployed on mainnet and its implementation should not change, so this is not a problem today. However, if for some reason the implementation ever does change, developers should keep in mind that bonding/BondingManager.sol and token/Minter.sol do not check the return values of ERC20 functions.

Install

Make sure Node.js (>=v12.0) is installed.

git clone https://github.com/livepeer/protocol.git
cd protocol
yarn

Build

Compile the contracts and build artifacts used for testing and deployment.

yarn compile

Clean

Remove existing build artifacts.

yarn clean

Lint

The project uses ESLint for JavaScript linting and Solium for Solidity linting.

yarn lint

Run Tests

All tests are executed via Hardhat.

Make sure to add the relevant API keys to a .env file (created by copying the provided .env.example) to support tests and deployments.

To run all tests:

yarn test

To run unit tests only:

yarn test:unit

To run integration tests only:

yarn test:integration

To run gas reporting tests (via hardhat-gas-reporter) only:

yarn test:gas

To run tests with coverage (via solidity-coverage) reporting:

yarn test:coverage

Deployment

Make sure that an ETH node is accessible and that the network being deployed to is supported by the hardhat.config.ts configuration.

export LPT_DEPLOYMENT_EXPORT_PATH=~/Development/lpt_contracts.json
yarn deploy


Issues

Should bond(), unbond(), transcoder()... be defined in IBondingManager?

I was updating the whitepaper with the Livepeer transactions, and I noticed that these common transactions aren't in the IBondingManager interface. Should the interface be updated to include them, or is it only meant to include the external functions called by the Jobs and Rounds managers?

Loop safety measures

There are several spots in the protocol where we loop through a number of rounds to update state since the last time the state transitioned. For example when a delegator changes their bonding status we loop through every round since the last time they did so.

We're going to have to add limits to these loops and allow callers to invoke the function multiple times in order to catch up to the current state. Otherwise we risk making these transactions impossible to complete due to gas limits.
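The bounded-loop pattern described above can be sketched as follows. This is a hypothetical Python model, not the contract code: `MAX_ROUNDS_PER_CALL`, the delegator dictionary layout, and `claim_token_pools_shares` are all illustrative names.

```python
# Hypothetical sketch of a bounded catch-up loop: instead of iterating over
# every round since the last update (which can exceed the block gas limit),
# each call processes at most MAX_ROUNDS_PER_CALL rounds and callers invoke
# the function repeatedly until the delegator's state is current.
MAX_ROUNDS_PER_CALL = 100

def claim_token_pools_shares(delegator, current_round):
    """Advance delegator state by at most MAX_ROUNDS_PER_CALL rounds.

    Returns True once the delegator is fully caught up."""
    end_round = min(delegator["last_claim_round"] + MAX_ROUNDS_PER_CALL,
                    current_round)
    for r in range(delegator["last_claim_round"] + 1, end_round + 1):
        # Credit any pending rewards accrued in round r.
        delegator["stake"] += delegator["pending_rewards"].get(r, 0)
    delegator["last_claim_round"] = end_round
    return delegator["last_claim_round"] == current_round
```

A caller that is many rounds behind simply calls the function in a loop (one transaction per call on chain) until it returns True.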

Think about the incentives of calling EndJob

The incentives for calling EndJob() are currently unclear. Who would call it, the transcoder or the broadcaster and would they be incentivized to call it such that the benefits outweigh the costs of an on-chain transaction?

Transcoder's share of block reward and fees without factoring in its bonded stake

Note: these thoughts exclusively refer to block reward, but I think they apply to both block reward and fees

Currently, a transcoder's block reward share is:
(mintedTokens * blockRewardCut) + ((delegatorsRewardShare * transcoderBond) / transcoderTotalStake), where delegatorsRewardShare = mintedTokens * (100 - blockRewardCut). A delegator's block reward share is (delegatorsRewardShare * delegatorBond) / transcoderTotalStake.

This slightly complicates transcoder block reward share calculation since we also have to calculate the transcoder's share of the block reward as a delegator (i.e. (delegatorsRewardShare * transcoderBond) / transcoderTotalStake). It also seems like there is a good chance that a transcoder's bonded stake will increase at a faster rate than a delegator's bonded stake meaning over time the delegator's block reward share will fall unless a) the delegator bonds additional tokens or b) the transcoder decreases the blockRewardCut. If the delegator's block reward share is continually falling, the delegator will just move to a different transcoder. Option A requires the delegator to purchase more tokens, so option B seems more likely. Over time, in order to provide adequate block reward share to delegators, a transcoder could end up decreasing the blockRewardCut to 0 and exclusively get block reward based on it delegated to itself.

An alternative approach that might reduce complexity is for a transcoder's block reward share to just be: mintedTokens * blockRewardCut. Rather than, also earning block reward based on its own bonded stake (i.e. (delegatorsRewardShare * transcoderBond) / transcoderTotalStake), a transcoder would set blockRewardCut to a percentage that factors in whatever amount of stake it itself has bonded. A delegator's block reward share would become (delegatorsRewardShare * delegatorBond) / transcoderTotalDelegatedStake such that the transcoder's bonded stake is not included in transcoderTotalDelegatedStake.
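A small worked example may help compare the two schemes. All values below are illustrative (percentages are treated as whole numbers out of 100, matching the formulas above):

```python
# Worked comparison of the current scheme vs. the proposed alternative.
minted = 1000                 # mintedTokens for the round (illustrative)
cut = 10                      # blockRewardCut, as a percent
t_bond, d_bond = 300, 700     # transcoder's own bond, one delegator's bond
total = t_bond + d_bond       # transcoderTotalStake

# Current scheme: the transcoder earns the cut AND a delegator-style share
# on its own bonded stake.
delegators_share = minted * (100 - cut) // 100          # 900
t_current = minted * cut // 100 + delegators_share * t_bond // total
d_current = delegators_share * d_bond // total

# Alternative scheme: the transcoder earns only the cut; delegators split
# the remainder pro rata over delegated stake only (here a single delegator,
# so transcoderTotalDelegatedStake == d_bond).
t_alt = minted * cut // 100
d_alt = delegators_share * d_bond // d_bond
```

Under the current scheme the transcoder takes 370 of the 1000 minted tokens; under the alternative it takes only 100, and would instead raise its blockRewardCut to account for its own bonded stake.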

Replace test values with realistic default values for time related variables

Some of our protocol variables such as roundLength, unbondingPeriod and jobEndingPeriod are time related and have values denominated in blocks. In our tests, we use a helper function to fast forward a certain number of blocks by repeatedly using TestRPC's evm_mine request to mine a block. A large number of evm_mine requests slows down our tests a lot, so currently we use smaller test values for these protocol variables.

We should replace these smaller test values with realistic default values at a later point in time.

Transcoder shouldn't be able to update their pending fees/price immediately before new round

There needs to be a period of time where delegators can review a transcoders fees when they aren't subject to change. Otherwise there is an attack where transcoders advertise great rates, and then switch them at the last minute before being elected in.

I think the simplest temporary implementation might be that the transcoder() function throws if you're within transcoderFreezeBlocks of the start of the next round. Perhaps this is a couple hours worth? It's not great UX though.

The other option is that this is just defended against socially. If people notice this happening, then they delegate away, and we don't need to update the implementation.

Finally, people could set a client side flag that says that if a transcoder changes the rates beyond a certain threshold, their client automatically watches and unbonds.

Reward transcoders based on proportional stake

Active transcoders that call reward should receive block reward relative to their proportional stake of the total stake of all active transcoders. At the moment, all active transcoders receive the same amount of block reward when calling reward.

Restrictions on withdrawing job deposits

Currently, a broadcaster can withdraw deposits at any time using the withdraw method. As is, this method is a vulnerability because a broadcaster could call withdraw after a transcoder has done work, but before it actually claims the work on-chain (which would transfer funds from the broadcaster's deposit to job escrow). Tracking this here as a reminder to fix it.

Transcoder Election Rounds

Transcoders should be elected in rounds. This overlaps and borrows from #3 which states:

As for progression of the rounds, a good starting point may be to follow the simple casper viper contract which does a nice job of keeping track of the current validators, next validators, and next-next validators. It just shuffles through these arrays by reassigning their state as the protocol proceeds.

I believe the logic there is as follows:

  • When stake is bonded, this updates the next-next-dynasty state.
  • When a new dynasty begins, the next-dynasty becomes the current-dynasty, and the next-next-dynasty becomes the next-dynasty; next-next gets reset to an empty state.

This means that current and next are locked in, while next-next continues to fluctuate. The side effects of this are that:

  • Nodes must wait at least 2 rounds before becoming a transcoder.
  • There is also a delay in unbonding: when you unbond, you can't affect the current or next round's transcoder selections.

Since the protocol dictates N, or how many transcoders are active at one time, and the current set of transcoders cannot change, the protocol can always determine who the valid transcoder is for a given cycle. The challenge, however, is keeping track of this efficiently on chain. If N is 50, then we need to continuously determine the top 50 delegated transcoders for the next-next round while the list is being built.

One idea is that the contract keeps the top 2 * N transcoders by delegated stake and updates this set on new Bond() and Unbond() transactions. Additionally, during the RoundLength prior to the start of the next round, any transcoder can claim their spot in the top N of next purely by invoking a transaction. Since at that point no more stake could have been delegated toward them for that round, they are just recording the evidence on chain that they are in the top N.
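The Casper-style rotation described above can be modeled in a few lines. This is a sketch with illustrative names, not the contract implementation:

```python
# Sketch of the dynasty-style rotation: bonding only ever affects the
# next-next set, and each new round shifts the sets forward by one.
class TranscoderSets:
    def __init__(self):
        self.current, self.next, self.next_next = set(), set(), set()

    def bond(self, transcoder):
        # New stake only influences the next-next round's transcoder set.
        self.next_next.add(transcoder)

    def new_round(self):
        # next -> current, next-next -> next, next-next resets to empty.
        self.current = self.next
        self.next = self.next_next
        self.next_next = set()
```

Note how a freshly bonded transcoder must wait two `new_round()` calls before appearing in `current`, matching the "wait at least 2 rounds" side effect above.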

Package specific version of TestRPC

We can include a specific version of TestRPC as a dependency in package.json so that a fresh clone of the protocol repo followed by npm install always results in the same version of TestRPC being used. This should also eliminate the need for a Docker image of a specific TestRPC version for CircleCI.

Updates to feeShare, blockRewardCut, and pricePerSegment in transcoder() should be pending

We need to differentiate between pending state and active state in transcoder() settings. One way to handle this is that whatever the transcoder sets when they call this function is considered pending. When initializeRound() is called, setCurrentActiveTranscoders() can copy the pending parameters into the active parameters for the active transcoders inside the loop.

I'm not certain whether the active states should live inside the Transcoder objects themselves, or whether the active states should be copied into some separate ActiveTranscoders structure. I could go either way.

Upgrade to solidity 0.4.13

Update the pragma statements in each of the solidity files to force use of the latest compiler version to get optimizations and bug fixes.

Events

We should define events so that clients can listen for occurrences of common actions on chain.

  • Job()
  • EndJob()
  • ClaimJob()
  • VerificationPassed()
  • VerificationFailed()
  • Deposit()
  • Withdrawl()
  • Bond()
  • Unbond()
  • Slash()
  • Reward()
  • Transcoder()
  • NewRound()

I'm sure there are more.

Should Return JobID and StreamID in NewJob event

The jobID never gets passed back to the broadcaster, and the broadcaster needs it to be able to call EndJob() in case the transcoder isn't responsive. It needs the StreamID to figure out which JobID to choose when monitoring the event.

Transcoder fee vs. broadcaster's max price per segment

Currently, the price per segment of a job is always the transcoder's fee. As a result, even if the broadcaster's max price per segment is greater than the transcoder fee, the transcoder will still only earn the fee it originally charged.

In this case, the broadcaster benefits because it is always charged the lowest fees, but the transcoder loses out on additional returns if the broadcaster is willing to pay more i.e. (broadcaster's max price per segment > transcoder's fee).

At first glance, I think the current design makes sense because transcoders set their fee based upon what they believe the minimum compensation for their work should be. Thus, as long as the transcoder receives this minimum compensation, it would be happy. Opening this up for future discussion, though.

Bonding and unbonding

The two functions that should be exposed to users for this are:

  • Bond()
  • Unbond()

The various states that accounts can be in are Inactive -> Pending Bonding -> Bonded -> Unbonding -> Inactive.

The Inactive state is essentially Livepeer Protocol unaware. This is just a user holding token, but not participating in the protocol.

After the user calls the Bond() function, they should move into the Pending Bonding state. The protocol proceeds in rounds, so prior to the start of the next round they remain in the Pending Bonding state, and then when the next round begins they move into the Bonded state.

After they call Unbond(), they move into the Unbonding state where their token is tied up for UnbondingPeriod() time. At the completion of the UnbondingPeriod, they can withdraw their funds by calling Withdraw(), and they are in the Inactive state.

We should debate this, but for now a call to Bond() must take an address of a transcoder as input, and the full bonded stake will be delegated towards that transcoder. If that transcoder is not known to the protocol as the result of a Transcoder() call, then the Bond() transaction should throw so that the funds are returned and the transaction fails.

As for progression of the rounds, a good starting point may be to follow the simple casper viper contract which does a nice job of keeping track of the current validators, next validators, and next-next validators. It just shuffles through these arrays by reassigning their state as the protocol proceeds.
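The state machine described above can be sketched as a small model. The class, method names, and the unbonding period value are illustrative, not the contract's actual interface:

```python
# Sketch of the delegator lifecycle:
# Inactive -> Pending Bonding -> Bonded -> Unbonding -> Inactive.
UNBONDING_PERIOD = 2  # rounds; illustrative value

class Delegator:
    def __init__(self):
        self.state = "Inactive"
        self.withdraw_round = None

    def bond(self):
        assert self.state == "Inactive"
        self.state = "PendingBonding"

    def new_round(self):
        # Pending bonds activate at the start of the next round.
        if self.state == "PendingBonding":
            self.state = "Bonded"

    def unbond(self, current_round):
        assert self.state == "Bonded"
        self.state = "Unbonding"
        self.withdraw_round = current_round + UNBONDING_PERIOD

    def withdraw(self, current_round):
        # Funds are locked until the unbonding period has elapsed.
        assert self.state == "Unbonding"
        assert current_round >= self.withdraw_round
        self.state = "Inactive"
```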

Jobs Should Contain Multiple Video Profiles / Prices

Currently each job has one price and one video profile.

During adaptive bitrate streaming, we want to transcode the original stream into many different streams. In the current design, those streams will likely be assigned to different transcoders, which is inefficient. (For example, you can gain efficiency by transcoding the same segment into a variety of different bitrates.)

We should make the change in the protocol to allow for multiple video profiles in the same job.

Note, this introduces a change in the verification process, where verify() should also take in the video profile.

ERC23 Transfer Fallback Support?

What do you all think about adding ERC23 support to the LivepeerToken contract so that users don't lose token if they accidentally send it to the contract address?

This video does a good job of explaining it, and it seems like it only adds an additional method + interface in order to provide this protection. There will be extra gas for using the transfer function, however, since it always has to check whether it is transferring to a contract.

Transcoder info retrieval

Related to: livepeer/go-livepeer#81

Since the active transcoder array is an array of structs and Solidity does not support contracts externally returning structs yet, we need a custom getter function to grab the active transcoder addresses. Then, the active transcoder addresses can be used to retrieve the associated stats.

We should also be able to retrieve transcoder stats for non-active transcoders. We should be able to define custom getter functions for these transcoder addresses as well and then use them to grab the associated stats.

Livepeer Token

The Livepeer Token (LPT) should be a standard Ethereum ERC20 token which is extended for mintability so that new token can be issued by the mint.

In Livepeer's case, the mint will be the Livepeer Protocol contract, such that the Reward() function can mint new token to the Transcoder and Delegators.

In order to take advantage of peer reviewed code, we can use OpenZeppelin's StandardToken, Ownable, and MintableToken contracts as a starting point.

This design takes most of the logic out of the token and moves it into the LivepeerProtocol itself. One slight negative is that contracts will have to use the approve() and transferFrom() methods to do two-step transactions when staking token. See ERC 223 for proposals to get around this issue.

Reward function

Active transcoders (the N with the most delegated stake) are expected to call the Reward() function in order to distribute the newly minted Livepeer Token when it is their turn. If they don't, they are subject to slashing.

For this pass, let's implement Reward() as follows:

  • Check if it is actually this transcoder's turn to call reward.
    • Use the block hash from the final Reward() call at the end of the previous round as random input.
    • Use this to determine random ordering of active_transcoders.
    • Use N and CyclesPerRound to determine the valid block/time window for a given transcoder to call Reward().
    • When the transcoder calls Reward(), make sure that they are in active_transcoders and that it was their turn.
  • Make sure that they haven't already called Reward() in this cycle.
  • Distribute the token to the transcoder and their stakers. Leave the exact implementation of token distribution as TBD, as it depends on the transcoder's BlockRewardCut, fees, and slashing.

Effectively doing the token distribution under the gas limit may be challenging for large numbers of delegators. We should perhaps consider calculating a delegator's share when they go to Unbond(), rather than looping through every cycle.
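The turn-taking check outlined above can be sketched as follows. This is an illustrative model: the function names, the seeded shuffle, and the fixed-size block window per transcoder are assumptions, not the implemented scheme.

```python
# Sketch of the Reward() turn check: shuffle active transcoders with the
# previous round's seed (e.g. a block hash), then give each a fixed-size
# window of blocks within the cycle.
import random

def reward_order(active_transcoders, seed):
    """Deterministic pseudo-random ordering for a given round's seed."""
    rng = random.Random(seed)
    order = list(active_transcoders)
    rng.shuffle(order)
    return order

def is_turn(transcoder, active_transcoders, seed, block_in_cycle, window):
    """True iff `transcoder` owns the window containing `block_in_cycle`."""
    order = reward_order(active_transcoders, seed)
    slot = block_in_cycle // window
    return slot < len(order) and order[slot] == transcoder
```

Because the ordering is a pure function of the seed, every node (and the contract) recomputes the same schedule, so a late or duplicate Reward() call can be rejected.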

Verification interface

The plan is to eventually use Truebit for scalable verification of selected segments. However, prior to Truebit's implementation, we should create a common interface that wraps various potential backends for verification. In the short term, options for implementations include an Oracle, 3x transcoder verification, and even the trivial "true" verifier that just stubs a result for testing. The interface should probably specify:

  • The swarm hash of the verification code to run
  • The swarm hash of the input data
  • The function address to invoke with the output
  • The interface for providing the output (is it also a Swarm hash?)

The interface is also likely payable, as we'd have to pay to use Truebit or an Oracle to do verification.

Verify() and proof of transcoding

The verify() transaction is what a transcoder calls to provide proof that they transcoded a particular segment. The inputs are:

  • streamID (or jobID)
  • segmentSequenceNumber
  • hash of transcoded output for that segment
  • signed transcode claim (which includes signed segment data from original broadcaster)
  • merkle proof that this transcoded segment is included in the merkle root from the preceding endJob() call.

The function should then:

  1. Verify that this segment for this job id is in fact eligible for verification (can be deterministically known based upon the endJob() call).
  2. Verify that the original segment was signed by the broadcaster.
  3. Verify that the merkle proof is valid and this segment was claimed in endJob() before the transcoder knew this would be challenged.
  4. Invoke the verification process. To do so it will also need the swarm hashes of the locations of the original and transcoded segments. These should be determinable from the signed transcode claim from the broadcaster.

This is non-trivial and requires some on-chain merkle proof and signature verification. We can track those as separate issues; they would be good opportunities to contribute back to a common library like OpenZeppelin.
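The merkle proof check in step 3 can be sketched as follows. This uses SHA-256 and a simple (sibling, position) proof layout for illustration; the real contract would use keccak256 over ABI-encoded segment data.

```python
# Sketch of merkle proof verification: recompute the root from a claimed
# leaf and its sibling path, and compare against the committed root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof, root: bytes) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left_child) pairs,
    ordered from the leaf level up to the root."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Tiny 2-leaf tree to exercise the check.
l0, l1 = h(b"segment-0"), h(b"segment-1")
root = h(l0 + l1)
```

With this layout, proving inclusion of `b"segment-1"` requires only its sibling hash `l0`, regardless of how many segments were claimed: proof size grows logarithmically with the claim range.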

Transcoder with 0 delegators block reward share and fee share

Right now, a transcoder that has 0 delegators, blockRewardCut < 100, and feeShare > 0 will take its block reward and fee shares for currentRound and leave the rest for its delegators in transcoders[transcoder].tokenPoolsPerRound[currentRound]. However, since the transcoder has no delegators, no one will ever be able to claim these tokens.

In this case, I think the transcoder should just take all the fees and rewards. If its delegator count > 0, then the transcoder will only claim its own share of the block reward and fee share.

Livepeer Protocol initialization

For the first pass of the Livepeer Protocol contract we should initialize the contract state with all the required state variables. See the protocol parameters section of the whitepaper for a list of the params that should be initialized.

Additionally the initializer should

  • Deploy a new version of the LivepeerToken contract, and set itself as the owner.
  • Set the address of the Truebit contract and any other external dependencies.
  • Initialize genesis state of the token.
  • Initialize initial transcoders.

It's debatable whether it should deploy a new token contract, or if an existing token contract can just update the owner permanently to a newly deployed LivepeerProtocol contract. Keeping them separate would allow token distribution to occur prior to the launch of the network, and also allow the genesis state setup to be established outside of the protocol logic.

Additional restrictions on how many tokens the Minter can mint in a round

#75 includes a Minter contract that acts as a trusted token escrow for users that make deposits with the JobsManager and bond stake as a delegator with the BondingManager. Additionally, it mints tokens when called by the BondingManager. The Minter assumes it can trust the BondingManager because the associated contract address is registered with the Controller, however, it might be beneficial to include additional restrictions to guarantee that the Minter cannot mint more than a certain number of tokens in a round.

EndJob() and transcoding claim

The endJob() transaction allows the transcoder to claim which segments they transcoded for a particular stream. The params are:

  • streamID (or maybe jobID is better)
  • startSequenceNumber
  • endSequenceNumber - these two params provide the range of segments they're claiming
  • merkle root of the segments they are claiming.

When this segment range is claimed, it should kick off a timer window for verification to be completed. When the block is mined, it should be deterministic which segments will be challenged (if any), based on the new block hash, verification rate, and merkle root (or other appropriate sources of unpredictability). The challenged segments don't necessarily need to be stored in state, as long as they can be calculated deterministically after the fact, though if easier to store them in state then that is ok.

The verification will kick off in a call to Verify() to be tracked in a separate issue.
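The deterministic challenge selection described above can be sketched like this. The hash layout and selection rule are illustrative assumptions: the point is only that anyone can recompute the challenged set after the claim block is mined.

```python
# Sketch of deterministic challenge selection: once the claim transaction is
# mined, the challenged segments follow from the block hash, the claim's
# merkle root, and the verification rate, so nothing needs to be stored.
import hashlib

def challenged_segments(block_hash: bytes, merkle_root: bytes,
                        start_seq: int, end_seq: int,
                        verification_rate: int):
    """Return segment numbers in [start_seq, end_seq] selected for
    verification; on average 1 in verification_rate segments."""
    selected = []
    for seq in range(start_seq, end_seq + 1):
        digest = hashlib.sha256(
            block_hash + merkle_root + seq.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") % verification_rate == 0:
            selected.append(seq)
    return selected
```

Because the function is pure, the contract can verify on demand that a given segment was challenged rather than persisting the challenged set in state.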

Gas accounting statistics

We should run through the protocol, especially the job-claim-verify loop, and see what the total gas costs are given realistic gas price dynamics. Additionally, we should check how much each step costs to find any bottlenecks where we can optimize gas usage.

Transcoders might want to control how much can be delegated to them

@ericxtang brought up this idea and we discussed it briefly. Thought I'd document some of the thoughts here.

A transcoder might want to control how much can be delegated to them. One reason might be as a defense against a reputation attack from a whale. Consider this extreme case: suppose a transcoder just registered on-chain and has a hardware setup that is sufficient for a light transcoding workload. A whale can delegate a large amount to the transcoder, drastically increasing the new transcoder's total stake, pushing the transcoder into the active transcoder set in the next round. The transcoder's total stake might now be so large that it receives a much larger proportion of transcoding jobs than it was originally expecting. The transcoder's hardware setup is not powerful enough to support the number of jobs the transcoder is being elected for. As a result, the transcoder is unable to properly fulfill these jobs and takes a reputation hit. All this happens, with basically zero cost for the whale that delegates to the transcoder. The whale might even economically benefit from the attack because the transcoder will call reward and distribute a reward share to the whale if the transcoder's blockRewardCut < 100. The transcoder could set its blockRewardCut to 100, but that would not come into effect until after another round has passed.

Enhance transcoder() function to include job parameters.

Enhance the transcoder() function to include _feeShare and _pricePerSegment.

In the future we will want to use gas accounting rather than a flat _pricePerSegment, but for now this is a reasonable starting point.

_feeShare should be given as a percent. For example, if they pass it in as 5, then they will share 5% of the fees with their delegators.

_pricePerSegment can be given in LPT base units (we need a name for that since 1 LPT == 10^18 base units).
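A quick illustration of the units above, with all concrete values chosen arbitrarily for the example:

```python
# feeShare as a whole-number percent; pricePerSegment in LPT base units,
# where 1 LPT == 10**18 base units.
LPT = 10 ** 18

fee_share = 5                        # transcoder shares 5% of fees
price_per_segment = LPT // 1000      # e.g. 0.001 LPT per segment

fees_earned = 200 * price_per_segment          # 200 segments transcoded
delegator_fees = fees_earned * fee_share // 100
transcoder_fees = fees_earned - delegator_fees
```

Doing all arithmetic in base units with integer division mirrors what the contracts must do, since the EVM has no fractional types.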

Upgrade to OpenZeppelin v1.0.5

The latest release of OpenZeppelin makes SafeMath a library. This will make SafeMath functions easy to integrate into our heap libraries and any other libraries that we write in the future. Furthermore, the library functions are easier to use and clearer to read because the syntax goes from safeAdd(a, b) to a.add(b) thanks to the using SafeMath for uint directive.

Heap implementation for transcoder pools

Yondon designed a nice heap implementation for keeping track of the active and reserve transcoders here: #4 (comment)

If possible, this would either make use of an existing solidity heap implementation if there is a suitable one, or else it would be designed as a library that could be reused in our, and other projects.

Delegators not eligible for fees until the round after they bond

After delegating to a transcoder by calling bond, a delegator does not transition to the Bonded state until the following round. If the target transcoder is an active transcoder, the delegator's bond does not affect the active transcoder's total stake until the following round. If the target transcoder is a candidate transcoder, the delegator's bond will increase the candidate transcoder's total stake, but the candidate transcoder cannot become an active transcoder until the following round. Thus, it follows that a delegator should not earn fees or rewards until the following round.

This would simplify the logic for checking whether a delegator is eligible to claim fees: instead of checking whether a delegator delegated to a transcoder before the block at which a claim was submitted, we can check whether delegator.startRound is before the round at which the claim was submitted. This would allow us to aggregate fees into a pool per round; a delegator is only eligible for a round's fee pool if delegator.startRound is less than or equal to the fee pool's round. Note: this would require distributeFees to allocate fees to the round during which claims were made rather than the round during which distributeFees is actually called.
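The simplified eligibility rule can be sketched in a few lines (names are illustrative, not the contract's API):

```python
# Sketch of per-round fee pool eligibility: a delegator shares in a round's
# pool only if its startRound is at or before the round the claim was made.
def eligible_for_fee_pool(delegator_start_round: int,
                          claim_round: int) -> bool:
    return delegator_start_round <= claim_round

def delegator_fee_share(fee_pool: int, delegator_stake: int,
                        total_stake: int, delegator_start_round: int,
                        claim_round: int) -> int:
    if not eligible_for_fee_pool(delegator_start_round, claim_round):
        return 0
    # Pro-rata share of the round's pool, in base units.
    return fee_pool * delegator_stake // total_stake
```

The round comparison replaces the per-block bookkeeping the issue describes, which is what makes per-round pools possible.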

currentRound calculated when initializeRound() is called

Talked with @dob briefly about this - for now, the currentRound value is set when initializeRound() is called. This means there is a chance for someone to "sneak in" a stake during the last round if initializeRound() hasn't been called yet during the current round. Is this intentional?

One way to get around this is to make currentRound() a function call, so checks always do a real-time calculation instead of depending on another function having been called.
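The real-time calculation is straightforward, assuming rounds are denominated in blocks (as elsewhere in the protocol):

```python
# Sketch: derive the current round directly from the block number rather
# than from state set by initializeRound().
def current_round(block_number: int, round_length: int) -> int:
    return block_number // round_length

def current_round_start_block(block_number: int, round_length: int) -> int:
    return current_round(block_number, round_length) * round_length
```

Because the value is computed on every check, there is no window between the start of a round and the initializeRound() call during which stale round state could be exploited.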

Register all contracts in protocol registry

Currently we are only registering the manager contracts (JobsManager, BondingManager, RoundsManager) in the protocol registry. The LivepeerToken and Verifier contracts should also be registered. The benefits of doing so:

  • The node would only need to know the protocol contract address - it can use the registry to discover the addresses of all other contracts including the token contract
  • The type of Verifier contract can be easily swapped out since the JobsManager would just ask the protocol registry for the address of the current Verifier contract

Moving tokens internally across contracts

Currently, JobsManager transfers tokens to BondingManager when a transcoder calls distributeFees. It is a little weird that users "deposit" tokens in both JobsManager and BondingManager. Perhaps it makes more sense to keep token management in a single contract, whether it be BondingManager or some other treasury contract, to reduce inter-contract communication and reduce the number of contracts that a user needs to approve for token transfers.

Job() request transaction

The job() function is how broadcasters request transcoding jobs on chain. The inputs are:

  • streamID
  • transcoding options
  • max price/segment

streamID lets the transcoding node know which stream to request on the network

Transcoding option semantics are TBD. Let's assume it's a binary string of fixed length for now, and we can interpret values such as 0x1, 0x2, etc. later.

The function should consider all transcoders who are currently available and charging a price/segment less than or equal to the max price/segment provided, and choose one (ideally randomly). The function should populate the metadata associated with a Job and store it so the contract knows which jobs are open, who they are assigned to, and what their status is in terms of claiming work and verification. This likely requires a new Job struct. Jobs can also be assigned a unique ID so that they can be retrieved from the contract by ID.
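The selection step can be sketched as follows. This is an illustrative model: the names and the seeded choice are assumptions, and a real contract would need an on-chain randomness source rather than a PRNG.

```python
# Sketch of transcoder assignment for a job: filter available transcoders
# by the broadcaster's max price per segment, then pick one
# pseudo-randomly and deterministically for a given seed.
import random

def assign_transcoder(transcoders, max_price_per_segment, seed):
    """transcoders: mapping of transcoder id -> price per segment.
    Returns the chosen transcoder id, or None if no one qualifies."""
    candidates = [t for t, price in transcoders.items()
                  if price <= max_price_per_segment]
    if not candidates:
        return None
    # Sort before choosing so the result is independent of dict order.
    return random.Random(seed).choice(sorted(candidates))
```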

Unbonding a portion of bonded stake without withdrawing as transcoder

Transcoders (and probably delegators) should be able to unbond a portion of their bonded stake to cover costs without unbonding entirely and losing their place as a transcoder. Either we enable this, or we deposit block rewards and fees into the unbonded state. What do you think?
