dex-contracts's Introduction

Build Status Coverage Status

Gnosis Protocol - Smart Contracts

The Gnosis Protocol Exchange is a fully decentralized trading protocol that facilitates ring trades via discrete auctions between several ERC20 token pairs.

It uses a batch auction for arbitrage-free exchanges while maximizing trader surplus to facilitate the development of a fairer Web3 ecosystem for everyone.

Documentation

Check out the Smart Contract Documentation.

Audit report

The audit report can be found here.

CLI Examples

Check out the wiki.

Deployment Process

For the deployment of the contracts into an official network, follow these steps:

  1. Make sure that all dependent contracts and libraries (e.g. BytesLib) have been deployed to the intended network and that their network information is available in the npm modules

  2. Run the following commands

yarn install                        # This installs all dependencies
npx truffle build                   # This builds the contracts
npx truffle migrate --network $NETWORKNAME --reset
yarn run networks-extract           # extracts deployed addresses to networks.json

If you are building for a local development network, ganache has to be running locally. For this, you can run in a separate shell:

yarn run ganache # start a development network (blocking)

If you want to deploy the contracts with an already existing fee token (tokenId 0), you can set the env variable

export FEE_TOKEN_ADDRESS=...

before running the migration script.

  3. Verify the contracts for some cool Etherscan.io goodies (see below for more help)
npx truffle run verify BatchExchange --network $NETWORKNAME
  4. List some default tokens on the StableX exchange
npx truffle exec scripts/add_token_list.js --network $NETWORKNAME

Verifying Contracts

In order to verify a contract on Etherscan.io, you first need to create an account and an API key:

  1. Navigate to https://etherscan.io/myapikey
  2. Login or create an account
  3. Generate a new API key
  4. Add export MY_ETHERSCAN_API_KEY="..." to your ~/.zshrc, ~/.bash_profile, or similar

Note: if you have a specific contract address in mind (i.e. one which is not specified in networks.json), it may be referred to by address as

npx truffle run verify $CONTRACT_NAME@$CONTRACT_ADDRESS --network $NETWORKNAME

Retrieving previous deployments

In order to use the previously deployed contracts, which are documented in the networks.json file, the following steps are necessary:

  1. Build the contracts:
npx truffle compile
  2. Inject addresses from networks.json into the builds:
yarn run networks-inject

Deploying a simple market maker scenario to Rinkeby:

The following script places a simple market maker order and the necessary OWL order to enable trading:

# Get token ID of DAI
npx truffle exec scripts/invokeViewFunction.js 'tokenAddressToIdMap' '0x5592EC0cfb4dbc12D3aB100b257153436a1f0FEa' --network rinkeby

# Export the resulting token ID
export TOKEN_ID_DAI=[Result from last call]

# Get token ID of TrueUSD
npx truffle exec scripts/invokeViewFunction.js 'tokenAddressToIdMap' '0x0000000000085d4780B73119b644AE5ecd22b376' --network rinkeby

# Export the resulting token ID
export TOKEN_ID_TUSD=[Result from last call]

# Make sure that the users have deposited sufficient funds into the exchange
# Please be aware that the specified amounts are multiples of 10**18
npx truffle exec scripts/deposit.js --accountId=0 --tokenId=0 --amount=30 --network rinkeby && \
npx truffle exec scripts/deposit.js --accountId=0 --tokenId=$TOKEN_ID_TUSD --amount=100 --network rinkeby

# Place a market-maker order in the current auction
# This simulates a strategy expected from market makers: trading stable coins against each other
# with a spread of 0.02 percent
npx truffle exec scripts/place_order.js --accountId=0 --buyToken=$TOKEN_ID_DAI --sellToken=$TOKEN_ID_TUSD --minBuy=1000 --maxSell=998 --validFor=20 --network rinkeby

# Place owl token order for the fee mechanism
npx truffle exec scripts/place_order.js --accountId=0 --buyToken=$TOKEN_ID_DAI --sellToken=0 --minBuy=1000 --maxSell=1000 --validFor=20 --network rinkeby

Then, the market order can be placed after switching to another account. Usually, this is expected to happen via the UI. If it is done via the console, the following commands can be used:

# Deposit funds into exchange:
npx truffle exec scripts/deposit.js --accountId=0 --tokenId=$TOKEN_ID_DAI --amount=100 --network rinkeby

# Place market order with 1/2 limit-price
npx truffle exec scripts/place_order.js --accountId=1 --buyToken=$TOKEN_ID_TUSD --sellToken=$TOKEN_ID_DAI --minBuy=500 --maxSell=1000 --validFor=5 --network rinkeby
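As the script comments suggest, the limit price encoded by a place_order call is the ratio minBuy / maxSell. A small sketch of the arithmetic for the two orders above (the helper name is hypothetical, not part of the scripts):

```javascript
// Hypothetical helper: a place_order call with minBuy and maxSell
// encodes the limit price minBuy / maxSell.
const limitPrice = (minBuy, maxSell) => minBuy / maxSell;

// Market-maker order from the previous section: minBuy=1000, maxSell=998.
console.log(limitPrice(1000, 998).toFixed(3)); // "1.002"

// Market order above: minBuy=500, maxSell=1000, i.e. the 1/2 limit price.
console.log(limitPrice(500, 1000)); // 0.5
```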

Now, the market can be inspected by:

# view the market status:
npx truffle exec scripts/get_auction_elements.js --network rinkeby

And the output should look like this:

[ { user: '0x740a98f8f4fae0986fb3264fe4aacf94ac1ee96f',
    sellTokenBalance: 100000000000000000000,
    buyToken: 7,
    sellToken: 3,
    validFrom: 5247563,
    validUntil: 5247583,
    priceNumerator: 1e+21,
    priceDenominator: 998000000000000000000,
    remainingAmount: 998000000000000000000 },
  { user: '0x740a98f8f4fae0986fb3264fe4aacf94ac1ee96f',
    sellTokenBalance: 30000000000000000000,
    buyToken: 7,
    sellToken: 0,
    validFrom: 5247563,
    validUntil: 5247583,
    priceNumerator: 1e+21,
    priceDenominator: 1e+21,
    remainingAmount: 1e+21 },
  { user: 'account',
    sellTokenBalance: 100000000000000000000,
    buyToken: 3,
    sellToken: 7,
    validFrom: 5247750,
    validUntil: 5247755,
    priceNumerator: 500000000000000000000,
    priceDenominator: 1e+21,
    remainingAmount: 1e+21 } ]

Building on top of BatchExchange

The integration of the Gnosis Protocol contracts into your own truffle project is demonstrated here: https://github.com/gnosis/dex-contracts-integration-example. This repository contains a minimal truffle project that allows building on top of the contracts. Please consult its readme for further information.

Contributions

The continuous integration is running several linters which must pass in order to make a contribution to this repo. For your convenience there is a pre-commit hook file contained in the project's root directory. You can make your life easier by executing the following command after cloning this project (it will ensure your changes pass linting before allowing commits).

cp pre-commit .git/hooks/
chmod +x .git/hooks/pre-commit

For any other questions, comments or concerns please feel free to contact any of the project admins.

dex-contracts's People

Contributors

anxolin, bh2smith, c3rnst, cgewecke, davidalbela, denisgranha, dependabot-preview[bot], e00e, fedgiac, fleupold, josojo, nlordell, pdobacz, rafanator


dex-contracts's Issues

Emit Event on Contract Deployment

This event is required by the Event listener to initialize all account balances to 0 for each accountId and tokenId.

This should emit the

event SnappInitialization(bytes32 stateHash, uint8 maxTokens, uint16 maxAccounts);

It might also make sense to include these values as part of the constructor's parameters.

constructor (bytes32 _stateInit, uint8 _maxTokens, uint16 _maxAccounts) public {
    // The initial state should be Pederson hash of an empty balance array
    stateRoots.push(_stateInit);
    MAX_ACCOUNT_ID = _maxAccounts;
    MAX_TOKENS = _maxTokens;
    emit SnappInitialization(_stateInit, _maxTokens, _maxAccounts);
}

but then we couldn't declare MAX_ACCOUNT_ID and MAX_TOKENS as constants... Is this a problem, @josojo?

[Feature] Decentralized Dispute Resolution Mechanism

The purpose of this issue is to track different ways with which we can resolve disputes about the correctness of a state transition or auction settlements.

The main aspects of a solution are:

  • order hash - rolling hash of all orders that were considered in this solution
  • price vector - each value denotes the price of token_i to some arbitrary reference - e.g. $
  • volumes - list of buy and sell volumes for each order that is part of order hash
  • surplus - the trader surplus that is generated with this solution
  • newAccountRootHash - hash of all account balances after the solution has been applied
  • new starting order hash - Depending on #85, a new batch might start with some orders carried over from the previous batch.

Since deposits and withdrawals are plain state transitions, they can be seen as a sub-problem of auction settlement, and thus a suitable dispute resolution method for the latter should work for them as well.

The main points of a solution that can be challenged are:

  • State Transition - balances under newAccountRootHash are not updated according to trading volumes
  • Negative Account Balance - after applying the trades, some accounts end up with a negative balance. This is temporarily possible (e.g. if a sell order is processed before the balance is filled by a later buy order), but must not be the case at the end of an auction settlement.
  • Price Coherence - the ratio of buy and sell volume doesn't match the ratio of price of token A to price of token B in the price vector. This would mean an order is not filled at the uniform clearing price.
  • Limit Price Respected - the ratio of executed buy and sell volume is at least as favorable as the ratio of buy and sell amounts specified as a limit in the original order.
  • Surplus - The surplus actually generated by the solution is not the one claimed at submission time
  • Conservation of Value - the sum of total buy and sell volume for a given token is not 0 (i.e. tokens are burned or minted out of nowhere)
  • Order Hash - the computation of the order hash (according to #85 and #86) on which this solution is based (or of the starting order hash for the next batch) is incorrect.

Each challenge can be made independently of the others, and one successful challenge is enough to render a solution invalid.

The different ways to implement resolution can be discussed below.

Place & Update Special Order Batch

Reserved account holders need to be able to update their standing orders.

Special orders are to be submitted as a collection of up to

10 = (MAX_ORDERS - 500) / MAX_RESERVED_ACCOUNTS

They would be treated as a loop, and the placeSpecialOrder function would require that the order request comes from a sender with a valid reserved account. It would also need to emit an event indefiniteOrder(accountId, buyToken_i, sellToken_i, buyAmount_i, sellAmount_i, index).
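The arithmetic behind the cap of 10 can be sketched as follows; MAX_ORDERS = 1000 and the 500 unreserved slots are assumptions chosen to reproduce the quoted formula, not values taken from the contracts:

```javascript
// Sketch only: MAX_ORDERS and UNRESERVED_SLOTS are assumed values;
// MAX_RESERVED_ACCOUNTS = 50 is the figure used elsewhere in this document.
const MAX_ORDERS = 1000;
const UNRESERVED_SLOTS = 500;
const MAX_RESERVED_ACCOUNTS = 50;

const specialOrdersPerAccount =
  (MAX_ORDERS - UNRESERVED_SLOTS) / MAX_RESERVED_ACCOUNTS;

console.log(specialOrdersPerAccount); // 10
```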

Order Collection Parameters

As we know, there are several different types of orders that can be placed in a batch auction.

Categories (Market vs Limit)
Types (Buy, Sell & Double sided)

The question here for @fleupold @twalth3r and @josojo is which types we would like to accept and how will we do this?

Assuming we only handle Limit orders (Buy, Sell & Double Sided), it should suffice to use the following:

function submitLimitOrder(
    uint8 buyToken,
    uint8 sellToken,
    uint128 maxSellAmount,
    uint128 maxBuyAmount,
    uint128 priceBuyToken,
    uint128 priceSellToken
)

Notice that specifying a positive integer for both sell and buy amount would constitute a double-sided order, whereas setting one or the other to zero would uniquely define the order as a sell or buy order respectively (i.e. maxBuyAmount = 0 could be interpreted as a Sell order and maxSellAmount = 0 as a Buy order). This could be considered somewhat misleading (since the 0's would actually represent a value of infinity), but it should cover all possible cases of limit order definitions.
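The zero-sentinel convention described above can be sketched with a hypothetical helper (not part of the contracts):

```javascript
// Hypothetical helper mirroring the convention above: a zero in one
// field means "unbounded" on that side and classifies the order.
function classifyLimitOrder(maxBuyAmount, maxSellAmount) {
  if (maxBuyAmount === 0 && maxSellAmount === 0) {
    throw new Error("order has no bound at all");
  }
  if (maxBuyAmount === 0) return "sell";
  if (maxSellAmount === 0) return "buy";
  return "double-sided";
}

console.log(classifyLimitOrder(0, 100));   // "sell"
console.log(classifyLimitOrder(100, 0));   // "buy"
console.log(classifyLimitOrder(100, 100)); // "double-sided"
```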

Purge ClaimableWithdrawSlots

Once the bitmap for a claimable withdraw slot has no remaining "claimable" entries, this information can be purged. Example code might be:

function purgeClaimableWithdrawal(uint slot) public {
    for (uint i = 0; i < 100; i++) {
        // Abort if any entry in this slot has not been claimed yet.
        if (!claimableWithdraws[slot].claimedBitmap[i]) {
            return;
        }
    }
    delete claimableWithdraws[slot];
}

Order Batch Compatibility for Reserved Accounts

"Create 50 mappings for storing the order information of reserved accounts
mapping (orderhash + nonce -> [from, to]), getting updated by each order renewal, and a currentPointer"

Perhaps the mapping would involve accountId, orderHash (for up to 10 orders), auctionStart, auctionEnd. This could be stored as a list of structs:

struct SpecialOrderBatch {
    bytes32 orderHash;
    uint auctionStart;
    uint auctionEnd;
    uint size; // needed?
}
mapping (uint => SpecialOrderBatch[]) reservedAccountOrders; // keyed by accountId

[Scripts] Make SetupEnvironment Faster

Currently the setup environment script takes a long time because it sets up the environment for a default of 10 accounts. The account count should be a configuration parameter, which could speed the script up by 5x.

Auction Settlement

Contract should allow for State Transition as a result of auction results.

This would involve:

  1. Input parameters (oldStateRoot, newStateRoot, prices, fulfilmentVector)
  2. Checks similar to deposit and withdraw processing, for example:
  • that the current state root agrees
  • that the auction batch is an inactive slot
  3. Emit event(s) with price & volume (i.e. settlement) information

Deposit Slot Limit

Since one needs to provide snark proofs for all state transitions, there is a technical limit on the number of deposits that can be included/applied from a single deposit slot.

Thus, a counter must be included for each active deposit hash so that technical limitations of the SNARK technology are not exceeded.
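A minimal sketch of such a per-slot counter, assuming a hypothetical cap MAX_DEPOSITS_PER_SLOT (the real limit would come from the SNARK circuit):

```javascript
// Sketch only: MAX_DEPOSITS_PER_SLOT is an assumed SNARK-imposed cap,
// not a value taken from the contracts.
const MAX_DEPOSITS_PER_SLOT = 100;
const depositCounts = new Map(); // slot index -> number of deposits

function recordDeposit(slot) {
  const count = depositCounts.get(slot) || 0;
  if (count >= MAX_DEPOSITS_PER_SLOT) {
    // Further deposits would exceed what one SNARK proof can cover.
    throw new Error("deposit slot full");
  }
  depositCounts.set(slot, count + 1);
  return count + 1;
}
```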

Deposit Scripts

Some useful scripts would include

  • apply_deposits; taking depositSlot, newStateHash as arguments
  • request_withdraw; taking accountId, tokenId and amount as arguments

Some more fundamental level scripts could include the separation of setup_environment into register_account, register_token, fund_account and approve_contract, but the arguments will be less generic (i.e. not accepting integer arguments, but addresses etc...)

[Refactor] objectId indexing (tokenId and accountId)

Currently we are experiencing inconveniences in the off-chain services because tokenId and accountId start at 1 instead of zero.

It would be nice if this was sorted out at the source of confusion (i.e. here in the contract).

Don't issue batches with 0 elements

At the moment we create a batch upon contract creation for deposits, withdraws and orders.

After 20 blocks all these batches get finalized even if there were no items added. We should only start a batch if there are items to be processed at the end of it.


[Easy] Linking Library to contract deploys contract

In the following migration script, there is no need to keep line 9 (since the linking somehow deploys the contract to the same address as the Library).

module.exports = function (deployer) {
  deployer.deploy(BiMap).then(() => {
    deployer.link(BiMap, SnappAuction).then(() => {
      deployer.deploy(SnappAuction)
    })
  })
}

This could equivalently be migrated as

module.exports = function (deployer) {
  deployer.deploy(BiMap).then(() => {
    deployer.link(BiMap, SnappAuction)
  })
}

[Snapp] Optimistic: Rollback account state on successful dispute

Rollback on Successful Dispute

Because this trading engine offers decentralized order matching, a dispute-resolution mechanism must be implemented.
The anchor contract is incapable of distinguishing valid from invalid solution proposals and assumes validity by default.
When a challenge is successful, there must be a possibility to roll back or, at least, ignore faulty state transitions.

The smart contract only knows a compressed representation of the account balances on the exchange at any given time and is thus incapable of determining when a balance update (i.e. state transition) is valid.

Setting the stage

In the current (MVP) implementation of the SnappBase contract, account states are stored as an array of hashes

bytes32 [] stateRoots;

There are three types of state transition (Deposits, Withdraws and Auction Settlement). All transition types are similar in the sense that they result from processing a batch of collected requests. However, no two of them are treated identically, so each must be considered individually when implementing dispute resolution. Below is a brief summary of each transition type with their subtle differences highlighted!

  1. Deposit

    • Deposit requests are collected in a batch via requestDeposit.
    • ! Funds are immediately transferred to the contract upon request. !
    • The sender's new account balance is only reflected in the stateRoot once the batch of requests has been processed (via applyDeposits).
      This process suggests that deposit request batches cannot be forgotten in the event of a rollback
  2. Withdraw

    • Withdraw requests are collected in a batch via requestWithdrawal.
    • Funds are NOT taken out of the account
    • The sender's new account balance is only reflected in the stateRoot once the batch of requests has been processed (via applyWithdrawals).
    • ! Funds are only transferred out of the contract after the withdrawals have been applied (via claimWithdraw) ! This implies that withdraw transitions must be finalized before funds can be claimed.
  3. Auction Settlement

    • limit orders are collected in a batch via placeOrder
    • Orders are put into an optimization problem; solutions are proposed by many solvers, collected by the contract, and organized in the form of a max heap.
    • After the solution proposal period has ended, the contract selects the "best" solution by popping the proposal heap.
      • Note: if there are no solutions, the NullSolution is applied.
    • The proposer provides the solution information and new state hash (via applyAuction).

Finalization

The simplest imaginable form of state finalization is to impose a fixed time-delay T (aka challenge period) for each state update, disabling any additional updates until the challenge period for the current state has passed. Such finalization is agnostic to the transition types and allows one to carefully consider the implementation details of rollback individually, without any additional complexity.

An alternative, slightly more complex, might be to consider the definition of finalization differently for each individual transition type. This idea will be investigated after giving implementation proposals for the simple case.
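The simple fixed challenge-period rule can be sketched as follows; the period length and in-memory state shape are assumptions for illustration, not the contract's actual storage layout:

```javascript
// Sketch only: CHALLENGE_PERIOD (T) is an assumed value, in seconds.
const CHALLENGE_PERIOD = 1200;

function canApplyNextTransition(lastUpdateTime, now) {
  // No further state update is accepted until the current update's
  // challenge window has fully elapsed.
  return now >= lastUpdateTime + CHALLENGE_PERIOD;
}

console.log(canApplyNextTransition(1000000, 1000100)); // false
console.log(canApplyNextTransition(1000000, 1001300)); // true
```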

Challenge & Resolution

One key factor in dispute resolution is the number of transactions required for the contract to verify whether a rollback is required. This essentially breaks down into single vs. multiple transaction validation. For example, based on the discussions in #87, we see that snark-resolution could be validated in a "single" transaction (if the deployment of the proof contract is considered separately), whereas on-chain validation often requires multiple transactions (due to payload constraints and the overall block-gas limit). When a fraud proof can be validated in a single transaction, it suffices to implement the challenge, validation and rollback all within a single function. However, for multi-transaction fraud-proof validation, rollback requires a few steps such as:

  • challenge state by index and raise flag.
  • provide pieces of proof in steps.
  • if state turns out invalid (after several steps), rollback.

The multi-transaction case inevitably implies that the time-delay must be reset (or extended in some way) on each successful step in the fraud-proof (giving the prover sufficient time to continue). In this case one may want to consider the possibility of allowing someone to provide a legitimacy proof to terminate the dispute process. Such time extension could also result in a DOS attack if not implemented properly.

It will likely turn out that a hybrid solution is needed, supporting rollback for both single and multi-transaction validation. This means both scenarios must be considered. Notice also that single-transaction validation may not even require the challenger to bond their proof.

[Feature] Rollover Orders

In order to lower transaction costs for market makers, we need a way to post an order that will be automatically rolled over into subsequent batches (unless filled).

There a bunch of ways to achieve this. The goal of this issue is to capture the different solutions and pros & cons.

@josojo @bh2smith @twalth3r to subscribe.

Reserved Accounts

Register/unregister account places. Reuse the existing address mapping, making the first 50 (= MAX_RESERVED_ACCOUNTS) slots be considered "reserved account slots". Might be as simple as defining a constant.

[Optimization] Unnecessary Bitmap on applyWithdrawals

The Event Listener's database contains sufficient information regarding which withdrawal requests are legitimate (i.e. will be fulfilled and contained in the Merkle Root). This implies we could save MAX_WITHDRAW_BATCH_SIZE bytes of data being transferred to the applyWithdrawals function.

This would also require a slight change of logic in the claimWithdraw function as well.

Currently, the inclusion of the bitMap as a parameter of applyWithdrawals is purely a convenience.

[Question] Unreserved Order Batch Updates

In placeStandingSellOrder, the unreserved order batch update is checked for as follows:

if (
    auctionIndex == MAX_UINT ||
    block.timestamp > (auctions[auctionIndex].creationTimestamp + 3 minutes)
) {
    createNewPendingBatch();
}

whereas, in regular placeOrder, the batch is updated as

if (
    auctionIndex == MAX_UINT ||
    auctions[auctionIndex].size == maxUnreservedOrderCount() ||
    block.timestamp > (auctions[auctionIndex].creationTimestamp + 3 minutes)
) {
    createNewPendingBatch();
}

Questions

  1. Do we need to update the unreserved order batch index during placement of a reserved order? Is this intended to update the auctionIndex?
  2. Is there a reason why the second condition
auctions[auctionIndex].size == maxUnreservedOrderCount()

is missing from the other one?
  3. If question 2 results in "this is a mistake", can we do the following:

    function updateAuctionIndex() private {
        if (
            auctionIndex == MAX_UINT ||
            auctions[auctionIndex].size == maxUnreservedOrderCount() ||
            block.timestamp > (auctions[auctionIndex].creationTimestamp + 3 minutes)
        ) {
            createNewPendingBatch();
        }
    }

[Feature] Order Cancellations

The purpose of this issue is to discuss how order cancellation should work. Cancellation is definitely needed for rollover orders (in case they are no longer wanted). This issue is therefore somewhat dependent on #85.

However, it would also be nice to be able to cancel one-off orders in the current batch assuming the batch hasn't been closed yet.

Subscribing @bh2smith @twalth3r @josojo

Deposit Index Gap

Having deposit slots determined by the integer division blockNumber / 20 creates massive gaps in the deposit index. For example, (as of this date) the first non-empty deposit index would be 7064235 / 20 = 353211. Furthermore, whenever there is a lull in deposits, there will be empty deposit blocks between different indices.

This can be avoided by using a depositHashes array (rather than a mapping) which is continually appended to at the appropriate time (whether by block time or because the deposit limit is reached).
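The slot arithmetic from the paragraph above:

```javascript
// Integer division of the block number by 20 makes the first non-empty
// deposit index enormous.
const BLOCKS_PER_SLOT = 20;
const depositSlot = (blockNumber) => Math.floor(blockNumber / BLOCKS_PER_SLOT);

console.log(depositSlot(7064235)); // 353211
```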

Multi-order Submission for Regular Accounts

"Order placement function accepts arrays of order parameters and throws several events."

Currently users must submit their orders individually, paying gas fees for each order transaction. It would be nice to be able to submit multiple orders in a single transaction (this would reduce the gas costs for multiple orders and also enable us to unit test what happens when an order batch is filled before the batch closes). In this case, several events of type SellOrder would be emitted.

Order Collection

One should be able to submit a limit order of the following form

  • Buy at most X of token i for token j if price p_{i, j} < P
  • Sell at most Y of token i for token j if price p_{i, j} >= P

@twalth3r could you please confirm the order types that should be collected by the contract?

Expressing a Sell Order in terms of integers should be as follows:

  • buyToken: uint8
  • sellToken: uint8
  • buyAmount: uint
  • sellAmount: uint
  • priceBuyToken: uint
  • priceSellToken: uint

State Transition to emit slot

Since the deposits, withdraws and auctions all lie in a current slot, the Event listener would need to know not only which type of state transition occurred, but also which slot was applied.

This would imply the inclusion of uint slot as part of the StateTransition Event.

100% Branch Coverage

Somewhere in applyDeposits, our unit tests do not cover all possible branches. I suspect that it has to do with the requirement that slots be applied sequentially (since this condition has an OR and we may not have tested both scenarios).

[Feature] Order Stream

Streamlining orders

The following idea outlines a concept for order collection and order cancelations. The higher level goal is to, generally, be able to consider more orders in a batch and to make order submissions cheaper.

Order collection
Let's assume that there is a stream of orders being submitted by the traders. These orders are sent to the anchor contract and hashed with the rolling hash technique. However, there is not just one rolling hash per batch. Rather, orders are sequentially put into order sub-batches consisting of n orders, and for each sub-batch a rolling hash is calculated. Order sub-batches are always fully filled; only at the time of closing an auction batch is the current intermediate hash of the open sub-batch stored as well. This allows determining which orders of the sub-batches are included in an auction batch.

Every 3 minutes a batch is closed. Once a batch has closed, the orders of the batch are determined: all orders submitted in the current order sub-batch plus the p-1 previous order sub-batches are considered. In total, a maximum of p * n orders can be considered for the optimization problem.

Solution submission
Every solution submission will only contain the matched orders. In total, only L (L<<np) orders are allowed to be matched.
Solutions will have the following fields:

  • surplus,
  • prices,
  • order indices (each index representing the order's position among the n*p orders)
  • indices-hash: hash of all indices, calculated on-chain.
  • rolling hash of matched orders
  • trading volume per order

Besides the usual challenges for a solution, there needs to be a challenge determining whether the matched orders are really a subset of the given orders and whether the orders have not been matched or canceled before:

Subset Challenge
Given the n*p considered orders and the order indices of the solution, any client can recalculate the rolling hash of the matched orders. In case the rolling hash of matched orders was not given correctly, the rolling hash needs to be recalculated on chain. For this, the challenger will submit one transaction per order sub-batch. In each transaction, the following logic is checked:

  • list of matched order indices hashes to the indices-hash
  • the orders in the payload hash to the rolling hash of the order sub-batch
  • the rolling hash of matched orders is updated by hashing in all orders at the matched order indices of this order sub-batch.

After the p transactions, the calculated order hash and the hash submitted in the solution must be the same if the solution was valid.

Matched Challenge
In case the order was matched before, a reference to the solution and its matched index can be verified on chain.

Reasonable numbers for the protocol would be:
n = 4000, as then a sub-batch-verifying transaction would cost 4000 * 884 gas in payload alone.
p = 8
L = 1000

In this case, the order indices of the solution submission would only cost 1000 * log2(4000*8) ≈ 15000 bits. The number of bits could be reduced by using bloom filter techniques.
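The payload figure can be reproduced as a back-of-the-envelope calculation (using the bit width of an index into the n*p considered orders):

```javascript
// L matched-order indices, each wide enough to address one of the
// n*p considered orders.
const n = 4000, p = 8, L = 1000;

const bitsPerIndex = Math.ceil(Math.log2(n * p)); // log2(32000) ≈ 14.97 -> 15
const totalBits = L * bitsPerIndex;

console.log(totalBits); // 15000
```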

Order cancelation
Each account maintains on chain a cancelation hash encoding all of its cancelations. Each trader can update the hash by providing the complete list of all indices of canceled orders. This list will be hashed and then stored as the cancelation hash. Each account has several cancelation hashes, one for each batch within the finalization period.

Inclusion proofs of canceled orders in solution submissions have the following logic:

  • It points out the exact order index of the matched order, which was already canceled
  • it needs to verify all data from the order sub-batch containing the order, to recover the exact owner of the order
  • For this owner, the canceled indices from the cancelation hash need to be resubmitted and checked that they hash to the cancelation hash of the owner
  • The canceled indices must contain the matched order.

Order roll-overs are not part of the protocol. However, an order is eligible for several batches, depending on how quickly the next n*p orders are submitted.

Fee mechanism
Compatible with basically any discussed fee model.

Analysis
Orderbook clearing attack:

The order book can only be cleared if n*p new orders are put into the system. With n = 4000 and p = 8, this would cost 32000 * 884 gas. With higher gas prices, this is already around several hundred dollars.

If clogging / order book clearance becomes a problem for the exchange, we could always increase p.
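The clearing cost quoted above, as a quick calculation:

```javascript
// n*p fresh orders at the quoted ~884 gas of payload each.
const n = 4000, p = 8, GAS_PER_ORDER = 884;

const clearingGas = n * p * GAS_PER_ORDER;
console.log(clearingGas); // 28288000
```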

Options
Streams of order blobs may be running in parallel with different fee models, as for example an order space renting model. In this model, only certain traders would be allowed to submit an order to certain streams. In each batch, the last p blobs of each stream would be considered for finding the optimal solution.

[Bug] Coverage Fails on waitForNSeconds

Observe that this function

const waitForNSeconds = async function(seconds, web3Provider = web3) {
  const currentBlock = await web3Provider.eth.getBlockNumber()
  const currentTime = (await web3Provider.eth.getBlock(currentBlock)).timestamp
  await send("evm_mine", [currentTime + seconds], web3Provider)
}

when changed to

const waitForNSeconds = async function(seconds, web3Provider=web3) {
  const currentBlock = await web3Provider.eth.getBlockNumber()
  const currentTime = (await web3Provider.eth.getBlock(currentBlock)).timestamp
  console.log("Current Time: ", currentTime)
  await send("evm_mine", [currentTime + seconds], web3Provider)
  const nextBlock = await web3Provider.eth.getBlockNumber()
  console.log("New Time: ", (await web3Provider.eth.getBlock(nextBlock)).timestamp)
}

outputs

Current Time:  1559735013
New Time:  1559735013

when running ./node_modules/.bin/solidity-coverage

but actually works when running truffle test

Subscribing @fleupold or @josojo. Any ideas?

[Easy] Word Change

I would find this method MUCH easier to read if standingOrderBatch were replaced with currentOrderBatch.

uint currentBatchIndex = standingOrders[accountId].currentBatchIndex;
StandingOrderBatch memory standingOrderBatch = standingOrders[accountId].reservedAccountOrders[currentBatchIndex];
if (auctionIndex > standingOrderBatch.validFromIndex) {
    currentBatchIndex = currentBatchIndex + 1;
    standingOrders[accountId].currentBatchIndex = currentBatchIndex;
    standingOrderBatch = standingOrders[accountId].reservedAccountOrders[currentBatchIndex];
    standingOrderBatch.validFromIndex = auctionIndex;
    standingOrderBatch.orderHash = orderHash;
} else {
    standingOrderBatch.orderHash = orderHash;
}
// TODO: The case auctionIndex < standingOrderBatch.validFromIndex can happen once roll-backs are implemented.
// Then we have to revert the order placement.
standingOrders[accountId].reservedAccountOrders[currentBatchIndex] = standingOrderBatch;

Make claimWithdraw entirely Public

Anyone should be able to call the claim function on behalf of anyone else (as long as the funds are transferred to the rightful owner's address).

In that case, we wouldn't necessarily need to compute the Merkle leaf within the contract: one could simply supply accountId, tokenId, amount and the leaf (comprised of this information).

This will involve removing the onlyRegistered modifier from the claimWithdrawals function and adjusting the unit tests to reflect these changes.

[Documentation] Include Readme with Badges

The README should include an install guide, example CLI instructions and badges (e.g. Travis build, Solidity coverage).

Note that there is already a branch called readme that has been started.

Place Order Script

This should take the form of a CLI script that runs as;

npm run placeOrder <accountId> <buyToken> <sellToken> <minBuy> <maxSell>

Document the Contract

The contract should include standard doc strings on all functions.

Also, a reordering of view, public, private, structs, payable and other elements will be appropriate.

Unit test Public Views

Although these functions are not relevant to the contract's logic, they should still be tested (for maximal coverage).

Emit slotIndex rather than prevStateHash on state transition

The information (stateIndex, stateHash) is equivalent to (prevStateHash, currStateHash) (since the order of records is recoverable). However, from the archive node's perspective, it is much easier to gather and sort these records by index than to navigate through several prev/curr pairs to reconstruct the ordering.

[Unit Testing] Use Truffle assertions

Replace

const {  assertRejects } = require("./utilities.js")

with

const truffleAssert = require("truffle-assertions")

Which can be used as

await truffleAssert.reverts(instance.function(), <REQUIREMENT_REVERT_MESSAGE>)

[Bug] maxUnreservedOrderCount returns 756

This function:

function maxUnreservedOrderCount() internal pure returns (uint16) {
    return AUCTION_BATCH_SIZE - (AUCTION_RESERVED_ACCOUNTS * AUCTION_RESERVED_ACCOUNT_BATCH_SIZE);
}

is expected to return 500 (i.e. AUCTION_BATCH_SIZE - (AUCTION_RESERVED_ACCOUNTS * AUCTION_RESERVED_ACCOUNT_BATCH_SIZE)), but it returns 756.

This is because both AUCTION_RESERVED_ACCOUNTS and AUCTION_RESERVED_ACCOUNT_BATCH_SIZE are of type uint8, so the product AUCTION_RESERVED_ACCOUNTS * AUCTION_RESERVED_ACCOUNT_BATCH_SIZE overflows and evaluates to 244.

There should be a unit test showing that this value is the expected result.
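The wraparound can be reproduced outside Solidity. In this sketch the individual constant values 50 and 10 are assumptions for illustration; only their product (500) and AUCTION_BATCH_SIZE = 1000 follow from the issue:

```javascript
// uint8 * uint8 arithmetic in Solidity (<0.8) wraps modulo 256.
// Constant values below are assumed; only their product (500) matters.
const AUCTION_BATCH_SIZE = 1000
const AUCTION_RESERVED_ACCOUNTS = 50           // uint8 (assumed value)
const AUCTION_RESERVED_ACCOUNT_BATCH_SIZE = 10 // uint8 (assumed value)

// Simulate the uint8 overflow of the product:
const product =
  (AUCTION_RESERVED_ACCOUNTS * AUCTION_RESERVED_ACCOUNT_BATCH_SIZE) % 256
console.log(product)                      // 244, not 500
console.log(AUCTION_BATCH_SIZE - product) // 756, not 500
```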

Furthermore, it would appear that the unit tests for placeStandingSellOrder are not making use of the contract constants. For example,

await truffleAssert.reverts(
  instance.placeStandingSellOrder([0,0,0,1,1,1,1,1,1,1,2,0,1], [1], [1], ["0x10000000000000000000000000"], { from: user_1 }),
  "Too many orders for reserved batch"
)

This is dangerous!

Conditional Coverage

Currently (although we do have 100% Line and Branch Coverage), we do not have exhaustive conditional coverage:

Example:

Check that all batches can be filled to their maximum capacity and then move to the next once full. This applies to deposit, withdraw and order batches.

Solution:

This will require fitting many transactions into a single block when testing with ganache (since there are 1000 orders allowed in a batch, but the batch times out after 20 blocks).

Emit single event on Multi-Order request

Currently, placeMultiSellOrder in SnappAuction.sol makes use of placeSellOrder, which emits an individual event for each order placed. It would be significantly more gas efficient (see discussion on #100) to emit a single event with arrays of order information. This, however, would also require adapting how the event listener (part of the dex-services repo) handles this information. Specifically, here

Incorrect JS implementation of `encodePacked_16_8_128`

This is only correct in cases where a, b and c are < 16. It seems that the "hex" encoding is not doing what we had hoped for.

// returns the equivalent of Solidity's abi.encodePacked(uint16 a, uint8 b, uint128 c)
const encodePacked_16_8_128 = function(a, b, c) {
  return Buffer.alloc(13)
    + Buffer.from(uint16(a), "hex")
    + Buffer.from(uint8(b), "hex")
    + Buffer.from(uint128(c), "hex")
}

I could really use some help with this one.

Should be getting something like

(3, 3, "18000000000000000000") -->
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,249,204,216,161,197,8,0,0]

But instead it is being literally translated to

(3, 3, "18000000000000000000") --> 
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,18,0,0,0,0,0,0,0,0,0]

One proposed suggestion is to mimic the following Solidity functionality in JS:

function encodeFlux(uint16 accountId, uint8 tokenId, uint128 amount) internal pure returns (bytes32) {
    return bytes32(uint(amount) + (uint(tokenId) << 128) + (uint(accountId) << 136));
}

Another idea is that maybe Buffer.from(value, "utf8") instead of "hex" does what we want...
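One possible fix, offered here as a sketch rather than something from the original thread: the `+` operator coerces Buffers to strings instead of concatenating bytes, so writing all three values into a single pre-sized Buffer avoids the problem entirely:

```javascript
// Sketch of a corrected implementation (not from the original thread).
// Layout: 13 zero-padding bytes + uint16 + uint8 + uint128 = 32 bytes.
const encodePacked_16_8_128 = function(a, b, c) {
  const buf = Buffer.alloc(32)
  buf.writeUInt16BE(a, 13)  // uint16 at bytes 13..14, big-endian
  buf.writeUInt8(b, 15)     // uint8 at byte 15
  // uint128: zero-padded 32-char hex string written at bytes 16..31
  buf.write(BigInt(c).toString(16).padStart(32, "0"), 16, "hex")
  return buf
}

// encodePacked_16_8_128(3, 3, "18000000000000000000") yields
// [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,249,204,216,161,197,8,0,0]
```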
