cowprotocol / services
Off-chain services for CoW Protocol
Home Page: https://cow.fi/
License: Other
It looks like the equation used by the naive solver may suffer from rounding errors that cause an "off-by-one" in certain cases. This leads to failed settlements for tokens that don't have any buffer in the settlement contract:
The computed input and output amounts from the Tenderly link, from https://logs.gnosis.io/app/kibana#/discover/doc/63bc0b10-5f93-11e9-bf17-fbe383fc7c9c/logstash-2021.04.27?id=4kE6FXkBDHKb0ncgiERY (Apr 27, 2021 @ 23:27:02.000):
"input":{
"amountIn":"147240147441114393"
"reserveIn":"490350406561504850302"
"reserveOut":"334129741725736"
}
"output":{
"amountOut":"99999999999"
}
In this log document, we see the batch call that does the getReserves call https://logs.gnosis.io/app/kibana#/discover/doc/63bc0b10-5f93-11e9-bf17-fbe383fc7c9c/logstash-2021.04.27?id=00E6FXkBDHKb0ncgiERY (Apr 27, 2021 @ 23:27:01.417):
{"jsonrpc":"2.0","method":"eth_call","params":[{"data":"0x0902f1ac","to":"0x3d2d3c4446e8a82445418523b9b5b376765e55a8"},"latest"],"id":56931}
Matching it to the response from the log https://logs.gnosis.io/app/kibana#/discover/doc/63bc0b10-5f93-11e9-bf17-fbe383fc7c9c/logstash-2021.04.27?id=1EE6FXkBDHKb0ncgiERY (Apr 27, 2021 @ 23:27:01.519):
[id:56931]: "0x00000000000000000000000000000000000000000000000000012fe3a490802800000000000000000000000000000000000000000000001a94fa98c7a6de717e000000000000000000000000000000000000000000000000000000006088814b"
ABI decoding this:
> const ethers = require("ethers");
> ethers.utils.defaultAbiCoder.decode(["uint112","uint112","uint96"], "0x00000000000000000000000000000000000000000000000000012fe3a490802800000000000000000000000000000000000000000000001a94fa98c7a6de717e000000000000000000000000000000000000000000000000000000006088814b").map(x => x.toString())
[ '334129741725736', '490350406561504850302', '1619558731' ]
So, in this particular simulation, we see that the naive solver and the actual computed swap amounts differ despite having the same reserve values.
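For reference, this kind of off-by-one typically comes from the direction of integer rounding in the constant-product formula: Uniswap v2's getAmountIn rounds the required input up, while a naive reimplementation may floor the division. A minimal sketch in plain integer math (not the actual solver code; small round numbers instead of the reserves above):

```rust
// Sketch of Uniswap v2's getAmountIn math (0.3% fee) using u128 integers.
// The reference implementation adds 1 to round the required input up; a
// naive solver that floors the division computes one unit too little, and
// the swap then fails unless the settlement contract holds a buffer.
fn get_amount_in_floor(amount_out: u128, reserve_in: u128, reserve_out: u128) -> u128 {
    let numerator = reserve_in * amount_out * 1000;
    let denominator = (reserve_out - amount_out) * 997;
    numerator / denominator // rounds down: may be insufficient
}

fn get_amount_in(amount_out: u128, reserve_in: u128, reserve_out: u128) -> u128 {
    get_amount_in_floor(amount_out, reserve_in, reserve_out) + 1 // rounds up
}

fn main() {
    // Whenever the division has a remainder, the two results differ by one.
    let (out, r_in, r_out) = (100u128, 1000u128, 1000u128);
    println!("floor = {}", get_amount_in_floor(out, r_in, r_out)); // 111
    println!("up    = {}", get_amount_in(out, r_in, r_out)); // 112
}
```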
Original issue gnosis/gp-v2-services#524 by @nlordell
Our fee estimation is able to provide different fees for different sell amounts of a token, which makes it more accurate. We use this when giving the frontend fee estimates before users place their order. However, when validating the fee we ignore the amounts to work around issues where the sell amount becomes lower because of the estimated fee, which in turn increases the fee (gnosis/gp-v2-services#574).
With the new feeAndQuote endpoint (after the frontend has switched to it) we have more control and could be smarter in this case. We would still need to add an algorithm to the route that finds a stable fee + sell_amount combination.
The impact of not fixing this issue is that users calling the API directly are able to "cheat" themselves into a lower fee in some cases.
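One way to find a stable fee + sell_amount combination would be simple fixed-point iteration: compute the fee for the full balance, reduce the sell amount accordingly, recompute the fee, and repeat until nothing changes. A sketch with a hypothetical fee function standing in for the real (external, amount-dependent) fee estimator:

```rust
// Hypothetical amount-dependent fee: a flat base plus 0.1% of the sell
// amount. This is a placeholder for the real fee estimator.
fn fee_for(sell_amount: u64) -> u64 {
    1_000 + sell_amount / 1_000
}

// Iterate sell = balance - fee(sell) until it converges (or give up, since
// a pathological fee function could oscillate instead of settling).
fn stable_fee_and_sell(balance: u64) -> Option<(u64, u64)> {
    let mut sell = balance;
    for _ in 0..32 {
        let fee = fee_for(sell);
        let next = balance.checked_sub(fee)?;
        if next == sell {
            return Some((fee, sell)); // stable: fee(sell) + sell == balance
        }
        sell = next;
    }
    None // did not converge
}

fn main() {
    let (fee, sell) = stable_fee_and_sell(1_000_000).unwrap();
    assert!(sell + fee <= 1_000_000);
    println!("fee = {fee}, sell = {sell}");
}
```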
Original issue gnosis/gp-v2-services#776 by @vkgnosis
A future improvement would be to modify how solution competition happens so that it doesn't require calldata. Each settlement could include a "gas limit" which is used for computing the objective value and gets used for sending the final transaction. This would allow gas to still be factored into the objective value, but would enable us to request calldata as a final step so that we can integrate with PMMs that do not want to provide binding 0x orders that we end up not using (the cause of the issue with the ParaSwapPool4 "DEX" that we were seeing with the ParaSwap solver).
The baseline solver could still do an eth_estimateGas in order to estimate gas for its settlement, while solvers like the ParaSwap solver would use the gas estimates from the price query API result for computing the objective value.
Original issue gnosis/gp-v2-services#828 by @nlordell
While reviewing #313, since it was the first PR of the database structure I reviewed, I noticed a few things that I wanted to mention. But they were unrelated to the PR, so I bring my comment gnosis/gp-v2-services#313 (review) here.
I wouldn't do this one until we see we have performance issues, but one thought I had is: if we expect issues with having many orders, you can set a flag for the orders that are final:
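For instance, sketching the idea in SQL (table and column names are assumed here, not the actual schema):

```sql
-- Hypothetical migration: mark orders that can no longer change so that
-- queries for solvable orders can skip them cheaply via a partial index.
ALTER TABLE orders ADD COLUMN is_final boolean NOT NULL DEFAULT false;
CREATE INDEX open_orders ON orders (creation_timestamp) WHERE NOT is_final;
```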
Original issue gnosis/gp-v2-services#319 by @anxolin
It would be nice to have a status page to inform users about potential outages. We would need to set up a way to surface some basic status information to users.
Original issue gnosis/gp-v2-services#966 by @nlordell
I am a little worried about the scope of the backend order book API. Imo there are three broad categories of APIs:
The reason I am making this distinction is that I feel right now we are mixing all three categories together, and before we implement further APIs we should consider if we want to separate them in some way. The drawbacks I see: get_orders, from the perspective of the solvers, does not need a user filter, and OrderMetaData does not need to contain an executedSellAmountBeforeFees field. But both of those are useful for the frontend.
What I would like to do is at least separate category 1 from the rest on a route and code level. We could have paths like (ignoring bad naming) /api/minimal/v1 and /api/user-friendly/v1 and implement them in different Rust modules. This gets rid of the API-level entanglement. It will make it easier to move to separate crates / binaries in the future, which has more organizational overhead, so we might not want to do that right now.
I hope that from the perspective of the frontend this would increase our development velocity and potentially allow frontend team members to implement some APIs directly in Rust themselves without waiting (probably frustratingly long) for us to discuss and implement their suggestions.
Original issue gnosis/gp-v2-services#297 by @vkgnosis
I manually took a look at the longest trade in our testing session (@fedgiac's one which took 20 minutes to fill):
From looking at the logs, I was able to see that another order had to expire first before the former one became part of our problem set.
This is because at the moment we look at all orders of a user in the order they were created and deduct each sell amount from the user's "available sell balance". When the available sell balance is not sufficient, orders are not added to the set of solvable orders (the MIP solver assumes all orders have sufficient balance).
While this seems fine for disjoint token pairs, I'd argue that for orders on the same token pair we should prioritize the order with the most "lax" limit price. It seems quite likely that users place an order for their entire sell balance, the price moves, and they place a replacement order at a more lax limit price. Without cancelling their previous order (which is not intuitive), they will have to wait a long time before the new order gets matched.
This could also explain some of the large outliers in our "order placement to trade execution time" metric.
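Prioritizing by limit-price laxness could be a simple sort before deducting balances. A sketch (the Order fields here are illustrative, not the real struct), comparing limit prices by cross-multiplication to stay in integer math:

```rust
// Minimal order with only the fields needed for this sketch.
#[derive(Debug)]
struct Order {
    sell_amount: u128,
    buy_amount: u128,
}

// Order a is "more lax" than b if it demands a lower price, i.e.
// a.buy/a.sell < b.buy/b.sell; cross-multiply to avoid division/floats.
fn sort_most_lax_first(orders: &mut [Order]) {
    orders.sort_by(|a, b| {
        (a.buy_amount * b.sell_amount).cmp(&(b.buy_amount * a.sell_amount))
    });
}

fn main() {
    let mut orders = vec![
        Order { sell_amount: 100, buy_amount: 95 }, // strict: wants 0.95 per unit sold
        Order { sell_amount: 100, buy_amount: 90 }, // lax: wants 0.90 per unit sold
    ];
    sort_most_lax_first(&mut orders);
    // The lax order now comes first and gets the balance deducted first.
    assert_eq!(orders[0].buy_amount, 90);
}
```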
Original issue gnosis/gp-v2-services#757 by @fleupold
As summarized from this comment: adding a new table requires updates in a bunch of scattered places which are easy to forget.
We could make a trait DbTable (and maybe also EventTable) to specify common functionality that all tables must fulfill. There could be a single enumerable ALL_TABLES type which the places that rely on this behaviour could use to enumerate and call into all tables.
This also ties into the fact that db.clear() has a bunch of functionality that could be separated into independent methods (for easier unit testing), e.g. db.drop_orders(), which would be handy when testing the trade query with and without corresponding orders.
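The proposed trait could be sketched like this (names taken from the comment above; the bodies are stubs where the real implementations would issue SQL):

```rust
// Sketch of the proposed DbTable trait. Each table type implements it, and
// all tables must be registered in one enumerable list, so "forgetting a
// table" becomes one place to forget instead of several.
trait DbTable {
    fn name(&self) -> &'static str;
    fn clear(&self); // e.g. TRUNCATE the table in the real implementation
}

struct Orders;
struct Trades;

impl DbTable for Orders {
    fn name(&self) -> &'static str { "orders" }
    fn clear(&self) { /* sqlx query would go here */ }
}

impl DbTable for Trades {
    fn name(&self) -> &'static str { "trades" }
    fn clear(&self) { /* sqlx query would go here */ }
}

// Stand-in for the single ALL_TABLES value the comment proposes.
fn all_tables() -> Vec<Box<dyn DbTable>> {
    vec![Box::new(Orders), Box::new(Trades)]
}

fn main() {
    // db.clear() would become a loop over the registered tables.
    for table in all_tables() {
        table.clear();
        println!("cleared {}", table.name());
    }
}
```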
Original issue gnosis/gp-v2-services#438 by @bh2smith
We should probably start using chain ID instead of network ID everywhere, especially since it is tied to the signatures.
Originally posted by @nlordell in gnosis/gp-v2-services#507 (comment)
Original issue gnosis/gp-v2-services#514 by @vkgnosis
It's possible that we see a trade event without having seen the order being placed via our API beforehand (e.g. some "private liquidity" is used or simply the order was placed via our staging endpoint, so the prod endpoint didn't see it).
We should still be able to recover the order creation data and most of the order metadata (except for creation timestamp) just from looking at the calldata of the transaction that created the event.
Creation timestamp could become a "first seen" timestamp and just set to now() in that case.
Original issue gnosis/gp-v2-services#463 by @fleupold
This issue captures the work required for implementing support for EIP-2929/EIP-2930 access lists. This is required in order to support transferring ETH to smart contract wallets.
It is needed because of the gas cost increase in the recent Berlin hardfork, which caused Solidity's transfer to not forward enough gas when sending ETH to a contract.
We should also consider increasing the gas amount for the transfer in the smart contracts.
Original issue gnosis/gp-v2-services#685 by @nlordell
As we have been getting some complaints from users about fee prices, we should add a Prometheus metric of the fee responses tagged by token address.
cc @anxolin
Original issue gnosis/gp-v2-services#962 by @nlordell
OE is reaching its end of life (the London hardfork will be the last one supported) and therefore @giacomolicari asked us to move away from it in the mid term.
There are a bunch of alternative nodes we could look at:
In particular, for our API (and the bad token detector) we require trace_callMany to be enabled (ruling out Geth). Also, while Nethermind implements parts of the trace API, it doesn't seem to implement trace_call or trace_callMany (https://docs.nethermind.io/nethermind/ethereum-client/json-rpc/trace).
I tested Erigon in rinkeby staging a while back and it was working fine for the API. However, it wasn't working well for the solver, as it doesn't seem to implement eth_getBlockByNumber(pending) (it only works for latest). We use this when trying to find a pending transaction after a restart. The Erigon team suggested using the txpool_content method instead.
Last time I tried this one it had issues with the size of the mem-pool, but those should be fixed in the most recent version.
Since EIP-1559, however, trace_callMany complains unless we set a gasPrice on our requests that exceeds the base fee. It doesn't seem to check the funds of the from account (thus we can use U256::max as a dummy value); however, that value doesn't seem to work with OE (it checks funds, but is fine with a 0 gas price). I asked in their Discord and am waiting for a suggestion.
Maybe in the worst case, we could estimate the current gas price in the bad token detector and set a reasonable value that satisfies both OE (in terms of balance) as well as Erigon (in terms of base fee).
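For illustration, a trace_callMany request shaped per the OpenEthereum/Erigon trace API, with an explicit gasPrice set above the base fee (the addresses, gas price, and id here are placeholder values, not taken from our logs):

```json
{
  "jsonrpc": "2.0",
  "method": "trace_callMany",
  "params": [
    [
      [
        {
          "from": "0x0000000000000000000000000000000000000001",
          "to": "0x3d2d3c4446e8a82445418523b9b5b376765e55a8",
          "data": "0x0902f1ac",
          "gasPrice": "0x174876e800"
        },
        ["trace"]
      ]
    ],
    "latest"
  ],
  "id": 1
}
```

Erigon is satisfied as long as gasPrice exceeds the base fee, while OE additionally checks that the from account can afford that price (but accepts a 0 gas price), which is why a single value satisfying both is awkward.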
All in all, I think this struggle is another reason why #858 might be a better way to go than relying on non standard RPC methods. If this was implemented the need for tracing would go away as bad tokens would be detected via a single baseline solution simulation instead of a series of test calls.
Original issue gnosis/gp-v2-services#909 by @fleupold
If a token has absolutely no liquidity, the orderbook API returns a BadToken error instead of an InsufficientLiquidity error. This is because the BadTokenDetector needs to find some token holder with a large enough balance for the simulation but is unable to.
This issue captures the work to change the orderbook API to return the more accurate error.
/cc @anxolin
Original issue gnosis/gp-v2-services#797 by @nlordell
Currently we find settlements first and then wait for the transaction to be mined.
We can improve this by continuing to solve while waiting for the transaction, treating all orders as valid as if there were no pending transaction. At every interval at which we currently update the gas price, we would switch to the currently best available settlement. This works for both the public mempool and ArcherDAO: for the mempool we can only update with an increased gas price, but for ArcherDAO we can always change the transaction.
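The replacement rule could look roughly like this (the Settlement type and its objective value are stand-ins for the real ones):

```rust
// Sketch of keeping the best settlement found while a submission is pending.
#[derive(Clone, Debug, PartialEq)]
struct Settlement {
    objective_value: u64,
}

struct Submission {
    best: Settlement,
}

impl Submission {
    // Called at every gas-price-update interval: adopt a newly found
    // settlement only if it improves the objective. For the public mempool
    // the replacement tx additionally needs a higher gas price; for
    // ArcherDAO the transaction can always be swapped.
    fn maybe_replace(&mut self, candidate: Settlement) {
        if candidate.objective_value > self.best.objective_value {
            self.best = candidate;
        }
    }
}

fn main() {
    let mut submission = Submission { best: Settlement { objective_value: 10 } };
    submission.maybe_replace(Settlement { objective_value: 8 });  // ignored
    submission.maybe_replace(Settlement { objective_value: 15 }); // adopted
    assert_eq!(submission.best.objective_value, 15);
}
```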
Original issue gnosis/gp-v2-services#879 by @vkgnosis
I want to continue integrating this and then come back to adding a test with mocks. From my pov I don't feel they will add much confidence that this is correct, and they make the code more complicated because of extra traits. If we mocked web3 at the transport level, the test setup would probably be very annoying, so I think an extra trait for our specific high-level logic is better.
Originally posted by @vkgnosis in gnosis/gp-v2-services#825 (comment)
Original issue gnosis/gp-v2-services#864 by @vkgnosis
We encountered a failed simulation on xdai. There is no contract code, so it's difficult to determine the root cause.
The token involved claims to be deflationary, so it might be taking fees on balances or transfers. In this case, it might be interesting to determine why it escaped our detection mechanism.
Original issue gnosis/gp-v2-services#760 by @fedgiac
Noticed this when reading the code again. We will only do this if the whole final merged settlement can be done this way. But if only some part of the merged settlement can be done with internal buffers, then the whole thing fails. If we looked at individual settlements before merging, then we would succeed more often.
Original issue gnosis/gp-v2-services#591 by @vkgnosis
Some numbers, like the Uniswap balances, must be strictly positive (> 0). On the Rust side we should enforce this in order to not pass invalid data to the solver.
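One way to enforce this at the type level is with the standard library's non-zero integer types; a minimal sketch (the function name is illustrative):

```rust
use std::num::NonZeroU128;

// Reject non-positive values before handing them to the solver. Using
// NonZeroU128 makes "strictly positive" part of the type instead of a
// runtime convention callers have to remember.
fn checked_reserve(raw: u128) -> Option<NonZeroU128> {
    NonZeroU128::new(raw) // None for 0, Some otherwise
}

fn main() {
    assert!(checked_reserve(0).is_none()); // zero reserve: drop the pool
    assert_eq!(checked_reserve(42).unwrap().get(), 42);
}
```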
Original issue gnosis/gp-v2-services#700 by @vkgnosis
This issue captures the work required to support empty list parameters for our binaries. Specifically, consider the following code:
use structopt::StructOpt;
#[derive(StructOpt)]
struct Options {
#[structopt(long, env, use_delimiter = true, default_value = "1")]
my_values: Vec<String>,
}
fn main() {
println!("{:#?}", Options::from_args().my_values);
}
At first glance, there does not appear to be any way to specify an empty list:
$ cargo run -q
[
"1",
]
$ MY_VALUES='' cargo run -q
[
"",
]
$ MY_VALUES='1,2' cargo run -q
[
"1",
"2",
]
This issue seems to affect our services; in our staging configuration we have:
global:
# ...
unsupportedTokens: '0x0000000000000000000000000000000000000000' # None, work around not supporting empty list.
# ...
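A possible workaround is a custom parser that treats the empty string as an empty list rather than a list containing one empty string. The helper below is a sketch of just the parsing rule; wiring it into structopt (e.g. parsing the whole env var as a single value and splitting here, instead of use_delimiter) is a separate step:

```rust
// Treat "" as an empty list instead of vec![""].
fn parse_list(s: &str) -> Vec<String> {
    if s.is_empty() {
        Vec::new()
    } else {
        s.split(',').map(|part| part.to_string()).collect()
    }
}

fn main() {
    assert!(parse_list("").is_empty()); // MY_VALUES='' would mean "no values"
    assert_eq!(parse_list("1"), vec!["1"]);
    assert_eq!(parse_list("1,2"), vec!["1", "2"]);
}
```

With such a rule in place, the `unsupportedTokens` workaround of passing the zero address to mean "none" would no longer be needed.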
Original issue gnosis/gp-v2-services#660 by @nlordell
Hi!
I'm starting to make some spec documents to break down some of the features for the Explorer.
I started describing the home page of the Explorer, where we want to include some basic stats and a top traded tokens ranking.
In the document, you'll find some early draft on the APIs that would allow to build that.
I'd love to get your input and ideas on what's best to show in this first home page.
DISCLAIMER: Note that although the image posted here looks pretty good, the design is still ongoing. I would find it extremely valuable to discuss this functionally, so we can agree on what information is good to show users on the home page, and also in terms of API design, so we can agree on the REST endpoint name, parameters, and return data. Design feedback is welcome, although it is not part of this issue.
Original issue gnosis/gp-v2-services#265 by @anxolin
In this case we currently assume all orders are valid because they individually have enough balance but if we actually tried to settle this it would not work out.
Original issue gnosis/gp-v2-services#309 by @vkgnosis
This happened in prod yesterday:
We sent a tx at ~5:35 UTC with nonce 24383;
At 2021-07-26T05:36:19.043Z we sent another tx with the same nonce, which got acknowledged by the Infura server with tx hash 0x8da176b1cd6fe6b3b84788b0276aa3923e4926926fc5cf41f81b85b5de4f9be4. However, this hash doesn't show up on Etherscan, and a retry attempt 30s later with increased gas led to a "nonce too low" error.
I assume we were connected to a node that hadn't yet seen the original tx and therefore gave us an outdated nonce for which it initially accepted the transaction. Upon retry 30s later it noticed that this nonce is no longer valid and errored.
Maybe we can keep track of nonces in between runloops, but given that this is a rare and not so severe issue (we want to move to more persistent node connection anyways), it's not super important.
Original issue gnosis/gp-v2-services#896 by @fleupold
Some tokens like RARE v2 have rules on approve that prevent allowances greater than the current balance. This causes issues with our solver, where the approve interaction leading up to the Honeyswap swap interaction would fail.
This is currently not detected by the bad token detector.
Original issue gnosis/gp-v2-services#657 by @nlordell
The code from gnosis/gp-v2-services#93 is correct but was created based on the smart contract code. This makes it low level in the sense that we do ABI encoding manually and don't mention EIP-712 even though we implement it. It might be better to reimplement this in a more high-level fashion.
This is essentially ABI encoding the order; maybe it can reuse some of the web3 or ethabi functionality to do the work instead of manually specifying the offsets?
Personally I would prefer if the implementation referenced EIP-712, which is the signing standard used, instead of the smart contract (which just adds the detail of also supporting wallets that don't support signing typed data), maybe using the JS code from the same repo as reference (as that is what the Solidity code is tested against). My main argument is that this would make the Rust code more expressive and self-documenting than if it just ported the Solidity implementation, which is a bit low level to get around Solidity/EVM limitations and not necessarily because it is the cleanest implementation possible. In fact, it could be implemented as keccak256(abi.encode(...)) if we weren't going for gas optimizations.
Maybe things like computing the domain separator, type hash and struct hash could be under an eip712 module just so it's SUPER clear which signing standard is being used?
Maybe some functionality from ethabi can be reused for encoding the order for hashing to make the purpose clearer (although maybe it's just impractical, I don't remember the API too well).
Original issue gnosis/gp-v2-services#97 by @e00E
In the order book create order function we currently check that the user has balance and has set the approval. One intention behind this is DDoS protection, to stop users from creating many orders, and another is that it makes it clearer on the API level that you need to approve before orders can be considered.
On Slack we discussed foregoing the approval check to make the order creation flow for users easier. This way users could send their approval transaction and immediately start creating orders without having to wait for the approval to get mined.
Original issue gnosis/gp-v2-services#949 by @vkgnosis
This week @twalth3r and I started looking at Uniswap v3 integration.
The good news is that apparently it is not hard to integrate this pool type into our solver, and Tom even developed an experimental PoC.
On the other hand, during the process of collecting some testing instances, I realized that it is not easy at all to read the required pool parameters from the blockchain. Since I'm going on a 2-week vacation, here are my notes in case someone wants to start on this task.
Each Uniswap v3 pool holds an int->int dictionary which we need in the solver (the interpretation is here). There is not, as far as I can tell, a way to get the complete dictionary from the blockchain in one call.
The keys of this dictionary are also in a large bitmap, split in 65536 256-bit segments (words). Given a word, there is a smart contract method to collect all the (dictionary) values corresponding to the keys which are "enabled" in the bitmap (i.e. corresponding bit is one).
The problem is we don't know which words are non-zero, and we don't want to do 65536 smart contract calls to get the dictionary values. We don't even know what is the minimum and maximum non-zero words.
On the other hand, we know one word that is not zero (the one corresponding to the "current" price). We also know that the values in the dictionary must sum to zero.
To get around this (code here), I implemented a heuristic: we start from this known non-zero word and then look for non-zero words going outwards from it. We stop looking when the sum of values is zero and we haven't found a new non-zero word in the most recent N attempts.
Anyway, the heuristic is not so important. The fact is that even after chatting with the Uniswap team (see the thread in our dedicated Slack channel), it appears there is no way to get what we need efficiently. Even this heuristic takes 20 seconds to download the dictionary of the largest pool (by TVL) using Infura.
So I suppose we need to maintain some internal database mirror of the pools and keep it synchronized via events or something, so that data is readily available. I still think that a special purpose service for all our pools that does this task would make a lot of sense (like the orderbook but for liquidity).
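The outward search can be sketched independently of the chain. Here an in-memory map stands in for the on-chain bitmap-plus-tick lookup (on chain, each lookup would be a batched smart contract call), mapping a word index to the sum of liquidity-net values of the ticks enabled in that word:

```rust
use std::collections::HashMap;

// Expand outwards from the word containing the current price; stop when the
// tick values sum to zero AND the last `give_up_after` lookups found nothing
// new, per the heuristic described above. A hard bound keeps it finite.
fn collect_ticks(words: &HashMap<i32, i64>, start: i32, give_up_after: i32) -> i64 {
    let mut sum = *words.get(&start).unwrap_or(&0);
    let mut misses = 0;
    for offset in 1..=65_536 {
        if sum == 0 && misses >= give_up_after {
            break; // heuristic termination condition
        }
        let mut found = false;
        for word in [start - offset, start + offset] {
            if let Some(value) = words.get(&word) {
                sum += value;
                found = true;
            }
        }
        misses = if found { 0 } else { misses + 1 };
    }
    sum
}

fn main() {
    // Liquidity-net values across the whole pool sum to zero.
    let words: HashMap<i32, i64> = [(10, 5), (8, -2), (13, -3)].into_iter().collect();
    assert_eq!(collect_ticks(&words, 10, 3), 0);
}
```

As noted above, even with batching, this style of probing is slow over RPC, which is what motivates mirroring the pool state locally and keeping it in sync via events.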
Original issue gnosis/gp-v2-services#866 by @marcovc
We saw a failed transaction simulation today that involved a merged settlement.
There were two orders, both independently matched through Uniswap (and Sushiswap) paths:
While the solutions could work if submitted on their own, if one is executed then the price in the DAI->WBTC pool moves and the merged trade cannot be settled anymore.
=== Tokens ===
address | index | symbol | price
--------------------------------------------+-------+--------+-----------------------
0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599 | 0 | WBTC | 2661999999824800000
0x6B175474E89094C44Da98b954EedeAC495271d0F | 1 | DAI | 960957811
0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 | 2 | WETH | 15868097
0xdAC17F958D2ee523a2206206994597C13D831ec7 | 3 | USDT | 954872860165796048640
=== Trades ===
------------------------------------------------------------------------------------------------------------------------
Order: SELL order, valid until 2021-06-22T13:16:23.000Z (1624367783)
Trade: sell 954.87286016579604864 DAI (1)
buy 865.839154 USDT (3)
fee 40.79531929247195136 DAI (1)
Owner: 0x6b9100896b556Cf3EB01be2c7c368F263771fE61
Receiver: 0x8b93cf141fd30F98bE545384eba5A160C254955d
AppData: 0x0000000000000000000000000000000000000000000000000000000000000001
Signature (eip-712): 0x5499b7ef75f58a25f579aebb925d8ef5bc90abfb1102dd333e666f88b6e964885900c9b3ef8b7abdde78c05d0c00f055e170b3763efd7e37df518bca214467ae1c
OrderUid: 0x84809ac0586387c4f4e057a8ddb9c885478186b8120fca9326e2c07875ea92326b9100896b556cf3eb01be2c7c368f263771fe6160d1e2a7
Order: SELL order, valid until 2021-06-22T13:16:28.000Z (1624367788)
Trade: sell 2.6619999998248 WETH (2)
buy 0.15772572 WBTC (0)
fee 0.0180000001752 WETH (2)
Owner: 0x431be6F935fDd4FC32b52A80FB8B8e0aFee55564
Receiver: 0x431be6F935fDd4FC32b52A80FB8B8e0aFee55564
AppData: 0x0000000000000000000000000000000000000000000000000000000000000001
Signature (eip-712): 0x424f41c7f27e46e9b9f5a7dd7a4d8dee7c739bb1fff009e65c0abe6025a9ccd27df9d3cc9b1ce6ee03bb86277356cba16436251812c055ef25dc3ecab588d11c1c
OrderUid: 0x0e00dd1cbd39f94ea1b60c7d63a54f1ddb6601a7e55dd9939c0c3405d200354a431be6f935fdd4fc32b52a80fb8b8e0afee5556460d1e2ac
=== Interactions ===
------------------------------------------------------------------------------------------------------------------------
--- Pre-interactions ---
--- Intra-interactions ---
├── Interaction: target address 0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D
│     function: swapTokensForExactTokens
│     - amountOut: 0.03237643
│     - amountInMax: 955.827733025961844689
│     - path: 0x6B175474E89094C44Da98b954EedeAC495271d0F (DAI) -> 0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599 (WBTC)
│     - to: 0x3328f5f2cEcAF00a2443082B657CedEAf70bfAEf
│     - deadline: 115792089237316195423570985008687907853269984665640564039457584007913129639935
│     calldata: 0x8803dbee000000000000000000000000000000000000000000000000000000000031670b000000000000000000000000000000000000000000000033d0c642fea75c9fd100000000000000000000000000000000000000000000000000000000000000a00000000000000000000000003328f5f2cecaf00a2443082b657cedeaf70bfaefffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000000000000000000000000000000000000000000000000000000000020000000000000000000000006b175474e89094c44da98b954eedeac495271d0f0000000000000000000000002260fac5e5542a773aa44fbcfedf7c193bc2c599
├── Interaction: target address 0xd9e1cE17f2641f24aE83637ab66a2cca9C378B9F
│     function: swapTokensForExactTokens
│     - amountOut: 0.540787685122734122
│     - amountInMax: 0.03240881
│     - path: 0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599 (WBTC) -> 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 (WETH)
│     - to: 0x3328f5f2cEcAF00a2443082B657CedEAf70bfAEf
│     - deadline: 115792089237316195423570985008687907853269984665640564039457584007913129639935
│     calldata: 0x8803dbee00000000000000000000000000000000000000000000000007814388cea5782a00000000000000000000000000000000000000000000000000000000003173b100000000000000000000000000000000000000000000000000000000000000a00000000000000000000000003328f5f2cecaf00a2443082b657cedeaf70bfaefffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000000000000000000000000000000000000000000000000000000000020000000000000000000000002260fac5e5542a773aa44fbcfedf7c193bc2c599000000000000000000000000c02aaa39b223fe8d0a0e5c4f27ead9083c756cc2
├── Interaction: target address 0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D
│     function: swapTokensForExactTokens
│     - amountOut: 960.957811
│     - amountInMax: 0.541328472807856857
│     - path: 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 (WETH) -> 0xdAC17F958D2ee523a2206206994597C13D831ec7 (USDT)
│     - to: 0x3328f5f2cEcAF00a2443082B657CedEAf70bfAEf
│     - deadline: 115792089237316195423570985008687907853269984665640564039457584007913129639935
│     calldata: 0x8803dbee0000000000000000000000000000000000000000000000000000000039470d7300000000000000000000000000000000000000000000000007832f60c0845ad900000000000000000000000000000000000000000000000000000000000000a00000000000000000000000003328f5f2cecaf00a2443082b657cedeaf70bfaefffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0000000000000000000000000000000000000000000000000000000000000002000000000000000000000000c02aaa39b223fe8d0a0e5c4f27ead9083c756cc2000000000000000000000000dac17f958d2ee523a2206206994597c13d831ec7
├── Interaction: target address 0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D
│     function: swapTokensForExactTokens
│     - amountOut: 4720.124801233017022568
│     - amountInMax: 2.6646619998246248
│     - path: 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 (WETH) -> 0x6B175474E89094C44Da98b954EedeAC495271d0F (DAI)
│     - to: 0x3328f5f2cEcAF00a2443082B657CedEAf70bfAEf
│     - deadline: 115792089237316195423570985008687907853269984665640564039457584007913129639935
│     calldata: 0x8803dbee0000000000000000000000000000000000000000000000ffe0e3f26dad42a86800000000000000000000000000000000000000000000000024fac7f88a075ca000000000000000000000000000000000000000000000000000000000000000a00000000000000000000000003328f5f2cecaf00a2443082b657cedeaf70bfaefffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0000000000000000000000000000000000000000000000000000000000000002000000000000000000000000c02aaa39b223fe8d0a0e5c4f27ead9083c756cc20000000000000000000000006b175474e89094c44da98b954eedeac495271d0f
└── Interaction: target address 0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D
function: swapTokensForExactTokens
- amountOut: 0.15868097
- amountInMax: 4724.844926034250039591
- path: 0x6B175474E89094C44Da98b954EedeAC495271d0F (DAI) -> 0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599 (WBTC)
- to: 0x3328f5f2cEcAF00a2443082B657CedEAf70bfAEf
- deadline: 115792089237316195423570985008687907853269984665640564039457584007913129639935
calldata: 0x8803dbee0000000000000000000000000000000000000000000000000000000000f220c1000000000000000000000000000000000000000000000100226532ed4644012700000000000000000000000000000000000000000000000000000000000000a00000000000000000000000003328f5f2cecaf00a2443082b657cedeaf70bfaefffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000000000000000000000000000000000000000000000000000000000020000000000000000000000006b175474e89094c44da98b954eedeac495271d0f0000000000000000000000002260fac5e5542a773aa44fbcfedf7c193bc2c599
--- Post-interactions ---
Original issue gnosis/gp-v2-services#773 by @fedgiac
One very important page in the block explorer will be the "Top traded tokens". Actually, our idea is to include them in the home page.
It will show the volumes and prices for the most traded tokens. It will allow the user to quickly understand what they can do in Gnosis Protocol, and get an idea of the expected volumes and prices for the active tokens.
Details about this page can be found in the spec: https://docs.google.com/document/d/1at2Vb8iZMxAKxUkBMR_HaMn6UjYhTD7dE9mxy_xaJP0/edit#
In this issue I would like to discuss a suitable API for supporting this use case. In the spec document there's already a detailed proposal which we could use for iterating on and shaping the API.
Original issue gnosis/gp-v2-services#267 by @anxolin
Now that an order must specify 1 of 3 balance sources (EOA, Vault internal, or Vault external), we must add additional consideration for "balance sufficiency":
balanceFrom = EOA: this is the check we already have (allowance manager approval + sellAmount < balance).
balanceFrom = vault internal: check that the user has allowed the GPv2VaultRelayer to act as a relayer and that the internal balance > sellAmount.
balanceFrom = vault external: GPv2VaultRelayer relayer allowance + Balancer V2 Vault approval + sellAmount < balance.
Original issue gnosis/gp-v2-services#901 by @bh2smith
The upgraded GPv2 contracts support a swap settlement where a single order is traded directly against Balancer.
As the Balancer SOR solver will be coming along soon, we should support these kinds of solutions so that the Balancer SOR solver has a gas advantage WRT the objective function.
This task would entail:
Original issue gnosis/gp-v2-services#925 by @nlordell
This is a todo after gnosis/gp-v2-services#323 is merged.
Original issue gnosis/gp-v2-services#392 by @vkgnosis
As proposed by @vkgnosis in gnosis/gp-v2-services#956 (comment), it may become too cumbersome to pass along a long list of denied tokens and banned users as a runtime configuration in the future. Also, given the correct DB write access, it would be much more dynamic to have these values stored in the DB and read by the services when necessary.
This task is to create two new tables (denied_tokens and banned_users) to which the existing logic, currently configured via a runtime argument, is redirected. This may also involve populating the denied_tokens table from the start -- we may want to consider including the denied token list directly in the codebase (since it is unlikely that denied tokens will ever be removed). People may stumble upon such a hard-coded list of denied tokens and wonder why their "precious" token is denied (so we may want to include a comment along with it if it is made public).
Note the special case of the deprecated version of sUSD - this token has transfer functionality disabled, but it could be re-enabled at any time. Similar story for USDT - its owner can enable or disable transfers for any holder, even the settlement contract; if that happened, the token would have to be denied at least until the block has been lifted.
Original issue gnosis/gp-v2-services#960 by @bh2smith
At the moment the majority of our solvers will settle trades one by one. Since we wait 30 seconds between batches and mining the solution tx takes some time, we can effectively handle about 0.5-1 individual trades per minute.
This became noticeable in today's testing challenge (worsened by the fact that we were stuck 5 minutes without a solution submission): when multiple people tried to trade non-overlapping tokens at the same time, some trades got very delayed.
This might be important to address before we get real traffic, which might be spiky (e.g. around tweets).
A simple approach would be to make the solver trade return a list of solutions, which we could batch up into a single settlement (or multiple parallel txs). However, this bears the risk of one solution affecting e.g. the Uniswap state of another solution, which might therefore be invalid by the time the first is mined (maybe we can base all our queries on the pending block, which could already have the first solution applied).
We should at least skip the 30s pause when there has been a solution submission in the current run loop.
Original issue gnosis/gp-v2-services#402 by @fleupold
This issue captures the work to add an additional macro for importing contract bindings to generate tests verifying deployment information is included in the artifact.
Originally posted by @bh2smith in gnosis/gp-v2-services#630 (comment):
Maybe we could somehow generate these lines from a constant list:
include!(concat!(env!("OUT_DIR"), "/WETH9.rs"));
and include in this test an assertion that each element from the list was verified here...
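A minimal, hypothetical sketch of the idea: a single declarative macro invocation carries the list of contract names, so the same list can drive both the `include!` lines (omitted here, since they need build artifacts) and a constant a test can iterate over. The names below are illustrative.

```rust
// Hypothetical sketch: derive a constant list of contract names from one
// macro invocation. In the real crate the same macro would additionally
// emit `include!(concat!(env!("OUT_DIR"), "/<Name>.rs"));` per entry.
macro_rules! contract_list {
    ($($name:ident),* $(,)?) => {
        const CONTRACT_NAMES: &[&str] = &[$(stringify!($name)),*];
    };
}

contract_list!(WETH9, GPv2Settlement);
```

A generated test could then loop over `CONTRACT_NAMES` and assert that each artifact contains deployment information.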
Original issue gnosis/gp-v2-services#646 by @nlordell
Problem:
Some ERC-20 tokens are weird and don't uphold invariants that some solvers expect. For example, there are tokens that take a fee on transfer, so that when you transfer amount X to an account your balance is reduced by X but the target's balance is increased by less than X.
In the past this has caused problems because solvers would find solutions that fail simulation, so they get stuck until the order trading the offending token expires.
Current solution:
The bad token detector performs some heuristic simulations using the token to determine whether it behaves as expected.
Idea proposed by @fleupold:
The more logic we have to implement for the bad token detection, the more I wonder if we should eventually just simulate the baseline solution in the API to check if tokens are tradable?
Questions I have:
In order for this to be useful it has to be a successful on-chain simulation, but for that we need a real signed user order. Such a simulation would fail if the user doesn't have enough balance, yet we do want to accept orders without enough balance because the user might add balance later. It would also fail if the order's price is too restrictive, but we want to accept the order anyway in case the price changes later.
Original issue gnosis/gp-v2-services#858 by @vkgnosis
In transactions like this one the Uniswap transactions are encoded as separate calls to the router instead of a single call with a longer path. This is gas-inefficient and makes the settlement more brittle to price movements.
Here the interactions did the following swaps:
==> Swap 1
# PATH
"0x3472a5a71965499acd81997a54bba8d852c6e53d"
"0x2260fac5e5542a773aa44fbcfedf7c193bc2c599"
# AMOUNTS
"117564799828227484609"
"3848370"
==> Swap 2
# PATH
"0x2260fac5e5542a773aa44fbcfedf7c193bc2c599"
"0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"
# AMOUNTS
"3848371"
"533688591820273024"
==> Swap 3
# PATH
"0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"
"0x02d3a27ac3f55d5d91fb0f52759842696a864217"
# AMOUNTS
"533688591820272987"
"1192623949789449945088"
This could have instead been represented as:
==> Swap 1
# PATH
"0x3472a5a71965499acd81997a54bba8d852c6e53d"
"0x2260fac5e5542a773aa44fbcfedf7c193bc2c599"
"0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"
"0x02d3a27ac3f55d5d91fb0f52759842696a864217"
# AMOUNTS
"117564799828227484609"
"1192623949789449945088"
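The merging step can be sketched as a simple fold over consecutive swaps: whenever a swap's input token equals the previous path's output token, the two paths chain into one multi-hop path (amounts permitting). This is an illustrative sketch, not the solver's actual data model:

```rust
/// Merge consecutive swaps whose paths chain (the output token of one
/// swap is the input token of the next) into single multi-hop paths.
/// Illustrative sketch only; real code must also check that the chained
/// amounts line up before merging.
fn merge_chained_paths(swaps: &[Vec<&str>]) -> Vec<Vec<String>> {
    let mut merged: Vec<Vec<String>> = Vec::new();
    for path in swaps {
        match merged.last_mut() {
            // The previous path ends where this one starts: extend it.
            Some(prev) if prev.last().map(String::as_str) == path.first().copied() => {
                prev.extend(path[1..].iter().map(|t| t.to_string()));
            }
            // Otherwise start a new independent path.
            _ => merged.push(path.iter().map(|t| t.to_string()).collect()),
        }
    }
    merged
}
```

Applied to the three swaps above, this yields the single four-token path shown.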
Original issue gnosis/gp-v2-services#675 by @nlordell
We have accumulated some scattered TODOs in the driver code for handling partial order fulfillment.
offchain_orderbook.rs:
# inside get_liquidity
TODO - could model inflight_trades as HashMap<OrderUid, Trade>
TODO - driver logic for Partially Fillable Orders
TODO - The following assertion will fail and need to be adapted
# inside normalize_limit_order
TODO discount previously executed sell amount
Given that `LimitOrder` holds the settlement encoding information about the order anyway, it should not be a problem to simply set the `sell_amount` and `buy_amount` fields directly according to the already executed amounts.
Something like
sell_amount: order.order_creation.sell_amount - inflight_trade.executed_sell_amount
buy_amount: order.order_creation.buy_amount - relative_buy_amount
Where the relative buy amount is computed from `executed_sell_amount` and the originally posted limit price. Note that due to integer division we will have to take either the floor or the ceiling to provide a buffer in a direction that still satisfies the user's original limit price.
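As a concrete sketch of the rounding direction (hypothetical names, assuming a sell order): flooring the buy amount attributed to the already-executed part rounds the *remaining* buy amount up, which keeps the user's original limit price satisfied.

```rust
/// Hypothetical sketch: remaining amounts of a partially filled sell
/// order. Integer division floors `relative_buy`, so `remaining_buy`
/// is rounded up, preserving the user's original limit price.
fn remaining_amounts(
    sell_amount: u128,
    buy_amount: u128,
    executed_sell_amount: u128,
) -> (u128, u128) {
    let remaining_sell = sell_amount - executed_sell_amount;
    // Buy amount attributed to the executed part, floored.
    let relative_buy = executed_sell_amount * buy_amount / sell_amount;
    let remaining_buy = buy_amount - relative_buy;
    (remaining_sell, remaining_buy)
}
```

For example, with sell_amount = 10, buy_amount = 3 and executed_sell_amount = 4, the exact remaining buy amount is 1.8; flooring the executed part yields a remaining buy amount of 2, which still satisfies the limit price.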
Original issue gnosis/gp-v2-services#673 by @bh2smith
Cf. Kibana logs: https://logs.gnosisdev.com/goto/aa165cdbdba01c8750991118c26cba58
orderbook::event_updater: updating events in block range 8051255..=8056117
Note that the range is roughly 5000 blocks. We never actually tighten it to e.g. only the last 25 blocks. It's possible that this is causing some strain on our nodes (we have seen an increased number of alerts in staging recently, and I noticed Rinkeby alerts popping up when I restarted the Rinkeby API after a bit of downtime, as the nodes seem to have rescaled).
Same seems to be true for xDAI and mainnet.
I think this might be related to the fact that we only store the `last_handled_block` in storage if there is activity, and thus in times of little activity we accrue a very large range.
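One possible fix, sketched below with hypothetical names: persist `last_handled_block` on every run (even when no events were found) and derive the next query range from it, re-scanning only a small reorg-safety window behind it.

```rust
use std::ops::RangeInclusive;

/// Hypothetical sketch: compute the next block range to query from the
/// persisted last handled block, re-scanning a small window behind it to
/// stay safe against reorgs. Persisting the last handled block on every
/// run (even without activity) keeps this range tight.
fn next_event_range(last_handled: u64, current: u64, reorg_depth: u64) -> RangeInclusive<u64> {
    let from = (last_handled + 1).saturating_sub(reorg_depth);
    from..=current
}
```

With a 25-block safety window and an up-to-date `last_handled`, the queried range stays around 25 blocks instead of growing to thousands.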
Original issue gnosis/gp-v2-services#305 by @fleupold
It would be nice if solvers would allow you to unwrap WETH for ETH without paying gas. The order's fee amount would reimburse the solver for the gas fee.
Original issue gnosis/gp-v2-services#971 by @nlordell
Original issue gnosis/gp-v2-services#311 by @bh2smith
Add pagination to API for both trades and orders calls:
GET /api/v1/orders
GET /api/v1/trades
The API should accept an optional parameter `pageSize` stating how many results should be returned at a time. Defaults to 10.
The API should accept an optional parameter `page` stating which page is currently being requested. Defaults to 0.
`pageSize` must be an integer within the inclusive range 1 - 100.
When not set, set to a value outside of the range, or set to an invalid value (undefined, 1.3, gibberish, etc.), it should default to 10.
The API - to protect itself - should not allow force-requesting all DB values at once.
The response will probably have to change to include metadata regarding the pagination.
Instead of returning a flat list of results, it'll have to change to something like:
{
  "total": 2123,
  "page": 3,
  "pageSize": 50,
  "results": [{}, ...]
}
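The `pageSize` handling described above could look roughly like this (illustrative sketch; only the defaults and range follow the proposal):

```rust
/// Sketch of the proposed `pageSize` handling: default 10, and any
/// value outside 1..=100 (or unparseable, e.g. "1.3" or gibberish)
/// falls back to the default.
fn normalize_page_size(raw: Option<&str>) -> u32 {
    const DEFAULT: u32 = 10;
    match raw.and_then(|s| s.parse::<u32>().ok()) {
        Some(n) if (1..=100).contains(&n) => n,
        _ => DEFAULT,
    }
}
```

This keeps the protection property: even a malicious `pageSize=1000000` can never force-request the whole table.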
Discussion:
- Maximum `pageSize`: 100 is a conservative proposal.
- Default `pageSize`: no strong reason to keep it at 10.
Original issue gnosis/gp-v2-services#969 by @alfetopito
It's very difficult to filter out the logs with grep because of these multi-line print statements.
We see a lot of the following, and essentially only the following, with logs | grep -v DEBUG (which we would ideally not want to see).
Original issue gnosis/gp-v2-services#669 by @bh2smith
This is still somewhat of an open question, but for example, we will want to prevent deflationary tokens from being traded via the Balancer vault option, since such trades may pass simulation - but are known to still be a problem.
TODO - come up with other examples where the token detector needs to be adapted according to the new order model.
Original issue gnosis/gp-v2-services#903 by @bh2smith
It would be nice to have an about endpoint exposed by the service.
Some ideas of the info that could be exposed:
Gnosis Protocol API
A nice-to-have is the commit hash of the API.
Original issue gnosis/gp-v2-services#132 by @anxolin
To avoid multiple roundtrips to the backend and ensure a congruent fee for the given sell amount (after fee), it would be nice if the backend exposed an endpoint that, for a given token pair, order kind and amount, returns the following:
We could potentially repurpose https://protocol-rinkeby.dev.gnosisdev.com/api/#/default/get_api_v1_markets__baseToken___quoteToken___kind___amount_ for this, but this might break existing clients so we might need a new route.
Original issue gnosis/gp-v2-services#679 by @fleupold
Currently there is an implicit assumption that if a `SettlementEncoder` method fails, then it is in an undefined state and shouldn't be used.
The correct way to describe this with the type system would be to change the method signatures to be:
fn do_thing_that_can_fail(mut self, ...) -> Result<Self>
This way, if there is an error, we would no longer have access to the encoder.
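A minimal sketch of the consuming-`self` pattern (toy types, not the actual `SettlementEncoder`): on error the encoder is dropped along with the `Err`, so a half-mutated instance can never be reused.

```rust
#[derive(Debug, Default)]
struct Encoder {
    trades: Vec<u64>,
}

impl Encoder {
    // Taking `self` by value means a failing call consumes the encoder:
    // the caller only gets it back inside `Ok`, never in a broken state.
    fn add_trade(mut self, amount: u64) -> Result<Self, String> {
        if amount == 0 {
            return Err("zero-amount trade".to_string());
        }
        self.trades.push(amount);
        Ok(self)
    }
}
```

Chained usage then reads `Encoder::default().add_trade(1)?.add_trade(2)?`, and any error short-circuits with the encoder gone.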
Original issue gnosis/gp-v2-services#501 by @nlordell
With the introduction of the allowance manager, we should make it a cached component so that it can get shared and reused across various solvers without requiring repeated allowance retrievals and round-trips to the node.
Original issue gnosis/gp-v2-services#799 by @nlordell
The following settlement failed to simulate: https://dashboard.tenderly.co/gp-v2/staging/simulator/289cb4ce-9aef-4099-ac58-8c2fb26f0b9c
because of an invariant in the Miracle Token Contract:
require(amount < rTotal);
amount: 11074246714981099536953
rTotal: 1000000000000000000
I assume the bad token detector did not catch this because we test transferring a fixed amount that happened to be below the `rTotal` value.
This issue would likely be also addressed if #858 was implemented.
Original issue gnosis/gp-v2-services#870 by @fleupold
Our current API returns all orders in one request. When there is a large number of orders this is not feasible, so we likely want to paginate this API.
Original issue gnosis/gp-v2-services#101 by @e00E