
eigenlayer-middleware's People

Contributors

0x0aa0, 8sunyuan, chaoticwalrus, gpsanant, karmacoma-eth, ronturetzky, samlaf, stevennevins, tudorpintea999, wadealexc, ypatil12


eigenlayer-middleware's Issues

Use the relative import path

Context

When importing the repo as a submodule in Foundry, Solidity cannot resolve the import path.
For example, here
When compiling the parent project, the compiler tries to resolve the absolute path import "src/VoteWeigherBaseStorage.sol"; against the main repo and fails.
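
As a consumer-side workaround (an assumption on my part, not this repo's documented fix), the parent Foundry project can add a remapping that points the src/ prefix at the submodule, assuming it is checked out at lib/eigenlayer-middleware:

    # foundry.toml of the consuming project (hypothetical layout)
    [profile.default]
    remappings = ["src/=lib/eigenlayer-middleware/src/"]

Note that this clashes if the parent project also imports its own files via a src/ prefix, which is why switching this repo to relative imports would be the cleaner fix.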

Test: StakeRegistry/VoteWeigher unit tests

Description

Consolidate the VoteWeigherBaseUnit.t.sol tests into StakeRegistryUnit.t.sol. With the new middleware contract changes, the two are now a single contract.

Action Items

Cleanup: unify typing of state variables in StakeRegistryStorage

/// @notice In order to register for a quorum i, an operator must have at least `minimumStakeForQuorum[i]`
/// evaluated by this contract's 'VoteWeigher' logic.
uint96[256] public minimumStakeForQuorum;
/// @notice array of the history of the total stakes for each quorum -- marked as internal since getTotalStakeFromIndex is a getter for this
StakeUpdate[][256] internal _totalStakeHistory;

This is the only place in the middleware contracts where we use [256] to represent quorum information. I'd like to replace this with a mapping for consistency. I don't think it actually impacts a single thing, but if this ends up breaking things it's fine to keep as-is.
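
A minimal sketch of the proposed change, assuming a StakeUpdate struct along the lines of the one defined in IStakeRegistry (illustrative only, not a drop-in diff):

    pragma solidity ^0.8.12;

    contract StakeRegistryStorageSketch {
        // Illustrative stand-in for the struct defined in IStakeRegistry
        struct StakeUpdate {
            uint32 updateBlockNumber;
            uint32 nextUpdateBlockNumber;
            uint96 stake;
        }

        /// @notice Minimum stake an operator must have to register for quorum `quorumNumber`
        mapping(uint8 => uint96) public minimumStakeForQuorum;

        /// @notice History of the total stakes for each quorum -- internal, exposed via a getter
        mapping(uint8 => StakeUpdate[]) internal _totalStakeHistory;
    }

One thing to watch: the auto-generated getter's argument type changes from uint256 (array index) to uint8 (mapping key), which is exactly the kind of external-interface change that could "break things" as noted above.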

Suggestion: store the registryCoordinator address in the OperatorStateRetriever

Currently, every function in OperatorStateRetriever takes the address of the registryCoordinator it should call. The idea behind this was that we could deploy a single OperatorStateRetriever to be shared by all AVSs.

Unfortunately the reality we are seeing is that most AVSs are slightly tweaking the registry contracts, which makes me fear (don't have empirical data yet) that the operatorStateRetriever wouldn't work for them, and they'd have to update it to reflect their contract changes and deploy it by themselves.

Given this, I feel like it would be easier to just have each team deploy their own OSR and store the registryCoordinator address in it. This would remove the need to pass the registryCoordinator when calling it, but also (most importantly) make things easier for the offchain configs, which would then only need to track a single address, that of the OSR. Currently, inc-sq and other AVSs require passing both the RegistryCoordinator AND the OSR. If the OSR points to the RC, we would only need to pass the address of the OSR when starting an offchain node, and we could retrieve all other contracts from that.
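
A hedged sketch of the suggestion, assuming the IRegistryCoordinator interface already used elsewhere in this repo (the contract name and commented function are illustrative):

    pragma solidity ^0.8.12;

    import {IRegistryCoordinator} from "src/interfaces/IRegistryCoordinator.sol";

    contract OperatorStateRetrieverWithCoordinator {
        /// @notice The single coordinator this retriever serves, fixed at deployment
        IRegistryCoordinator public immutable registryCoordinator;

        constructor(IRegistryCoordinator _registryCoordinator) {
            registryCoordinator = _registryCoordinator;
        }

        // The existing view functions would drop their registryCoordinator parameter
        // and read the immutable above instead, e.g.
        // getOperatorState(bytes calldata quorumNumbers, uint32 blockNumber).
    }

Offchain configs would then only need the OSR address; the RC (and, through it, the registries) can be read back via registryCoordinator().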

Track network deployment addresses

We want to track which contracts have been deployed to which networks, so a configuration diff to the deployment script will be needed.

IndexRegistry does not update `currentOperatorIndex` when an operator is deregistered, in certain cases

Came across this while working on documentation:

uint32 newOperatorCount = _decreaseOperatorCount(quorumNumber);
bytes32 lastOperatorId = _popLastOperator(quorumNumber, newOperatorCount);
if (operatorId != lastOperatorId) {
    _assignOperatorToIndex({
        operatorId: lastOperatorId,
        quorumNumber: quorumNumber,
        index: indexOfOperatorToRemove
    });
}

If operatorId == lastOperatorId, we never assign currentOperatorIndex[operatorId].

To fix this, we can change _popLastOperator or add an else condition.
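
A hedged sketch of the "add an else condition" option, reusing the names from the snippet above; the currentOperatorIndex access assumes a (quorumNumber => operatorId => index) layout, so this is not a drop-in patch:

    if (operatorId != lastOperatorId) {
        // Move the last operator into the vacated index
        _assignOperatorToIndex({
            operatorId: lastOperatorId,
            quorumNumber: quorumNumber,
            index: indexOfOperatorToRemove
        });
    } else {
        // The deregistering operator was already last in the list: clear its entry
        // so currentOperatorIndex is not left pointing at a stale slot
        delete currentOperatorIndex[quorumNumber][operatorId];
    }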

Rotate Bls keys PR missing historical storage for PubKeyHashToOperator

Hi,

Reference PR:
#249

How is the operatorStateRetriever's getOperatorFromPubkeyHash supposed to work with the rotate BLS keys PR?
In the current state of the PR, won't it get the latest operator mapped to the pubkey hash, and not the operator that was using that pubKeyHash at the block the operatorStateRetriever is interested in?

Generally, there seems to be missing historical storage for PubKeyHashToOperator in the BlsApkRegistry...

Also, in the current state of the PR, multiple operators, at different times, may use the same BLS keys? Is this intentional? Wouldn't this be a problem if we wanted to know which operator to slash upon misbehavior of a pubKeyHash (operatorId), especially since we don't have historical storage?

I am concerned because I am implementing something that requires me to store a snapshot of the (relevant bits of the) registry contracts, and I was wondering if I should use the operator address or the operatorId as an identifier.
The current state of the PR suggests that I should use the operator address. But since the operatorStateRetriever's getOperatorFromPubkeyHash issue will remain, you may end up introducing the missing historical storage for PubKeyHashToOperator anyway.
It would be nice if you could comment on this (operator address vs operatorId) given EigenLayer's design philosophy/future decisions.

Thank you

Test: Create a Deploy/Setup base for middleware unit testing

Description

This is an optional refactor: unit tests currently inherit from MockAVSDeployer and BLSMockAVSDeployer, but we can try to create a better testing deployment setup.
For reference, in the core repo we have the util contracts EigenLayerUnitTestBase and EigenLayerUnitTestSetup, and we follow the testing guide.

Action Items

  • Create a new base setup/deploy contract similar to MockAVSDeployer and BLSMockAVSDeployer
  • Get input from team on setting up the deployment/helper functions
  • Refactor existing unit tests to inherit from new contract
  • Create draft PR and get review

AVS cannot guarantee transaction faster than 12 seconds

This is more a question for design clarification

Describe the bug
While running tests of the Demo (Incredible Squaring AVS) on a PoS devnet, I hit the invalid reference block error (more in Layr-Labs/incredible-squaring-avs#61).

It is triggered because the check enforces that a block has to be from the past. In the real Ethereum network, a block is a unit of time of around 12 seconds. If I understand correctly, every AVS can fail the same way the Demo did in my tests if confirmation of the task comes within a period shorter than a block.
That might not be a bug, but rather a feature/design choice (e.g. because in real life/production, a task won't be dealt with in less than 12 seconds).

To Reproduce

  • Deploy iv1 (or other pos-devnet with right version of the EigenLayer contracts) - https://github.com/ivy-net/iv1
  • Deploy the Demo AVS: in https://github.com/Layr-Labs/incredible-squaring-avs run the following commands (cd contracts ; forge script script/IncredibleSquaringDeployer.s.sol --rpc-url http://localhost:8545 --broadcast --unlocked --sender 0x123463a4b065722e99115d6c222f267d9cabb524)
  • Top up the operator: make DEPLOYER_PRIVATE_KEY=0x2e0834786285daccd064ca17f1654f67b4aef298acbb82cef9ec422fb4975622 CHAINID=32382 send-fund
  • Start the aggregator: make CHAINID=32382 start-aggregator
  • Start the operator: make CHAINID=32382 start-operator

Expected behavior
Tasks should be validated in the same block

Environment
https://github.com/ivy-net/iv1

Potential solutions

  • Low priority and not a lot of work

Middleware contracts should publish an image containing smart contracts for integration testing

Currently, integration testing is done by using the Foundry docker image and deploying all the contracts (EigenLayer contracts and middleware contracts): https://github.com/Layr-Labs/eigensdk-go/blob/67be9ea97fb92a3ee090ab12b56e34e97010a76d/cmd/egnaddrs/main_test.go#L50

But it would be great if the middleware repo could publish a docker image of the deployed contract state, which any repository (incredible-squaring, eigenlayer-cli) can use as a base image to test onchain interactions.

AC

  • create design doc to make sure we're all on the same page
  • get people to review the design doc
  • publish a docker image with the deployed EigenLayer core contracts + EigenLayer middleware contracts

StakeRegistry: fix inaccurate naming

From Dedaub audit, item A5:

The name of the function (and its argument) below is slightly misleading:
StakeRegistry::_validateOperatorStakeUpdateAtBlockNumber:445

function _validateOperatorStakeUpdateAtBlockNumber(
    StakeUpdate memory operatorStakeUpdate,
    uint32 blockNumber
) internal pure { … }

The function is used to validate StakeUpdate entries not just of the operatorStakeHistory mapping but also of the totalStakeHistory mapping.

Perf: can do better than Kernighan's algorithm in BitmapUtils' `countNumOnes` function

Here is the current function:

    /// @return count number of ones in binary representation of `n`
    function countNumOnes(uint256 n) internal pure returns (uint16) {
        uint16 count = 0;
        while (n > 0) {
            n &= (n - 1); // Clear the least significant bit (turn off the rightmost set bit).
            count++; // Increment the count for each cleared bit (each one encountered).
        }
        return count; // Return the total count of ones in the binary representation of n.
    }

It is linear in the number of bits set, so it's very efficient when the bitmap is empty or very sparse, but gets very inefficient for dense bitmaps.

Solady implements an O(1) algorithm, inspired by https://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel

I ran a simple test to figure out where the breakeven is:

  solady_countNumOnes(0): 158
  solady_countNumOnes(UINT256_MAX): 201

  eigen_countNumOnes(0): 73
  eigen_countNumOnes(1): 279
  eigen_countNumOnes(UINT256_MAX): 52852

Based on this, it seems that the current version of countNumOnes is only more efficient for the empty bitmap, and can cost up to 50k+ gas more than the O(1) version.
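
For reference, a minimal sketch of the constant-time approach, generalized to 256 bits from the "counting bits set in parallel" technique on the Stanford bithacks page (illustrative only, not Solady's assembly implementation):

    pragma solidity ^0.8.12;

    library BitmapUtilsSketch {
        /// @return count of ones in the binary representation of `n`, computed in constant time
        function countNumOnesParallel(uint256 n) internal pure returns (uint16) {
            unchecked {
                // The byte-summing multiply below wraps when the count is exactly 256,
                // so handle the all-ones input separately.
                if (n == type(uint256).max) return 256;
                uint256 m1 = 0x5555555555555555555555555555555555555555555555555555555555555555;
                uint256 m2 = 0x3333333333333333333333333333333333333333333333333333333333333333;
                uint256 m4 = 0x0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f;
                uint256 h01 = 0x0101010101010101010101010101010101010101010101010101010101010101;
                n = n - ((n >> 1) & m1);         // 2-bit partial counts
                n = (n & m2) + ((n >> 2) & m2);  // 4-bit partial counts
                n = (n + (n >> 4)) & m4;         // per-byte counts
                return uint16((n * h01) >> 248); // sum all byte counts into the top byte
            }
        }
    }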

Add a flag to turn off churner

Some AVS teams don't want to run a churner, either because they don't want to deal with the offchain complexity, or because they want permissionless ejection (an operator entering can kick any operator out, as long as the operator getting ejected has stake < the average operator stake).

Could we add a feature flag requireChurnerSignatures or something in the contract so that the ejection logic wouldn't require a churner signature when the feature flag is set to false?
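
A hedged sketch of what such a flag could look like; the names (requireChurnerSignatures, _verifyChurnApproverSignature) are assumptions, not the repo's actual API:

    pragma solidity ^0.8.12;

    abstract contract ChurnFlagSketch {
        /// @notice When false, registration-with-churn paths skip the churn approver signature check
        bool public requireChurnerSignatures = true;

        function _setRequireChurnerSignatures(bool _requireChurnerSignatures) internal {
            requireChurnerSignatures = _requireChurnerSignatures;
        }

        function _checkChurnApproval(bytes memory churnApproverSignature) internal view {
            if (requireChurnerSignatures) {
                _verifyChurnApproverSignature(churnApproverSignature);
            }
            // else: permissionless ejection, still subject to the stake < average-operator-stake rule
        }

        function _verifyChurnApproverSignature(bytes memory signature) internal view virtual;
    }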

BLSSignatureChecker::checkSignatures - Prevent current block from being used as `referenceBlockNumber`

From Middleware audit, items P1, M1, and L2:

Although BLSSignatureChecker::checkSignatures checks that the reference block number is not in the future, the current block (i.e., the next to be produced) is accepted as reference:

function checkSignatures(
	bytes32 msgHash,
	bytes calldata quorumNumbers,
	uint32 referenceBlockNumber,
	NonSignerStakesAndSignature memory params
){
	...
	require(referenceBlockNumber <= uint32(block.number), "BLSSignatureChecker.checkSignatures: invalid reference block");

As a consequence, users of eigenlayer-middleware could be tempted to submit confirmations for the current block, in an attempt to provide them as early as possible. This practice, however, is risky, as it creates race conditions between the signature check and other updates that might be performed in the same block.

If an update happens in the same block before checkSignatures, it might violate some of the data included in the check, such as the APK or the stake history indexes, causing the signature check to fail. In issues M1 and L2 we discuss in detail two scenarios in which such a race condition could be caused maliciously to achieve a DoS attack.

Similarly, an update might happen in the same block but after checkSignatures. In this case, the signature check will succeed, but there will be an inconsistency between the data being checked and the information stored for that block, which could have undesirable consequences for the protocol. For instance, the signatoryRecord hashed and stored by EigenDAServiceManager::confirmBatch could be referring to a block with data inconsistent with the globally-maintained data about that block. To reconstruct records, one would need to replay past transactions.

As a consequence we recommend strengthening the above require to ensure that only blocks strictly in the past can be used as reference. Alternatively, AVS implementations should be careful to ensure they can tolerate such races and apparent-inconsistency in records.
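
A sketch of the strengthened check, keeping the revert message style of the excerpt above:

    require(
        referenceBlockNumber < uint32(block.number),
        "BLSSignatureChecker.checkSignatures: invalid reference block"
    );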

Chore: Remove Stake Update Functionality from ServiceManager

Can we remove all stakeUpdate-related functionality from the IServiceManager interface? We can keep the `freezeOperator` call since that one is unlikely to change, but all the rest seems likely to change.

We will then need to update the ServiceManagerBase to remove all of these recordStakeUpdate proxy calls. We want this in order to release incredible-squaring and have a consistent story to tell when trying to onboard AVSs onto our testnet.

Originally posted by @samlaf in #29 (review)

BLSSignatureChecker::checkSignatures - document that `msgHash` parameter MUST be a hash, because it's otherwise treated as a point in G1

From Middleware audit, item P2:

Clients of EigenLayer-middleware will call its main validation function, checkSignatures.

Clients should be careful to ensure that argument msgHash is indeed a hash (i.e., collision-resistant), and not directly controlled by an untrusted party. If the latter were to happen, multiple purported signed messages (likely garbage) could be validated. The reason is that the argument is being passed to function trySignatureAndApkVerification and eventually to hashToG1, which (despite its name) is not a collision-resistant hash function: it directly applies a modulo operation to its argument, before attempting to find a group element.
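
For illustration, a caller-side pattern that satisfies this requirement (taskResponse and blsSignatureChecker are placeholder names):

    // Always derive msgHash with a collision-resistant hash of the content being attested to
    bytes32 msgHash = keccak256(abi.encode(taskResponse));
    blsSignatureChecker.checkSignatures(msgHash, quorumNumbers, referenceBlockNumber, params);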

Test: Unit Test Coverage for AVS/Middleware Contracts

Description

Master ticket covering achieving good unit test coverage for all contracts in this repo.

Action Items

TBD, needs to be broken down into smaller items (at least per-contract)

Blocking Issues

N/A

Test: RegistryCoordinator unit tests

Description

There has been a lot of refactoring in this contract in particular; we could also implement these tests after the changes involving the removal of the PubkeyCompendium and the new operator registration flow. Mock contracts for the registry contracts should also be set up for testing functions here such as _registerOperator, _deregisterOperator, and updateOperators.

Action Items

StakeRegistry.modifyStrategyParams does NOT enforce the same conditions on multipliers enforced by `addStrategies`

_addStrategyParams requires that multipliers are nonzero:

require(
    _strategyParams[i].multiplier > 0,
    "StakeRegistry._addStrategyParams: cannot add strategy with zero weight"
);

modifyStrategyParams does not have this check:

for (uint256 i = 0; i < numStrats; i++) {
    // Change the strategy's associated multiplier
    _strategyParams[strategyIndices[i]].multiplier = newMultipliers[i];
    emit StrategyMultiplierUpdated(quorumNumber, _strategyParams[strategyIndices[i]].strategy, newMultipliers[i]);
}

Not a huge deal, but something we should consider changing for consistency.
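
A hedged sketch of adding the same nonzero check to the modifyStrategyParams loop shown above (the revert string is illustrative):

    for (uint256 i = 0; i < numStrats; i++) {
        require(
            newMultipliers[i] > 0,
            "StakeRegistry.modifyStrategyParams: cannot set multiplier to zero"
        );
        // Change the strategy's associated multiplier
        _strategyParams[strategyIndices[i]].multiplier = newMultipliers[i];
        emit StrategyMultiplierUpdated(quorumNumber, _strategyParams[strategyIndices[i]].strategy, newMultipliers[i]);
    }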

Test: integration/fuzz tests

Description

Core-style integration testing, deploying a full AVS and mocking as little as possible, then running through various scenarios and fuzzing some of the more complicated pathways.

Action Items

  • Build deployment/test setup
  • Identify some scenarios or specific paths in the code that should be tested
  • Build integration test framework a la core contracts
  • Write tests
  • PR

Solidity Compilation Error on Importing BLSApkRegistry: Base Must Precede Derived Contract

Describe the bug

An error occurs during the compilation of Sample.sol, specifically after importing BLSApkRegistry.

Sample.sol

// SPDX-License-Identifier: UNLICENSED
pragma solidity =0.8.12;

import {BLSApkRegistry} from "eigenlayer-middleware/src/BLSApkRegistry.sol";

Compilation Error

Compiler run failed:
Error (2449): Definition of base has to precede definition of derived contract
  --> src/interfaces/IStakeRegistry.sol:13:29:
   |
13 | interface IStakeRegistry is IRegistry {
   |                             ^^^^^^^^^

Detailed error logs and context can be found in the GitHub Actions workflow at this link.

To Reproduce
Steps to reproduce the behavior:

  1. git clone https://github.com/ny-alt/sample-fe3ae2.git
  2. forge compile

Expected behavior
The expected behavior is that the Solidity project should compile without errors.

fix: pinned eigenlayer-contracts version is broken

Describe the bug
The checked-out commit of the eigenlayer-contracts submodule's script M2_Deploy_From_Scratch.s.sol does not work. The issue is that it references the old folder script/testing in line 103. This has been fixed by commit 41b4361 on the eigenlayer-contracts repo.

To Reproduce
Steps to reproduce the behavior:

  1. Clone eigenlayer-middleware repo on dev branch or release tags v0.1.3-mainnet-m2+pragma-change or v0.1.3-mainnet-m2
  2. Start anvil
  3. Enter lib/eigenlayer-contracts
  4. Run forge script script/testing/M2_Deploy_From_Scratch.s.sol --rpc-url $RPC_URL --private-key $PRIVATE_KEY --broadcast --sig "run(string memory configFile)" -- M2_deploy_from_scratch.anvil.config.json
  5. See error "File not found"

Expected behavior
EigenLayer contracts should be deployed to anvil.

Environment

  • dev branch
  • v0.1.3-mainnet-m2
  • v0.1.3-mainnet-m2+pragma-change

Add explanations above commented out slasher integration code

We decided to leave slasher integration out of the upcoming M2 testnet deployment.
However, we have only reflected that choice by commenting out all the places where slasher calls are made.
Some of them have a TODO comment, for example:

// TODO after slashing enabled: record stake updates in the EigenLayer Slasher

Others are commented out with no explanatory comment at all, such as

Could we be consistent about these messages? It would be nice to have a single standardized message. This would help code readability, make it easier to find all these instances when we do want to re-enable the slasher, and make it easier for the AVS team to communicate the choices currently being made to external AVS devs.

Bug: register -> complete deregister -> register

This bug is on the critical path to shipping EigenDA: please fix ASAP.

Steps to reproduce:

  • register operator with quorum 0
  • deregister operator with quorum 0 (full deregistration)
  • register operator with quorum 0
    Get bug (from this line):
"BLSRegistryCoordinatorWithIndices._registerOperatorWithCoordinator: operator already registered for some quorums being registered for"

The problem is that complete deregistration doesn't push a new update to the _operatorIdToQuorumBitmapHistory mapping (compare to the partial deregistration branch), and the register function checks the previous quorum bitmap update, so it actually lands on the quorum bitmap from when the operator was registered.

The easiest way to fix this is probably just to push an update even when doing a full deregistration, but there might be another (better?) way to do this, so I'm opening this issue.
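
A hedged sketch of the "push an update on full deregistration" option, assuming a QuorumBitmapUpdate struct like the one used by the coordinator; the history-length bookkeeping is illustrative, not a drop-in patch:

    // Close out the operator's current bitmap entry and record an explicit empty bitmap,
    // mirroring what the partial-deregistration branch does
    uint256 historyLength = _operatorIdToQuorumBitmapHistory[operatorId].length;
    _operatorIdToQuorumBitmapHistory[operatorId][historyLength - 1].nextUpdateBlockNumber = uint32(block.number);
    _operatorIdToQuorumBitmapHistory[operatorId].push(
        IRegistryCoordinator.QuorumBitmapUpdate({
            updateBlockNumber: uint32(block.number),
            nextUpdateBlockNumber: 0,
            quorumBitmap: 0 // fully deregistered: no quorums remain
        })
    );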

Action Items

  • decide on best way to fix this bug
  • fix bug
  • write a new test to test register->deregister->register

Scoping: Decide whether or not we are removing the “global operator list” in the IndexRegistry for M2 mainnet

Description

At present, the IndexRegistry pushes entries to a duplicate-heavy "global operator list" defined here.
This was initially introduced as a solution to remove a requirement for off-chain indexers.
The data is intended to be used only off-chain, and does not appear to have any on-chain usage at this time.
It's an open question if/when this functionality will be removed -- this ticket focuses on whether or not it will be removed prior to the M2 mainnet release.

Action Items

  • Prepare doc explaining decision to be made
  • Meeting to reach / ratify decision
  • Share decision with any impacted parties
  • Tickets for any next steps (if applicable)

Blocking Issues

Not strictly blocked. Actually removing this behavior will be quick if we decide to get rid of it, so it's fine to procrastinate on this decision for a while until more context is available.

Check for stale stakes

registryCoordinator.quorumUpdateBlockNumber(uint8(quorumNumbers[i])) + withdrawalDelayBlocks >= referenceBlockNumber,

Hi,
Here the check assumes that the update was triggered within RegistryCoordinator::updateOperatorsForQuorum, but stakes could also be updated from RegistryCoordinator::updateOperators. Shouldn't this be checked against StakeRegistry::operatorStakeHistory?

allowance issue with range payment

Describe the bug
There seems to be an issue with how the allowance is set: the allowance can be overwritten by a subsequent range payment.

    function payForRange(
        IPaymentCoordinator.RangePayment[] calldata rangePayments
    ) public virtual onlyOwner {
        for (uint256 i = 0; i < rangePayments.length; ++i) {
            // transfer token to ServiceManager and approve PaymentCoordinator to transfer again
            // in payForRange() call
            rangePayments[i].token.transferFrom(msg.sender, address(this), rangePayments[i].amount);
            rangePayments[i].token.approve(address(_paymentCoordinator), rangePayments[i].amount); // <---- approve bug exists here
        }

        _paymentCoordinator.payForRange(rangePayments);
    }

For example, suppose I am paying for two ranges:

Range 0
token = TEST Token
amount = 10000
startTimestamp = 1000
duration = 100

Range 1
token = TEST Token
amount = 10
startTimestamp = 1100
duration = 100

Issue
rangePayments[i].token.approve(address(_paymentCoordinator), rangePayments[i].amount);

After the first loop iteration, the TEST token allowance for the contract is set to 10000.
After the second iteration, the TEST token allowance for the contract is set to 10.
The expected behavior is that the token allowance is set to 10010 instead of 10.

Note: This issue can happen in many different cases: looping over rangePayments for different strategies, different reward windows, etc. Basically, as long as you use the same ERC20 token and loop more than once, the issue will exist.

When the following is called, it will have insufficient allowance to transfer the TEST token:
_paymentCoordinator.payForRange(rangePayments);

To fix this, we will very likely need to (see the sketch below):

  • Get the current allowance
  • Set the new allowance = current allowance + the new payment amount
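
A hedged sketch of that fix applied to the loop above, reading back the existing allowance before approving (illustrative, not a drop-in diff):

    for (uint256 i = 0; i < rangePayments.length; ++i) {
        rangePayments[i].token.transferFrom(msg.sender, address(this), rangePayments[i].amount);
        // Accumulate instead of overwrite, so repeated entries for the same token
        // do not clobber each other's allowance
        uint256 currentAllowance = rangePayments[i].token.allowance(address(this), address(_paymentCoordinator));
        rangePayments[i].token.approve(address(_paymentCoordinator), currentAllowance + rangePayments[i].amount);
    }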

