
Comments (5)

iamalwaysuncomfortable commented on August 16, 2024

A runtime benchmark table for opcodes + pricing table is in the works.

from snarkvm.

vicsn commented on August 16, 2024

EDIT: still syncing with Mike on this.

Hi @pranav and @howard, after syncing with Mike today, we believe we need the following adjustments; let us know if you agree with the direction:
1. Set BLOCK_SPEND_LIMIT to a number of credits equal to the price of 7.5 seconds of finalize execution.
- Suggestion: Mike will continue his analysis on this, but based on our current model it is likely above 100 credits.
2. Limit the number of bytes hashed in finalize blocks, as defense in depth.
- Suggestion: 4 million bytes, as BHP-hashing 4 million bytes takes 2 seconds in Mike's benchmarks.
3. Set a TRANSACTION_SPEND_LIMIT, for UX reasons, so a single transaction can't eat up all the finalize space.
- Suggestion: BLOCK_SPEND_LIMIT / 100.
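The three proposed limits could be sketched roughly as below. All names and concrete values here are illustrative assumptions (the real credit amounts are pending Mike's analysis, and the microcredit denomination is assumed), not snarkVM's actual constants:

```rust
// Hypothetical constants for the three proposed limits. Values are
// placeholders from the discussion above, not snarkVM's fee schedule.
const BLOCK_SPEND_LIMIT: u64 = 100_000_000; // assumed: >100 credits, in microcredits
const MAX_BYTES_HASHED: u64 = 4_000_000; // ~2s of BHP hashing per the benchmarks
const TRANSACTION_SPEND_LIMIT: u64 = BLOCK_SPEND_LIMIT / 100;

/// Does a single transaction fit within the per-transaction limits?
fn transaction_within_limits(cost: u64, bytes_hashed: u64) -> bool {
    cost <= TRANSACTION_SPEND_LIMIT && bytes_hashed <= MAX_BYTES_HASHED
}

fn main() {
    assert!(transaction_within_limits(500_000, 1_024));
    assert!(!transaction_within_limits(TRANSACTION_SPEND_LIMIT + 1, 0));
}
```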


iamalwaysuncomfortable commented on August 16, 2024

Basically, as currently priced in the SnarkVM code: opcodes are priced such that 1 second of runtime costs 100 credits. This was future-proofing to ensure that, down the line, people can reasonably afford to use hash functions. However, governance will be able to adjust prices to keep the network usable, so at the outset hash function prices can be higher, since they're the most abusable opcodes.

Raising the per-byte cost reasonably prices out high-byte inputs like nested arrays, but it still leaves open the ability to spend much less money to extend the block time by issuing a LOT of hashes over small-byte inputs like field elements. Limiting the number of hashes per execution doesn't solve this, as people could send many executions at the maximum limit and achieve the same result.
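A toy cost model (all numbers here are illustrative assumptions, not snarkVM's actual fee schedule) makes the asymmetry concrete: with a base-plus-per-byte price, a flood of small-input hashes costs far less than one large hash, even though both consume real runtime:

```rust
// Toy hash pricing: a fixed base fee plus a per-byte fee.
// The base and per_byte values below are made up for illustration.
fn hash_cost(base: u64, per_byte: u64, bytes: u64) -> u64 {
    base + per_byte * bytes
}

fn main() {
    let (base, per_byte) = (1_000, 100);
    // One 4 MB hash vs. 10,000 hashes of a 32-byte field element.
    let big = hash_cost(base, per_byte, 4_000_000);
    let flood = 10_000 * hash_cost(base, per_byte, 32);
    // The flood is dramatically cheaper, so a per-byte price alone
    // doesn't deter many small hashes; only the base price does.
    assert!(flood < big / 5);
}
```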

In my opinion, what would be sensible is:

  1. Increase the base price of all the hash opcodes 10x (at least) at network outset, so users trying to flood many hash executions hit the BLOCK_SPEND_LIMIT much faster, before they can slow down block times. It would also discourage people from attempting the attack (as well as excessive hash usage from poor finalize-scope design) because of how expensive it is. I'm convinced this is a measure that should be taken.

  2. A TRANSACTION_SPEND_LIMIT, or a MAX_BYTES_HASHED per execution, would implicitly limit single programs from using a lot of hashes, which would be helpful, but it wouldn't limit the flood case.

  3. Potentially put a hash limit per block, or limit the total number of bytes hashed per block. This is the weirder option, but it would definitively stop validators from having to process an amount of hash-function usage that slows the network.
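Option 3 above could look like the following sketch: a running per-block budget of hashed bytes, charged per execution. The struct name and the 4 MB cap are assumptions taken from the figures discussed earlier in this thread:

```rust
// Assumed cap: ~2 seconds of BHP hashing per the earlier benchmarks.
const MAX_BYTES_HASHED_PER_BLOCK: u64 = 4_000_000;

/// Hypothetical per-block accounting of total bytes hashed.
struct BlockHashBudget {
    bytes_hashed: u64,
}

impl BlockHashBudget {
    fn new() -> Self {
        Self { bytes_hashed: 0 }
    }

    /// Charge an execution's hashed bytes against the block budget.
    /// Returns false (reject or defer the execution) once the cap is hit.
    fn try_charge(&mut self, bytes: u64) -> bool {
        match self.bytes_hashed.checked_add(bytes) {
            Some(total) if total <= MAX_BYTES_HASHED_PER_BLOCK => {
                self.bytes_hashed = total;
                true
            }
            _ => false,
        }
    }
}

fn main() {
    let mut budget = BlockHashBudget::new();
    assert!(budget.try_charge(3_000_000));
    assert!(!budget.try_charge(2_000_000)); // would exceed the 4 MB cap
    assert!(budget.try_charge(1_000_000)); // still fits exactly
}
```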


d0cd commented on August 16, 2024

On #2, does the BLOCK_SPEND_LIMIT address the flood case?
The total spent in a block is the sum of the cost of the accepted and rejected executions.
If an execution's cost would push the running total over BLOCK_SPEND_LIMIT, the execution is aborted.
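A minimal sketch of that accounting, assuming a 750-credit limit expressed in microcredits (the function name and representation are illustrative, not snarkVM's API):

```rust
// Assumed: 750 credits, denominated in microcredits.
const BLOCK_SPEND_LIMIT: u64 = 750_000_000;

/// Walk the candidate executions in order, accumulating cost (accepted
/// and rejected executions both count). Any execution whose cost would
/// push the running total past the limit is aborted instead.
fn select_executions(costs: &[u64]) -> (Vec<usize>, Vec<usize>) {
    let mut total = 0u64;
    let (mut included, mut aborted) = (Vec::new(), Vec::new());
    for (i, &cost) in costs.iter().enumerate() {
        if total + cost <= BLOCK_SPEND_LIMIT {
            total += cost;
            included.push(i);
        } else {
            aborted.push(i);
        }
    }
    (included, aborted)
}

fn main() {
    let costs = [400_000_000, 300_000_000, 100_000_000];
    let (included, aborted) = select_executions(&costs);
    assert_eq!(included, vec![0, 1]); // 700M total fits under 750M
    assert_eq!(aborted, vec![2]); // would bring the total to 800M
}
```

Because the flood's many executions all draw from the same block-level budget, the limit caps total work per block regardless of how the spend is split across transactions.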


iamalwaysuncomfortable commented on August 16, 2024

@d0cd the conclusion I've come to is that a sufficiently low BLOCK_SPEND_LIMIT (somewhere around 750, as you've suggested) plus a 20x increase of the current hash opcode prices should be sufficient to prevent DoS attacks via hash opcodes (and also has the benefit of being the simplest option).

I'd recommend we go with this route.

