tromp commented on August 28, 2024

dlongley commented on August 28, 2024

No; it's not. It would require a lot of expertise to figure out
appropriate settings to make it work. And it may still require actual
source code changes.

Thanks for the response! Good to know.

What exactly are your requirements for running time and memory use?
Upper bounds? Lower bounds?

We have a target run time of 10 hours +/- 2 hours. Memory requirements are largely dependent on the goals of enabling the use of commodity hardware and preventing centralized systems with specialized hardware from circumventing or reducing the spam protection in a negative way. We've estimated 1GB-2GB but welcome other expert advice. The time/memory tradeoffs of Cuckoo Cycle here are also attractive (~order of magnitude less memory => ~order of magnitude slower).

And what aggregate proof size is acceptable or desirable?

Acceptable is 1KB with 2KB pushing it. Desirable is 0! :)

My understanding is that the proof sizes are under 200 bytes, so naively chaining roughly five of them together would be acceptable.
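
For a rough size check (purely back-of-the-envelope, and assuming a 42-cycle proof is encoded as raw 4-byte edge indices, which may not match the actual encoding):

```typescript
// Back-of-the-envelope proof sizing; the 4-byte edge index encoding is an
// assumption, not the actual serialization format.
const cycleLength = 42;
const bytesPerEdgeIndex = 4;
const proofBytes = cycleLength * bytesPerEdgeIndex; // 168 bytes per proof
const chainLength = 5;
console.log(chainLength * proofBytes);              // 840 bytes, under 1KB
```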

Do you think that implementing a difficulty factor (as described in the whitepaper) would be the way to go here?

tromp commented on August 28, 2024

dlongley commented on August 28, 2024

Is that on a modern Intel CPU with 4 or more threads?
There will be some slowdown on systems without AVX2 (and they need to
run a different executable).

There's no assumption that it will be on a modern Intel CPU with 4 threads. But given our use case and the long target time, we don't think being off by a couple of hours is a problem. It's effectively: "Start your PoW and go to sleep. It will be ready in the morning." There is no direct reward for being the fastest to solve a proof, and the only potential indirect reward must be balanced against an alternative instant authorization mechanism for writing data to the ledger. I describe that a bit below ... I'm trying to keep it short and relevant to Cuckoo Cycle but some context may be required. :)

How bad is parallelism for your use case?

Well, it somewhat depends on the specific PoW. Some more context:

  1. The data that is written to the ledger is essentially a set of public keys and service information (see: DID Spec if you're interested). We do not expect significantly heavy writes in the same way you see with, for example, cryptocurrency ledgers.

  2. Obtaining authorization to write to the ledger can happen through two mechanisms: PoW or an authorization obtained from a third party that has agreed to pay X USD per byte written to the ledger to a non-profit organization. That non-profit pays money to the entities that run elector nodes in the ledger and to the software maintainers. We expect users to pay Y USD to these third parties to get authorization (or become a third party themselves).

These two methods are essentially in conflict with one another -- the former starves the ledger of its funding and the latter replenishes it. The PoW is to ensure that permissionless writes are always possible, however, the goal is for the majority of writes to go through an entity that pays to help support the ledger. The system intentionally does not involve any tokens to avoid speculation and the debt incurred by ICO models. The goal is to create a relatively cheap public utility.

So, the PoW characteristics should help drive a strong preference for the mechanism that pays for the ledger whilst still allowing the PoW mechanism. A long running PoW pitted against an instant low cost alternative is likely sufficient to address this goal when considering individual users. However, the PoW requirements should also deter entities from setting up bulk PoW services, i.e., the PoW cost should be greater than the instant alternative.

If a bulk PoW service can significantly reduce the PoW time whilst beating the cost of the instant alternative, that could begin to starve the ledger's funding. So, this is primarily where parallelism could be a problem.

Other factors include that the proof solutions are stored on ledger (so we would prefer to minimize their size) and that a bulk PoW service could potentially precompute PoWs. The latter is at least partially mitigated by the fact that private key material must be paired with the data that is the input to the PoW, leading to trust issues with such a "precompute" service (as it would have to generate and hold user private keys until distributed). The non-profit, with the consent of the ledger nodes, also has the option to change the PoW (or potentially bring a lawsuit) should such a service become trustworthy and popular.

It could be that the best mechanism for preventing bulk PoW services from arising is to change the entire PoW mechanism often enough, though it would be ideal to avoid that kind of churn with simpler tuning for as long as possible.

Given the above, do you think a better approach is to turn the difficulty threshold on the cyclehash very high such that we only need a single proof or to choose something lower coupled with lowering the cycle length and increasing the number of chained proofs required? Or perhaps something else?

tromp commented on August 28, 2024

dlongley commented on August 28, 2024

The second worry is that production of Cuckoo Cycle ASICs is imminent,
what with Grin launching near year end with Cuckoo Cycle as PoW,
yielding yet another order of magnitude reduction in cost per proof.
The second worry can be mitigated though by just tweaking Cuckoo Cycle
a little.

When you say tweaking a little -- are you referring to the tweaks we've already discussed or something more?

The shorter the proof chain, the more variance you have in total time
needed. You could be running 500 attempts and still be unlucky enough
never to hit on a 42-cycle. Or, with difficulty threshold, you could
be running 2000 attempts and be similarly unlucky never to hit a
42-cycle passing the 1 out of 4 difficulty threshold. So make proof
chain long enough to give acceptable variance, then shorten cycle to
get acceptable proof chain size, and finally pick threshold to get the
right amount of total effort needed.

Thanks, John, that's very useful. I think we can figure out the parameters we need.
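
To make that concrete on our end, here's the kind of back-of-the-envelope tuning we'd sketch. Every constant below is a placeholder we'd replace with measured values for our target hardware; none of it comes from the Cuckoo Cycle code itself.

```typescript
// Placeholder tuning sketch; all constants are assumptions to be measured.
const pCycle = 0.02;          // assumed chance one attempt yields a 42-cycle
const pThreshold = 0.25;      // a "1 out of 4" cyclehash difficulty threshold
const chainLength = 5;        // number of chained proofs required
const secondsPerAttempt = 30; // assumed solver time per attempt

const p = pCycle * pThreshold;         // chance one attempt completes a link
const meanAttempts = chainLength / p;  // expected attempts for the whole chain
const meanHours = (meanAttempts * secondsPerAttempt) / 3600;

// Attempts per link are geometric(p), so the total's relative spread
// shrinks roughly like 1/sqrt(chainLength) as the chain gets longer.
const relativeSpread = Math.sqrt((1 - p) / chainLength);

console.log({ meanHours, relativeSpread }); // ~8.3 hours, ~45% spread here
```

With placeholders like these the spread is still fairly wide, which matches your point: lengthen the chain (and shorten the cycle to keep the proof size down) if we need tighter variance, then pick the threshold to hit the total effort.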

But getting back to the earlier issue: how do you want to deal with GPUs?

Short answer: Wait until a problem arises, then increase the difficulty, potentially eliminating the option to use CPUs, or switch to a different PoW. We estimate that using cloud computing resources (renting GPUs) to do these proofs in 1 hour will not be at a lower total cost than the instant authorization method. Additional costs are incurred by a bulk PoW service via other mechanisms.

Longer answer: Our primary concern is with the threat of a bulk PoW service that can sell "instant" PoWs. If the service can't deliver in a near instant time frame, we believe that most users will opt for the actually instant mechanism that funds the ledger because the cost is already so low. To that end, we believe we've come up with a way to make it significantly difficult for a PoW service to efficiently provide precomputed PoWs -- such that whether they are using GPUs or not may be a separable issue.

By requiring the chained proof to mix in proofs of existence (hashes from contemporaneous blocks in the blockchain) we can cause ledger nodes to reject PoWs that do not fall into an acceptable window. We can also keep the proof size relatively small by merely referencing these hashes via block height in the proof itself.
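
Just to illustrate the shape of what I mean -- the field names and hashing below are hypothetical, not our actual wire format or the cuckoo-cycle library API:

```typescript
import { createHash } from 'crypto';

// Hypothetical link in a chained proof (illustrative only).
interface ProofLink {
  blockHeight: number; // reference to a contemporaneous ledger block
  solution: number[];  // cycle edge indices returned by the solver
}

// Derive the solver input for the next link by committing to the operation
// being written, the referenced block's hash, and the previous link's
// solution, so the chain cannot have been started before that block existed.
function nextLinkInput(operationHash: Buffer, blockHash: Buffer,
                       previous?: ProofLink): Buffer {
  const h = createHash('sha256');
  h.update(operationHash);
  h.update(blockHash);
  if (previous) {
    h.update(Buffer.from(Uint32Array.from(previous.solution).buffer));
  }
  return h.digest();
}
```

A validator looks up each referenced block height, checks that the hashes fall within the acceptable window, and rejects the chain otherwise; the ledger only needs to store the heights and solutions.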

While it is true that, for each link in the chain, some hardware may more efficiently compute a solution (and therefore be freed up to be used for other work), the total chain cannot be successfully submitted until the requisite time period has passed. (Note: there are exceptions to this rule for cases when the ledger is idle, but submitting just one PoW to the ledger removes this idleness property, and the conditions for even attempting to make a profit here require a non-idle ledger.)

This forces a viable bulk PoW service to precompute PoWs -- there is no option to compute them instantly. This means that the PoW service must do two things: first, cover the cost to securely generate, store (and destroy) private keys and, second, mitigate wasted effort (generated PoWs that never get sold). It must either drive the cost of the PoWs to near zero or develop sophisticated logistics software to maintain a profitable level of efficiency. If a PoW service is successful at either of these, it can be combated by increasing the difficulty required or changing the PoW.

Under certain conditions, the difficulty may need to reach a threshold that prohibits individuals from easily using the current PoW. It's possible that this will be an acceptable outcome at that time. If not, it could potentially be addressed by switching to a new PoW. Perhaps more ideally, the need for a PoW at the ledger layer could be removed entirely because a healthy ecosystem of "instant" authorization mechanisms has surfaced in the meantime. Given that sort of ecosystem, experimenting with PoWs or accepting various other forms of payment can be pushed out to the edges (to the decentralized entities that can hand out instant authorizations).

tromp commented on August 28, 2024

dlongley commented on August 28, 2024

Something more; you'll need to solve a different class of graphs.
Which you'd do by slightly changing the mapping from indices to endpoints.
You could for instance use siphash_3_4 or siphash_2_5 instead of siphash_2_4.

Ok, sounds good.
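
For concreteness, here's roughly how I picture that knob. This is only a sketch: the four-64-bit-word keying and the final XOR folding follow my reading of the Cuckoo Cycle reference code and should be checked against the real siphash.h; the function names here are mine.

```typescript
// Sketch of SipHash-c-d with configurable round counts, so switching from
// siphash_2_4 to e.g. siphash_3_4 or siphash_2_5 is a parameter change.
const MASK64 = (1n << 64n) - 1n;

const rotl = (x: bigint, b: bigint): bigint =>
  ((x << b) | (x >> (64n - b))) & MASK64;

type SipState = { v0: bigint; v1: bigint; v2: bigint; v3: bigint };

// Standard SipHash round.
function sipRound(s: SipState): void {
  s.v0 = (s.v0 + s.v1) & MASK64; s.v1 = rotl(s.v1, 13n); s.v1 ^= s.v0; s.v0 = rotl(s.v0, 32n);
  s.v2 = (s.v2 + s.v3) & MASK64; s.v3 = rotl(s.v3, 16n); s.v3 ^= s.v2;
  s.v0 = (s.v0 + s.v3) & MASK64; s.v3 = rotl(s.v3, 21n); s.v3 ^= s.v0;
  s.v2 = (s.v2 + s.v1) & MASK64; s.v1 = rotl(s.v1, 17n); s.v1 ^= s.v2; s.v2 = rotl(s.v2, 32n);
}

// SipHash-c-d over a single 64-bit nonce, keyed with four 64-bit words
// (my understanding of how the Cuckoo Cycle reference code keys it).
function siphashCD(k: [bigint, bigint, bigint, bigint], nonce: bigint,
                   cRounds = 2, dRounds = 4): bigint {
  const s: SipState = { v0: k[0], v1: k[1], v2: k[2], v3: k[3] ^ nonce };
  for (let i = 0; i < cRounds; i++) sipRound(s);
  s.v0 ^= nonce;
  s.v2 ^= 0xffn;
  for (let i = 0; i < dRounds; i++) sipRound(s);
  return (s.v0 ^ s.v1 ^ s.v2 ^ s.v3) & MASK64;
}

// Hypothetical edge-to-endpoint mapping in the style of the reference code:
// endpoint uorv (0 or 1) of edge i, masked down to the node space.
function sipnode(k: [bigint, bigint, bigint, bigint], edge: bigint,
                 uorv: bigint, edgeMask: bigint,
                 cRounds = 2, dRounds = 4): bigint {
  return siphashCD(k, 2n * edge + uorv, cRounds, dRounds) & edgeMask;
}
```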

I don't quite follow. Why couldn't I buy a proof from the bulk
provider just prior to the end of the requisite time period? At that
point the bulk provider can insert proofs of existence of any desired
time in the past.

If the bulk provider can't generate a proof for you in near instant time, the presumption is that you will buy an authorization from someone else (for a marginally higher price) that can. The alternative instant method provides this option. Knowing proofs of existence from the past does not enable the bulk provider to generate proofs more quickly. Rather, it forces precomputed PoWs to expire.

In order for the bulk provider to generate the proof quickly enough, they must have vastly superior hardware (which raising the difficulty threshold can potentially address) or they must have actually precomputed it. To use a precomputed value means to trust the bulk provider to have securely generated and stored your private keys -- and to destroy them once they hand over the proof to you. Furthermore, the bulk provider must ensure that enough users show up to purchase the precomputed proofs they've already spent resources on -- or they will be considered losses as the proof of existence mechanism forces them to expire.

tromp commented on August 28, 2024

dlongley commented on August 28, 2024

But perhaps that is too much overhead for them given the tiny proceeds...

That's the gamble. If we begin to see that that gamble was wrong then we can adjust the difficulty threshold. Using a big GPU mining farm for this task is probably a dramatic misappropriation of resources. I would expect using them to mine cryptocurrency would yield a more reasonable reward -- and I believe we have the levers to make that true if we begin to find out otherwise.

tromp commented on August 28, 2024

dlongley commented on August 28, 2024

do you mind if I mention your project at the bottom of the Cuckoo Cycle
project page?

Not at all. Thanks again for creating Cuckoo Cycle!

Link: Veres One

tromp commented on August 28, 2024

dlongley commented on August 28, 2024

@tromp,

I tried to find mention of Cuckoo Cycle anywhere on the website,
in vain :-(
Is Veres One still using Cuckoo Cycle?

We still plan on using some variant of Cuckoo Cycle, but we're working out which one. We haven't updated the Veres One site in a while as we've been heads down doing a big development push (on many items, not just related to PoW). We saw that there are now Cuckatoo and Cuckaroo variants in the works. We've written some libraries to add proof chaining for our use case here:

https://github.com/digitalbazaar/cuckoo-cycle/tree/dev

And some node.js wrappers for your solver engine here:

https://github.com/digitalbazaar/cuckoo-cycle-engine-tromp/tree/dev

And we're putting together an implementation to make some variant of Cuckoo Cycle a Linked Data Proof here:

https://github.com/digitalbazaar/cuckoo-ldp/tree/implementation

At the moment, the main thing that's actually driving which variant we'll use is how well supported we think the code base will be and how easily we can run that code on consumer systems and via platforms like node.js. We were doing some work in our fork to make the code more library friendly here, but my understanding is that some of that may be getting incorporated already:

https://github.com/digitalbazaar/cuckoo/tree/lib

Anyway, all of this has us trending away from "cuckoo" and towards either "cuckatoo" or "cuckaroo". I think we'd probably prefer "cuckaroo" as it targets GPUs, but it seems like we may use "cuckatoo" to begin with since the code appears to be more ready given our timeline.

Our model has changed slightly to allow for more agility with respect to which PoW we go with, which is especially vital given the recent flux we're seeing with Cuckoo Cycle. We will certainly have updates to our Veres One site once we get through our current development push. We will include details on what PoW we've chosen -- and will certainly give attribution to you!

tromp commented on August 28, 2024
