
Comments (6)

burdges commented on July 18, 2024

I presume you're talking about verifier time, not aggregation time? I'd think aggregation time should be good, although I never explored the question, and doubt anyone cares too much.

There are no actual exponential-time algorithms in this crate or, afaik, any of its dependencies. We have some code in https://github.com/w3f/bls/blob/master/src/verifiers.rs that technically worsens asymptotic performance, like O(n log n) vs O(n), but should always improve verifier performance at realistic sizes.

Any of the verifier code might have bugs, of course, since we never really polished this stuff. And our trait hierarchy might fail to be zero-cost somehow. You can swap those verifiers to check their performance.

We could roughly double performance with good multi-scalar multiplication à la Pippenger, of course.

You can also point me to code that shows the problem or even better send me a pull request with benchmarks ala https://github.com/w3f/schnorrkel/tree/master/benches ;)

Insert all the usual caveats about release builds, etc. And 500 signatures' worth of data might exceed some CPU cache.

from bls.

NN-Binary commented on July 18, 2024

Thank you so much for the details; I'll post everything soon if we can't fix it.

To be sure I understand: with BLS (or your crate specifically), if we verify a signature aggregated from 500 signers and then one aggregated from 5000, verification should take about 10 times longer, not exponentially longer, right?


burdges commented on July 18, 2024

All depends upon your problem, but roughly yes.

If you aggregate distinct messages with distinct signers, then aggregation buys you little, so you should consider not using BLS. In this case, all my fancy verifiers would technically degrade your asymptotic performance from O(n) to O(n log n), but only on very fast code paths, so you should never notice it next to the O(n) but slow crypto.

If you aggregate one message with distinct signers, e.g. by using BitSignedMessage, then we have SignerTables that cost O(n^2) due to https://github.com/w3f/bls/blob/master/src/bit.rs#L106 but one could easily implement a SignerTable that costs only O(n log n). Again, you should never notice this next to the O(n) but slow crypto.

At some point, we'll improve performance and security by upgrading to zcash's refactored crypto crates, but we're not rushing that.


NN-Binary commented on July 18, 2024

Thank you.

From what I understood from your comments, if we don't use distinct messages (hash + pubkey), then we are subject to the rogue-key attack and BLS no longer offers any security.

My company is an insurance company; about 5000 computers will each sign "facts and documents (as a hash)" every 100 ms or so, and that number could grow.

We sign hash (the same hash for every signer) + pubkey (individual per signer), then verify with:

    /// Verify an aggregate signature where every signer signed the same
    /// message, made distinct per signer by appending that signer's public key.
    pub fn verify_aggregated(
        &self,
        public_keys: impl IntoIterator<Item = PublicKey>,
        context: &[u8],
        message: &[u8],
    ) -> Option<bool> {
        // Accumulate one derived message per public key; returns None if
        // two signers somehow yield the same derived message.
        let mut dms = public_keys
            .into_iter()
            .try_fold(DistinctMessages::new(), |dm, pk| {
                let message = message_with_appended_public_key(context, message, &pk);
                dm.add_message_n_publickey(message, pk.0)
            })
            .ok()?;
        // Attach the aggregate signature, then run the pairing check.
        dms.add_signature(&self.0);

        Some(dms.verify())
    }

However, with this scheme, if we go from 5000 to 50,000 signers it's about 5000x slower, when it should be only about 10x slower.


burdges commented on July 18, 2024

> From what I understood from your comments, if we don't use distinct messages (hash + pubkey), then we are subject to the rogue-key attack and BLS no longer offers any security.

No. The rogue-key attack is normally defeated either by delinearizing the public keys or by previously checked proofs-of-possession, à la https://github.com/w3f/bls/blob/master/src/bit.rs

You can do distinct-message aggregation of course, but then you only really save disk space and bandwidth. You still have one miller_loop iteration for every signature, and those run very slowly.

It's possible you've hit upon some bug in zcash's implementation of the miller_loop optimization, though they only ever run three, not 5000. It's also possible you're simply experiencing just how slowly pairing-based crypto runs, perhaps compounded by the CPU cache.

It's also possible one should not use the miller_loop optimization beyond some input size that exceeds the CPU cache, i.e. your pessimization comes entirely from the repeated memory fetches from pairs in https://github.com/zkcrypto/pairing/blob/master/src/bls12_381/mod.rs#L79-L96

In any case, your verify_aggregated is only doing batch verification, not aggregation. You can do batch verification with Schnorr signatures like Ed25519 too: https://github.com/dalek-cryptography/ed25519-dalek/blob/master/src/ed25519.rs#L89 It only gave a 50% speedup previously, but maybe more now with Pippenger.

Aggregation means you share the aggregated signature over the network somehow. As an example, BitSignedMessage in https://github.com/w3f/bls/blob/master/src/bit.rs serializes as a single curve point and a bit field for all the signers, so say under 700 bytes to express up to 5000 possible signatures on the same message, and only two miller_loop iterations for verification.


NN-Binary commented on July 18, 2024

Amazing explanation, I learned a lot!

However, the whole point of BLS for my company is to save bandwidth and storage cost (which was 110 TB of signatures with Ed25519); BLS reduces this cost enormously.

What we do exactly:
1. One computer sends its signature to about 5000 other computers.
2. Those 5000 computers reply with their signatures.
3. We aggregate the replies, and anyone who connects to our API receives the aggregated BLS signature.

Is our approach the best way? We need the signature to be as small as possible, with "decent" verification speed.

