
Comments (18)

maybellineboon commented on May 6, 2024

Hi @gota0,

Reports from Private Aggregation, such as protected-audience reports, should be supported in production now.


csharrison commented on May 6, 2024

Thanks for posting @michal-kalisz. As far as I know, this is a known problem with the Aggregation Service's support for the Private Aggregation API, and the team is working on it. Let me cc @ghanekaromkar and ruclohani@, who might know more about a planned fix for this.


ruclohani commented on May 6, 2024

Thanks for posting @michal-kalisz. Aggregation Service support for the Private Aggregation API is currently under development. The Private Aggregation API is available for testing in Chrome M107+ Canary and Dev, with Aggregation Service support for testing coming soon.


michal-kalisz commented on May 6, 2024

Thanks for the information @ruclohani.
Do you know when we can expect the Private Aggregation API OT to reach the Stable channel?
Currently, traffic volume on Canary/Dev is too low to perform wider tests.


alexmturner commented on May 6, 2024

Hi @michal-kalisz, we plan to roll the OT out to Beta as soon as we see some non-trivial usage on Canary/Dev. This is just to ensure that there are no major stability issues (i.e. crashes).

Chrome has a much higher stability bar for the stable channel, so we'd need to see some substantial testing before we're able to roll to Stable. While we don't have a specific timeline, we'd also like to do this as soon as we can.


michal-kalisz commented on May 6, 2024

Thanks Alex, below is a short summary of our tests:

We wanted to test the following scenarios:

  • Calculate the average bid value and compare it with the information retrieved via the standard event-level report (sendReportTo).
  • Calculate the number of auctions we participate in (how often generateBid is called) and compare it with the forDebuggingOnly report.

We added two histograms, in both the generateBid and reportWin functions, as sketched below.
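For context, here is a minimal sketch of a worklet setup matching the description above. The bucket numbers, reporting URLs, and bid scaling are illustrative assumptions, and the histogram call uses the current contributeToHistogram name (earlier Chrome versions exposed it under a different name):

```js
// Buyer worklet sketch -- illustrative only; buckets, URLs, and scaling are assumptions.
function generateBid(interestGroup, auctionSignals, perBuyerSignals,
                     trustedBiddingSignals, browserSignals) {
  const bid = 1.0; // placeholder bidding logic

  // Histogram 1: count generateBid invocations (to compare with forDebuggingOnly).
  privateAggregation.contributeToHistogram({ bucket: 1n, value: 1 });

  // Event-level baseline: exactly one of these fires per auction, depending on the outcome.
  forDebuggingOnly.reportAdAuctionWin('https://reporting.example/debug?type=bid');
  forDebuggingOnly.reportAdAuctionLoss('https://reporting.example/debug?type=bid');

  return { ad: {}, bid, render: interestGroup.ads[0].renderUrl };
}

function reportWin(auctionSignals, perBuyerSignals, sellerSignals, browserSignals) {
  // Histogram 2: accumulate bid values (plus a win counter) so an average can be derived.
  const scaledBid = Math.round(browserSignals.bid * 100); // fixed-point scaling (assumption)
  privateAggregation.contributeToHistogram({ bucket: 2n, value: scaledBid });
  privateAggregation.contributeToHistogram({ bucket: 3n, value: 1 });

  // Event-level baseline for the bid value.
  sendReportTo(`https://reporting.example/win?bid=${browserSignals.bid}`);
}
```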

The results, from 2022-12-08 to 2022-12-18:

chrome_ver        uniq_reports  uniq_ips
Chrome/107.0.0.0          4348        61
Chrome/108.0.0.0         10490       127
Chrome/109.0.0.0         31564       324
Chrome/110.0.0.0        337536      2238
Chrome/111.0.0.0         15133        56

Let's focus on Chrome/110 and a single day:
We observed that the accuracy for the bid value is 99.9%, but for reportWin it is 86.9%.
It seems that not all reports were delivered. First, we only checked debug reports here (as the scheduled/postponed uploads introduce too much noise). Second, we checked the budget and it does not appear to have been exceeded for any user.
At the same time, maybe we missed something and Chrome's behaviour is correct.
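For clarity, the accuracy here is just the ratio between the aggregate total recovered from the Private Aggregation debug reports and the corresponding event-level total. A rough sketch, assuming the debug payloads have already been decoded into { bucket, value } contributions (decoding is omitted):

```js
// Accuracy = aggregate total for a bucket / event-level baseline total.
// `contributions` is assumed to be an array of decoded { bucket, value } pairs.
function accuracyForBucket(contributions, bucket, eventLevelTotal) {
  const aggregateTotal = contributions
      .filter((c) => c.bucket === bucket)
      .reduce((sum, c) => sum + c.value, 0);
  return aggregateTotal / eventLevelTotal;
}

// In our data this ratio was ~0.999 for the bid bucket and ~0.869 for the reportWin bucket.
```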

From a testing perspective, additional debug information would be very useful. Maybe something similar to ARA verbose debugging would be possible? Have you considered it?

Please let me know if you have any questions regarding our tests. Of course, I am also curious whether you have observed any crashes or other issues.


michal-kalisz commented on May 6, 2024

Hi @alexmturner,
I don't know if you have had time to read my previous comment. Maybe this is not the best place for it? (If so, please let me know :)

One more thing we observed: the ratio of debug reports to "normal" reports can vary significantly:

[Charts: debug/normal report ratio per day, for Chrome 110 and Chrome 111]

In the charts above:

  • debug is the number of debug reports from generateBid (see the endpoint sketch after this list)
  • normal is the number of normal (scheduled/postponed) reports from generateBid
  • bids is the number of forDebuggingOnly calls from generateBid
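For reference, a minimal sketch of how such a split can be counted on the reporting origin. The Express server and in-memory counters are assumptions; the well-known paths are the documented Protected Audience endpoints for Private Aggregation reports, while forDebuggingOnly beacons arrive at whatever URL the worklet passed in:

```js
// Counting debug vs. normal Private Aggregation reports (sketch only).
const express = require('express');
const app = express();
app.use(express.json({ type: '*/*' }));

const counts = { debug: 0, normal: 0, bids: 0 };

// Debug copies are sent immediately and include a cleartext payload.
app.post('/.well-known/private-aggregation/debug/report-protected-audience',
         (req, res) => { counts.debug++; res.sendStatus(200); });

// Normal (scheduled/postponed) reports arrive after a randomized delay.
app.post('/.well-known/private-aggregation/report-protected-audience',
         (req, res) => { counts.normal++; res.sendStatus(200); });

// forDebuggingOnly beacons hit the URL passed in the worklet (assumed to be /debug here).
app.get('/debug', (req, res) => { counts.bids++; res.sendStatus(200); });

app.listen(8080);
```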


alexmturner commented on May 6, 2024

Hi! Sorry for the delayed response -- I was out of office for the holidays.

Thanks for the feedback! Agreed that it seems odd that the number of debug reports doesn't match the forDebuggingOnly reports. Let me follow up with the team to see if anyone has any thoughts. In the meantime, do let us know if you're able to reproduce such a case locally (e.g. by enabling the experiment flag; see https://developer.chrome.com/docs/privacy-sandbox/private-aggregation/#test-this-api).


alexmturner commented on May 6, 2024

Ah, sorry, I misinterpreted the chart. We would expect debug and bids to be near-identical, which seems to be mostly true (though I'm not sure what the discrepancy is on days like 26 December).

We do expect some discrepancy for 'normal' reports relative to debug reports. For example, if a user doesn't open Chrome for a while, their reports would have additional delay. Reports could also be lost if the user deletes their browsing history. How long are you waiting for normal reports in your experiment?


michal-kalisz commented on May 6, 2024

Hi @alexmturner, thanks for your reply.
I totally agree that this is possible, especially since these are Canary/Dev versions (I assume more of those users run experiments).
In this experiment we don't reject any reports (even if we receive them a few days after the bid).

In the previous chart the X axis was the day the report was received.
Let's consider another perspective:
the X axis is the day of the bid (debugKey = bidTime; we match normal and debug reports by report ID, as sketched after the list below).
The chart below presents all events received between 2022-12-23 and 2023-01-02 for Chrome/11[01].

[Chart: report types (normal/debug) for Chrome 110/111, by bid day]

  • Some normal reports were received after 23-12 but correspond to bids placed before 23-12 (which is fine and expected).
  • What seems strange: between 2022-12-23 and 2023-01-02 there were cases (~7K) where we received ONLY the normal report, without the DEBUG one.
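For reference, a rough sketch of the matching we do. On the worklet side, the bid time is attached via privateAggregation.enableDebugMode({ debugKey: ... }); the stored { path, body } row shape below is our own assumption, while shared_info.report_id follows the report JSON:

```js
// Pair each normal report with its debug copy via shared_info.report_id,
// then list the cases where only the normal report ever arrived.
// `rows` is assumed to be [{ path, body }] where `body` is the parsed report JSON.
function normalOnlyReports(rows) {
  const byId = new Map();
  for (const { path, body } of rows) {
    const id = JSON.parse(body.shared_info).report_id;
    const entry = byId.get(id) ?? { debug: null, normal: null };
    if (path.includes('/debug/')) entry.debug = body;
    else entry.normal = body;
    byId.set(id, entry);
  }
  return [...byId.values()].filter((e) => e.normal && !e.debug);
}
```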

I would treat this as input for a discussion on whether you have noticed any problems on the browser side.
If you haven't observed any, maybe you would consider extending the tests to the Beta Chrome version?


alexmturner commented on May 6, 2024

Hi @michal-kalisz, thanks for the further detail!

Re normal reports without debug: I think it is also expected that a small number of debug reports fail to be sent due to network failures or the user closing their browser at an inopportune time. Note that debug reports are not stored for later retries if they fail to send (unlike normal reports).

It does seem like a significant fraction of normal reports aren't ever being sent, though. As discussed, we do expect some reports to be lost and it's also possible that this number is inflated on Canary/Dev populations. So it might make sense to wait for broader testing before estimating this fraction.

Still, if you're happy to share, I'm curious (for comparing results) whether your testing population is a random fraction of all users or whether it is restricted in some way, other than by Chrome version and Canary/Dev. For example, do you test only on desktop or only on mobile users?

Thanks again for sharing these results!


michal-kalisz commented on May 6, 2024

Hi @alexmturner,
Yes, we test on all Chrome versions that support FLEDGE. I split the chart by four device types (based on the UA) and added (in brackets) the number of unique IP + User-Agent pairs:
[Charts: per device type: Desktop Linux, Desktop Mac, Android, Desktop Windows]

For Mac and Linux we can see that a small number of users can have a significant impact on this summary.

I also observed that on 26.12, 50% of debug reports came from 6 IPs (even if we remove them from the report, the debug/normal ratio is still low).

Please let me know if you need any more information from our side.


alexmturner commented on May 6, 2024

Hi @michal-kalisz,

Just wanted to let you know that we've increased the Origin Trial population to include Beta users now (https://groups.google.com/a/chromium.org/g/blink-dev/c/Vi-Rj37aZLs/m/cCuQksVPAAAJ). I hope this helps with the testing volume.

Thanks for the breakdown! It looks like the different platforms are actually not too different in the fraction of debug-only reports (except for low-volume data points).


dmdabbs commented on May 6, 2024

Thank you for sharing the PAA client OT's progress to Beta!

@ruclohani: "...with aggregation service support (for PAA) coming soon for testing."

With the client advancing, is service support soon to follow?


alexmturner commented on May 6, 2024

Sorry for the delay in responding -- we've just released a new version of the aggregation service local testing tool that supports Private Aggregation API reports. More details are available here: https://groups.google.com/a/chromium.org/g/shared-storage-api-announcements/c/Qabo00MwTXM.

Please let us know if you have any questions!


dmdabbs commented on May 6, 2024

Thank you @alexmturner. Will check it out.


maybellineboon commented on May 6, 2024

Hi @dmdabbs,

Closing this out. Do let us know if you still have further questions on this.

Thanks!


gota0 commented on May 6, 2024

Hi @maybellineboon, @alexmturner, just wanted to check: is aggregation for Private Aggregation reports, such as protected-audience reports, still supported only by the local testing tool and not in production?

