Comments (18)
Hi @gota0 ,
Reports from Private Aggregation such as protected-audience should be supported in Production now.
from aggregation-service.
Thanks for posting @michal-kalisz. As far as I know, this is a known problem with the Aggregation Service's support for the Private Aggregation API, and the team is working on it. Let me cc @ghanekaromkar and ruclohani@, who might know more about a planned fix for this.
Thanks for posting @michal-kalisz. Aggregation Service support for the Private Aggregation API is currently under development. The Private Aggregation API is available for testing in Chrome M107+ Canary and Dev, with aggregation service support for testing coming soon.
Thanks for the information @ruclohani.
I was wondering if you know when we could expect the Private Aggregation API OT to reach the Stable version?
Currently, traffic volume on Canary/Dev is too low to perform wider tests.
Hi @michal-kalisz, we plan to roll the OT out to Beta as soon as we see some non-trivial usage on Canary/Dev. This is just to ensure that there are no major stability issues (i.e. crashes).
Chrome has a much higher stability bar for the stable channel, so we'd need to see some substantial testing before we're able to roll to Stable. While we don't have a specific timeline, we'd also like to do this as soon as we can.
Thanks Alex, below is a short summary of our tests.
We wanted to test the following scenarios:
- Calculate the average bid value and compare it with information retrieved via the standard event-level report (sendReportTo).
- Calculate the number of auctions we participate in (how often generateBid is called) and compare it with the forDebuggingOnly report.
We added 2 histograms, in both the generateBid and reportWin functions.
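For concreteness, the average-bid scenario could be encoded roughly as follows. This is a minimal sketch: the bucket constants and the scale factor are illustrative, and `privateAggregation.contributeToHistogram` only exists inside Protected Audience worklets, so the worklet call itself is shown only in a comment.

```javascript
// Illustrative sketch of encoding an "average bid" measurement as two
// histogram contributions. Bucket constants and SCALE are made up for
// this example; they are not part of any real schema.
const BID_BUCKET = 1n;   // hypothetical bucket: sum of (scaled) bid values
const COUNT_BUCKET = 2n; // hypothetical bucket: number of bids
const SCALE = 100;       // contributions must be integers, so scale the CPM

// Pure helper: turn a float bid into the two integer contributions whose
// server-side ratio (sum / count) gives the average bid.
function bidContributions(bidCpm) {
  return [
    { bucket: BID_BUCKET, value: Math.round(bidCpm * SCALE) },
    { bucket: COUNT_BUCKET, value: 1 },
  ];
}

// Inside generateBid() a worklet would then do something like:
//   for (const c of bidContributions(bid)) {
//     privateAggregation.contributeToHistogram(c);
//   }
```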
The results, covering 2022-12-08 to 2022-12-18:
| chrome_ver | uniq_reports | uniq_ips |
| --- | --- | --- |
| Chrome/107.0.0.0 | 4348 | 61 |
| Chrome/108.0.0.0 | 10490 | 127 |
| Chrome/109.0.0.0 | 31564 | 324 |
| Chrome/110.0.0.0 | 337536 | 2238 |
| Chrome/111.0.0.0 | 15133 | 56 |
Let’s focus on Chrome/110 and a single day:
We observed that the accuracy for the bid value is 99.9%, but for reportWin it is 86.9%.
It seems that not all reports were delivered. First, we only checked debug reports here (as the scheduled/postponed upload introduces too much noise). Second, we checked the budget, and it seems it wasn’t exceeded for any user.
At the same time, maybe we missed something and Chrome's behaviour is correct.
From a testing perspective, additional debug information would be very useful. Maybe something similar to ARA verbose debugging would be possible? Have you considered it?
Please let me know if you have any questions regarding our tests. Of course, I am also curious whether you have observed any crashes or other issues.
Hi @alexmturner ,
I don't know if you had time to read my previous comment. Maybe this is not the best place for it? (If so, please let me know. :)
One more thing we observed: the ratio of debug reports to "normal" reports can vary significantly.
In the chart above:
- debug is the number of debug reports from generateBid
- normal is the number of normal (scheduled/postponed) reports from generateBid
- bids is the number of forDebuggingOnly calls from generateBid
Hi! Sorry for the delayed response -- I was out of office for the holidays.
Thanks for the feedback! Agreed that it seems odd that the number of debug reports doesn't match the forDebuggingOnly reports. Let me follow up with the team to see if anyone has any thoughts. In the meantime, do let us know if you're able to reproduce such a case locally (e.g. by enabling the experiment flag; see https://developer.chrome.com/docs/privacy-sandbox/private-aggregation/#test-this-api).
Ah sorry I misinterpreted the chart. We would expect debug and bids to be near identical, which seems to be mostly true (but not sure what the discrepancy is on days like 26 December).
We do expect some discrepancy for 'normal' reports relative to debug reports. For example, if a user doesn't open Chrome for a while, their reports would have additional delay. Reports could also be lost in the case they delete their browsing history. How long are you waiting for normal reports in your experiment?
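One way to quantify that delay on the receiving side is to bucket each normal report by how many days after its bid it arrived. This is a sketch only: the record fields `bidTimeMs` and `receivedMs` are assumptions for illustration (e.g. with the bid time carried as the report's debug key), not a real report schema.

```javascript
// Sketch: bucket received "normal" reports by how many whole days after
// the original bid they arrived. `bidTimeMs` and `receivedMs` are
// illustrative field names, not part of any real report format.
function delayHistogram(reports) {
  const byDay = {};
  for (const { bidTimeMs, receivedMs } of reports) {
    const day = Math.floor((receivedMs - bidTimeMs) / 86_400_000); // ms per day
    byDay[day] = (byDay[day] || 0) + 1;
  }
  return byDay;
}
```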
Hi @alexmturner , thanks for your reply.
I totally agree that it's possible, especially since these are Canary/Dev versions (I assume more users run experiments there).
In this experiment we don't reject any reports (even if we receive them a few days after the bid).
On the previous chart, the X axis was the day the report was received.
Let's consider another perspective:
The X axis is the day of the bid (debugKey=bidTime; we match normal/debug reports by reportID).
On the chart below we present all events received between 2022-12-23 and 2023-01-02 for chrome/11[01]:
- Some normal reports were received after 23-12 but correspond to bids from before 23-12 (which is fine and expected).
- What seems strange: between 2022-12-23 and 2023-01-02 there were cases (~7K) where we received ONLY the normal report, without the DEBUG one.
I would treat this as input to the discussion of whether you have noticed any problems on the browser side.
If you haven't observed any, maybe you could consider extending the tests to the Beta Chrome version?
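The matching step described above ("we match normal/debug reports by reportID") can be sketched as follows. The `reportId` field name is an assumption for illustration; the real reports carry the ID inside their JSON payload.

```javascript
// Sketch: classify reports into matched pairs, normal-only, and
// debug-only, keyed by a shared report ID. The `reportId` field name
// is an illustrative assumption.
function classifyReports(normalReports, debugReports) {
  const debugIds = new Set(debugReports.map((r) => r.reportId));
  const normalIds = new Set(normalReports.map((r) => r.reportId));
  let matched = 0;
  for (const id of normalIds) if (debugIds.has(id)) matched++;
  return {
    matched,
    normalOnly: normalIds.size - matched, // normal report arrived, debug never did
    debugOnly: debugIds.size - matched,   // debug arrived, normal still pending/lost
  };
}
```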
Hi @michal-kalisz , thanks for the further detail!
Re normal reports without debug: I think it is also expected that a small number of debug reports fail to be sent, due to network failures or the user closing their browser at an inopportune time. Note that debug reports are not stored for later retries if they fail to send (unlike normal reports).
It does seem like a significant fraction of normal reports aren't ever being sent, though. As discussed, we do expect some reports to be lost and it's also possible that this number is inflated on Canary/Dev populations. So it might make sense to wait for broader testing before estimating this fraction.
Still, if you're happy to share, I'm curious (just for comparing results) whether your testing population is a random fraction of all users or whether it is restricted in some way, other than by Chrome version and Canary/Dev? For example, do you test on only desktop or only mobile users?
Thanks again for sharing these results!
Hi @alexmturner,
Yes, we test on all Chrome versions that support FLEDGE. I split the chart by four device types (based on UA) and added (in brackets) the number of unique IP + User-Agent pairs.
For Mac and Linux we can see that a small number of users can have a significant impact on this summary.
I also observed that on 26.12, 50% of debug reports came from 6 IPs (if we remove them from the report, the debug/normal ratio is still low).
Please let me know if you need any more information from our side.
Hi @michal-kalisz ,
Just wanted to let you know that we've increased the Origin Trial population to include Beta users now (https://groups.google.com/a/chromium.org/g/blink-dev/c/Vi-Rj37aZLs/m/cCuQksVPAAAJ). I hope this helps with testing volume!
Thanks for the breakdown! It looks like the different platforms are actually not too different in the fraction of debug-only reports (except for low-volume data points).
Thank you for sharing the PAA client OT's progress to Beta!
@ruclohani: "...with aggregation service support (for PAA) coming soon for testing."
With the client advancing, is service support soon to follow?
Sorry for the delay in responding -- we've just released a new version of the aggregation service local testing tool that supports Private Aggregation API reports. More details are available here: https://groups.google.com/a/chromium.org/g/shared-storage-api-announcements/c/Qabo00MwTXM.
Please let us know if you have any questions!
Thank you @alexmturner. Will check it out.
Hi @dmdabbs ,
Closing this out. Do let us know if you still have further questions on this.
Thanks!
Hi @maybellineboon, @alexmturner, just wanted to check: is aggregation for reports from Private Aggregation, such as protected-audience reports, still supported only by the local testing tool and not in production?