As we started looking through the event-level reports from our testing of Chrome’s Privacy Sandbox Attribution Reporting API (ARA), one thing was immediately clear: ARA wasn’t as bad as recent industry opinion led us, and many marketers, to believe. Far from it, in fact.
From our initial analysis, we’ve drawn a few conclusions about what ARA solves, lacks, and changes for marketers and agencies. In this blog, we share our learnings so far.
Background
Chrome’s most recent cookie deprecation delay extends the testing period for its cookieless solutions, Privacy Sandbox, until at least early 2025. For many marketers, however, this latest delay simply pushes Sandbox testing further to the bottom of their to-do lists.
To date, there’s been little published around Sandbox results and insights, outside of articles like this. Reflecting on these unknowns, we’re now sharing what we’ve learned so far from analyzing ARA’s event-level reports.
There are two types of ARA report:
Event-level reports associate individual ad clicks and views with attributed conversion events. These row-level reports can contain an auction ID, giving them the same contextual richness as a cookie-based conversion log. Trade-offs of event-level reports include noise (i.e. a small number of fictional conversion events) and reporting delays, both added for privacy. In theory, these reports should support granular optimization, as long as the noise and delays can be managed.
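To make the event-level format concrete, here is a sketch of one report alongside the standard estimator for stripping out randomized-response noise. The field names follow Chrome’s public ARA documentation, but every value, count, and rate below is an illustrative assumption, not a figure from our testing.

```python
import json

# Illustrative event-level report payload. Field names follow Chrome's
# public ARA documentation; all values here are made up for this sketch.
report = json.loads("""{
  "attribution_destination": "https://advertiser.example",
  "source_event_id": "9223372036854775807",
  "trigger_data": "3",
  "report_id": "example-uuid",
  "source_type": "navigation",
  "randomized_trigger_rate": 0.0024
}""")

def debias_count(observed: int, total_sources: int, p: float, k: int) -> float:
    """Estimate the true count of one trigger_data value under k-ary
    randomized response applied at rate p (the standard unbiased estimator):
    subtract the expected number of random reports, then rescale."""
    return (observed - p * total_sources / k) / (1 - p)

# Hypothetical campaign: 120 reports with trigger_data == "3" out of
# 50,000 clicks, with 8 possible trigger_data values for click sources.
estimate = debias_count(120, 50_000, report["randomized_trigger_rate"], 8)
# estimate ≈ 105.25 true conversions for this trigger_data value
```

Because the noise rate is published in each report, the aggregate bias it introduces can be corrected statistically even though individual fictional rows cannot be identified.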
Summary reports provide aggregated conversion data for a more complete view of performance, including more detail about results such as cart contents and the monetary value of conversions. Summary reports have a limit (or budget) on the granularity of reporting, which constrains the richness of the data for optimization.
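The granularity budget works roughly like this: each source carries a fixed integer contribution budget (65,536 in Chrome’s current implementation), and every aggregatable value the source contributes must fit inside it. A minimal sketch, assuming a hypothetical even split between a conversion counter and purchase value:

```python
# Per-source contribution budget from Chrome's current ARA implementation.
CONTRIBUTION_BUDGET = 65_536

def scale_value(purchase_value: float, max_expected: float, budget_share: int) -> int:
    """Map a monetary value onto the integer budget slice reserved for it,
    clamping at the slice cap so one large order can't blow the budget."""
    scaled = int(purchase_value / max_expected * budget_share)
    return min(scaled, budget_share)

# Hypothetical split: half the budget for a conversion counter,
# half for purchase value. Both choices are illustrative assumptions.
count_share = CONTRIBUTION_BUDGET // 2   # contributed as-is per conversion
value_share = CONTRIBUTION_BUDGET // 2

# A $120 purchase, where we expect orders up to $500:
contribution = scale_value(purchase_value=120.0, max_expected=500.0,
                           budget_share=value_share)
```

The trade-off the budget forces is exactly the one described above: the more metrics a brand wants per conversion, the smaller the slice (and thus the coarser the resolution) each metric gets.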
What ARA solves: privacy-first optimization
In our first wave of testing, ARA reported on 84.9% of the same unique converters as cookies, with an additional 3.7% that cookies did not capture (some of which will be noise). Given that event-level reports capture the same richness of data as legacy conversion pixels, this is a highly viable optimization dataset. A few things to note:
There’s a healthy scope for ARA to improve in the coming months. If more browsers start to adopt it, it could even produce more data scale than cookies at some point in the future.
In short, ARA appears to have clear value for optimization, and dismissing it will likely place campaigns at a strategic disadvantage come 2025.
What ARA lacks: a complete measurement dataset
As mentioned, ARA event-level reports capture a high percentage of Chrome converters but, on a campaign running across multiple browsers and devices, a lower percentage of total conversions than cookies. This is problematic if the data is used in its raw state: fewer recorded conversions means lower reported ROI. Again, there are some important caveats here.
The bottom line here is that ARA data is usable for measurement, but modeling will be required to accurately represent the true ROI of campaigns. Given the prevalence of modeled conversions within tools such as Campaign Manager 360, this doesn’t feel like a blocker to ARA’s adoption. Ultimately, a combination of summary and event-level reports will likely be needed to model true campaign performance most accurately.
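As a toy illustration of why modeling matters, the sketch below naively scales ARA-observed conversions up by an estimated capture rate and by Chrome’s share of campaign traffic. The capture rate echoes the 84.9% figure from our testing; the traffic share is a hypothetical input a measurement team would estimate separately, and real models would be considerably more sophisticated.

```python
def model_total_conversions(ara_conversions: float,
                            ara_capture_rate: float,
                            chrome_traffic_share: float) -> float:
    """Naive upscaling: divide out the share of Chrome converters that ARA
    captures, then the share of campaign traffic that runs on Chrome.
    Both rates are estimated inputs, not values ARA reports directly."""
    return ara_conversions / ara_capture_rate / chrome_traffic_share

# e.g. 1,000 ARA-reported conversions, 84.9% capture rate on Chrome,
# and an assumed 60% of campaign traffic on Chrome (illustrative):
estimate = model_total_conversions(1_000, 0.849, 0.60)
```

Even this crude version shows the shape of the problem: raw ARA counts systematically understate cross-browser ROI, and the correction factors must come from outside the API.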
What ARA changes: campaign optimization best practice
Given how much data could be collected with cookies, brands and partners often resorted to a “track everything and sift out what we need later” approach. That approach doesn’t work with ARA. Reporting decisions now need to be made up front, with brands and operational teams deciding which data or conversions to prioritize in the trigger priority settings, and making calculated trade-offs between speed, accuracy, and detail.
Partners such as MiQ will need to strike the right balance between the number of conversion events that they want to receive and the amount of noise included in reports, with more conversion events leading to more noise. In our testing, we limited tracking to a single conversion event per impression or click, but ARA’s trigger priority settings enable us to continue recording different types of conversion.
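For reference, trigger priorities are set in the JSON payload of the Attribution-Reporting-Register-Trigger response header returned on the conversion ping. A minimal sketch, with hypothetical trigger_data codes and priority values:

```python
import json

# Sketch of a trigger registration payload using ARA's priority field.
# When the per-source report limit is reached, higher-priority triggers
# displace lower-priority ones, letting a purchase outrank a page visit.
# The trigger_data codes and priorities below are illustrative choices.
trigger_payload = {
    "event_trigger_data": [
        {"trigger_data": "1", "priority": "100"},  # purchase: keep this
        {"trigger_data": "0", "priority": "1"},    # page visit: droppable
    ]
}

# Served to the browser as a response header on the conversion request:
header = ("Attribution-Reporting-Register-Trigger", json.dumps(trigger_payload))
```

This is where the up-front decisions land in practice: the priority numbers encode, before the campaign runs, which conversions a brand is willing to lose when the report limit bites.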
Finding the right balance across these settings will become much harder once the testing period ends, when cookie-based comparisons disappear and the noise level is no longer revealed. That’s arguably the most compelling reason to start testing now.
We think that this change will also support more strategic trading as brands work with partners to make considered choices around the pixels actually needed for a campaign, rather than adding a pixel to every page.
At MiQ, we’re looking forward to the improved focus that we see this change forcing upon conversion tracking, with the potential for positive knock-on effects across trading, campaign setup, and deeper brand-partner relationships.
ARA is… almost there
We think ARA will benefit the industry, the consumer, and the marketer, with more focused data, privacy-first optimization, and more strategic approaches to conversion tracking. That said, we view ARA as one of many powerful solutions which can be used in combination to deliver the best campaign outcomes.
With at least another year of testing to go, we’re expecting increased industry collaboration around Privacy Sandbox which we hope will lead to improved adoption rates, support for ARA across more device types, and better measurement of total converters leading to more accurate reporting.
Iterating and improving upon Privacy Sandbox APIs is only possible through continued testing, which marketers can and should get involved with now, for the benefit of both their campaigns and the wider industry.
Join our Privacy Sandbox Early-Access Testing Program for priority invitations to all upcoming Sandbox tests.
Methodology
This analysis uses data from 6 brands across 4 different markets. Conversions were measured as visits to landing and conversion pages.
Stay tuned for our ARA summary-level report analysis, where we’ll dive deeper into the available conversion data.