How To Maximize Paid Media Performance With Privacy-Centric Ad Products: Google’s PMax and Meta’s ASC

Google’s Performance Max (PMax) and Meta’s Advantage+ Shopping Campaigns (ASC) are routinely credited as today’s most effective performance advertising products to emerge from the industry’s shift toward privacy-centric advertising. The rise of these ML-driven, broad-targeting ad products is rapidly changing marketing behavior. I sat down with Jonathan Yantz, Managing Partner at M&C Saatchi Performance, to talk through these changes and how advertisers can stay effective in this evolving landscape.

— Adam Landis, Head of Growth at Branch


Hi Jonathan, thank you for sitting down with me today. So, I know M&C Saatchi Performance has authority in this space. Can you share a little bit about the work you’re doing with customers?

Sure! Thanks for having me, Adam. We’re actively helping both established and challenger brands navigate the ever-changing media landscape. Whether that be across SKAN 4 (soon to be 5) and GA4 adoption, measurement in a cookieless world (it is possible), or how to get the most out of PMax, ASC, and similar campaign types, it’s clearly a dynamic time in media. But when is it not?

Today I really wanted to talk about the broad-based machine learning ad products offered by Meta and Google. Can you give us a very high-level idea of these products and how (and why) they work?

PMax is a comprehensive solution that leverages Google’s advanced machine learning algorithm and a single campaign to reach multiple channels (YouTube, SEM, display, etc.). Think of it as a one-stop shop to reach consumers across most of Google’s ecosystem, letting Google find them with minimal human intervention and direction. Similarly, Meta’s Advantage+ Shopping Campaigns (A+ or ASC) also put the algorithm in the driver’s seat, leveraging broad targeting and the best possible creative permutations based on the assets you provide to find the right consumer at the right time. Both simplify the overall process. I see Google’s key benefit as access to its plethora of inventory sources, while Meta’s strongest benefit is its ML-enabled audience targeting.

These solutions are great for advertisers just starting out or trying to gauge effectiveness at lower budget levels, since combining everything into one campaign is meant to help exit the “learning phase” faster than with manually segmented campaigns. However, these campaigns offer a limited “look under the hood” at how and why decisions are made. Therefore, they require trust from the advertiser and typically provide limited ability to apply deeper learnings to other channels or campaigns. Notably, both companies state they are working on solutions to offer more transparency, where possible.

I’ve heard this from clients. It’s a very different process, and I appreciate you sharing your experience. Readers may already know, but the genesis of these products stems from the shift toward user privacy, which degrades the signal and granularity available to advertisers. From what I understand, it’s a pretty radical shift from how performance marketing has historically operated and requires a drastic change in approach and workflow. On top of all of this, it reduces insight into what’s actually working. How are performance advertisers responding?

The past decade has shown a major shift from hyper-segmentation and manual “hands-on-keyboard” control to nearly full ML-led campaigns that aim to simplify things for advertisers. After all, aren’t these algorithms equipped to make real-time shifts quickly and at a larger scale when compared to humans attempting to optimize activity 24/7? Well, yes and no. A lot of factors are at play here. Marketers give up certain levels of transparency in exchange for faster optimizations, which requires a mental shift for advertisers. However, the algorithms can only be as effective as the data signals they are receiving, and experienced manual inputs are very much still required, especially when it comes to creative optimization. 

For example, Meta can optimize effectively off its pixel or SDK, but people are required to make sure these optimizations are in line with a brand-specific ultimate source of truth, whether that be GA4, a mobile measurement partner (MMP) like Branch, or their own data warehouse. As another example, we work with a few brands in heavily regulated industries such as sports betting and financial services, and in these instances, ML-driven creative permutations, GenAI creative, and even audience targeting need to be strictly monitored. These are just some reasons why campaigns like these can’t just be “set and forget” and do require expertise from people well-versed in managing campaigns to ensure success.
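To make that reconciliation concrete, here is a minimal sketch (with invented numbers and an assumed tolerance) of the kind of check a team might run to compare platform-reported conversions against a brand’s source of truth, whether that is an MMP export, GA4, or a data warehouse:

```python
# Minimal sketch (hypothetical data and thresholds): compare conversions the ad
# platform reports against a brand's source of truth and flag days where the
# discrepancy is large enough to distort the algorithm's optimization signals.

platform_reported = {"2024-05-01": 420, "2024-05-02": 465, "2024-05-03": 510}
source_of_truth   = {"2024-05-01": 401, "2024-05-02": 455, "2024-05-03": 380}

DISCREPANCY_THRESHOLD = 0.15  # assumed tolerance; tune to your own data


def flag_discrepancies(platform: dict, truth: dict, threshold: float) -> list:
    """Return days where platform-reported conversions drift beyond the tolerance."""
    flagged = []
    for day, truth_count in truth.items():
        platform_count = platform.get(day, 0)
        drift = abs(platform_count - truth_count) / max(truth_count, 1)
        if drift > threshold:
            flagged.append((day, platform_count, truth_count, drift))
    return flagged


for day, reported, actual, drift in flag_discrepancies(
    platform_reported, source_of_truth, DISCREPANCY_THRESHOLD
):
    print(f"{day}: platform={reported}, source of truth={actual}, drift={drift:.0%}")
```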

I remember when Google’s UAC (Universal App Campaigns) rolled out. We heard many complaints because it forced advertisers to run on YouTube, which back then wasn’t testing very well. Nowadays, we’re hearing similar challenges because PMax is designed to take away advertisers’ choice of channels.

I hadn’t considered regulation coming into play. Can you tell me a little bit more about how that affects advertisers? Is it messaging compliance?

Messaging compliance is a big part of it. Since you can’t see all possible permutations that the system generates, much like with Google App Campaigns (GAC), you may lose confidence in your ability to validate everything being served. For example, Meta encourages brands to try Advantage+ Creative, which brings a whole suite of dynamic optimizations. These can include visual touch-ups, text improvements, music, 3D animation, and image expansion. If you’re in a heavily regulated industry, you may be unable to take advantage of this. Another example: Some of our clients are in the entertainment space where they promote the IP of a different brand (think: audiobook services, CTV platforms, etc.), and contractually they need to ensure their partner logos, IP, etc. are accurately represented in all advertising. In these cases, these new approaches could be tricky to successfully implement, particularly around creative.

That’s a really good point. So, these categories, whether it’s brand safety or regulation, are just prohibited from operating with these new products?

As of now, only to a certain degree. For both Meta and Google, the creative enhancements (or Google’s version within PMax, which takes it a step further with “automatically created assets”) are entirely opt-in. You can still benefit from the machine learning that comes with automatic bidding, automatic placements, etc., as long as your data and measurement infrastructure is properly set up.

Right, and GenAI could invent something totally new.

Exactly. And that seems to be where the industry is heading, with Google’s PMax pushing ahead with Google AI. While PMax offers better asset-combination transparency than GAC, marketers need to be aware of the best ways to set up campaigns when leveraging GenAI tools. That’s because there are some limitations, such as campaign-level asset reporting being the best available view (versus ad set-level) when using automatically created assets. It is important to review new iterations frequently, and you can manually pause inappropriate ones. However, when it comes to automated bidding strategies, targeting, etc., these platforms are less transparent about decision-making, which is something to consider.

That’s actually a super interesting point. Eric Seufert makes a related point in a recent article about these types of products. Essentially, these platforms have a negative incentive to offer targeting transparency. They are only incentivized to meet your overall targets, not to expose any of the targeting or buys that are falling well short. So, where a human operator might stop a buy when performance starts to dip, even while it is still well above the target, the algorithms will keep buying as marginal performance degrades until the return on ad spend (ROAS) sinks to that target. So what do you do? Do you juice the targets?

That is a consideration, as the systems will seek the most efficient consumers or segments. Marketers should have a holistic strategy in place that will enable them to find other avenues for incremental growth. That’s why it’s important to think carefully and differently about the shift toward ML automation and pure algorithmic-led buying. At a surface level, it just sounds easier, and a big part of that is true, but if you’re not careful, you could miss out on significant revenue over time.
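To illustrate the dynamic with invented numbers: blended ROAS can stay comfortably above target while the marginal return on each additional tranche of spend keeps falling, which is exactly when a human would pull back but an algorithm optimizing to the blended target will not. A minimal sketch:

```python
# Illustrative sketch with made-up numbers: the blended ROAS an algorithm reports
# can stay above target while the marginal ROAS of each additional spend tranche
# keeps dropping. A human might cut spend when the marginal return dips; the
# algorithm keeps buying until the blended number approaches the target.

TARGET_ROAS = 2.0

# (incremental spend, incremental revenue) for successive tranches of budget
tranches = [(10_000, 45_000), (10_000, 30_000), (10_000, 18_000), (10_000, 12_000)]

total_spend = total_revenue = 0.0
for spend, revenue in tranches:
    total_spend += spend
    total_revenue += revenue
    marginal_roas = revenue / spend
    blended_roas = total_revenue / total_spend
    print(
        f"spend={total_spend:>8,.0f}  "
        f"marginal ROAS={marginal_roas:.2f}  blended ROAS={blended_roas:.2f}"
    )

# The last tranche's marginal ROAS falls to 1.2 while blended ROAS is still ~2.6,
# so a system chasing the blended target keeps spending even though the last
# dollars are returning below it.
```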

That reminds me of when UAC campaigns were a brand-new concept. People started a lot of little campaigns, which allowed them to tweak the levers a little more. The problem is that this flies directly in the face of how machine learning works: it needs a lot of data to target effectively. So I wonder whether breaking things into little campaigns actually robs the machine learning algorithm of its ability to succeed.

Finding a healthy balance that allows “test and learn” while the models continue to develop is one approach. It’s clear that hyper-segmentation isn’t the way to go either, at least not for most brands. The volume of data points needed, especially to exit the learning phase, is a great reason for this. Some brands we work with can easily clear the learning phase early on with an install + registration + free trial (on GAC, specifically), while others with either a higher price point, longer consumer journey, or limited awareness will rarely exit the learning phase when optimizing toward their ultimate goal. 

So that’s where a lot of testing and a watchful eye on cost per action (CPA), ROAS, or whatever your revenue metric is come into play. For an entertainment brand, we tried a variety of strategies to drive ROI-positive results from PMax but were unable to achieve the goal. While this is a somewhat rare case, once we reverted to running our main Google activity separately (YouTube, SEM, display), YouTube quickly became the most efficient channel aside from our low-level branded SEM. It came in under our CPA goal at scale despite launching as a brand-new campaign. So you really just need to be willing to constantly try something new.
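As a rough illustration of that volume problem, here is a minimal sketch (assumed conversion rates and a rule-of-thumb weekly threshold, not an official platform number) for checking which funnel event is deep enough to be meaningful but still frequent enough to exit the learning phase:

```python
# Minimal sketch (assumed rates and thresholds): estimate weekly conversion volume
# for each candidate optimization event and pick the deepest funnel event that
# still produces enough signal to exit the learning phase. The 50/week floor here
# is a commonly used rule of thumb, not an official platform figure.

WEEKLY_EXIT_THRESHOLD = 50
weekly_installs = 1_400  # assumed volume driven by the current budget

# event name -> assumed conversion rate from install, ordered top to bottom of funnel
funnel = {
    "install": 1.00,
    "registration": 0.35,
    "free_trial_start": 0.08,
    "paid_subscription": 0.015,
}

deepest_viable = None
for event, rate in funnel.items():
    weekly_events = weekly_installs * rate
    viable = weekly_events >= WEEKLY_EXIT_THRESHOLD
    if viable:
        deepest_viable = event
    print(f"{event:<18} ~{weekly_events:>6.0f}/week  {'OK' if viable else 'too sparse'}")

print(f"\nOptimize toward: {deepest_viable}")
```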

So, would YouTube work where PMax doesn’t? Wouldn’t they be using the same tools and methodologies? 

Although it may seem counterintuitive, we were able to drive success on YouTube by using bid modifiers for age and gender. With PMax, we could either choose “all” or segment into different campaigns, but we found that the best consumers by age and gender weren’t consistent across Google’s channels. 
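For readers unfamiliar with bid modifiers, the underlying arithmetic is simple. This is a conceptual sketch with invented values, not platform API code:

```python
# Conceptual sketch (not platform API code): age/gender bid modifiers scale a
# base bid so spend concentrates on the segments that convert best on a given
# channel. Modifier values below are invented for illustration only.

base_bid_usd = 1.50

# segment -> bid modifier (1.0 = no change, >1.0 = bid up, <1.0 = bid down)
modifiers = {
    ("25-34", "female"): 1.30,
    ("25-34", "male"): 1.10,
    ("35-44", "female"): 1.00,
    ("55-64", "male"): 0.70,
}

for (age, gender), modifier in modifiers.items():
    effective_bid = base_bid_usd * modifier
    print(f"{age} / {gender}: ${effective_bid:.2f}")
```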

Ah, so here’s a real-world example where targeting is helping drive success. In this case, you know the audience but are driving too few conversions for the machine learning systems to figure it out on their own. So, what is the outcome of this for ML-enabled buying? Do you need to back away from the ideal outcome, because it happens too late or too infrequently, and try to predict what will turn into a conversion? Or do you need to use an ad product where you can force targeting because you know better than the algorithm? Because I assume we’re not always going to have the ability to target with future products.

That might help, but it’s not the entire picture; it’s possible that a big part of this example was driven by creative. The videos we had at our disposal spoke to a different audience than our display assets or search copy. In this sense, PMax was struggling to find consumers across the board who would convert within our bid levels and ultimately spent only a fraction of the budget on video. Sometimes, it can be more valuable to have the right type of creative versus focusing purely on volume of creative. Ideally, you’d have quality and quantity. But to your point, when struggling to exit the learning phase, one of the few recourses you typically have is to move the optimization event up the funnel to something that will drive more event fires.

This is actually very similar to what we run into when optimizing on signals from SKAN outcomes. Effectively, out of 100 users, it’s much better to have 10 “directionally good” outcomes than one “ideal” outcome, because it allows the algorithms to collect more signals that serve as inputs for the model. So it’s a similar situation: you need to figure out how to get early, usable output from the campaign, even if it’s not perfect, to give feedback to the statistical or machine learning model that’s driving the buying.

So, how do you get started with finding “directionally good” signals?

You can’t go in on day one trying to optimize toward the most expensive conversion or, say, an event 10 steps down the user journey. You need to go back to the early days of GAC, when you would start by optimizing for the install, then graduate to optimizing toward an action, potentially giving it multiple actions of similar importance. Get people in the door, start building up signals in general, and then slowly move down the funnel. Granted, not all brands have the time or budget for this, so you can either look for “directionally good” signals in your organic behavior and other paid channels, or create a testing approach that starts with fairly frictionless events assumed to show consumer intent.
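A minimal sketch of that “graduate down the funnel” approach, with hypothetical events and an assumed signal floor:

```python
# Minimal sketch (hypothetical events and thresholds): start by optimizing toward
# a high-volume, low-friction event, and only move to a deeper event once it fires
# often enough to feed the model.

MIN_WEEKLY_EVENTS = 50  # assumed signal floor before switching optimization events

# ordered ladder from low-friction to high-value
ladder = ["install", "registration", "add_to_cart", "purchase"]


def next_optimization_event(current: str, weekly_counts: dict) -> str:
    """Advance one rung down the ladder only if the next event has enough volume."""
    idx = ladder.index(current)
    if idx + 1 < len(ladder) and weekly_counts.get(ladder[idx + 1], 0) >= MIN_WEEKLY_EVENTS:
        return ladder[idx + 1]
    return current


weekly_counts = {"install": 1_200, "registration": 310, "add_to_cart": 65, "purchase": 12}
print(next_optimization_event("install", weekly_counts))       # -> registration
print(next_optimization_event("registration", weekly_counts))  # -> add_to_cart
print(next_optimization_event("add_to_cart", weekly_counts))   # -> add_to_cart (purchase too sparse)
```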

Thank you, Jonathan. We could go on for hours, but can you leave us some parting thoughts on what your firm has learned?

Sure. It’s obvious the AI revolution has significantly impacted the advertising industry and will continue to do so, driving new technological enhancements for marketers. Products like PMax and ASC are exciting developments that will open up opportunities for testing, but there will always be a need for experienced people overseeing campaigns. Working with experienced partners means results can be evaluated for learnings and campaigns updated accordingly. Marketers should set long-term goals, such as user retention, as well as acquisition targets, to avoid being too short-term in their approach.