Earlier this month, I wrote about our 10-year anniversary as a Vertical AI-powered TV outcomes company — both the lonely road we traveled in our early years and the experience of watching many jump on the bandwagon.
It’s been gratifying to see so many in our industry acknowledge the value of what we do. Suddenly, everyone is talking about AI-powered mid-funnel outcomes as the “new” solution for measuring TV advertising. “New?!” We’ve been here for a decade.
But how do you know whether your outcomes solution is actually worthy of your trust for high-stakes decisions? Here are three questions to help you separate real investment-grade Convergent TV intelligence from the AI imitators.
1. Does your model measure propensity?
One of the biggest misconceptions about outcomes measurement is that you just have to figure out what consumers did after your ads aired. But that isn’t quite right. First, you need to know what consumers would have been doing anyway. What is their baseline engagement with your brand over time? What factors drive that baseline up or down over a day, week, month, etc.?
That’s why effective outcomes solutions measure propensity, which identifies how likely different kinds of consumers are to engage with your brand or category at any given time — regardless of your advertising efforts.
When you know which consumers are most likely to engage with your brand, you can increase your ad effectiveness by targeting the right Designated Market Areas (DMAs), engaging the right households, and setting frequency caps that maximize efficiency while reducing waste. Or, you can use this data to identify high-potential DMAs where consumers frequently engage with advertisers in your category but are not yet engaged with your brand.
At EDO, we employ Vertical AI solutions to detect the myriad factors that drive baseline behaviors for tens of thousands of brands, and we match these data to millions of consumer households’ distinct propensity profiles. The result is a robust engine for helping marketers maximize effectiveness by targeting their most engaged viewers — one that wouldn’t be possible with human analysis alone.
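If you want a feel for what propensity modeling looks like mechanically, here's a deliberately tiny Python sketch. The handful of features, the made-up households, and the plain logistic regression are assumptions for illustration only; our Vertical AI models are far richer than this.

```python
# Illustrative sketch only: a toy household-level propensity model.
# Feature names, data, and the choice of logistic regression are assumptions
# for the example, not a description of EDO's Vertical AI system.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical household-level training data: past behavior and context,
# with a binary label for whether the household engaged with the brand.
train = pd.DataFrame({
    "past_brand_searches":  [0, 3, 1, 7, 0, 2],
    "category_engagements": [1, 5, 2, 9, 0, 4],
    "hour_of_day":          [20, 22, 13, 23, 9, 21],
    "is_weekend":           [0, 1, 0, 1, 0, 1],
    "engaged":              [0, 1, 0, 1, 0, 1],
})

features = ["past_brand_searches", "category_engagements", "hour_of_day", "is_weekend"]
model = LogisticRegression(max_iter=1000).fit(train[features], train["engaged"])

# Propensity = probability a household engages with the brand absent any new ad.
# High-propensity households (or DMAs) can then inform targeting and frequency caps.
train["propensity"] = model.predict_proba(train[features])[:, 1]
print(train[["propensity"]])
```

The point is the output: a per-household probability of engaging with the brand before any new advertising runs, which is exactly the baseline you need before you can talk about lift.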
2. Does your model measure incrementality?
Propensity is a necessary precondition for another essential aspect of an investment-grade TV outcomes model: incrementality. Incrementality isolates the true impact of a campaign by subtracting a viewer’s baseline propensity to engage from the engagement the brand earns after showing the consumer its ad.
With incrementality, marketers are able to a) measure what the consumer did after seeing an ad, b) model what they would have done if they hadn’t, and c) identify the impact generated exclusively by the ad. Without it, you’ll be attributing campaign success to factors that have nothing to do with your ad.
For example, pizza brands typically see a distinctive bump in engagement around 10pm (some might call it the “munchies bump”). So if you’re running a pizza ad on Cartoon Network late at night, it’s not enough to know whether consumers searched your brand or placed a mobile order, because pizza ads tend to do well then and there anyway. The real questions are whether you drove incremental lift above what you would have seen otherwise, and whether that lift was driven by the audience, the media, or the creative.
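To put rough numbers on that pizza scenario (all invented for illustration), incrementality boils down to a subtraction: what exposed viewers actually did, minus what the propensity model says they would have done anyway.

```python
# Illustrative arithmetic only: incremental lift as observed engagement minus
# the modeled baseline. The rates below are invented for the example.
exposed_engagement_rate = 0.042  # share of exposed households that engaged after the late-night ad
baseline_propensity     = 0.031  # modeled engagement rate for those same households without the ad

incremental_lift = exposed_engagement_rate - baseline_propensity  # absolute lift from the ad
relative_lift    = incremental_lift / baseline_propensity         # lift relative to baseline

print(f"Incremental lift: {incremental_lift:.3f} ({relative_lift:.0%} above baseline)")
# -> Incremental lift: 0.011 (35% above baseline)
```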
Our Vertical AI model simulates test-and-control experiments to isolate how outcomes fluctuate in response to the factors that marketers can act on. The result? An AI model that helps brands drive true incremental performance.
3. How stable is your model?
Predictive outcomes are only predictive if they are stable — meaning they easily integrate new data and protect against wild, inexplicable swings in results. Without regular stability testing, your outcomes provider could be measuring noisy correlations rather than real causal signals.
At EDO, our data scientists consistently run each iteration of our Vertical AI models through in-sample vs. out-of-sample tests and compare actual performance against the performance we predicted. As a result, we can spot unexplained variance and systematically improve our models’ ability to account for exogenous variables that might otherwise undermine predictiveness.
This consistent “stability testing” enables us to confidently make causal inferences about the real factors driving or undermining ad performance. And, more importantly, it enables marketers to make planning and optimization decisions with confidence.
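For the technically curious, here's a simplified Python sketch of the kind of in-sample vs. out-of-sample comparison described above, using a synthetic weekly engagement series. It's an illustration of the general idea, not our actual testing pipeline.

```python
# Illustrative sketch of an out-of-sample stability check on a synthetic,
# time-ordered engagement series; not EDO's testing methodology.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = np.arange(104).reshape(-1, 1)                      # two years of weekly observations
engagement = 100 + 0.5 * weeks.ravel() + rng.normal(0, 5, 104)

# In-sample vs. out-of-sample: fit on the first ~75% of weeks, hold out the rest.
split = 78
model = LinearRegression().fit(weeks[:split], engagement[:split])

in_sample_err  = np.mean(np.abs(model.predict(weeks[:split]) - engagement[:split]))
out_sample_err = np.mean(np.abs(model.predict(weeks[split:]) - engagement[split:]))

# A stable model should show comparable errors on data it has and hasn't seen.
print(f"In-sample MAE: {in_sample_err:.1f}, out-of-sample MAE: {out_sample_err:.1f}")
```

When those two error figures diverge sharply, it's a red flag that the model is memorizing noise rather than capturing a stable, causal signal.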
Everyone’s got an AI solution these days. Only some are worthy of real investment and confident decisions.
While there are many AI pretenders in the fast-growing world of outcomes measurement, the true measure of these solutions is whether they power smarter, more reliable marketing decisions.
With the right propensity modeling, incrementality measurement, and stability in your model, you’ll have investment-grade data you can use to assess performance and draw clear, actionable conclusions about what to do next. Without these elements, you’re left relying on guesswork to guide hundreds of millions of dollars in Convergent TV spend.
As TV becomes even more fragmented — YouTube’s already big on the big screen and TikTok is just a matter of when — it will only become more important for marketers to have reliable, predictive outcomes that are comparable across platforms. Just make sure you can spot the solutions that deliver.