A comprehensive new study from researchers at Northwestern University (Evanston, Illinois) and Facebook, set to be published later this March in the Informs journal Marketing Science, identifies challenges with current measurement practices for online advertising and calls for better ways to address so-called “unobservable” factors. The key takeaway, according to its authors, is that common “observational” approaches to digital advertising measurement may not be as reliable or accurate as the “gold standard” of large-scale, randomized experiments. The study is titled A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook.
“Our findings suggest that commonly used observational approaches, which rely on data usually available to advertisers, often fail to accurately measure the true effect of advertising,” said study co-author Brett Gordon of Northwestern University. Observational approaches encompass a broad class of statistical models that rely on the data ‘as they are,’ generated without explicit manipulation through a randomized experiment. “We found a significant difference between the ad effectiveness obtained from randomized control trials and that obtained from the observational methods frequently used by advertisers to evaluate their campaigns,” co-author Florian Zettelmeyer added. “Generally, the current and more common methods overestimate ad effectiveness relative to what we found in our randomized tests. Though, in some cases, they significantly underestimate effectiveness.”
Measuring the effectiveness of advertising remains an important problem for many businesses, including commercial printers and advertising/marketing agencies. A key question is whether an advertising campaign produced incremental outcomes. In other words, did more consumers purchase because they saw an ad, or would many of those consumers have purchased even in the absence of the ad? Obtaining an accurate measure of incremental outcomes (i.e., conversions) helps advertisers calculate the return on investment (ROI) of a campaign.
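The arithmetic behind incremental outcomes and ROI can be sketched as follows. This is a minimal illustration of the concept; all figures and the function name are my own assumptions, not numbers or methods from the study.

```python
def campaign_roi(exposed_conv_rate, baseline_conv_rate, n_exposed,
                 value_per_conversion, ad_spend):
    """Estimate incremental conversions and ROI for a hypothetical campaign.

    Incremental conversions are those attributable to the ad, i.e. purchases
    that would not have happened in its absence.
    """
    incremental = (exposed_conv_rate - baseline_conv_rate) * n_exposed
    incremental_revenue = incremental * value_per_conversion
    roi = (incremental_revenue - ad_spend) / ad_spend
    return incremental, roi

# Illustrative numbers: 2.0% conversion among the exposed vs. a 1.5%
# baseline, 1M exposed users, $40 per conversion, $100k spend.
inc, roi = campaign_roi(0.020, 0.015, n_exposed=1_000_000,
                        value_per_conversion=40.0, ad_spend=100_000.0)
print(f"incremental conversions: {inc:.0f}, ROI: {roi:.0%}")
# prints "incremental conversions: 5000, ROI: 100%"
```

The key point is that the baseline rate must reflect what the *same* population would have done without the ad, which is exactly what a randomized control group provides.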
How the data was obtained. “Digital platforms that carry advertising, such as Facebook, have created comprehensive means to assess ad effectiveness, using granular data that links ad exposures, clicks, page visits, online purchases and even offline purchases,” said Gordon. “Still, even with this data, measuring the causal effect of advertising requires the proper experimentation platform.” The study authors used data from 15 U.S. advertising experiments at Facebook comprising 500 million user-experiment observations and 1.6 billion ad impressions. Facebook’s “conversion lift” experimentation platform provides advertisers with the ability to run randomized, controlled experiments to measure the causal effect of an ad campaign on consumer outcomes.
These experiments randomly allocate users to a control group (who are never exposed to the ad) and to a test group (who are eligible to see the ad). Comparing outcomes between the groups provides the causal effect of the ad because randomization ensures that the two groups are, on average, equivalent – except for advertising exposures in the test group. The experimental results from each ad campaign served as a baseline with which to evaluate common observational methods.
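The test-versus-control comparison described above amounts to a difference in conversion rates. A minimal sketch, using made-up counts and a standard normal-approximation confidence interval (not the study's exact estimator):

```python
import math

def lift_estimate(conv_test, n_test, conv_control, n_control):
    """Difference in conversion rates between test and control groups,
    with an approximate 95% confidence interval."""
    p_t = conv_test / n_test
    p_c = conv_control / n_control
    lift = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_test + p_c * (1 - p_c) / n_control)
    return lift, (lift - 1.96 * se, lift + 1.96 * se)

# Hypothetical campaign: 2,100 conversions among 100k test users,
# 1,800 among 100k control users.
lift, ci = lift_estimate(2100, 100_000, 1800, 100_000)
print(f"lift: {lift:.4f}, 95% CI: ({ci[0]:.4f}, {ci[1]:.4f})")
```

Because assignment is randomized, this difference can be read causally; the same subtraction applied to non-randomized groups cannot.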
Comparison of outcomes and conclusion. Observational methods compare outcomes of users who were exposed to the ad with those of users who were unexposed. These two groups of users tend to differ systematically in many ways, such as age and gender. These differences in characteristics may be observable because the advertiser (or its advertising platform) often has access to data on these characteristics and others. For example, in addition to knowing the gender and age of an online user, it’s possible to observe the type of device being used, the location of the user, how long it’s been since the user last visited a website, and so on.
However, the tricky part is that the exposed and unexposed groups may also differ in ways that are extremely difficult to measure – such as the users’ underlying affinity for the brand. To say that the ad “caused” an effect requires the researcher to be able to account for both observed and unobserved differences between the two groups. Observational methods use data on the characteristics of the users that are observed in an attempt to adjust for both the observable and unobservable differences. “We set out to determine whether, as commonly believed, current observational methods using comprehensive, individual-level data are ‘good enough’ for ad measurement,” said Zettelmeyer. “What we found was that even fairly comprehensive data proved inadequate to yield reliable estimates of advertising effects. In principle, we believe that using large-scale, randomized controlled trials to evaluate advertising effectiveness should be the preferred method for advertisers whenever possible.”
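The bias from an unobserved factor can be seen in a toy simulation. This is my own illustration, not the study's data or model: users with high unobserved brand affinity are both more likely to be exposed to the ad and more likely to convert anyway, so a naive exposed-versus-unexposed comparison overstates the true effect, while a randomized split recovers it.

```python
import random

random.seed(0)
TRUE_AD_EFFECT = 0.02  # the ad truly raises conversion probability by 2 points

def simulate(randomized, n=200_000):
    """Return the exposed-minus-unexposed difference in conversion rates."""
    exp_conv = exp_n = unexp_conv = unexp_n = 0
    for _ in range(n):
        affinity = random.random()                 # unobserved brand affinity
        if randomized:
            exposed = random.random() < 0.5        # coin-flip assignment
        else:
            exposed = random.random() < affinity   # high-affinity users see more ads
        p = 0.05 * affinity                        # affinity also drives conversion
        if exposed:
            p += TRUE_AD_EFFECT
        conv = random.random() < p
        if exposed:
            exp_n += 1; exp_conv += conv
        else:
            unexp_n += 1; unexp_conv += conv
    return exp_conv / exp_n - unexp_conv / unexp_n

naive = simulate(randomized=False)  # inflated: confounded by affinity
rct = simulate(randomized=True)     # close to the true 0.02 effect
print(f"naive observational estimate: {naive:.3f}")
print(f"randomized estimate:          {rct:.3f}")
```

No amount of adjusting for observed characteristics fixes the naive estimate here, because affinity is never recorded. Randomization sidesteps the problem by making exposure independent of affinity.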
About Informs and Marketing Science. Marketing Science is a premier, peer-reviewed scholarly marketing journal focused on research using quantitative approaches to study all aspects of the interface between consumers and businesses. It’s published by Informs, the leading international association for operations research and analytics professionals. More information is available at www.informs.org, where the study is scheduled to be published later this month.