3 MINUTE READ | April 4, 2016
Google's Causal Impact: Part 1 - Intro
Incrementality. It’s one of the toughest things to tackle in digital analytics. “How many incremental conversions did that branding campaign drive?” “How many additional clicks did we drive by implementing that new AdWords feature?”
The most common solution is a simple lift analysis: calculate the difference between average conversions before and after the intervention. This method falls short because it assumes every difference between the before and after periods is due strictly to the intervention. Seasonality alone can throw a wrench into that assumption. Even analysts who account for seasonality, by comparing a forecast built from the before data against the actual after data, can fail to account for other business factors such as unforeseen press coverage or promotions.
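To make the weakness concrete, here is a minimal sketch of the naive lift calculation described above, using hypothetical daily conversion counts (the numbers are illustrative only):

```python
# Naive pre/post lift analysis: compare average daily conversions
# before and after the intervention. Hypothetical daily counts.
pre = [100, 104, 98, 102, 101, 99, 103]    # week before the change
post = [112, 108, 115, 110, 109, 114, 111] # week after the change

pre_avg = sum(pre) / len(pre)
post_avg = sum(post) / len(post)
lift = post_avg - pre_avg

# This attributes the entire difference to the intervention,
# ignoring seasonality, promotions, press coverage, and so on.
print(f"Naive lift: {lift:.1f} conversions/day")
```

The calculation is trivial, which is exactly the problem: nothing in it separates the intervention's effect from everything else that changed between the two periods.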
Google set out to help digital marketers with this problem. The result was the CausalImpact R package. While the inner workings are complex, the overall idea is simple. You submit your response variable (conversions, clicks, etc.) over time for a test group and a control group. (Note that this requires a test group that was exposed to the intervention and a control group that wasn’t.) CausalImpact uses the control group to forecast where the test group would have ended up had the intervention not taken place. This forecast is more reliable than a simple pre-period projection because the control group was exposed to the same unforeseen business factors mentioned earlier. It then calculates the cumulative difference between the forecasted and actual test-group values to estimate the incremental lift of the intervention.
To demonstrate, I’ll run through the simple example that Google put together (the one every blog post on the package uses): generate random dummy data, then introduce a substantial “intervention” at some point.
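CausalImpact itself is an R package built on a Bayesian structural time-series model, but the core idea can be illustrated with a much cruder stand-in. The sketch below, in Python, generates dummy test/control data with an artificial lift, fits an ordinary least-squares regression of the test series on the control series over the pre-period, and uses it to forecast the counterfactual. This is not the package's actual model, just the intuition; the series lengths, lift size, and OLS fit are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dummy data: a control series, and a test series that tracks it,
# plus an artificial lift of +10 injected after day 70.
n, change = 100, 70
control = 100 + np.cumsum(rng.normal(0, 1, n))  # slow random walk
test = 1.2 * control + rng.normal(0, 1, n)
test[change:] += 10                             # the "intervention"

# Fit test ~ control on the pre-period only (an OLS stand-in for
# CausalImpact's Bayesian structural time-series model).
X_pre = np.column_stack([np.ones(change), control[:change]])
beta, *_ = np.linalg.lstsq(X_pre, test[:change], rcond=None)

# Counterfactual: where the test group would have been without the intervention.
X_all = np.column_stack([np.ones(n), control])
forecast = X_all @ beta

pointwise = test - forecast            # per-day lift estimate
cumulative = pointwise[change:].sum()  # total incremental effect
print(f"Estimated cumulative lift: {cumulative:.0f} "
      f"(true lift: {10 * (n - change)})")
```

Because the control series experienced the same background conditions as the test series, the forecast absorbs those shared factors, and the pointwise and cumulative differences isolate the intervention, which is exactly what CausalImpact's default plot visualizes.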
Default CausalImpact example plot: actuals against the forecast, the pointwise difference, and the cumulative difference.
The default plot CausalImpact produces shows the actual test values against the forecast, the pointwise difference between the two, and finally the cumulative difference, i.e., the lift. It’s an intuitive visualization of exactly what the analysis is doing.
CausalImpact is extremely useful when trying to demonstrate incrementality. The logic behind the methodology is sound, and the intuitive visualization gives analysts an easy way to explain the results. We have already used it for a couple of analyses and have built a Shiny app around the package so our marketers can run this analysis whenever they want. Unfortunately, nothing is perfect: in part 2 we will look at one of the issues we have seen when using this tool in the real world.
Posted by: Preston Smith