When working on an SEM team that faces a fixed budget every month, there is always some uncertainty, and often some anxiety, that comes with monitoring spend pacing. Changing how you monitor can dramatically reduce that uncertainty and free up time and decision-making capacity for other things. In this post I’ll discuss the common (traditional) way and an improved way that uses layers of data, and I’ll show results of how effective it can really be.
The Common (Traditional) Way
Do you, like many, monitor pacing by comparing MTD percent of budget spent against the percent of days completed in the month? If so, you likely have discussions like “Well, we’re pacing slow because the 4th of July fell in those first couple of days, but to be sure let’s see if this follows the same pattern as last year,” or “Yesterday’s spend seems a little hot, something may have changed… but it was a Monday, which is usually higher spend, so maybe we’re okay.” All of these discussions, while sometimes important to have, are in many cases unnecessary, because the historical data is almost always available to bake into a process that shows how far off you actually are.
The Improved Way Using Layers of Data
Without historical data layered into the budget pacing calculation, you are essentially comparing your spend for the month against an unnaturally flat benchmark, which means most of the time you will not appear to be pacing to target and must guess whether you are actually okay for the month. Adding the following layers significantly reduces the guesswork and, as I’ll show you below, can predict the natural daily shifts and curves with fairly good accuracy. Most teams can add these layers:
1) Recency window for rest of month forecast
When you’re 3 weeks into the month and have been adjusting spend each week to bring pacing in line with target, it isn’t fair to assume that the first 21 or so days are an accurate indicator of pacing for the remainder of the month. A more recent window is a better indicator of where you will finish. We find that the last 7 days are a fairer estimate of where spend is pacing for the rest of the month. This is the easiest change to implement; it may not truly be a layer, but it’s the first thing to change if it isn’t in your pacing calculation.
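As a minimal sketch of this idea, the end-of-month projection can be built from a trailing 7-day run rate rather than the full month-to-date average. The function name, spend figures, and budget below are hypothetical, just to illustrate the calculation:

```python
import calendar
from datetime import date

def rest_of_month_forecast(daily_spend, today, monthly_budget):
    """Project end-of-month spend from the trailing 7-day run rate.

    daily_spend: spend for days 1..today.day of the current month
    (hypothetical sample values below).
    Returns (projected_spend, projected_pct_of_budget).
    """
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    days_remaining = days_in_month - today.day
    # Last 7 days reflect recent adjustments better than the full MTD.
    run_rate = sum(daily_spend[-7:]) / min(len(daily_spend), 7)
    projected = sum(daily_spend) + run_rate * days_remaining
    return projected, projected / monthly_budget

# Illustrative: 10 days into a 31-day July with a $3,100 budget
spend = [90, 95, 110, 120, 100, 80, 85, 105, 115, 125]
projected, pct = rest_of_month_forecast(spend, date(2024, 7, 10), 3100)
# projected = 1025 spent + (730 / 7) * 21 remaining days = 3215.0
```

If the projection runs meaningfully above or below the budget, that gap, not the flat days-elapsed comparison, is the signal to adjust.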
2) DoW index
Everyone sees that spend typically follows a weekly pattern, with some days of the week consistently the highest spend and others the lowest. With historical data you can build a spend index for day-of-week fluctuations, then apply those multipliers in your pacing tracker to predict with a good deal of accuracy how spend will vary by day of week. The chart below shows what that pattern might look like. This is by far the most valuable layer: with it you will spend far less time guessing whether a spend fluctuation was due to day of week or whether something actually changed.
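One way to build such an index, sketched with made-up history where Mondays run hot and weekends run cold (in practice you would pull several weeks of daily spend from your ad platform reports):

```python
from collections import defaultdict
from datetime import date, timedelta

def dow_index(history):
    """Build day-of-week multipliers from (date, spend) pairs.

    Returns {weekday: multiplier}, where weekday is 0=Mon..6=Sun
    and 1.0 represents an average day.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for d, spend in history:
        totals[d.weekday()] += spend
        counts[d.weekday()] += 1
    overall_daily_avg = sum(totals.values()) / sum(counts.values())
    return {wd: (totals[wd] / counts[wd]) / overall_daily_avg
            for wd in totals}

# Hypothetical 4-week history repeating a Mon..Sun pattern
start = date(2024, 6, 3)  # a Monday
weekly_pattern = [130, 110, 105, 100, 95, 80, 80]  # averages 100/day
history = [(start + timedelta(days=i), weekly_pattern[i % 7])
           for i in range(28)]
index = dow_index(history)
# Monday's multiplier is 130 / 100 = 1.30; Sunday's is 0.80
```

Multiplying each day's flat target by its index gives a daily expectation that already bakes in the weekly rhythm, so a hot Monday no longer looks like an anomaly.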
3) Week forecast based on last 2+ years
Seasonality is the other big factor that everyone deals with. We want to weight our spend according to when people want to purchase. We also want to make sure we don’t skew spend too high or too low for any one week, which can make it difficult to correct later in the month. Using more than last year’s data is recommended: blending several years helps smooth out issues in the data that we would want to ignore, such as site issues, data issues, or promotions. The chart below shows the expected pattern for weeks with days that fall in July. We see that July 4th week is low, spend is expected to jump in week 28, and it then falls about 5% each week thereafter.
Adding this to your pacing expectations for the month keeps you pacing along with seasonality. Running out of budget toward the end of the month only helps competitors and makes for a shaky start to the next month, so add this layer if you can.
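A simple way to blend multiple years, sketched with invented weekly figures: normalize each year to weekly shares first (so a higher-spend year doesn’t dominate), then average the shares across years.

```python
def weekly_weights(yearly_week_spend):
    """Blend 2+ years of weekly spend into seasonal weights.

    yearly_week_spend: {year: {iso_week: spend}} (hypothetical figures).
    Returns {iso_week: weight}, with weights summing to 1.0.
    """
    shares = []
    for weeks in yearly_week_spend.values():
        total = sum(weeks.values())
        # Normalize within the year so each year contributes equally.
        shares.append({wk: s / total for wk, s in weeks.items()})
    return {wk: sum(yr[wk] for yr in shares) / len(shares)
            for wk in shares[0]}

# Illustrative: two Julys where week 28 jumps after a slow July 4th week
history = {
    2022: {27: 800, 28: 1100, 29: 1050, 30: 1000, 31: 950},
    2023: {27: 900, 28: 1300, 29: 1250, 30: 1150, 31: 1100},
}
weights = weekly_weights(history)
```

Averaging shares rather than raw dollars is what gives the smoothing: a one-off promotion in a single year moves the blended weight only half as far as it moved that year’s curve.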
The closeness of the bars in the chart below shows just how accurate this process can be. There was only one day where adjustments were made to keep pacing on target. The blue bars show the initial forecast for the month with DoW and week adjustments applied. The actual spend is shown in orange. The gray bars are the forecast adjusted based on prior days’ spend, showing how much needs to be spent each day for the rest of the month to hit budget.
We see the first several days of July with the expectation of low spend, which indeed happened, so no surprise there. Thereafter, actual spend follows the day-of-week forecast remarkably closely. There’s a slight downward trend for the month (shown by the dotted linear trendline), which means more spend should be concentrated toward week 2 of the month. The fact that the adjusted daily target stays even with the original monthly forecast says the process is working just as planned.
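The gray “adjusted” bars can be produced by redistributing the remaining budget over the remaining days in proportion to their forecast weights (the combined DoW and week multipliers). A minimal sketch, with the budget, spend-to-date, and weights all hypothetical:

```python
def adjusted_daily_targets(monthly_budget, spend_to_date, remaining_weights):
    """Spread the unspent budget across the remaining days.

    remaining_weights: forecast weight for each remaining day, e.g.
    the product of that day's DoW index and week weight (hypothetical
    values below). Returns a dollar target per remaining day.
    """
    remaining_budget = monthly_budget - sum(spend_to_date)
    total_weight = sum(remaining_weights)
    # Each remaining day gets its proportional share of what's left.
    return [remaining_budget * w / total_weight for w in remaining_weights]

# Illustrative: $3,100 budget, $1,025 already spent, 3 days left where
# the middle day is forecast to spend twice as much as the others
targets = adjusted_daily_targets(
    3100,
    [90, 95, 110, 120, 100, 80, 85, 105, 115, 125],
    [1, 2, 1],
)
```

Because the targets are recomputed from actual spend each day, an over- or under-spent day automatically shifts the remaining daily targets rather than silently breaking the month-end total.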
It’s still not perfect, though. Some holidays change weeks, which can make it tricky to determine what to expect on a weekly basis. Site issues and promotions in historical data can skew the weekly and DoW indexes; the output is only as good as the input, and predictability is key for accuracy. There’s always going to be something that needs manual adjustment and can’t be automated, so it’s best to leave some functionality that allows for manual tweaking to account for one-off factors or to correct outliers in historical data.
I hope this helps you monitor budget pacing for your team and encourage you to reach out if you have any questions.