Why accuracy matters
The purpose of a forecast is to inform decisions. When the sales forecast says Q2 revenue will be £2 million and the actual result is £1.6 million, every decision based on that forecast -- hiring, investment, cash management -- was made on faulty information.
Measuring forecast accuracy is not about punishing the forecaster. It is about creating a feedback loop that makes each forecast better than the last.
How to measure it
Mean Absolute Percentage Error (MAPE). The most common metric. For each forecast period, calculate |actual - forecast| / actual, then average across periods. A MAPE of 10% means your forecasts are, on average, within 10% of actuals. Note that MAPE is undefined when an actual is zero and inflates when actuals are very small, so handle or exclude those periods explicitly.
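As a minimal sketch (the function name and figures are illustrative, not from any particular library), the calculation looks like this:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error: average of |actual - forecast| / actual.

    Periods with a zero actual are skipped, since the ratio is undefined there.
    """
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(errors) / len(errors)

# Three periods, each forecast at £2.0m, against actuals of £1.6m, £2.1m, £1.9m
print(f"MAPE: {mape([1.6, 2.1, 1.9], [2.0, 2.0, 2.0]):.1%}")  # prints "MAPE: 11.7%"
```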
Weighted MAPE. Standard MAPE treats a £1,000 line item the same as a £1 million line item. Weighted MAPE weights each line item by its actual value, so material accounts have proportionally more impact on the overall score.
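One common formulation (again a sketch, with an illustrative function name) achieves the weighting by summing absolute errors before dividing by the sum of actuals:

```python
def weighted_mape(actuals, forecasts):
    """Weighted MAPE: sum of absolute errors divided by sum of actuals,
    so each line item contributes in proportion to its size."""
    total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return total_error / sum(abs(a) for a in actuals)

# A £1m line item missed by 10% alongside a £1,000 item missed by 50%:
actuals, forecasts = [1_000_000, 1_000], [900_000, 1_500]
# unweighted MAPE averages 10% and 50% to 30%; weighted MAPE stays near 10%
print(f"{weighted_mape(actuals, forecasts):.1%}")  # prints "10.0%"
```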
Bias. MAPE measures magnitude but not direction. A forecast that is always 15% too high has a systematic bias that MAPE alone does not reveal. Track the signed error (actual minus forecast) to identify persistent optimism or pessimism.
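Tracking the signed error alongside MAPE makes the direction visible. A sketch, with illustrative names and numbers:

```python
def bias(actuals, forecasts):
    """Mean signed error (actual minus forecast).

    Consistently negative values mean forecasts run high (optimism);
    consistently positive values mean they run low (pessimism)."""
    return sum(a - f for a, f in zip(actuals, forecasts)) / len(actuals)

# A forecast that is always 15% too high: MAPE reports 15% either way,
# but only the signed error reveals the direction of the miss.
print(bias([100, 200, 400], [115, 230, 460]))  # prints -35.0
```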
Forecast horizon analysis. Measure accuracy at different horizons: one month out, three months out, six months out. Accuracy should degrade predictably as the horizon lengthens. If your one-month forecast is no more accurate than your six-month forecast, your reforecasting process is not adding value.
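One way to run this check is to bucket errors by how far ahead each forecast was made. A hypothetical sketch (the record layout and names are assumptions):

```python
from collections import defaultdict

def mape_by_horizon(records):
    """records: iterable of (horizon_in_months, actual, forecast) tuples.

    Returns MAPE per horizon; accuracy should degrade as the horizon lengthens."""
    buckets = defaultdict(list)
    for horizon, actual, forecast in records:
        if actual != 0:  # skip periods where MAPE is undefined
            buckets[horizon].append(abs(actual - forecast) / abs(actual))
    return {h: sum(errs) / len(errs) for h, errs in sorted(buckets.items())}

# Errors widening with horizon -- the healthy pattern:
records = [(1, 100, 96), (1, 100, 103), (3, 100, 91), (6, 100, 82)]
print(mape_by_horizon(records))
```

If the one-month bucket is no better than the six-month bucket, the reforecasts are not incorporating new information.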
What good looks like
Benchmark accuracy depends on the line item and the industry, but as general guidance for established businesses:
- Revenue: MAPE under 5% at one month, under 10% at three months
- COGS: MAPE under 5% at one month (closely tied to revenue)
- OpEx: MAPE under 8% at one month
- Overall P&L: MAPE under 7% at one month
Early-stage companies will naturally have higher error rates. The goal is improvement over time, not perfection from day one.
Improving accuracy systematically
1. Decompose the error. When a forecast misses, identify which assumption drove the variance. Was the volume assumption wrong? The pricing assumption? The timing? Each root cause suggests a different fix.
2. Use driver-based models. Forecasts built from operational drivers are typically more accurate than trend-based forecasts because the drivers are observable and adjustable.
3. Shorten the forecast cycle. Monthly reforecasts outperform quarterly reforecasts because assumptions are fresher. Continuous planning outperforms periodic forecasts for the same reason.
4. Involve the business. Revenue forecasts built solely by finance miss the commercial context that sales and operations teams have. Build a structured process for incorporating business intelligence into the forecast.
5. Track and publish accuracy. When forecast accuracy is measured and visible, it improves. People take more care with inputs when they know the output will be evaluated.
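The first two steps above can be sketched together: a driver-based revenue forecast, and a price/volume decomposition of the miss once actuals land. All function names and figures here are illustrative assumptions:

```python
def revenue_forecast(qualified_leads, win_rate, avg_deal_value):
    """Driver-based forecast: revenue built from observable operational
    drivers rather than extrapolated from a revenue trend line."""
    return qualified_leads * win_rate * avg_deal_value

def decompose_variance(fc_units, fc_price, act_units, act_price):
    """Split a revenue miss into a volume component and a price component.

    The two components sum exactly to (actual revenue - forecast revenue)."""
    volume_variance = (act_units - fc_units) * fc_price  # volume miss at planned price
    price_variance = (act_price - fc_price) * act_units  # price miss on actual volume
    return volume_variance, price_variance

# 400 leads at a 25% win rate and £20k average deal value -> £2m forecast
print(revenue_forecast(400, 0.25, 20_000))  # prints 2000000.0

# Forecast 100 deals at £20k; actuals come in at 80 deals at £20k.
volume_var, price_var = decompose_variance(100, 20_000, 80, 20_000)
print(volume_var, price_var)  # the £400k miss is entirely a volume problem
```

Because each driver is observable, the fix is targeted: here the volume assumption, not pricing, needs revisiting.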
The accuracy trap
Beware of pursuing accuracy for its own sake. A forecast that takes three weeks to produce but is 2% more accurate than one that takes three days is not a better forecast -- it is a slower one. The goal is to be accurate enough to support good decisions, delivered fast enough to be relevant.