
Forecast Error Is the Only Honest KPI

Point accuracy tells you almost nothing about forecast quality. Error distributions do.


Most FP&A organizations evaluate forecasts by how close they land to actuals. This seems reasonable. It is also misleading.

Point accuracy can look good for the wrong reasons and bad for the right ones. What matters is not whether the number was close, but whether the forecast system understands its own uncertainty.

Forecast error is the only metric that exposes this.

Error is a distribution, not a number

A forecast without an error distribution is incomplete. It provides a number without context and invites false confidence. When leadership asks, “How accurate is the forecast?”, the common response is a single percentage or a directional claim. Both are evasions. Accuracy is not a scalar. It is a distribution.

Chart placeholder: Error distribution histogram (Actual minus Forecast) with a vertical zero line.
Forecast accuracy evaluated as a distribution of errors. Bias appears as skew, dispersion increases with horizon, and point accuracy collapses this information into a single misleading outcome.
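To make the point concrete, here is a minimal sketch using synthetic data (NumPy, invented numbers): two forecast tracks with similar mean absolute error, one unbiased and one running persistently low. A single accuracy figure treats them as interchangeable; the error distributions do not.

```python
import numpy as np

rng = np.random.default_rng(0)
actuals = 100 + rng.normal(0, 5, size=500)

# Forecast A: unbiased, with symmetric noise.
forecast_a = actuals + rng.normal(0, 4, size=500)
# Forecast B: systematically low by about 3, with tighter noise.
forecast_b = actuals - 3 + rng.normal(0, 2, size=500)

for name, forecast in [("A", forecast_a), ("B", forecast_b)]:
    err = actuals - forecast  # error convention: actual minus forecast
    print("%s  MAE %.2f  bias %+.2f  std %.2f"
          % (name, np.abs(err).mean(), err.mean(), err.std()))
```

On this data the two MAEs land within a few tenths of each other, yet B carries a bias of roughly +3 that the point metric never surfaces.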

Error reveals structure

Error exposes bias, volatility, and regime shifts. Bias shows up as skew. Volatility shows up as dispersion. Regime change shows up as a shift over time. None of this is visible when forecasts are judged solely by the final variance.
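Each of these signatures is cheap to compute. A sketch with hypothetical monthly errors (all numbers invented) in which volatility doubles halfway through the history, mimicking a regime shift:

```python
import numpy as np

rng = np.random.default_rng(1)
# 72 months of hypothetical errors (actual minus forecast);
# the second half doubles in volatility -- a regime shift.
errors = np.concatenate([rng.normal(0.0, 2.0, 36),
                         rng.normal(1.5, 4.0, 36)])

bias = errors.mean()
dispersion = errors.std()
skew = ((errors - bias) ** 3).mean() / dispersion ** 3

# Simple regime check: compare dispersion across halves of the history.
first, second = errors[:36], errors[36:]
print("bias %+.2f  dispersion %.2f  skew %+.2f" % (bias, dispersion, skew))
print("half-over-half std ratio %.2f" % (second.std() / first.std()))
```

A std ratio well above 1 flags that the error-generating process has changed, something no single variance-to-actual comparison would reveal.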

This is why single-period variance explanations are so often unhelpful: they improve the story after the fact, but rarely improve the forecast system.

What error forces you to ask

Error analysis demands specific questions: how wide was the uncertainty we implied, how often did actuals fall outside that range, were misses symmetric or directional, and how does error grow as the horizon extends. These questions surface incentive-driven bias, structural blind spots, and overconfidence in weak drivers.
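The first two questions can be answered with one pass over history. A sketch, assuming a hypothetical point forecast of 100 that leadership understood as "plus or minus 5", checked against synthetic actuals:

```python
import numpy as np

rng = np.random.default_rng(2)
actuals = rng.normal(100.0, 6.0, 200)  # synthetic actuals

point = np.full(200, 100.0)   # the point forecast
implied_width = 5.0           # the implied range: "100, plus or minus 5"
outside = np.abs(actuals - point) > implied_width
coverage = 1.0 - outside.mean()

errors = actuals - point
high_share = (errors > 0).mean()  # share of misses to the high side

print("empirical coverage of the implied range: %.0f%%" % (100 * coverage))
print("share of misses on the high side: %.0f%%" % (100 * high_share))
```

If that range was presented with anything like 80 percent confidence, a measured coverage near 60 percent is direct evidence of overconfidence, regardless of how close any single month landed.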

What good forecasting functions do

High-performing forecasting functions treat error as a first-class signal. They track it over time. They segment it by horizon, business unit, and driver. They do not aim to eliminate error. They aim to understand it.
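Segmenting by horizon is the simplest place to start. A sketch on invented data where dispersion grows with the forecast horizon, as it typically does:

```python
import numpy as np

rng = np.random.default_rng(3)
horizons = np.repeat([1, 3, 6], 100)  # months ahead, 100 forecasts each
# Hypothetical errors whose spread widens with horizon.
errors = rng.normal(0, 1, 300) * np.sqrt(horizons)

for h in (1, 3, 6):
    e = errors[horizons == h]
    print("horizon %d  bias %+.2f  std %.2f" % (h, e.mean(), e.std()))
```

The same loop extends naturally to business unit or driver: the point is the segmentation, not the grouping key.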

A forecast that acknowledges uncertainty and quantifies its error supports better decisions than a precise number that cannot defend itself.


Scientific foundations: research on forecast evaluation and predictive accuracy in econometrics and time-series forecasting, including work on error measures, bias, and tests of comparative predictive accuracy.

Related: chrisalbertbaker.com