Improving judgmental adjustment of model-based forecasts

https://doi.org/10.1016/j.matcom.2012.11.007

Abstract

Many managers have access to statistical model-based forecasts and can use these to create their own forecasts. To the analyst, who aims to evaluate forecast accuracy, it is usually unknown to what extent managers use those model forecasts. Moreover, in some situations the analyst may not even have access to those model-based forecasts. The present survey reports on recent developments in this context, which concern understanding how such managers’ forecasts are created when model forecasts are potentially incorporated, and evaluating their accuracy. A variety of topics for further research are given.

Introduction

Many managers create their own forecasts of, for example, sales, and usually these forecasts are taken on board for decision making. There is a large literature on how individuals or managers create forecasts, where typically the focus is on judgmental forecasting. An excellent review of the literature is presented in [17]. An earlier study of Fildes and Hastings [20] presents a summary of what is known in this area.

The present paper deals with a specific situation that is usually called “judgmental adjustment”; see the short review of the literature in Section 5.6 of [17]. In this situation there is a statistical model-based forecast and the manager can use this forecast to create her own forecast. Typically, there is an analyst who aims to evaluate the quality of this manager's forecast. There are two important features in this context, and these have been addressed in the recent literature. First, the analyst usually does not know whether the manager has actively considered the model forecast. So, it is known that there is a model forecast, but whether the manager has incorporated it (and how) is unknown to the analyst. Second, it may occur that the analyst does not have access to the model forecast at all. One thing that he can do then is to try to approximate the model forecast using data on potentially relevant variables. The key premise in both situations is that, to appreciate the quality of managers’ forecasts, it is important to correctly deduce the contribution of the manager from the final forecast. In mathematical terms: when $EF_t$ is the manager's forecast (for, say, sales at $t+1$ made at origin $t$) and $MF_t$ is the associated model forecast, the analyst can hypothesize the relation

$$EF_t = \alpha + \beta\,MF_t + I_t$$

where $\alpha$ and $\beta$ are unknown parameters and where $I_t$ is the unpredictable part that may be called “Intuition”. To appreciate the (origins of the) quality of a manager's forecast, the analyst needs to know $\alpha$ and $\beta$, and hence $I_t$. And, when $MF_t$ is not available, the analyst may try to reproduce $MF_t$ by some approximate model.
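As a minimal illustration of how an analyst might operationalize this relation, the sketch below estimates $\alpha$ and $\beta$ by ordinary least squares and recovers the residual series $I_t$. All data and variable names are hypothetical; the paper itself does not prescribe an implementation.

```python
import numpy as np

ef = np.array([102.0, 98.5, 110.2, 95.0, 104.3])  # manager's forecasts EF_t (made-up)
mf = np.array([100.0, 99.0, 107.5, 96.5, 103.0])  # model forecasts MF_t (made-up)

# Regress EF_t on a constant and MF_t: EF_t = alpha + beta * MF_t + I_t
X = np.column_stack([np.ones_like(mf), mf])
(alpha, beta), *_ = np.linalg.lstsq(X, ef, rcond=None)

intuition = ef - (alpha + beta * mf)  # residual series I_t, the "Intuition" part
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
print("I_t:", np.round(intuition, 3))
```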

There is substantial literature that examines the forecast accuracy of managers’ (or experts’) forecasts, given that managers have access to model forecasts. This literature typically focuses on accuracy statistics and on the potential bias in managers’ forecasts. There is some evidence that managers’ forecasts can improve upon pure model-based forecasts [18], but there is also evidence that managers’ forecasts are much less accurate [7], [8], [9], [11].

Further, there are many studies that more closely examine the interaction of models and managers when it comes to forecasting; see [14] for a survey. This work covers improving managers’ forecasts by providing feedback based on statistical models (see, for example, [15], [19]), incorporating managerial information in model-based forecasts (see, for example, [23]), and the judgmental adjustment of model-based forecasts (see, for example, [1], [24]). Finally, there are studies on integrating both sources of information into combined forecasts; see, for example, [21], [22].

A key issue in the interaction of models and managers is that somehow an analyst should know how this interaction works. There are at least three important reasons for that. First, when statistical models are properly specified, they give unbiased forecasts. It is well documented that managers’ forecasts can be biased [7], [8], [9], [11]. This bias is not necessarily bad, as forecast accuracy may be improved, but the size and the source of the bias should be known. Second, forecast errors from statistical models can be used to improve the statistical (econometric) model. That is, when systematic patterns in forecast errors exist, one can change the model by including additional variables. When it is not known how managers create their forecasts, there is thus no opportunity to learn from forecast errors. At the same time, when managers’ forecasts do improve on statistical model forecasts, the analyst also cannot learn what causes this improvement and what could make the models better. Indeed, it would be beneficial to learn about the expert knowledge that the manager has. Third, to evaluate the quality of managers’ forecasts, one has to rely on statistical tools to see whether any forecast gain is systematic or just luck. For this, again, one needs to know approximately what it is that the manager does.
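For the third point, a standard tool is a Diebold–Mariano type comparison of the two forecast series. The sketch below is my own minimal illustration, not taken from the paper: it uses squared-error loss, made-up data, and an i.i.d. shortcut for the variance of the loss differential.

```python
import numpy as np

# Made-up realizations and the two competing forecast series
actual  = np.array([100.0, 104.0, 98.0, 107.0, 101.0, 99.0, 105.0, 103.0])
model   = np.array([101.0, 103.5, 99.0, 105.0, 102.0, 100.0, 104.0, 102.5])
manager = np.array([100.5, 104.5, 97.5, 106.0, 101.5, 98.5, 105.5, 103.5])

# Loss differential under squared-error loss; positive values favor the manager
d = (actual - model) ** 2 - (actual - manager) ** 2
n = len(d)

# DM statistic with an i.i.d. shortcut (a full test would correct for
# autocorrelation in d); compare |DM| with 1.96 for a rough 5% level
dm = d.mean() / np.sqrt(d.var(ddof=1) / n)
print(f"DM statistic = {dm:.2f}")
```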

The outline of this paper is as follows. In Section 2, I will present a simple but useful framework to analyze managers’ forecasts, which builds on the above mathematical expression, and which is new to the literature. It is rather stylized, and may differ across various practical situations, but it does help to understand potential managerial behavior. In Section 3, I review the evidence obtained from interviewing managers who actually create their forecasts (when they have access to model forecasts) and the empirical evidence from analyzing actual forecasts. Section 4 deals with the situation where the analyst does not have access to the statistical model forecast and seeks to approximate this with his own model for the particular setting. A detailed and new empirical example concerning airline revenues illustrates the main idea. Section 5 gives a discussion of the items on the future research agenda, and it concludes this paper with a summary of the main findings.

Section snippets

What would be the ideal situation?

This section deals with a stylized description of the interaction between a model and a manager, where the focus is on creating one-step-ahead forecasts. In this ideal situation the analyst can retrieve what it is that the manager does, and the evaluation of the manager's forecasts is then rather straightforward.

Suppose there is a variable $Y$, with observations $y_t$, for $t = 1, 2, \ldots, n$, and suppose the interest is in forecasting $y_{n+1}$ from forecast origin $n$. Although all kinds of

What does the empirical evidence tell us so far?

In this section, I review some recent evidence on what managers actually seem to do when they create their forecasts, in the case where both the manager and the analyst have access to the model forecasts. First, I deal with interviews with managers, and next, I summarize recent evidence obtained from actually comparing managers’ forecasts with model forecasts.

What can an analyst do when he does not have the model forecasts?

When model-based forecasts are not available, the analyst can try to replicate the model component by assuming it could have been based on publicly available information [13]. Hence, one can consider regressions like

$$\hat{y}_{n+1|n} = W_{n+1}\delta + \xi_{n+1}$$

where $W_{n+1}$ contains publicly available information and where $\hat{y}_{n+1|n}$ is the manager's forecast.
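A minimal sketch of this replication step, under my own assumptions: the manager's forecasts are regressed on a hypothetical set of public regressors (a constant, a seasonal dummy, and the last observed value); the fitted part then proxies for the model component, and the residual $\xi$ is the candidate managerial contribution. All data and names are illustrative.

```python
import numpy as np

y_hat = np.array([210.0, 215.5, 198.0, 225.0, 230.5, 205.0])   # manager's forecasts (made-up)
season = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])              # hypothetical holiday-month dummy
lagged = np.array([205.0, 212.0, 214.0, 200.0, 223.0, 229.0])  # last observed value of y (made-up)

# W collects the publicly available information; delta is estimated by OLS
W = np.column_stack([np.ones_like(lagged), season, lagged])
delta, *_ = np.linalg.lstsq(W, y_hat, rcond=None)

approx_model = W @ delta   # fitted part: a proxy for the model forecast
xi = y_hat - approx_model  # residual: the candidate managerial contribution
print("delta:", np.round(delta, 3))
print("xi:", np.round(xi, 3))
```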

To illustrate this methodology, I rely on a unique database. It contains monthly airline revenues data spanning April 2004 up to and including December 2008 for KLM

Future research agenda and conclusions

Interviews and detailed empirical analysis of (the quality of) managers’ forecasts, when it is known that these managers have access to model-based forecasts, suggest two key conclusions. The first is that managers deviate very often and quite substantially from model forecasts. The second is that this behavior does not lead to much better forecast accuracy, although in specific cases it can.

It is now of interest to study managers’ behavior itself, that is, why do managers do what they do? One

References (24)

  • R. Blattberg et al., Database models and managerial intuition: 50% model + 50% manager, Management Science (1990)
  • Y. Boulaksil et al., Experts’ stated behavior, Interfaces (2009)
I thank Sander Demouge (Organon BV) and Pieter Boomsma (KLM Royal Dutch Airlines) for helpful discussions and Rianne Legerstee for useful comments.
