Forecasting software design

In most organisations, forecasts are delivered through software – forecasting support systems. These systems combine a database, statistical models (which are often limited) and expert judgments provided by users to incorporate the many factors omitted from the statistical models. Research in the Centre has examined how such systems should be designed to improve accuracy.
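To make the division of labour concrete, the following is a minimal sketch of how such a system might combine a limited statistical baseline with a user's judgmental adjustment. The names and the choice of simple exponential smoothing are illustrative assumptions, not taken from any particular system.

```python
# Minimal sketch of a forecasting support system's core flow: a limited
# statistical baseline plus the user's judgmental adjustment.
# All names here are illustrative assumptions, not from any specific package.

def ses_forecast(history, alpha=0.3):
    """Simple exponential smoothing: a deliberately limited statistical model."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # one-step-ahead baseline forecast

def adjusted_forecast(history, judgmental_adjustment=0.0):
    """Combine the statistical baseline with the expert's adjustment, e.g.
    an uplift for a promotion the statistical model knows nothing about."""
    return ses_forecast(history) + judgmental_adjustment

sales = [102, 98, 105, 110, 107, 112]
print(adjusted_forecast(sales))        # purely statistical forecast
print(adjusted_forecast(sales, 25.0))  # with an expert's promotional uplift
```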

Some of the limitations of expert judgment in forecasting are discussed under the heading of Value and Efficiency of Judgmental Forecasts. A key issue this research highlights is how judgments can be improved in the operational settings faced by the organisational forecaster. The EPSRC-funded research project on the design of such systems supported a number of experiments. Lee et al. (2009) proposed a number of approaches to summarising on-screen promotional information so as to help the expert judge promotional effects. These proved successful, with the ‘best’ statistical summary being the most helpful. Because we found evidence that information was mis-weighted in certain circumstances (such as when responding to positive information), Goodwin et al. (2011) examined whether guidance (offering advice on when and how to adjust) and restriction (preventing particular adjustments) led to improved performance. The two approaches were tested in experiments using students as participants. Guidance proved helpful, though it was often ignored. Restriction, in contrast, whilst acceptable to participants, led to poorer performance as they gamed the restricted forecasting system.

A current collaboration with Goodwin considers how information is used and whether forecasters are motivated to be optimistic. Preliminary results have been presented at ISF2012, Boston.

An important aspect of the research on model selection and supply chain forecasting has been the development of a software package by Crone aimed at analysing and comparing large numbers of time series according to best research practice. This has allowed staff and students associated with the Centre to work with organisational data to test, for example, the ‘value added’ by the judgments being made.
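The ‘value added’ comparison can be illustrated with a short sketch: compute the accuracy of the statistical forecasts and of the judgmentally adjusted forecasts, and report the difference. This is a hypothetical illustration with made-up data, not code from Crone's package.

```python
# Hypothetical sketch of a 'value added' check: does judgmental adjustment
# make forecasts more accurate than the statistical baseline? Illustrative
# only; this is not the Centre's software, and the data are invented.

def mae(actuals, forecasts):
    """Mean absolute error over paired observations."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def value_added(actuals, statistical, adjusted):
    """Positive value: the adjustments reduced MAE, i.e. judgment added value."""
    return mae(actuals, statistical) - mae(actuals, adjusted)

actuals     = [120, 135, 128, 150]   # realised demand
statistical = [110, 125, 130, 138]   # baseline model forecasts
adjusted    = [118, 133, 129, 145]   # forecasts after expert adjustment

print(f"Value added by judgment: {value_added(actuals, statistical, adjusted):+.2f}")
```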
