Robert Fildes and Ivan Svetunkov organised a successful three-day forecasting stream during this year’s OR60 conference at Lancaster University. Several members of the Centre for Marketing Analytics and Forecasting also contributed talks:
The timing of the 60th anniversary OR Society conference offered an opportunity to look back at the history of forecasting. With this in mind, John Boylan, together with Aris Syntetos, provided “A 60 Year Retrospective on Intermittent Demand Inventory Forecasting”. In the early days of commercial forecast implementations, all Stock Keeping Units were treated in the same way. However, this failed to take into account the special nature of ‘intermittent’ demand items with periods of zero demand. John’s presentation covered the development of models in this area and reviewed the empirical evidence on their application to retail demand forecasting. He also highlighted open research questions in this field and progress towards answering these questions. Last but not least, the talk stressed the opportunities for software enhancements, in an attempt to bridge the gap between theory and practice in this important area of application.
Similarly, Ivan Svetunkov, with Nikos Kourentzes, focused on “Forecasting using exponential smoothing: the past, the present, the future”. Exponential smoothing has been known in both theoretical and practical forecasting for more than 60 years. It has evolved substantially, from a simple exponential smoothing method aimed at level-only data to a state-space framework covering a wide range of time series characteristics. In his presentation, Ivan discussed the key milestones in the development of exponential smoothing, showed the connections between exponential smoothing and other forecasting models and, finally, proposed a more general framework, the “Generalised univariate model”, that can potentially encompass all the existing forecasting models.
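For readers unfamiliar with the method the talk starts from, simple exponential smoothing forecasts a level-only series as a weighted average of the latest observation and the previous forecast. A minimal sketch (the smoothing parameter `alpha` and the initialisation are illustrative choices, not from the talk):

```python
def simple_exponential_smoothing(y, alpha=0.3):
    """Return one-step-ahead forecasts for the series y."""
    level = y[0]                  # initialise the level at the first observation
    forecasts = [level]           # forecast for the next period at each step
    for obs in y[1:]:
        # new level: weighted average of the observation and the old level
        level = alpha * obs + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

series = [10.0, 12.0, 11.0, 13.0, 12.5]
print(simple_exponential_smoothing(series))
```

The state-space (ETS) framework the talk describes generalises exactly this recursion to trended and seasonal series, with additive or multiplicative components.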
Robert Fildes, with Shaohui Ma and Stephan Kolassa, provided “an overview of retail forecasting”. Based on a literature review and survey evidence, the talk covered the limited research on the different aspects of retail forecasting, and in particular the distinct problems that must be faced: forecasting category-, brand- and SKU-level sales at both distribution centre and store, as well as store sales by location. Promotional events and online behaviour are important influences on demand. Despite this variety of problems, surprisingly little research has focused on the operational problems retailers face. Robert and his co-authors identified the gaps, presented the evidence on effectiveness and aimed to stimulate research into these important problems, where small improvements in accuracy can have major financial benefits.
The presentation of Sasan Barak elaborated on “Deep Neural Networks for Forecasting Model Selection - An Image Recognition Approach to Classify Time Series Patterns from Graphs”. Traditionally, expert-based forecasting model specification uses tools such as time series graphs, seasonal plots and autocorrelation functions to identify the time series components, outliers and structural breaks. While visual data exploration allows accurate forecasting model selection, it does not facilitate large-scale automation of model selection. Sasan presented a novel approach in which training a deep neural network (DNN) on graphs of level, trend, seasonality and trend-seasonality patterns enables successful selection of Exponential Smoothing (ETS) models. The results improve accuracy over statistical tests and wrappers while requiring fewer computing resources.
Nikos Kourentzes gave a talk on model uncertainty in hierarchical forecasting. For many years, hierarchical forecasting was based on ad-hoc approaches such as top-down or bottom-up. Nowadays, the model-based optimal combination approach provides a more flexible and accurate hierarchical forecasting framework. However, this approach does not consider the uncertainty the analyst has about each forecast and the underlying model for each node. Intuitively, one would expect to give more weight to forecasts one feels confident about than to forecasts that are more uncertain. Nikos proposed such a weighting framework and demonstrated the power of his approach by examining two variants: one that is computationally efficient but has strict modelling requirements, and one that trades computational efficiency for minimal assumptions about the hierarchy.
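The intuition behind weighting by uncertainty can be seen in a toy two-level hierarchy (this is an illustrative inverse-variance combination, not the talk's actual method; the forecasts and variances below are made up):

```python
def combine_total(f_total, f_bottom, var_total, var_bottom_sum):
    """Combine a direct total forecast with the bottom-up sum,
    weighting each by the inverse of its assumed variance."""
    bottom_up = sum(f_bottom)            # bottom-up total forecast
    w_direct = 1.0 / var_total           # inverse-variance weights:
    w_bu = 1.0 / var_bottom_sum          # less uncertainty -> more weight
    return (w_direct * f_total + w_bu * bottom_up) / (w_direct + w_bu)

# Direct total forecast of 100 (variance 4) versus bottom forecasts
# summing to 96 (variance of the sum assumed 1): the combination
# leans towards the more certain bottom-up figure.
print(combine_total(100.0, [50.0, 46.0], 4.0, 1.0))
```

With equal variances the combination would simply average the two totals; as confidence in one source grows, the combined forecast moves towards it.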
Oliver Schaer gave a talk on “Forecasting with Pre-Release Search Traffic Profiles”. The talk was based on a case study using relatively easy-to-measure Google Trends information to forecast video game sales. In contrast to existing methods, which rely solely on regression of historical sales, the two-stage clustering-and-classification approach not only provides better forecasts but also identifies homogeneous competitor products.
In the session dedicated to judgmental forecasting, Anna Sroginis gave a talk on how algorithmic and qualitative information is interpreted when making judgmental forecast adjustments. To investigate this question, Anna conducted experiments simulating a typical supply chain forecasting process in which qualitative information was additionally provided in the presence of promotions. She found that participants tended to focus on several anchors: the last promotional uplift, the current statistical forecast and contextual statements for the forecasting period. At the same time, participants ignored past baseline promotional uplifts and domain knowledge about past promotions. Participants also discounted statistical forecasts that incorporated promotional effects, showing a lack of trust in algorithms.
Last but not least, Ivan Svetunkov presented work led by Sergey Svetunkov on forecasting demand using complex-valued autoregressive models. Many pairs of products are either interchangeable or complementary. While vector autoregressive (VAR) models can take these relationships into account, they can be cumbersome and prone to overfitting. Sergey and Ivan proposed a complex-valued autoregressive model (CAR), which can be considered a parsimonious version of VAR. They demonstrated how to estimate the model, discussed the properties of complex-valued autoregressive models and proposed an order selection mechanism. Finally, they demonstrated the performance of CAR relative to VAR on real and simulated data.
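A rough sketch of the parsimony argument (my own illustration, not the authors' estimator): a pair of related series can be stacked into one complex-valued series, and a single complex AR(1) coefficient (two real parameters) then plays the role of the 2x2 coefficient matrix (four parameters) a VAR(1) would estimate.

```python
def fit_complex_ar1(y1, y2):
    """Least-squares complex AR(1) coefficient for z_t = y1_t + i*y2_t."""
    z = [a + 1j * b for a, b in zip(y1, y2)]
    # complex least squares: phi = sum(z_t * conj(z_{t-1})) / sum(|z_{t-1}|^2)
    num = sum(z[t] * z[t - 1].conjugate() for t in range(1, len(z)))
    den = sum(abs(z[t - 1]) ** 2 for t in range(1, len(z)))
    return num / den

# Two decaying, co-moving series (made-up data for illustration)
y1 = [1.0, 0.9, 0.8, 0.75, 0.7]
y2 = [0.5, 0.45, 0.4, 0.38, 0.35]
phi = fit_complex_ar1(y1, y2)
print(phi)   # one complex coefficient instead of four VAR(1) entries
```

Because the two series move together, the estimated coefficient here is close to a real number inside the unit circle; the complex part captures cross-series rotation when the pair interact.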
If you require accessible versions of the slides or documents within this news story, please contact us.