Session: New Operations Management Research Areas

Chair: Robert Boute (KU Leuven)

Title: Supervised Learning Techniques for Anomaly Detection and Predictive Maintenance for Reefer Containers
Speaker: Claudio Ciancio (Vrije Universiteit Amsterdam)

Abstract: Inspired by a real-life case in reefer container maintenance, this paper examines the use of signals collected from multiple sensors to extract patterns for detecting impending failures in a system. Various machine learning techniques have been proposed over the past decades for anomaly detection. In logistics and operations settings, however, only limited anomaly labels are typically available, which makes unsupervised methods more appropriate. Unsupervised learning methods can only distinguish normal from anomalous situations, without indicating the nature of the anomaly in the system. Yet in many operations environments, additional information on the anomaly is needed to bring the system back to normal condition in a short time.

In this paper, an alternative framework based on supervised learning with limited anomaly data is presented. In particular, three neural network architectures for detecting a set of known anomaly types in reefer containers are proposed and compared. Moreover, a predictive model is developed to forecast future sensor values, which are then used to identify anomalous conditions in advance. Numerical tests show the ability of the proposed framework to identify the subset of sensors correlated with specific anomaly types and to predict anomalous system states with good accuracy.
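
To make the setting concrete, below is a minimal sketch of supervised anomaly-type classification on multivariate sensor windows. It does not reproduce the three architectures from the paper; the sensor count, window length, three-type anomaly taxonomy, and synthetic data are purely illustrative assumptions.

# Minimal sketch: classify known anomaly types from sensor windows.
# All dimensions and the synthetic "reefer telemetry" below are
# illustrative assumptions, not the paper's data or architectures.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
N_SENSORS, WINDOW, N_WINDOWS = 8, 24, 2000      # assumed dimensions

# Label 0 = normal; labels 1-3 = hypothetical known anomaly types,
# each shifting a different subset of sensors.
X = rng.normal(size=(N_WINDOWS, WINDOW, N_SENSORS))
y = rng.integers(0, 4, size=N_WINDOWS)
for k, sensors in enumerate([[0, 1], [2, 3, 4], [5]], start=1):
    idx = np.where(y == k)[0]
    X[np.ix_(idx, np.arange(WINDOW), sensors)] += 2.0   # anomaly signature

X_flat = X.reshape(N_WINDOWS, -1)               # flatten each window
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

Because each synthetic anomaly type perturbs its own subset of sensors, a model of this kind can in principle also be probed for which sensors drive each class, mirroring the sensor-to-anomaly mapping the abstract describes.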
____________________________________________________________________________________________________________________________________________

Title: Deep Reinforcement Learning – Closing the gap towards implementation in practice
Speaker: Joren Gijsbrechts (KU Leuven)

Abstract: The performance of deep learning algorithms has increased rapidly in recent years. This has resulted in many implementations of deep supervised machine learning algorithms in industry applications. For instance, smartphones leverage deep learning for face and speech recognition. Deep learning models have also found their way into various operations contexts, improving forecasts of sales and lead times or estimating failure times. Recently, however, another branch of machine learning that relies on deep learning has been gaining attention: Deep Reinforcement Learning (DRL). DRL models learn policies by interacting with an environment and have achieved excellent performance across a variety of problems, such as board and computer games. They provide a promising way to tackle intractable problems within the field of operations. Yet, in contrast to supervised learning, applications of DRL in practice remain rather limited: training DRL algorithms is inherently harder, and their performance is poorly understood. This presentation will provide perspective on the pros and cons of state-of-the-art RL algorithms developed within the field of computer science, and on how we, as an operations community, can contribute to making these algorithms perform better and ultimately close the gap towards adoption of DRL in practice.
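
To give a flavour of what "learning a policy by interacting with an environment" means in an operations context, the following is a bare-bones REINFORCE sketch on a stylized single-item inventory problem. The environment, cost parameters, and the undiscounted, baseline-free update are simplifying assumptions for illustration, not material from the presentation.

# Bare-bones REINFORCE on a toy inventory-control environment.
# Everything here (demand, costs, state/action spaces) is an assumption.
import numpy as np

rng = np.random.default_rng(1)
MAX_INV, MAX_ORDER = 10, 5           # assumed state/action spaces
HOLD, SHORT = 1.0, 5.0               # assumed holding/shortage unit costs

def step(inv, order):
    """One period: order arrives, Poisson demand is served (lost sales)."""
    demand = rng.poisson(3)
    net = min(inv + order, MAX_INV) - demand
    cost = HOLD * max(net, 0) + SHORT * max(-net, 0)
    return max(net, 0), -cost        # next inventory, reward = -cost

theta = np.zeros((MAX_INV + 1, MAX_ORDER + 1))   # tabular softmax policy

def policy(inv):
    logits = theta[inv]
    p = np.exp(logits - logits.max())
    return p / p.sum()

alpha, episodes, horizon = 0.05, 2000, 20
for _ in range(episodes):
    inv, traj = 5, []
    for _ in range(horizon):
        a = rng.choice(MAX_ORDER + 1, p=policy(inv))
        inv_next, r = step(inv, a)
        traj.append((inv, a, r))
        inv = inv_next
    G = sum(r for _, _, r in traj) / horizon     # mean episode reward
    for s, a, _ in traj:                         # policy-gradient step
        grad = -policy(s)
        grad[a] += 1.0
        theta[s] += alpha * G * grad

print("greedy order per inventory level:",
      [int(np.argmax(policy(s))) for s in range(MAX_INV + 1)])

Even this tiny example exhibits the practical hurdles the talk alludes to: the gradient estimate is noisy, and the learned policy is sensitive to the learning rate and episode length.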
____________________________________________________________________________________________________________________________________________

Title: Reinforcement learning in logistics: Optimizing transshipments in dynamic spare parts networks
Speaker: Willem van Jaarsveld (TU/e)

Abstract: Algorithms have long been used to optimize decision making in logistics and supply chain management. Traditional approaches yield excellent results when the eventual outcome of decisions can be accurately predicted, but struggle in dynamic environments where decision outcomes crucially depend on unknown future events and decisions. Reinforcement learning (RL) has been shown to have huge potential for optimizing decisions in such dynamic environments. As an example, we consider an OEM that operates a worldwide spare parts distribution network to enable quick deliveries and reduce costly downtime at customers. For expensive parts, inventory is limited, and good coverage can only be guaranteed by dynamically rebalancing inventories in response to demand variations and customer (dis)satisfaction. This is a very challenging environment for traditional approaches, but our preliminary results show that RL performs well.
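
For intuition, here is a toy tabular Q-learning sketch of dynamic transshipment between two stock locations. The environment (capacities, Poisson demands, one-sided replenishment rule) and all parameters are invented for illustration; the worldwide network in the talk is far larger, and tabular methods would not scale to it.

# Toy Q-learning for two-location transshipment; all parameters assumed.
import numpy as np

rng = np.random.default_rng(2)
CAP = 4                               # assumed max stock per location
SHIP, SHORT = 1.0, 10.0               # assumed transshipment/shortage costs

def step(s1, s2, move):
    """Ship `move` units from location 1 to 2, then serve Poisson demand."""
    move = min(move, s1)
    s1, s2 = s1 - move, min(s2 + move, CAP)
    d1, d2 = rng.poisson(1.0), rng.poisson(2.0)     # asymmetric demand
    shortage = max(d1 - s1, 0) + max(d2 - s2, 0)
    s1, s2 = max(s1 - d1, 0), max(s2 - d2, 0)
    # only location 1 is replenished (an assumption), so location 2
    # depends on transshipments for coverage
    return min(s1 + 2, CAP), s2, -(SHIP * move + SHORT * shortage)

Q = np.zeros((CAP + 1, CAP + 1, CAP + 1))   # Q[s1, s2, move]
alpha, gamma, eps = 0.1, 0.95, 0.1

s1, s2 = CAP, 0
for _ in range(200_000):
    a = rng.integers(CAP + 1) if rng.random() < eps else int(np.argmax(Q[s1, s2]))
    n1, n2, r = step(s1, s2, a)
    # standard Q-learning update toward the one-step bootstrapped target
    Q[s1, s2, a] += alpha * (r + gamma * Q[n1, n2].max() - Q[s1, s2, a])
    s1, s2 = n1, n2

print("greedy transshipment quantity per (s1, s2) state:")
print(np.argmax(Q, axis=2))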