UC Berkeley Electronic Theses and Dissertations

Combining Retrospective Optimization and Gradient Search for Supply Chain Optimization

Abstract

In initial work, we found that a version of Retrospective Optimization, in which we optimize over a single long, randomly generated sample path, is often effective for optimizing policy parameters in relatively simple stochastic supply chains. In these applications, the optimization problem is frequently an integer program. However, preliminary efforts to extend this methodology directly to more complex supply chains, and to optimize risk mitigation strategies, were in many cases too slow to be effective.
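
To make the idea concrete, the sketch below fixes one long randomly generated demand path and then searches a policy parameter over that now-deterministic path. The base-stock policy form, Poisson demand, and cost values are illustrative assumptions, not the supply chain models studied in the dissertation.

```python
# A minimal sketch of Retrospective Optimization over a single sample path:
# fix one long demand path, then treat the parameter search as deterministic.
import numpy as np

rng = np.random.default_rng(0)
demand_path = rng.poisson(lam=20, size=10_000)  # one long path, fixed up front

def path_cost(base_stock, demand, h=1.0, p=9.0):
    """Average holding + shortage cost of a base-stock policy on a fixed path.
    (Hypothetical cost parameters; zero lead time for simplicity.)"""
    inventory, cost = base_stock, 0.0
    for d in demand:
        inventory -= d
        cost += h * max(inventory, 0.0) + p * max(-inventory, 0.0)
        inventory = base_stock  # order up to base_stock each period
    return cost / len(demand)

# With the path fixed, this is an ordinary one-dimensional integer search.
best = min(range(0, 60), key=lambda s: path_cost(s, demand_path))
print("retrospective-optimal base-stock level:", best)
```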

To address this limitation, we first develop a two-stage algorithm that uses Retrospective Optimization over a relatively short time horizon to provide starting points for stochastic approximation gradient search. We perform extensive computational experiments to compare this approach to Retrospective Optimization without gradient search on a sequence of increasingly complex supply chains.
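
The sketch below illustrates the two-stage structure under the same toy base-stock model as above: a short-horizon retrospective solve supplies the starting point, and a simple finite-difference stochastic approximation then refines it on fresh random paths. The step-size schedules and gradient estimator are standard illustrative choices, not the experimental configuration of the dissertation.

```python
# A hedged sketch of the two-stage algorithm: short-horizon Retrospective
# Optimization for a start point, then stochastic approximation gradient search.
import numpy as np

rng = np.random.default_rng(1)

def path_cost(base_stock, demand, h=1.0, p=9.0):
    """Average holding + shortage cost of a base-stock policy on a fixed path."""
    inv, cost = base_stock, 0.0
    for d in demand:
        inv -= d
        cost += h * max(inv, 0.0) + p * max(-inv, 0.0)
        inv = base_stock  # order up to base_stock each period
    return cost / len(demand)

# Stage 1: Retrospective Optimization over a relatively short horizon.
short_path = rng.poisson(lam=20, size=200)
s = float(min(range(0, 60), key=lambda x: path_cost(x, short_path)))

# Stage 2: stochastic approximation from that start, drawing a fresh sample
# path and a finite-difference gradient estimate at every iteration.
for k in range(1, 201):
    a_k, c_k = 2.0 / k, 2.0 / k ** 0.25  # diminishing step and perturbation
    path = rng.poisson(lam=20, size=200)
    grad = (path_cost(s + c_k, path) - path_cost(s - c_k, path)) / (2 * c_k)
    s -= a_k * grad
print("refined base-stock level:", round(s))
```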

In a data-driven setting, where the policy parameters are set using available past data rather than a randomly generated sample path, the resulting mixed-integer linear programming (MILP) formulation presents computational challenges similar to those in our Retrospective Optimization approaches. This observation motivates us to modify and test our algorithm in a data-driven setting, using sales data from a major European grocery chain. We focus on a setting in which the policy parameters can be functions of exogenous factors such as the day of the week and the outside temperature. We examine complex inventory management models that involve perishable inventory, and show that, with suitable modifications, the same two-stage algorithm is effective for determining the data-driven inventory policy parameters.
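
As a rough illustration of the data-driven variant, the sketch below makes the policy parameter a function of exogenous features and replays the policy against an observed sales history instead of a generated path. The linear policy form, the feature set, and the synthetic stand-in data are assumptions for illustration only; the dissertation's models and the grocery-chain data are richer than this.

```python
# A minimal sketch of a feature-dependent, data-driven policy parameter:
# the stocking level depends on day of week and temperature, and cost is
# evaluated on historical sales rather than a simulated path.
import numpy as np

rng = np.random.default_rng(2)
days = rng.integers(0, 7, size=365)    # day-of-week feature (0 = Monday)
temps = rng.normal(15, 8, size=365)    # outside-temperature feature
sales = np.maximum(0, 20 + 3 * (days >= 5) - 0.3 * temps
                   + rng.normal(0, 4, size=365)).round()  # stand-in history

def data_driven_cost(theta, h=1.0, p=9.0):
    """Cost of the level theta[0] + theta[1]*weekend + theta[2]*temperature,
    replayed against the observed sales history (hypothetical cost values)."""
    cost = 0.0
    for day, temp, d in zip(days, temps, sales):
        level = theta[0] + theta[1] * (day >= 5) + theta[2] * temp
        cost += h * max(level - d, 0.0) + p * max(d - level, 0.0)
    return cost / len(sales)

# The same two-stage machinery applies, with the vector theta replacing the
# single scalar parameter of the earlier sketches.
print(data_driven_cost(np.array([25.0, 3.0, -0.3])))
```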
