Approximate Q-Learning for Stacking Problems with Continuous Production and Retrieval

Judith Scagnetti, Andreas Beham, Stefan Wagner, Michael Affenzeller

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

This paper presents, for the first time, a reinforcement learning algorithm with function approximation for stacking problems with continuous production and retrieval. The stacking problem is a hard combinatorial optimization problem: it deals with the arrangement of items in a localized area, where they are organized into stacks so that they can be delivered in a required order. Due to characteristics of stacking problems such as the large number of states, reinforcement learning is an appropriate method, since it allows learning in an unknown environment. We apply a Sarsa(λ) algorithm to real-world problem instances arising in the steel industry. We use linear function approximation and identify characteristics of instances for which this method is promising. Further, we propose features that do not require specific knowledge about the environment and are hence applicable to any stacking problem with similar characteristics. In our experiments we show that the applied method learns quickly and is suitable for real-world instances.
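The abstract names Sarsa(λ) with linear function approximation as the core method. As a rough illustrative sketch only — not the authors' implementation, features, or problem instances — the following code trains a Sarsa(λ) agent with accumulating eligibility traces and a linear value function Q(s, a) = wᵀφ(s, a) on a hypothetical toy chain environment. The environment, the one-hot feature map, and all hyperparameters are assumptions chosen for illustration.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 2  # toy chain MDP; action 0 = left, 1 = right
GOAL = N_STATES - 1

def phi(s, a):
    """One-hot feature vector over (state, action) pairs -- a stand-in for
    the problem-specific stacking features described in the paper."""
    x = np.zeros(N_STATES * N_ACTIONS)
    x[s * N_ACTIONS + a] = 1.0
    return x

def step(s, a):
    """Deterministic chain dynamics: reward 1 only on reaching the goal."""
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def sarsa_lambda(episodes=200, alpha=0.1, gamma=0.95, lam=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(N_STATES * N_ACTIONS)  # linear weights: Q(s, a) = w . phi(s, a)

    def q(s, a):
        return w @ phi(s, a)

    def policy(s):
        # epsilon-greedy action selection over the approximated Q-values
        if rng.random() < eps:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax([q(s, a) for a in range(N_ACTIONS)]))

    for _ in range(episodes):
        e = np.zeros_like(w)  # eligibility traces, reset each episode
        s, a = 0, policy(0)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = policy(s2)
            # TD error; no bootstrap term on the terminal transition
            delta = (r - q(s, a)) if done else (r + gamma * q(s2, a2) - q(s, a))
            e = gamma * lam * e + phi(s, a)  # accumulating traces
            w += alpha * delta * e
            s, a = s2, a2
    return w
```

After training, the greedy policy derived from `w` moves right toward the goal from every non-terminal state of the toy chain.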

Original language: English
Pages (from-to): 68-86
Number of pages: 19
Journal: Applied Artificial Intelligence
Volume: 33
Issue number: 1
Publication status: Published - 2 Jan 2019
