Approximate Q-Learning for Stacking Problems with Continuous Production and Retrieval

Judith Scagnetti, Andreas Beham, Stefan Wagner, Michael Affenzeller

Publication: Contribution to journal › Article › Peer-reviewed

3 citations (Scopus)

Abstract

This paper presents, for the first time, a reinforcement learning algorithm with function approximation for stacking problems with continuous production and retrieval. The stacking problem is a hard combinatorial optimization problem: items arriving in a localized area must be organized into stacks so that they can be delivered in a required order. Due to the characteristics of stacking problems, for example the high number of states, reinforcement learning is an appropriate method, since it allows learning in an unknown environment. We apply a Sarsa(λ) algorithm to real-world problem instances arising in the steel industry. We use linear function approximation and identify characteristics of instances that make them promising for this method. Further, we propose features that do not require specific knowledge about the environment and hence are applicable to any stacking problem with similar characteristics. In our experiments, we show fast learning of the applied method and its suitability for real-world instances.
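The abstract names Sarsa(λ) with linear function approximation as the learning method. As an illustration only, the sketch below shows the generic algorithm (ε-greedy action selection, accumulating eligibility traces, Q(s, a) = w[a]·φ(s)) on a hypothetical toy environment; the environment, features, and hyperparameters are our own assumptions, not those of the paper.

```python
import numpy as np

def sarsa_lambda_linear(reset, step, featurize, n_features, n_actions,
                        episodes=200, alpha=0.1, gamma=0.95, lam=0.8,
                        epsilon=0.1, seed=0):
    """Generic Sarsa(lambda) with linear function approximation.

    Q(s, a) = w[a] @ phi(s); one eligibility-trace vector per action.
    This is a textbook sketch, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros((n_actions, n_features))

    def policy(phi):
        # epsilon-greedy over the current linear value estimates
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(w @ phi))

    for _ in range(episodes):
        s = reset()
        phi = featurize(s)
        a = policy(phi)
        e = np.zeros_like(w)              # eligibility traces
        done = False
        while not done:
            s_next, r, done = step(s, a)
            e *= gamma * lam              # decay all traces
            e[a] += phi                   # accumulate trace for taken action
            q = w[a] @ phi
            if done:
                delta = r - q             # no bootstrap at terminal state
            else:
                phi_next = featurize(s_next)
                a_next = policy(phi_next)
                delta = r + gamma * (w[a_next] @ phi_next) - q
                s, phi, a = s_next, phi_next, a_next
            w += alpha * delta * e        # TD update along the traces
    return w

# Hypothetical toy problem: a 5-state chain, reward 1 for reaching state 4.
N = 5
def reset(): return 0
def step(s, a):
    s_next = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    done = (s_next == N - 1)
    return s_next, (1.0 if done else 0.0), done
def featurize(s):
    phi = np.zeros(N)
    phi[s] = 1.0                          # one-hot features (tabular case)
    return phi

w = sarsa_lambda_linear(reset, step, featurize, n_features=N, n_actions=2)
```

With one-hot features, linear function approximation reduces to the tabular case; for real stacking instances, the paper's point is that features can instead encode generic stack properties so the same update rule scales to huge state spaces.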

Original language: English
Pages (from - to): 68-86
Number of pages: 19
Journal: Applied Artificial Intelligence
Volume: 33
Issue number: 1
DOIs
Publication status: Published - 2 Jan 2019
