Watching a Language Model Learning Chess

Publication: Contribution to book/report/conference proceedings › Conference contribution › Peer-reviewed

3 citations (Scopus)

Abstract

We analyse how a transformer-based language model learns the rules of chess from text data of recorded games. We show how chess-specific metrics can be used to investigate how model capacity and the amount of available training data influence the learning success of a language model. With these metrics, we show that, within the studied range, training on more games yields significantly better results for the same training time. Model size, however, does not show such a clear influence. It is also notable that the usual evaluation metrics for language models, predictive accuracy and perplexity, give no indication of this here. Further examination of the trained models reveals how they store information about the board state in the activations of neuron groups, and how the overall sequence of previous moves influences the newly generated moves.
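The abstract does not spell out the chess-specific metrics, so the following is only a minimal sketch of what such a metric could look like: the fraction of model-generated next moves that are legal in the current position, checked with the python-chess library. The function `generate_next_move` is a hypothetical stand-in for sampling one move from a trained language model, and the assumption that moves are encoded in standard algebraic notation (SAN) is mine, not the paper's.

```python
# Illustrative chess-specific metric: legal-move rate of a model's predictions.
# Assumes moves in standard algebraic notation (SAN); `generate_next_move`
# is a hypothetical interface to the trained language model.
import chess

def legal_move_rate(games, generate_next_move):
    """games: iterable of SAN move lists, e.g. [["e4", "e5", "Nf3"], ...]."""
    legal = total = 0
    for moves in games:
        board = chess.Board()
        for move in moves:
            board.push_san(move)                # replay the recorded prefix
        prediction = generate_next_move(moves)  # model's proposed continuation
        total += 1
        try:
            board.parse_san(prediction)         # raises if not legal here
            legal += 1
        except ValueError:
            pass
    return legal / total if total else 0.0
```

Unlike perplexity, a metric of this kind directly measures whether the model has internalised the rules of the game, which is why it can separate models that token-level metrics rate as equivalent.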
Original language: English
Title: International Conference Recent Advances in Natural Language Processing, RANLP 2021
Subtitle: Deep Learning for Natural Language Processing Methods and Applications - Proceedings
Editors: Galia Angelova, Maria Kunilovskaya, Ruslan Mitkov, Ivelina Nikolova-Koleva
Place of publication: Held Online
Publisher: INCOMA Ltd.
Pages: 1369-1379
Number of pages: 11
ISBN (electronic): 9789544520724
DOIs
Publication status: Published - 1 Sep 2021

Publication series

Name: International Conference Recent Advances in Natural Language Processing, RANLP
ISSN (print): 1313-8502
