Watching a Language Model Learning Chess

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

Abstract

We analyse how a transformer-based language model learns the rules of chess from text data of recorded games. Using chess-specific metrics, we show how to investigate how model capacity and the amount of available training data influence a language model's learning success. With these metrics, we find that, within the studied range, training on more games yields significantly better results for the same training time, whereas model size shows no such clear influence. Interestingly, the usual evaluation metrics for language models, predictive accuracy and perplexity, give no indication of this. Further examination of the trained models reveals how they store information about the board state in the activations of neuron groups, and how the overall sequence of previous moves influences the newly generated moves.
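As a rough illustration of the distinction drawn above (not code from the paper), the sketch below contrasts perplexity, which only reflects how much probability the model assigns to observed moves, with a hypothetical chess-specific metric such as the fraction of generated moves that are legal in their positions. The function names and the toy data are assumptions for demonstration only:

```python
import math

def perplexity(token_probs):
    """Standard LM metric: exp of the mean negative log-likelihood
    that the model assigned to the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

def legal_move_rate(generated_moves, legal_moves_per_position):
    """Hypothetical chess-specific metric: fraction of generated moves
    that are legal in their respective board positions."""
    hits = sum(move in legal
               for move, legal in zip(generated_moves, legal_moves_per_position))
    return hits / len(generated_moves)

# Toy example: probabilities the model assigned to the true next moves.
print(round(perplexity([0.5, 0.25, 0.25]), 3))  # → 3.175

# Toy example: two of three sampled moves are legal in their positions.
gen = ["e4", "Nf3", "Ke8"]
legal = [{"e4", "d4"}, {"Nf3", "c4"}, {"O-O", "Re1"}]
print(round(legal_move_rate(gen, legal), 3))  # → 0.667
```

A model can reach a low perplexity by matching surface statistics of move text while still proposing illegal moves, which is why a rule-aware metric of this kind can reveal learning progress that perplexity hides.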
Original language: English
Title of host publication: International Conference Recent Advances in Natural Language Processing, RANLP 2021
Subtitle of host publication: Deep Learning for Natural Language Processing Methods and Applications - Proceedings
Editors: Galia Angelova, Maria Kunilovskaya, Ruslan Mitkov, Ivelina Nikolova-Koleva
Place of Publication: Held Online
Publisher: INCOMA Ltd.
Pages: 1369-1379
Number of pages: 11
ISBN (Electronic): 9789544520724
DOIs
Publication status: Published - 1 Sep 2021

Publication series

Name: International Conference Recent Advances in Natural Language Processing, RANLP
ISSN (Print): 1313-8502
