Analyzing the Innovative Potential of Texts Generated by Large Language Models: An Empirical Evaluation

Oliver Krauss, Michaela Jungwirth, Marius Elflein, Simone Sandler, Christian Altenhofer, Andreas Stoeckl

Publication: Contribution to book/report/conference proceedings › Conference contribution › Peer-reviewed

Abstract

As large language models (LLMs) revolutionize natural language processing tasks, it remains uncertain whether the text they generate can be perceived as innovative by human readers. This question holds significant implications for innovation management, where the generation of novel ideas from extensive text corpora is crucial. In this study, we conduct an empirical evaluation of 2170 generated idea texts, containing product and service ideas aligned with current trends for specific companies, focusing on three key metrics: innovativeness, context, and text quality. Our findings show that, while not universally applicable, a substantial number of LLM-generated ideas exhibit a degree of innovativeness. Remarkably, only 97 texts within the entire corpus were identified as highly innovative. Moving forward, an automated evaluation and filtering system to assess innovativeness could greatly support innovation management by facilitating the pre-selection of generated ideas.
Original language: English
Title: Database and Expert Systems Applications - DEXA 2023 Workshops - 34th International Conference, DEXA 2023, Proceedings
Editors: Gabriele Kotsis, Ismail Khalil, Atif Mashkoor, Johannes Sametinger, A Min Tjoa, Bernhard Moser, Maqbool Khan
Place of publication: Cham
Publisher: Springer
Pages: 11-22
Number of pages: 12
ISBN (Print): 9783031396885
Publication status: Published - Aug. 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1872 CCIS
ISSN (Print): 1865-0929
ISSN (electronic): 1865-0937
