Analyzing the Innovative Potential of Texts Generated by Large Language Models: An Empirical Evaluation

Oliver Krauss, Michaela Jungwirth, Marius Elflein, Simone Sandler, Christian Altenhofer, Andreas Stoeckl

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

Abstract

As large language models (LLMs) revolutionize natural language processing tasks, it remains uncertain whether the text they generate is perceived as innovative by human readers. This question holds significant implications for innovation management, where the generation of novel ideas from extensive text corpora is crucial. In this study, we conduct an empirical evaluation of 2170 generated idea texts, containing product and service ideas addressing current trends for specific companies, focusing on three key metrics: innovativeness, context, and text quality. Our findings show that, while not universally the case, a substantial number of LLM-generated ideas exhibit a degree of innovativeness. Notably, only 97 texts within the entire corpus were identified as highly innovative. Moving forward, an automated evaluation and filtering system for assessing innovativeness could greatly support innovation management by facilitating the pre-selection of generated ideas.
Original language: English
Title of host publication: Database and Expert Systems Applications - DEXA 2023 Workshops - 34th International Conference, DEXA 2023, Proceedings
Editors: Gabriele Kotsis, Ismail Khalil, Atif Mashkoor, Johannes Sametinger, A Min Tjoa, Bernhard Moser, Maqbool Khan
Place of Publication: Cham
Publisher: Springer
Pages: 11-22
Number of pages: 12
ISBN (Print): 9783031396885
Publication status: Published - Aug 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1872 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Keywords

  • Artificial Intelligence
  • Data Quality
  • Decision Support
  • Large Language Models
