TY - GEN
T1 - Analyzing the Innovative Potential of Texts Generated by Large Language Models: An Empirical Evaluation
AU - Krauss, Oliver
AU - Jungwirth, Michaela
AU - Elflein, Marius
AU - Sandler, Simone
AU - Altenhofer, Christian
AU - Stoeckl, Andreas
N1 - Funding Information:
Funding was provided by the Austrian Research Promotion Agency (FFG) under the Project Explainable Creativity (EACI, project number 892004). We thank AnyIdea (https://anyidea.ai/) for their provision of data sets used in this work. We thank the project partner Cloudflight (https://www.cloudflight.io/), especially Michael Weissenböck, Anna Hausberger and Rine Rajendran, for their valuable contributions to this work. We thank Michaela Jungwirth and Marius Elflein for the conceptualization and the organization of the evaluation, and the reviewers of the texts for their valuable contribution to this work.
Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023/8
Y1 - 2023/8
N2 - As large language models (LLMs) revolutionize natural language processing tasks, it remains uncertain whether the text they generate can be perceived as innovative by human readers. This question holds significant implications for innovation management, where the generation of novel ideas from extensive text corpora is crucial. In this study, we conduct an empirical evaluation of 2170 generated idea texts, containing product and service ideas in current trends for specific companies, focusing on three key metrics: innovativeness, context, and text quality. Our findings show that, while not universally applicable, a substantial number of LLM-generated ideas exhibit a degree of innovativeness. Remarkably, only 97 texts within the entire corpus were identified as highly innovative. Moving forward, an automated evaluation and filtering system to assess innovativeness could greatly support innovation management by facilitating the pre-selection of generated ideas.
AB - As large language models (LLMs) revolutionize natural language processing tasks, it remains uncertain whether the text they generate can be perceived as innovative by human readers. This question holds significant implications for innovation management, where the generation of novel ideas from extensive text corpora is crucial. In this study, we conduct an empirical evaluation of 2170 generated idea texts, containing product and service ideas in current trends for specific companies, focusing on three key metrics: innovativeness, context, and text quality. Our findings show that, while not universally applicable, a substantial number of LLM-generated ideas exhibit a degree of innovativeness. Remarkably, only 97 texts within the entire corpus were identified as highly innovative. Moving forward, an automated evaluation and filtering system to assess innovativeness could greatly support innovation management by facilitating the pre-selection of generated ideas.
KW - Artificial Intelligence
KW - Data Quality
KW - Decision Support
KW - Large Language Models
UR - http://www.scopus.com/inward/record.url?scp=85171565451&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-39689-2_2
DO - 10.1007/978-3-031-39689-2_2
M3 - Conference contribution
SN - 9783031396885
T3 - Communications in Computer and Information Science
SP - 11
EP - 22
BT - Database and Expert Systems Applications - DEXA 2023 Workshops - 34th International Conference, DEXA 2023, Proceedings
A2 - Kotsis, Gabriele
A2 - Khalil, Ismail
A2 - Mashkoor, Atif
A2 - Sametinger, Johannes
A2 - Tjoa, A Min
A2 - Moser, Bernhard
A2 - Khan, Maqbool
PB - Springer
CY - Cham
ER -