Determinants of Trust and Acceptance of Large Language Models in a Business Context

  • Michael Ziegl

    Student thesis: Master's Thesis

    Abstract

    There is considerable interest in advances in the field of large language models because of their potential to support complex tasks in a business context. Applications such as ChatGPT have reached a large user base within a short period of time. At the same time, these models have weaknesses, such as so-called hallucinations, i.e. the generation of inaccurate or illogical content. Such characteristics make it difficult to build trust and thus acceptance in a business environment. Trust and acceptance are considered essential prerequisites for the successful introduction of new technologies, as such attitudes have been shown to influence willingness to use them. Against this background, this master's thesis systematically examines factors influencing trust in and acceptance of large language models in a business context and analyzes their effect on willingness to use this technology.

    The thesis is divided into a theoretical and an empirical part. The theoretical section explains the basics of artificial intelligence and large language models, European guidelines for trustworthy AI, and theories of trust and acceptance. Building on this, a research model was developed that integrates technical and functional requirements, social influencing factors, and ethical aspects as variables affecting trust and acceptance. Methodologically, a two-stage approach was implemented: first, a systematic literature review was conducted to assess the current state of research; this was followed by a quantitative online survey of 172 employees with experience in working with large language models. The evaluation used statistical analyses and linear regressions to test the formulated hypotheses.

    The empirical results show that functional aspects are determinants of the acceptance of large language models. Demographic characteristics such as age and gender do not prove to be relevant moderators of these relationships.
    In addition, trust is included in the model as an influencing factor: higher trust in a large language model correlates significantly with a higher intention to use it, indicating that functional expectations and trust-building conditions are crucial for acceptance in a corporate context. Ethical factors, in turn, prove important for trust in large language models: a high perceived respect for human autonomy significantly increases trust, and perceived fairness, understood as the absence of discriminatory outputs, likewise has a positive effect. Overall, the analysis shows that ethical principles such as autonomy and fairness positively influence employees' trust in large language models and thus form a basis for the successful introduction of this technology.
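
    The hypothesis tests described above rest on linear regression. As a minimal sketch of that technique (not the thesis's actual data or variables: the Likert-style scores, effect sizes, and variable names below are simulated and purely illustrative), one can regress usage intention on trust via ordinary least squares:

    ```python
    import numpy as np

    # Illustrative only: simulate Likert-style survey responses for a sample
    # of 172 participants (matching the survey size stated in the abstract)
    # and fit intention-to-use on trust with ordinary least squares.
    rng = np.random.default_rng(42)
    n = 172

    trust = rng.integers(1, 8, size=n).astype(float)   # hypothetical 7-point scale
    noise = rng.normal(0.0, 0.5, size=n)
    intention = 1.0 + 0.6 * trust + noise              # assumed positive relationship

    # Design matrix with an intercept column; solve the least-squares problem.
    X = np.column_stack([np.ones(n), trust])
    beta, *_ = np.linalg.lstsq(X, intention, rcond=None)

    print(f"intercept: {beta[0]:.2f}, slope: {beta[1]:.2f}")
    ```

    With simulated data the fitted slope recovers the assumed positive trust effect; in the actual study, such coefficients (together with significance tests) are what support or reject the formulated hypotheses.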
    Date of Award: 2025
    Original language: German (Austria)
    Awarding Institution
    • Johannes Kepler University Linz
    Supervisor: René Riedl

    Study program

    • Digital Business Management
