Explainability Challenges in Continuous Invisible AI for Self-Augmentation

Dinara Talypova, Philipp Wintersberger

Research output: Contribution to journal › Conference article › peer-review

Abstract

Despite the substantial progress in Machine Learning in recent years, its advanced models have often been considered opaque, offering no insight into the precise mechanisms behind their predictions. Consequently, engineers today try to build explainability into the models they develop, which is essential for trust in and adoption of the system. Still, there are several blocks in Explainable Artificial Intelligence (XAI) research that cannot follow the standard design methods and guidelines for providing transparency and ensuring that human objectives are maintained. In this position paper, we attempt to chart various AI blocks from the perspective of the Human-Computer Interaction (HCI) field and identify potential gaps requiring further exploration. We suggest a classification along three dimensions: the relation to humans (replacing vs. augmenting), the interaction complexity (discrete vs. continuous), and the object of application (the external world or users themselves).

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3712
Publication status: Published - 2023
Event: 2023 Workshops on Making a Real Connection and Interruptions and Attention Management, MuM-WS 2023 - Vienna, Austria
Duration: 3 Dec 2023 → …

Keywords

  • Attention Management System
  • Continuous AI
  • HCI
  • Seamless Technology
  • XAI
