Explainability Challenges in Continuous Invisible AI for Self-Augmentation

Dinara Talypova, Philipp Wintersberger

Publication: Journal contribution › Conference article › Peer-reviewed

Abstract

Despite the substantial progress in Machine Learning in recent years, its advanced models have often been considered opaque, offering no insight into the precise mechanisms behind their predictions. Consequently, engineers today try to build explainability into the models they develop, which is essential for trust in and adoption of the resulting systems. Still, several blocks of Explainable Artificial Intelligence (XAI) research cannot follow the standard design methods and guidelines for providing transparency and ensuring that human objectives are maintained. In this position paper, we attempt to chart various AI blocks from the perspective of the Human-Computer Interaction field and identify potential gaps requiring further exploration. We suggest a classification along three dimensions: the relation to humans (replacing vs. augmenting), interaction complexity (discrete vs. continuous), and the object of application (the external world or users themselves).

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3712
Publication status: Published - 2023
Event: 2023 Workshops on Making a Real Connection and Interruptions and Attention Management, MuM-WS 2023 - Vienna, Austria
Duration: 3 Dec 2023 → …

