Scenario-Based Requirements Elicitation for User-Centric Explainable AI: A Case in Fraud Detection

Douglas Cirqueira, Dietmar Nedbal, Markus Helfert, Marija Bezbradica

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

43 Citations (Scopus)

Abstract

Explainable Artificial Intelligence (XAI) develops technical explanation methods that enable interpretability for human stakeholders on why Artificial Intelligence (AI) and machine learning (ML) models provide certain predictions. However, the trust of those stakeholders in AI models and explanations is still an issue, especially for domain experts, who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research holds that it is essential to understand stakeholders' requirements in order to provide explanations tailored to their needs and enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of a stakeholder early, before the adoption of an information system, and identify their real problems and practices, thereby generating user requirements. Nevertheless, the adoption of scenarios in XAI is still rarely explored, especially in the domain of fraud detection, to support experts who are about to work with AI models. We demonstrate the use of scenario-based requirements elicitation for XAI in a fraud detection context and develop scenarios derived with experts in banking fraud. We discuss how those scenarios can be adopted to identify user or expert requirements for explanations appropriate to their daily operations and to their decisions when reviewing fraudulent cases in banking. The generalizability of the scenarios for further adoption is validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.

Original language: English
Title of host publication: Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings
Editors: Andreas Holzinger, Peter Kieseberg, A Min Tjoa, Edgar Weippl
Publisher: Springer
Pages: 321-341
Number of pages: 21
ISBN (Print): 9783030573201
DOIs
Publication status: Published - 2020
Event: 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference for Machine Learning and Knowledge Extraction, CD-MAKE 2020 - Dublin, Ireland
Duration: 25 Aug 2020 - 28 Aug 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12279 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference for Machine Learning and Knowledge Extraction, CD-MAKE 2020
Country/Territory: Ireland
City: Dublin
Period: 25.08.2020 - 28.08.2020

Keywords

  • Domain expert
  • Explainable artificial intelligence
  • Fraud detection
  • Requirements elicitation
