Shedding Light on the Black Box: Explainable AI for Predicting Household Appliance Failures

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

Abstract

The lack of transparency in the outcomes of advanced machine learning solutions, such as deep learning (DL), leads to skepticism among business users about using them. Particularly when the output is used for critical decision-making or has financial impacts on the business, trust and transparency are crucial. Explainable Artificial Intelligence (XAI) has been widely utilized in recent years to convert the black box of DL techniques into understandable elements. In this research, we implement Long Short-Term Memory (LSTM) networks to predict repair needs for geographically distributed heating appliances in private households. To conduct our analysis, we use a real-world dataset of a maintenance service company with more than 350,000 records over a time span of five years. We employ the SHAP (SHapley Additive exPlanations) method both for global interpretation, describing overall model behavior, and for local interpretation, providing explanations for individual predictions. The results of the DL model and the additional XAI outputs were discussed with practitioners in a workshop setting. The results confirm that XAI increases the willingness to use DL for decision-making in practice and boosts the explainability of such models. We also found that the willingness to trust and follow XAI predictions depends on whether the explanations conform with users' mental models. Overall, XAI was found to be an important addition to DL models and to foster their utilization in practice. Future research should focus on applying XAI to additional models and use cases, and on conducting broader evaluations with several company partners.
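To make the described pipeline more concrete, the sketch below shows one way to combine a Keras LSTM classifier with SHAP to obtain local explanations (per prediction) and a simple global feature-importance summary. The data shapes, feature count, synthetic inputs, and the choice of shap.GradientExplainer are illustrative assumptions; the paper does not publish its code, and its actual preprocessing, architecture, and explainer may differ.

```python
import numpy as np
import shap
from tensorflow.keras import layers, models

# Hypothetical setup: 30 time steps of 8 usage/sensor features per appliance,
# with a binary label "repair needed within the prediction horizon".
n_steps, n_features = 30, 8
rng = np.random.default_rng(0)
X_train = rng.random((200, n_steps, n_features), dtype=np.float32)
y_train = rng.integers(0, 2, size=(200, 1))

# Minimal LSTM classifier standing in for the repair-prediction model.
model = models.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)

# GradientExplainer is one SHAP explainer that works with Keras models;
# the paper does not state which explainer the authors used.
background = X_train[:50]                           # reference set for expectations
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(X_train[:10])   # local explanations per sample

# Global view: mean absolute SHAP value per input feature,
# aggregated over samples and time steps.
sv = np.asarray(shap_values).reshape(-1, n_steps, n_features)
global_importance = np.abs(sv).mean(axis=(0, 1))
for i, importance in enumerate(global_importance):
    print(f"feature_{i}: {importance:.4f}")
```

The same shap_values array could feed standard SHAP visualizations (e.g., summary or force plots) of the kind typically shown to practitioners, but the specific plots used in the workshop are not reproduced here.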

Original language: English
Title of host publication: HCI International 2023 – Late Breaking Papers - 25th International Conference on Human-Computer Interaction, HCII 2023, Proceedings
Editors: Helmut Degen, Stavroula Ntoa, Abbas Moallem
Publisher: Springer
Pages: 69-83
Number of pages: 15
ISBN (Print): 9783031480560
DOIs
Publication status: Published - 2023
Event: 25th International Conference on Human-Computer Interaction, HCII 2023 - Copenhagen, Denmark
Duration: 23 Jul 2023 - 28 Jul 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14059 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 25th International Conference on Human-Computer Interaction, HCII 2023
Country/Territory: Denmark
City: Copenhagen
Period: 23.07.2023 - 28.07.2023

Keywords

  • Business Analytics
  • Decision Making
  • Deep Learning
  • Explainable AI
  • LSTM
  • Model Interpretability
