TY - GEN
T1 - Shedding Light on the Black Box
T2 - 25th International Conference on Human-Computer Interaction, HCII 2023
AU - Falatouri, Taha
AU - Nasseri, Mehran
AU - Brandtner, Patrick
AU - Darbanian, Farzaneh
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
AB - The lack of transparency in the outcomes of advanced machine learning solutions, such as deep learning (DL), leads to skepticism among business users about adopting them. Particularly when the output is used for critical decision-making or has financial impacts on the business, trust and transparency are crucial. Explainable Artificial Intelligence (XAI) has been widely utilized in recent years to convert the black box of DL techniques into understandable elements. In this research, we implement Long Short-Term Memory (LSTM) networks to predict repair needs for geographically distributed heating appliances in private households. To conduct our analysis, we use a real-world dataset of a maintenance service company with more than 350,000 records spanning five years. We employ the SHAP (SHapley Additive exPlanations) method for global interpretation, describing overall model behavior, and for local interpretation, providing explanations for individual predictions. The results of the DL model and the additional XAI outputs were discussed with practitioners in a workshop setting. The results confirm that XAI increases the willingness to use DL for decision-making in practice and boosts the explainability of such models. We also found that the willingness to trust and follow XAI predictions depends on whether the explanations conform with users' mental models. Overall, XAI was found to be an important addition to DL models and to foster their utilization in practice. Future research should focus on applying XAI to additional models, in different use cases, or on conducting broader evaluations with several company partners.
KW - Business Analytics
KW - Decision Making
KW - Deep Learning
KW - Explainable AI
KW - LSTM
KW - Model Interpretability
UR - http://www.scopus.com/inward/record.url?scp=85178581950&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-48057-7_5
DO - 10.1007/978-3-031-48057-7_5
M3 - Conference contribution
AN - SCOPUS:85178581950
SN - 9783031480560
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 69
EP - 83
BT - HCI International 2023 – Late Breaking Papers - 25th International Conference on Human-Computer Interaction, HCII 2023, Proceedings
A2 - Degen, Helmut
A2 - Ntoa, Stavroula
A2 - Moallem, Abbas
PB - Springer
Y2 - 23 July 2023 through 28 July 2023
ER -