TY - GEN
T1 - Voice Assistants' Accountability through Explanatory Dialogues
AU - Alizadeh, Fatemeh
AU - Tolmie, Peter
AU - Lee, Minha
AU - Wintersberger, Philipp
AU - Pins, Dominik
AU - Stevens, Gunnar
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/7/8
Y1 - 2024/7/8
N2 - As voice assistants (VAs) become more advanced, leveraging Large Language Models (LLMs) and natural language processing, their potential for accountable behavior expands. Yet, the long-term situational effectiveness of VAs' accounts when errors occur remains unclear. In our 19-month exploratory study with 19 households, we investigated the impact of an Alexa feature that allows users to inquire about the reasons behind its actions. Our findings indicate that Alexa's accounts are often single, decontextualized responses that, over the long term, led users to adopt alternative repair strategies, such as turning off the device, rather than initiating a dialogue about what went wrong. Through role-playing workshops, we demonstrate that VA interactions should facilitate explanatory dialogues as dynamic exchanges that consider a range of speech acts, recognizing users' emotional states and the context of interaction. We conclude by discussing the implications of our findings for the design of accountable VAs.
AB - As voice assistants (VAs) become more advanced, leveraging Large Language Models (LLMs) and natural language processing, their potential for accountable behavior expands. Yet, the long-term situational effectiveness of VAs' accounts when errors occur remains unclear. In our 19-month exploratory study with 19 households, we investigated the impact of an Alexa feature that allows users to inquire about the reasons behind its actions. Our findings indicate that Alexa's accounts are often single, decontextualized responses that, over the long term, led users to adopt alternative repair strategies, such as turning off the device, rather than initiating a dialogue about what went wrong. Through role-playing workshops, we demonstrate that VA interactions should facilitate explanatory dialogues as dynamic exchanges that consider a range of speech acts, recognizing users' emotional states and the context of interaction. We conclude by discussing the implications of our findings for the design of accountable VAs.
UR - http://www.scopus.com/inward/record.url?scp=85199563507&partnerID=8YFLogxK
U2 - 10.1145/3640794.3665557
DO - 10.1145/3640794.3665557
M3 - Conference contribution
AN - SCOPUS:85199563507
T3 - Proceedings of the 6th Conference on ACM Conversational User Interfaces, CUI 2024
BT - Proceedings of the 6th Conference on ACM Conversational User Interfaces, CUI 2024
PB - Association for Computing Machinery, Inc
T2 - 6th Conference on ACM Conversational User Interfaces, CUI 2024
Y2 - 8 July 2024 through 10 July 2024
ER -