TY - JOUR
T1 - Team at Your Service
T2 - Investigating Functional Specificity for Trust Calibration in Automated Driving with Conversational Agents
AU - Wintersberger, Philipp
N1 - Publisher Copyright:
© 2023 Taylor & Francis Group, LLC.
PY - 2023
Y1 - 2023
N2 - Functional specificity describes the degree to which operators can successfully calibrate their trust toward different subsystems of a machine. Only a few works have addressed this issue in the context of automated vehicles. Previous studies suggest that drivers have issues distinguishing between different subsystems, which leads to low functional specificity. To counter this, the article presents a prototypical design in which different in-vehicle subsystems are portrayed by independent conversational agents. The concept was evaluated in a user study where participants had to supervise a level 2 automated vehicle while reading and communicating with the conversational agents in the car. It was hypothesized that a clear differentiation between subsystems could allow drivers to better calibrate their trust. However, our results, based on subjective trust scales, monitoring, and driving behavior, cannot confirm this assumption. On the contrary, functional specificity was high among the study participants, who based their situational and general trust ratings mainly on their perceptions of the driving automation system. Still, the experiment contributes to issues of trust and monitoring and concludes with a list of relevant findings to support trust calibration in supervisory control situations.
AB - Functional specificity describes the degree to which operators can successfully calibrate their trust toward different subsystems of a machine. Only a few works have addressed this issue in the context of automated vehicles. Previous studies suggest that drivers have issues distinguishing between different subsystems, which leads to low functional specificity. To counter this, the article presents a prototypical design in which different in-vehicle subsystems are portrayed by independent conversational agents. The concept was evaluated in a user study where participants had to supervise a level 2 automated vehicle while reading and communicating with the conversational agents in the car. It was hypothesized that a clear differentiation between subsystems could allow drivers to better calibrate their trust. However, our results, based on subjective trust scales, monitoring, and driving behavior, cannot confirm this assumption. On the contrary, functional specificity was high among the study participants, who based their situational and general trust ratings mainly on their perceptions of the driving automation system. Still, the experiment contributes to issues of trust and monitoring and concludes with a list of relevant findings to support trust calibration in supervisory control situations.
KW - automated vehicles
KW - driving simulator study
KW - functional specificity
KW - SAE level 2 driving
KW - trust calibration
KW - trust in automation
UR - http://www.scopus.com/inward/record.url?scp=85163677101&partnerID=8YFLogxK
U2 - 10.1080/10447318.2023.2219952
DO - 10.1080/10447318.2023.2219952
M3 - Article
AN - SCOPUS:85163677101
SN - 1044-7318
VL - 39
SP - 3254
EP - 3267
JO - International Journal of Human-Computer Interaction
JF - International Journal of Human-Computer Interaction
IS - 16
ER -