TY - GEN
T1 - Human-Centered Explainable AI (HCXAI)
T2 - 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023
AU - Ehsan, Upol
AU - Wintersberger, Philipp
AU - Watkins, Elizabeth A.
AU - Manger, Carina
AU - Ramos, Gonzalo
AU - Weisz, Justin D.
AU - Daumé, Hal
AU - Riener, Andreas
AU - Riedl, Mark O.
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/4/19
Y1 - 2023/4/19
N2 - Explainability is an essential pillar of Responsible AI that calls for equitable and ethical Human-AI interaction. Explanations are essential to hold AI systems and their producers accountable, and can serve as a means to ensure humans' right to understand and contest AI decisions. Human-centered XAI (HCXAI) argues that there is more to making AI explainable than algorithmic transparency. Explainability of AI is more than just "opening" the black box - who opens it matters just as much as, if not more than, how it is opened. In this third CHI workshop on Human-centered XAI (HCXAI), we build on the maturation through the first two installments to craft the coming-of-age story of HCXAI, which embodies a deeper discourse around operationalizing human-centered perspectives in XAI. We aim toward actionable interventions that recognize both the affordances and the potential pitfalls of XAI. The goal of the third installment is to go beyond the black box and examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize "operationalizing." Within our research agenda for XAI, we seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.
AB - Explainability is an essential pillar of Responsible AI that calls for equitable and ethical Human-AI interaction. Explanations are essential to hold AI systems and their producers accountable, and can serve as a means to ensure humans' right to understand and contest AI decisions. Human-centered XAI (HCXAI) argues that there is more to making AI explainable than algorithmic transparency. Explainability of AI is more than just "opening" the black box - who opens it matters just as much as, if not more than, how it is opened. In this third CHI workshop on Human-centered XAI (HCXAI), we build on the maturation through the first two installments to craft the coming-of-age story of HCXAI, which embodies a deeper discourse around operationalizing human-centered perspectives in XAI. We aim toward actionable interventions that recognize both the affordances and the potential pitfalls of XAI. The goal of the third installment is to go beyond the black box and examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize "operationalizing." Within our research agenda for XAI, we seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.
UR - http://www.scopus.com/inward/record.url?scp=85158123516&partnerID=8YFLogxK
U2 - 10.1145/3544549.3573832
DO - 10.1145/3544549.3573832
M3 - Conference contribution
AN - SCOPUS:85158123516
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2023 - Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 23 April 2023 through 28 April 2023
ER -