TY - GEN
T1 - Human-Centered Explainable AI (HCXAI)
T2 - 2024 CHI Conference on Human Factors in Computing Systems, CHI EA 2024
AU - Ehsan, Upol
AU - Watkins, Elizabeth A.
AU - Wintersberger, Philipp
AU - Manger, Carina
AU - Kim, Sunnie S.Y.
AU - Van Berkel, Niels
AU - Riener, Andreas
AU - Riedl, Mark O.
N1 - Publisher Copyright:
© 2024 Owner/Author.
PY - 2024/5/11
Y1 - 2024/5/11
N2 - Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just "opening" the black box - who opens it matters just as much, if not more, than the ways of opening it. In the era of Large Language Models (LLMs), is "opening the black box" still a realistic goal for XAI? In this fourth CHI workshop on Human-centered XAI (HCXAI), we build on the maturation achieved through the previous three installments to craft the coming-of-age story of HCXAI in the era of Large Language Models (LLMs). We aim toward actionable interventions that recognize both the affordances and the pitfalls of XAI. The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and to examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize "operationalizing." We seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.
AB - Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just "opening" the black box - who opens it matters just as much, if not more, than the ways of opening it. In the era of Large Language Models (LLMs), is "opening the black box" still a realistic goal for XAI? In this fourth CHI workshop on Human-centered XAI (HCXAI), we build on the maturation achieved through the previous three installments to craft the coming-of-age story of HCXAI in the era of Large Language Models (LLMs). We aim toward actionable interventions that recognize both the affordances and the pitfalls of XAI. The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and to examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize "operationalizing." We seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.
UR - http://www.scopus.com/inward/record.url?scp=85194136511&partnerID=8YFLogxK
U2 - 10.1145/3613905.3636311
DO - 10.1145/3613905.3636311
M3 - Conference contribution
AN - SCOPUS:85194136511
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2024 - Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 11 May 2024 through 16 May 2024
ER -