Human Centered Interpretations for Complex Algorithmic Decision Making

  • Michael Wachert-Rabl

    Student thesis: Master's Thesis

    Abstract

    This thesis explores the application of Explainable Artificial Intelligence (XAI) in the
    domain of fraud detection within the banking sector, addressing the critical need for
    transparency in increasingly complex machine learning models. As financial institutions
    shift from traditional rule-based systems to more advanced machine learning algorithms,
    the lack of interpretability in these models has emerged as a significant challenge, particularly in understanding and mitigating false positives and negatives.
    This thesis combines quantitative and qualitative research to assess the effectiveness of
    various fraud detection classifiers, demonstrating that the XGBoost algorithm delivers
    superior performance. Additionally, it qualitatively evaluates the usability of explainable AI (XAI) techniques, including LIME and SHAP, in enhancing the interpretability
    of opaque models.
    The research question guiding this thesis is: “How can algorithmic fraud detection models overcome their lack of interpretability?” The central hypothesis asks whether model-agnostic XAI algorithms can surpass the expertise of human domain experts in fraud detection. The findings reveal that while machine learning models offer superior accuracy
    over traditional rule-based systems, their “black-box” nature poses significant challenges
    for fraud analysts. XAI tools like LIME and SHAP enhance the interpretability of these
    models, but the trade-off between model complexity and transparency remains a critical
    issue. The thesis concludes with insights into the limitations of current XAI methods
    and suggests directions for future research to address these challenges.
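    As an illustration of the kind of model-agnostic explanation the abstract discusses, the following is a minimal sketch of applying SHAP to an XGBoost fraud classifier. It is not taken from the thesis: the dataset file, the `is_fraud` column name, and the hyperparameters are assumptions chosen for the example.

    ```python
    # Minimal sketch (not from the thesis): explaining an XGBoost fraud classifier with SHAP.
    # Assumes a tabular dataset with a binary "is_fraud" label; file and column names are illustrative.
    import pandas as pd
    import xgboost as xgb
    import shap
    from sklearn.model_selection import train_test_split

    # Hypothetical transaction data with a binary fraud label.
    df = pd.read_csv("transactions.csv")  # assumed file, not from the thesis
    X = df.drop(columns=["is_fraud"])
    y = df["is_fraud"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    # Gradient-boosted tree classifier, the model family evaluated in the thesis.
    model = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
    model.fit(X_train, y_train)

    # TreeExplainer produces per-feature SHAP values for each prediction,
    # i.e. a local, additive attribution of the fraud score.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Global view: which features drive the model's decisions overall.
    shap.summary_plot(shap_values, X_test)

    # Local view: why one specific transaction was flagged.
    shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0])
    ```

    A LIME-based explanation would follow the same pattern, fitting a local surrogate model around an individual prediction instead of computing tree-based attributions.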
    Date of Award: 2024
    Original language: English (American)
    Supervisor: Bogdan Burlacu (Supervisor)
