Testing the calibration of classification models from first principles

Stephan Dreiseitl, Melanie Osl

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

9 Citations (Scopus)

Abstract

The accurate assessment of the calibration of classification models is severely limited by the fact that there is no readily available gold standard against which to compare a model's outputs. The usual procedures group expected and observed probabilities and then perform a χ² goodness-of-fit test. We propose an entirely new approach to calibration testing that can be derived directly from the first principles of statistical hypothesis testing. The null hypothesis is that the model outputs are correct, i.e., that they are good estimates of the true but unknown class membership probabilities. Our test calculates a p-value by checking how (im)probable the observed class labels are under the null hypothesis. We demonstrate experimentally that our proposed test performs comparably to, and sometimes even better than, the Hosmer-Lemeshow goodness-of-fit test, the de facto standard in calibration assessment.
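The abstract describes the test only at a high level; the paper's exact construction may differ (for instance, it may be exact rather than simulation-based). The following is a minimal Monte Carlo sketch of the stated idea, assuming binary labels y and predicted positive-class probabilities p. The function name, parameters, and the choice of log-likelihood as the test statistic are illustrative assumptions, not taken from the paper.

import numpy as np

def first_principles_calibration_test(p, y, n_sim=10_000, seed=None):
    """Sketch of a calibration test derived from the null hypothesis
    that the predicted probabilities p are the true class membership
    probabilities. Under H0, each label y_i is a Bernoulli(p_i) draw,
    so we can ask how (im)probable the observed labels are by comparing
    their log-likelihood to that of labels simulated under H0."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=int)

    def log_lik(labels):
        # Bernoulli log-likelihood of a label vector under probabilities p;
        # eps guards against log(0) for extreme predictions.
        eps = 1e-12
        return np.sum(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

    observed = log_lik(y)
    # Draw label vectors under H0 and record their log-likelihoods.
    sims = np.array([log_lik(rng.random(p.shape) < p) for _ in range(n_sim)])
    # p-value: fraction of simulated label sets at least as improbable as
    # the observed one ("+1" yields a valid Monte Carlo p-value).
    return (1 + np.sum(sims <= observed)) / (n_sim + 1)

As a quick sanity check, labels actually drawn from p should not be rejected: with p = rng.uniform(0.05, 0.95, 200) and y = (rng.random(200) < p).astype(int), the returned p-value is typically large, whereas systematically distorted probabilities (e.g., p**2 used as predictions for the same y) drive it toward zero.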

Original language: English
Title of host publication: Proceedings of the AMIA Annual Fall Symposium 2012
Pages: 164-169
Number of pages: 6
Volume: 2012
Publication status: Published - 2012
Event: AMIA Annual Fall Symposium 2012 - Chicago, IL, United States
Duration: 3 Nov 2012 - 7 Nov 2012

Conference

Conference: AMIA Annual Fall Symposium 2012
Country/Territory: United States
City: Chicago, IL
Period: 03.11.2012 - 07.11.2012
