A Unifying Theory of Distance from Calibration

We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors. While the notion of perfect calibration is well understood, there is no consensus on how to quantify the distance from perfect calibration. Numerous calibration measures have been proposed in the literature, but it is unclear how they compare to each other, and many popular measures such as Expected Calibration Error (ECE) fail to satisfy basic properties like continuity. We present a rigorous framework for analyzing calibration measures, inspired by the literature on property testing. We propose a ground-truth notion of distance from calibration: the distance to the nearest perfectly calibrated predictor. We define a consistent calibration measure as one that is a polynomial-factor approximation to this distance. Applying our framework, we identify three calibration measures that are consistent and can be estimated efficiently: smooth calibration, interval calibration, and Laplace kernel calibration. The former two give quadratic approximations to the ground-truth distance, which we show is information-theoretically optimal. Our work thus establishes fundamental lower and upper bounds on measuring distance to calibration, and also provides theoretical justification for preferring certain metrics (like Laplace kernel calibration) in practice.
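To make the ECE discussion concrete, here is a minimal sketch of the standard binned ECE: it averages the gap between mean confidence and empirical accuracy over equal-width bins, weighted by bin mass. This is an illustrative implementation, not code from the paper; the function name and bin count are our own choices.

```python
import numpy as np

def binned_ece(preds, labels, n_bins=10):
    """Standard binned Expected Calibration Error (illustrative sketch).

    Sum over bins of (bin mass) * |mean prediction - empirical accuracy|.
    As the abstract notes, this quantity is discontinuous in the
    predictions: nudging a prediction across a bin boundary can make
    the value jump.
    """
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins [lo, hi), with the last bin closed at 1.0.
        if hi < 1.0:
            mask = (preds >= lo) & (preds < hi)
        else:
            mask = (preds >= lo) & (preds <= hi)
        if mask.any():
            ece += mask.mean() * abs(preds[mask].mean() - labels[mask].mean())
    return ece

# A perfectly calibrated toy predictor: it predicts 0.5 on every point,
# and the labels are 1 exactly half the time.
print(binned_ece([0.5, 0.5], [0, 1]))  # 0.0
```

Note that a consistent measure in the paper's sense must, unlike binned ECE, vary continuously with the predictions while staying within a polynomial factor of the distance to the nearest perfectly calibrated predictor.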

