Paper

Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI

10 July 2020. Richard Tomsett, Alun Preece, Dave Braines, Federico Cerutti, Supriyo Chakraborty, Mani B. Srivastava, Gavin Pearson, Lance M. Kaplan

This paper starts from a practical problem: AI can support security decisions, but if a system does not show where it is uncertain or why it produced a recommendation, people may trust it too much, or too little. The authors call the goal rapid trust calibration: users should be able to judge quickly how much confidence the system deserves in a given situation.

The key idea is not only to explain individual answers, but to build systems that can also reveal their own limits. In cybersecurity this matters: alerts, partial clues, and high-risk situations must be assessed without treating the AI as an infallible authority.
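One simple way to make a system "reveal its own limits" is to report a confidence score with each recommendation and defer to the human analyst when confidence is low. The sketch below is a minimal illustration of that idea, not the method from the paper; the threshold, label names, and toy logits are assumptions for demonstration only.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def recommend(logits, labels, threshold=0.8):
    """Return (label, confidence), or abstain when confidence
    falls below the (illustrative) threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        # Reveal the model's limits instead of forcing an answer.
        return ("abstain", probs[best])
    return (labels[best], probs[best])

labels = ["benign", "malicious"]
print(recommend([2.5, 0.1], labels))  # confident: returns a label
print(recommend([1.0, 0.9], labels))  # uncertain: abstains
```

The abstention branch is where trust calibration happens: by surfacing low confidence explicitly, the system invites scrutiny exactly when its recommendation is least reliable.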