Paper

Uncertainty-Aware Deep Classifiers Using Generative Models

03 April 2020 · Murat Sensoy, Lance M. Kaplan, Federico Cerutti, Maryam Saleki

The paper tackles a familiar weakness of deep models: they can sound highly confident even when the input is unlike anything seen during training. The authors pair classification with generative modelling so the system can represent both epistemic and aleatoric uncertainty, flagging out-of-distribution and adversarial inputs rather than assigning them confident labels.
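As a rough illustration of the classifier side of this idea, the sketch below shows one way a network can emit Dirichlet evidence instead of softmax probabilities, so that scarce evidence reads directly as high epistemic uncertainty. This follows the evidential deep learning formulation the authors build on; the generative component that supplies out-of-distribution training samples is not shown, and the module names, layer sizes, and uncertainty read-out here are illustrative assumptions, not the paper's exact code.

```python
# Minimal sketch, assuming a Dirichlet-evidence output head; not the paper's
# exact architecture or training procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialClassifier(nn.Module):
    """Classifier whose output is non-negative per-class evidence."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the evidence non-negative.
        return F.softplus(self.net(x))


def dirichlet_uncertainty(evidence: torch.Tensor):
    # alpha parameterizes a Dirichlet over class probabilities.
    alpha = evidence + 1.0
    strength = alpha.sum(dim=-1, keepdim=True)          # total evidence S
    probs = alpha / strength                             # expected class probabilities
    epistemic = alpha.size(-1) / strength.squeeze(-1)    # K / S: high when evidence is scarce
    return probs, epistemic


if __name__ == "__main__":
    # Toy batches; after OOD-aware training, out-of-distribution inputs should
    # yield low evidence and therefore higher uncertainty.
    model = EvidentialClassifier(in_dim=20, num_classes=5)
    for name, batch in [("in-dist", torch.randn(4, 20)),
                        ("ood", 10.0 * torch.randn(4, 20))]:
        probs, u = dirichlet_uncertainty(model(batch))
        print(name, "mean epistemic uncertainty:", u.mean().item())
```

The point of the read-out is that uncertainty comes from the magnitude of the evidence itself, not from a separate detector; the paper's contribution is in training this kind of head with generated out-of-distribution samples so that the evidence actually stays low away from the data.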

This matters for cybersecurity because many detection and triage pipelines fail not on ordinary traffic, but when an attacker deliberately pushes inputs outside the distribution they were trained on. The work is therefore relevant both to AI security and to using AI inside security tooling, since it pushes toward models that expose uncertainty instead of hiding fragility behind confident predictions.