Supporting Trustworthy Artificial Intelligence via Bayesian Argumentation

Federico Cerutti, 01 January 2021

This paper explores Bayesian argumentation as a foundation for trustworthy AI: a structured way to represent reasons, the dependencies between pieces of evidence, and the uncertainty attached to each.
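To make the idea concrete, here is a minimal sketch (not taken from the paper) of the probabilistic core such an approach builds on: updating belief in a hypothesis as independent pieces of evidence arrive, via Bayes' rule. The function name, scenario, and all numbers are hypothetical illustrations.

```python
# Hypothetical sketch: Bayesian updating over several evidence items,
# assuming the items are conditionally independent given the hypothesis H.

def posterior(prior: float, likelihoods: list[tuple[float, float]]) -> float:
    """Return P(H | evidence) given P(H) and, for each evidence item,
    the pair (P(e | H), P(e | not H))."""
    p_h, p_not_h = prior, 1.0 - prior
    for p_e_h, p_e_not_h in likelihoods:
        p_h *= p_e_h          # accumulate likelihood under H
        p_not_h *= p_e_not_h  # accumulate likelihood under not-H
    return p_h / (p_h + p_not_h)

# Illustrative alert scenario: H = "host is compromised".
# Evidence: an IDS signature fired, then outbound traffic spiked.
p = posterior(
    prior=0.01,
    likelihoods=[(0.9, 0.05),   # P(signature | H), P(signature | not H)
                 (0.7, 0.20)],  # P(spike | H),     P(spike | not H)
)
print(round(p, 3))  # → 0.389
```

A full Bayesian argumentation framework goes well beyond this sketch, precisely because real evidence items are rarely independent; representing those dependencies explicitly is part of what the structured approach offers.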

This matters for cybersecurity, where many decisions require more than a numeric score: analysts need systems that can justify alerts, expose their assumptions, and make uncertainty explicit before those outputs are used operationally.