Interactive explainability: Black boxes, mutual understanding and what it would really mean for AI systems to be as explainable as people

Abstract

‘Black box’ AI systems, often based on deep neural networks (DNNs), are being developed with astonishing speed. But in critical real-world contexts, such as providing legal, financial or medical advice or assistance, their deployment faces formidable practical and legal barriers. Users, and especially regulators, will demand explainability: that AI systems can provide justifications for their statements, recommendations, or actions. Typically, both regulators and AI researchers have adopted an internal view of explainability: the emerging field of X-AI aims to ‘open the black box’, designing systems whose workings are transparent to human understanding. We argue that, for DNNs and related methods, this vision is both unachievable and misconceived. Instead, we note that AI need only be as explainable as humans, and that the human brain is itself effectively a black box, in which a tangle of 10¹¹ neurons connected by 10¹⁴ synapses carries out largely unknown computations. We propose a very different notion, Interactive Explainability: the ability of a system, whether human or AI, to coherently justify and defend its statements and actions to a human questioner. Interactive Explainability requires local, contextually specific responses built on “mutual understanding” between an AI system and the questioner (e.g., of commonly held background beliefs and assumptions). We outline what mutual understanding involves, and why current AI systems seem far from achieving it. We propose that Interactive Explainability should be a key criterion for regulators, and a central objective for AI research.
