07/04/2025
In the spring of 2025, we fielded a global executive survey that yielded 1,221 responses to gauge the degree to which organizations are addressing responsible AI. In our first article this year, we focused on accountability for general-purpose AI producers. In this piece, we examine the relationship between explainability and human oversight in providing accountability for AI systems.
In the context of AI governance, explainability refers to the ability to provide humans with clear, understandable, and meaningful explanations for why an AI system made a certain decision. It is closely related to, but broader than, the more technical notion of interpretability, which focuses on understanding the inner workings of how a model’s inputs influence its outputs. Both concepts seek to improve the transparency of increasingly complex and opaque AI systems and are also reflected in recent efforts to regulate them.
For example, the European Union’s AI Act requires that high-risk AI systems be designed to enable effective human oversight and grants individuals a right to receive “clear and meaningful explanations” from the entity deploying the system. South Korea’s comprehensive AI law introduces similar requirements for “high-impact” AI systems (in sectors like health care, energy, and public services) to explain the reasoning behind AI-generated decisions. Companies are responding to these requirements by launching commercial governance solutions, with the explainability market alone projected to reach $16.2 billion by 2028.
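To make the distinction concrete, here is a minimal sketch, not drawn from the article or any specific vendor's tooling, of the interpretability side of this picture: estimating which input features most influence an opaque model's predictions. The dataset, model, and use of permutation importance are illustrative assumptions; a full explainability solution would go further and translate such outputs into clear, meaningful explanations for the person affected by a decision.

```python
# Illustrative sketch only: dataset, model, and method are assumptions,
# not the approach described in the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Interpretability: measure how much the model's accuracy drops when each
# feature is shuffled, i.e., how strongly each input drives the outputs.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Surface the most influential features -- the raw material from which a
# human-readable explanation of an individual decision would be built.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

The gap between this kind of technical output and an explanation a regulator or affected individual would accept is exactly where governance requirements such as the EU AI Act's right to explanation come into play.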
The growing focus on explainability and oversight raises the natural question of whether one can substitute for the other, which is why we asked our panel to react to the following provocation: “Effective human oversight reduces the need for explainability in AI systems.”
Read the full report >> https://mitsmr.com/3FX8pCf