Black Box Dilemma? Cue Artificial Intelligence with an Explanation: XAI

Shreya, Blogger at RoundSqr

By Shreya Utekar | 16th April, 2021

The Fourth Industrial Revolution, or Industry 4.0, has been a blessing for mankind since its inception a few years ago. It is the ongoing automation of traditional manufacturing and industrial practices using modern smart technology. From Artificial Intelligence, the Internet of Things (IoT) and Cloud Computing to Autonomous Vehicles, Drones and Robots – there is no doubt that this industrial revolution aims to propel humans, alongside technology, towards achieving what once seemed impossible. Artificial Intelligence, one of the many facets of Industry 4.0, has already become such a significant part of our lives that the reasoning behind its decisions is increasingly difficult for humans to follow.

Healthcare, Education, Manufacturing, and Finance are some of the top industries where artificial intelligence is now well integrated. AI’s widespread use has made trusting its decision making paramount. During the early adoption of AI, understanding why a model produced a certain output was secondary to the output matching our expectations. As long as an output was produced, the hunt for an explanation never began.

Not knowing seemed to work just fine for simpler applications of AI. A product recommendation system does not entail a life-or-death decision. When it comes to autonomous vehicles or medical diagnosis systems, however, the need for an explanation rises sharply. This is the ‘black box dilemma’: only the inputs (raw data) and outputs (predictions) are visible. To place our trust in a model, humans need to understand why a particular conclusion was drawn.

First, we use data to train the black box, which learns a particular function. Then we feed it inputs and receive outputs – without any justification. Even when the outputs match our expectations, being able to interpret the reasoning behind a conclusion could shed light on many unknowns.
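The pipeline above can be sketched in a few lines. This is a minimal, illustrative example – the model (a one-feature linear fit), the data, and the function names are my assumptions, not from the article – but it shows the key point: the caller sees only inputs and outputs, never the learned internals.

```python
def train_black_box(xs, ys):
    """Fit a one-feature linear model by least squares and hide its internals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    # Return only a prediction function: inputs go in, outputs come out.
    return lambda x: slope * x + intercept

# Step 1: training data produces a learned function.
model = train_black_box([1, 2, 3, 4], [2, 4, 6, 8])

# Step 2: feed an input, receive an output -- with no explanation attached.
print(model(5))  # -> 10.0
```

The closure deliberately hides `slope` and `intercept`; that opacity, scaled up to millions of parameters, is exactly the dilemma the article describes.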

What is XAI?

One of the emerging fields in Artificial Intelligence & Machine Learning is Explainable AI (XAI). It aims to make AI’s black box transparent by explaining every step involved in its decision making. Not only does this let us understand and interpret the outputs, it also eases debugging and helps improve the model’s performance. Building justified trust in an artificially intelligent system is XAI’s core purpose.
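One common XAI idea is to attribute a prediction to its input features. For a linear model the weight-times-input contributions are exact; tools such as LIME and SHAP approximate the same idea for opaque models. The sketch below uses illustrative weights and feature names (my assumptions, not a real diagnostic model):

```python
# Toy "white box": a linear risk score whose prediction can be broken
# down into per-feature contributions. Weights and features are assumed.
weights = {"age": 0.3, "blood_pressure": 0.5, "cholesterol": 0.2}
bias = -10.0

def predict_with_explanation(patient):
    # Each feature's contribution is weight * value; their sum (plus bias)
    # IS the prediction, so the explanation is exact for this model class.
    contributions = {f: weights[f] * patient[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"age": 60, "blood_pressure": 140, "cholesterol": 200}
)
print(score)  # the output...
for feature, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.1f}")  # ...and each feature's share of it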

Principles of XAI:

The National Institute of Standards & Technology (NIST), USA, has developed four principles that capture XAI’s fundamental properties. They are as follows:

Explanation: This obligates the artificially intelligent model to put forth an explanation or reasoning for every outcome. The explanation need not be correct; the principle only states that the system must be capable of explaining why for all of its outputs.

Meaningful: Once the model has provided an explanation, we evaluate whether it is meaningful enough for the user to comprehend. The term ‘user’ is a broad category in itself, and an explanation may or may not be understood by a given group of users. For example, a programmer may understand explanations that a medical practitioner wouldn’t. One size doesn’t always fit all.

Explanation Accuracy: While the first two principles only demand that the model provide a comprehensible explanation, the third concerns how accurately that explanation reflects the system’s actual process. Since users belong to different groups, different metrics may be used to judge an explanation’s accuracy.
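One way to quantify explanation accuracy, often called fidelity, is to check how closely a simple interpretable surrogate reproduces the black box’s outputs on the same inputs. Both models below are illustrative assumptions: an opaque quadratic “black box” and a linear surrogate fit around a point of interest.

```python
# Fidelity sketch: compare an interpretable surrogate to the black box
# on inputs near the prediction being explained. Models are assumed.
black_box = lambda x: x ** 2 + 0.1 * x   # opaque model (illustrative)
surrogate = lambda x: 4.1 * x - 4.0      # local linear explanation near x = 2

inputs = [1.8, 1.9, 2.0, 2.1, 2.2]
errors = [abs(black_box(x) - surrogate(x)) for x in inputs]
fidelity = 1 - sum(errors) / len(errors)  # crude score: 1.0 = perfect match
print(round(fidelity, 3))  # close to 1.0 for a faithful surrogate
```

A high-fidelity surrogate’s explanation can be trusted to describe the black box’s behaviour in that neighbourhood; a low score means the explanation, however readable, is inaccurate.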

Knowledge Limits: According to this principle, models must operate within their knowledge limits and be able to identify queries they were not trained for. This ensures that an answer is not provided for inappropriate inputs. For example:

    • When a user inputs ‘xyz’ into a model trained on ‘abc’,
    • When the input is only partially comprehensible to the model.

Identifying and declaring the model’s limits allows users to rely on its outputs.
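The Knowledge Limits principle can be sketched as a model that records the range of its training inputs and declines to answer outside it. The class, data, and refusal message below are illustrative assumptions, not a standard API:

```python
# Knowledge-limits sketch: the model declares when an input falls
# outside what it was trained on, instead of guessing.
class BoundedModel:
    def fit(self, xs, ys):
        self.lo, self.hi = min(xs), max(xs)       # remember the trained range
        self.mean_y = sum(ys) / len(ys)           # toy predictor: mean label
        return self

    def predict(self, x):
        if not (self.lo <= x <= self.hi):
            # Declare the limit rather than fabricate an answer.
            return None, f"input {x} is outside trained range [{self.lo}, {self.hi}]"
        return self.mean_y, "within knowledge limits"

bounded = BoundedModel().fit([10, 20, 30], [1.0, 2.0, 3.0])
print(bounded.predict(25))  # answered: input is within the trained range
print(bounded.predict(99))  # declined: outside knowledge limits
```

Returning an explicit refusal, instead of a silently unreliable number, is what lets users trust the answers the model does give.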

Solving the black box dilemma by replacing it with a white box will prove highly valuable. This transparency will help change people’s perception of technology & Artificial Intelligence by bridging the trust gap. The black box’s contribution has indeed been paramount, but the white box will pave the path to a more reliable, accepted, and transparent future. The next big thing in Industry 4.0 is not only empowering people to take rational actions and make corrective decisions based on the explanations XAI supplies, but also the teaming up of humans and machines.