
Explainable AI: Looking Inside the Black Box

Many skeptics still reject AI-based technologies on the grounds of the "black-box" problem: the lack of transparency into how AI arrives at a particular decision. However, the black-box problem is just a convenient scapegoat. Many business applications of AI use simple models that are fully interpretable. It's true that some AI systems do use highly complex models, and interpreting them can be challenging. However, there is nothing inherently uninterpretable about any of the commonly accepted black-box models.

Although black-box models have a large number of model parameters (as opposed to a simple linear model, which has only a handful), each and every one of those parameters is accessible and can be examined. Moreover, these parameters can be collected, tabulated, and analyzed just like any other data set. It is even possible to trace every single input and watch how the model and its parameters transform it into the output.
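As a rough illustration of that point, here is a minimal sketch (assuming PyTorch; the tiny multilayer network and its layer sizes are purely hypothetical) that enumerates and summarizes every parameter of a "black-box" network just like any other dataset:

```python
# Minimal sketch: enumerate and tabulate every parameter of a small,
# hypothetical "black-box" network just like any other dataset.
# Assumes PyTorch is installed; the architecture is only illustrative.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

total = 0
for name, param in model.named_parameters():
    values = param.detach().flatten()
    total += values.numel()
    # Summarize each parameter tensor as one row of an ordinary table.
    print(f"{name:15s} count={values.numel():5d} "
          f"mean={values.mean().item():+.4f} std={values.std().item():+.4f}")

print("total parameters:", total)
```

From there, the resulting parameter table can be analyzed with the same tools used for any other data set.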

So in reality, black-box models are merely hard to interpret. While it may take some (or, in some cases, a lot of) work to make sense of what a model is doing, it is definitely feasible. The black-box problem is, at its core, just a complexity problem.


Interpretability vs. Explainability: There Is a Difference within the Explainable AI Landscape

Simple models (e.g., linear regression, logistic regression, decision trees, additive models) are interpretable because we can directly examine the model parameters and infer how these models transform their inputs into outputs. Therefore, these models are self-explanatory and do not need to be further explained. In short, interpretability implies self-explainability.
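To make that self-explanation concrete, here is a minimal sketch (assuming scikit-learn; the breast-cancer dataset is just a convenient stand-in) in which a logistic regression explains itself through its own coefficients:

```python
# Minimal sketch: a simple, interpretable model explains itself through
# its coefficients. Assumes scikit-learn; the dataset is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, y)

# Each coefficient states directly how a (standardized) feature pushes
# the prediction toward one class or the other.
coefs = clf.named_steps["logisticregression"].coef_[0]
top5 = sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]
for feature, weight in top5:
    print(f"{feature:25s} {weight:+.3f}")
```

Reading off the largest coefficients is the entire "explanation"; no extra machinery is needed.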

In contrast, black-box models (e.g., deep neural networks, random forests, generative adversarial networks, gradient boosted machines) are not interpretable, meaning they are not self-explanatory. These models require explainability: they need to be explained retrospectively, after the model is trained. The degree of explainability tracks the complexity of the black box: more complex models are less explainable, meaning they are harder to explain and require more work to decipher.

The bottom line is, the boundaries between what is labeled interpretable and uninterpretable are fuzzy. In my first article, I mentioned that one of the necessary criteria for a model to be considered a black box is the number of model parameters it has. But we can’t just draw a line and say that models with more than a million parameters are uninterpretable, because models with 999,999 parameters don’t suddenly become interpretable. 

Therefore, model interpretability and model complexity are two closely linked continuums: as complexity increases, interpretability decreases. Black boxes are really more like gray boxes with different shades of gray. The important point is that, with enough work, all black boxes (even those with stochasticity and randomness) are explainable!

When Are Interpretability and Explainability Required?

If you have business problems that require the learning capacity of a black box, you must meet two criteria in order to utilize the black box effectively. 

The first is training data volume, because black-box models require huge amounts of training data. There are no hard-and-fast rules for how much data is needed to train a black-box model, because the required volume is both data-dependent and problem-specific. However, a general rule of thumb is that the number of data samples should exceed the number of model parameters. Remember, black-box models have a huge number of parameters that can easily run into the millions or billions.
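As a hedged, purely illustrative sanity check of that rule of thumb (the helper function and the numbers below are made up for the example, and the threshold is a heuristic, not a hard rule):

```python
# Illustrative check of the rule of thumb above: the training set should
# contain more samples than the model has parameters. A heuristic only.
def enough_training_data(n_samples: int, n_parameters: int) -> bool:
    """Rough feasibility check, not a guarantee of good performance."""
    return n_samples > n_parameters

print(enough_training_data(n_samples=50_000, n_parameters=1_200_000))     # False
print(enough_training_data(n_samples=5_000_000, n_parameters=1_200_000))  # True
```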


The second criterion to consider is computing resources. Due to their large number of parameters, black-box models also take a long time to train. In practice, distributed computing resources (often with GPUs) are required to make black boxes feasible to use.

If you can meet these two criteria, then you should leverage the power and benefits that come with black boxes. One should never shy away from using black-box models just because they are "uninterpretable". If a magic black box is able to tell you which stock to buy and its recommendations consistently beat the market, it shouldn't matter whether it's interpretable or not. In many situations, it is only when the model's performance is inconsistent (e.g., when it fails or behaves unexpectedly) that businesses become interested in interpreting the model to understand what it's doing. Don't let uninterpretability stand in the way of making good business decisions.

With that said, however, some industries and business problems require explainability for compliance and regulatory obligations. For example, the lending industry has strict requirements for explanations when a loan applicant is denied a loan. Telling the applicant that they didn’t score high enough by some mystery black box is not sufficient, and it’s a good way to get your business operation shut down. This is when we need to explain the uninterpretable.

Explaining the Uninterpretable

Although humans are superior at many cognitive tasks, including critical thinking, creativity, empathy, and subjectivity, we are not great at handling complexity. Psychologists have found that humans can only keep track of about 7±2 items in working memory, whereas machines can keep track of millions or billions of items (limited only by the size of their RAM) with little performance degradation. Since the black-box problem is merely a complexity problem, we can use machine-aided analysis, or other machine learning (ML) algorithms, to explain the black boxes.

Many black-box models have been well-established statistical tools for decades, so their interpretability and explainability are not new problems. Today, we've merely scaled these models up, thanks to the availability of data and the computing resources to train them, making them even more complex. This has created a need for explainable AI (XAI).


Although XAI is an active area of research today, the XAI community has been around for as long as the black boxes themselves; the neural network (NN) was invented back in the 1950s. In fact, I was part of this XAI community, and I developed a technique that explains NNs trained to mimic the visual processing in our brains. Today, there exists a myriad of methods, developed in specialized domains, to explain different types of black boxes. There are even open-source tools for domain-agnostic and model-agnostic black-box explanation (e.g., LIME and Shapley values). Both are popular XAI techniques, and my teams are currently using them in our R&D work.
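As a hedged sketch of what such post hoc explanation looks like in practice (assuming the open-source shap package and scikit-learn; the random forest and the diabetes dataset are stand-ins for "a black box that needs explaining"):

```python
# Hedged sketch: explain a black-box model post hoc with Shapley values
# via the open-source shap package. TreeExplainer is the tree-specific
# variant; KernelExplainer and LIME are the fully model-agnostic options.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(black_box)
shap_values = explainer.shap_values(X.iloc[:100])

# Per-feature attribution for the first prediction: how much each input
# pushed the output up or down relative to the model's average output.
for feature, value in zip(X.columns, shap_values[0]):
    print(f"{feature:6s} {value:+7.2f}")
```

LIME follows the same pattern: fit the black box first, then ask a separate explainer to attribute individual predictions to the input features.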

Glass Box ML Frameworks

The XAI methods above are all post hoc analyses. They are extra steps performed after the model is trained (i.e., when all the model parameters have been determined). When XAI methods are applied to black-box models, the black boxes are turned into glass boxes: models that have greater transparency and interpretability. However, since these methods are applied post hoc, often through another ML model, the explanations are approximations at best.

To obtain more precise explanations of the black boxes more directly, several glass-box ML approaches have been developed recently. From the theory of ensemble learning, it has been shown that simple models can be combined in specific ways to produce arbitrarily complex models that fit any data. Glass-box ML leverages this principle to construct complex models by ensembling a series of simple, interpretable models. We are currently using the glass-box approach to develop a multi-stage model for demand forecasting.
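To make the ensembling principle concrete, here is a hedged, toy-scale sketch (assuming scikit-learn and NumPy; it illustrates the idea only and is not our demand-forecasting framework) that builds a flexible model by adding up many one-split decision "stumps", each of which is trivially readable:

```python
# Toy glass-box sketch: fit a flexible model as a sum of simple,
# individually interpretable pieces (one-split decision stumps fit
# stagewise to the residuals). Assumes scikit-learn and NumPy.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

stumps, learning_rate = [], 0.1
prediction = np.zeros_like(y)
for _ in range(200):
    stump = DecisionTreeRegressor(max_depth=1)      # each piece is one readable split
    stump.fit(X, y - prediction)                    # fit the current residual
    prediction += learning_rate * stump.predict(X)  # additive update
    stumps.append(stump)

print("training MSE:", np.mean((y - prediction) ** 2))
```

The ensemble as a whole is flexible, yet every stage remains a single, human-readable split that can be inspected on its own.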

Ultimately, while black-box models are difficult for the human brain to interpret, they are all explainable with the help of analytics and algorithms. The growing number of methods and ML frameworks developed by the XAI community is allowing us to look inside the black boxes, turning them into glass boxes. Today, XAI is proving that the infamous "black box problem" is really not a problem after all. Business leaders who continue to scapegoat AI for its black-box nature are essentially forgoing an efficient and reliable way to optimize their business decisions over something that is only a problem of the past.

