A Deep Dive into AI Interpretability

Artificial intelligence (AI), and generative AI in particular, is increasingly being adopted across business lines. Generative AI’s ability to produce content is on track to transform the way companies conduct daily operations. As a result, one topic executives cannot ignore is AI interpretability, also known as explainability.

Interpretable Machine Learning Models

The loud calls for “interpretable” machine learning (ML) models from parts of the public and the scientific community are driven by higher-level motivations. Given the increasing ubiquity of ML-based technology, there is genuine and valid concern about model fairness and trustworthiness. Consider the use of ML algorithms to produce individual credit scores, and it becomes clear how important it is that model developers get this right.

An instinctive reaction is to jump to the conclusion that simple, parsimonious statistical models are more “interpretable” than very large deep neural network models. A simple logistic regression model may involve five inputs, each multiplied by a weight, summed together with a bias term, and then transformed using a simple activation function. In contrast, a modern deep neural network may involve millions or billions of weights applied to raw inputs such as pixels (for images) or audio samples (for speech processing). But just because a model is simpler, does that make it fairer or more trustworthy?
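
As a concrete (and entirely illustrative) sketch, the full computation of such a five-input logistic regression model fits in a few lines of Python; the feature values, weights, and bias below are made up for illustration:

```python
import numpy as np

# Five engineered inputs for one applicant (illustrative values only)
x = np.array([0.7, 1.2, -0.3, 0.5, 2.1])

# Learned weights and bias (made up for illustration)
w = np.array([1.4, -0.8, 0.6, 0.9, -1.1])
b = -0.2

def sigmoid(z):
    """Logistic activation: squashes the weighted sum into a (0, 1) probability."""
    return 1.0 / (1.0 + np.exp(-z))

# The entire model: weighted sum plus bias, passed through the activation function
probability = sigmoid(np.dot(w, x) + b)
print(f"Predicted probability: {probability:.3f}")
```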

And, is the simpler model interpretable at all?

Simple vs. Complex Models

Let’s take the latter question: “Is the simpler model interpretable at all?”

There may be some higher-level motivations related to understanding the complete set of operations performed by a model. Perhaps this is related to the legal auditing of models. On first inspection of the five-input logistic regression model introduced above, the operations of the model are quite straightforward. However, one must ask, “Where did these inputs come from?”

It is likely that the model developer started with a large number of potential inputs and performed feature engineering (which can be quite complex) to produce features that are strongly related to the phenomenon they are trying to classify (e.g., whether or not to issue a loan to a bank customer). Model developers often apply dimensionality reduction techniques (e.g., Principal Component Analysis) so that they have fewer inputs to their model. Such transformations effectively remove interpretable characteristics of the features being used. In addition, the model developer may have applied sophisticated feature selection techniques such as Lasso or Elastic Net to further reduce the number of highly engineered input features. The result may be a simple five-input logistic regression model for deciding whether or not to issue a loan. However, one could easily argue that this model is not “interpretable” or “transparent” at all.
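
A minimal sketch of this kind of pipeline is shown below using scikit-learn; the synthetic data, component count, and regularization strength are assumptions chosen purely for illustration:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a table of engineered candidate features and loan outcomes
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))                  # 200 candidate features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic "issue loan?" label

pipeline = Pipeline([
    ("scale", StandardScaler()),                   # standardize the engineered features
    ("pca", PCA(n_components=20)),                 # inputs become abstract components, not raw features
    ("model", LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),  # Lasso-style sparsity
])
pipeline.fit(X, y)

# The final model may keep only a handful of non-zero weights, but each surviving
# input is a principal component that blends many of the original features.
n_active = np.count_nonzero(pipeline.named_steps["model"].coef_)
print(f"Non-zero weights in the final logistic regression: {n_active}")
```

Even if only a few weights survive, each of those inputs is a blend of many original variables, which is exactly why the resulting “simple” model can still be hard to interpret.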

The Need for Accuracy and Fairness

Let’s consider the call for trustworthy or fair models. If you have a complex problem of mapping inputs to outputs (e.g., mapping spoken audio to text), you simply cannot solve it to any reasonable degree of accuracy using a simple, parsimonious model. Speech scientists and signal processing researchers spent decades attempting this, and even before the resurgence and acceleration of deep neural network modeling, those researchers were already using non-transparent modeling approaches (e.g., combining Gaussian Mixture Models with Hidden Markov Models). Moreover, until recently, speech recognition models exhibited strikingly high levels of bias and unfairness toward female speakers.

Do people want models to be somehow inspectable at the cost of huge reductions in accuracy and fairness? Within the new paradigm of ML using neural networks, there are new techniques for identifying and mitigating unfairness. Such techniques are not easily applicable to traditional statistical modeling approaches.
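
As one hedged illustration of what a neural-network-centric mitigation technique can look like, the sketch below adds a differentiable demographic-parity penalty to an ordinary training loss. The architecture, penalty choice, and synthetic batch are assumptions made for illustration, not the specific techniques discussed above:

```python
import torch
import torch.nn as nn

# A small classifier standing in for a real decision model (architecture is illustrative)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def training_step(x, y, group, fairness_weight=1.0):
    """One gradient step on the task loss plus a differentiable fairness penalty."""
    logits = model(x).squeeze(-1)
    task_loss = bce(logits, y)

    # Penalize the gap in mean predicted approval probability between the two groups
    probs = torch.sigmoid(logits)
    gap = (probs[group == 1].mean() - probs[group == 0].mean()).abs()

    loss = task_loss + fairness_weight * gap
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Synthetic batch: features, labels, and a binary protected attribute (all made up)
x = torch.randn(128, 20)
y = torch.randint(0, 2, (128,)).float()
group = torch.randint(0, 2, (128,))
print(f"Loss after one step: {training_step(x, y, group):.3f}")
```

Because the penalty is differentiable, it folds directly into gradient-based training, a move with no obvious counterpart in a closed-form statistical fit.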

Bridging the Scientific Knowledge Gap

At the same time, it is true that computer scientists do not fully understand why generative large language models (LLMs), for instance, work so well simply by stacking many self-attention-based Transformer layers on top of one another and training them to predict the next word in a text sequence. It is concerning to have a technology so powerful, yet so poorly understood, being used for an ever-increasing number of applications. There is, however, a rich vein of scientific research currently being carried out on the structure and behavior of deep neural networks that aims to bridge this knowledge gap.
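
To make that recipe concrete, the toy sketch below stacks a few self-attention layers and trains them with a next-token prediction loss; the vocabulary size, dimensions, and random data are placeholders, and real LLMs differ in scale and in many architectural details:

```python
import torch
import torch.nn as nn

vocab_size, d_model, n_layers, seq_len = 1000, 128, 4, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=n_layers)  # stacked self-attention layers
        self.head = nn.Linear(d_model, vocab_size)                       # scores for the next token

    def forward(self, tokens):
        # Causal mask so each position can only attend to earlier tokens
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.blocks(self.embed(tokens), mask=mask)
        return self.head(hidden)

model = TinyLM()
tokens = torch.randint(0, vocab_size, (8, seq_len))   # a random batch standing in for text
logits = model(tokens[:, :-1])                        # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
print(f"Next-token cross-entropy: {loss.item():.3f}")
```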

In summary, the increasing presence of ML-based technology is raising genuine concerns. It is essential that model developers construct algorithms and testing protocols that ensure models are developed in a trustworthy and demographically fair manner, and that they are not used for unethical purposes. However, when one digs a little beneath the surface, the contention that “the more interpretable the models are, the easier it is for someone to comprehend and trust the model” is not as obviously correct as it may first appear.
