TruEra Open Sources TruLens, Neural Network Explainability for ML Models

  • TruLens provides explainability for image recognition, natural language processing, and other deep learning models

TruEra, which provides the first suite of AI Quality solutions, announced the availability of TruLens, an open-source explainability software tool for machine learning models based on neural networks. TruLens is the only library for deep neural networks that provides a uniform API for explaining TensorFlow, PyTorch, and Keras models. The software is freely available for download and comes with documentation and a developer community to further its development and use.

TruLens: a powerful explainability solution for neural networks

TruLens is a cross-framework library for deep learning explainability. It provides a uniform abstraction layer over a number of different model frameworks, including TensorFlow, PyTorch, and Keras.
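
As a rough illustration of that abstraction layer, here is a minimal sketch following the usage pattern in the TruLens documentation at launch: a framework-specific model is wrapped once, and the same attribution code then runs regardless of the backend. The module and class names come from the launch-era trulens.nn namespace and should be verified against the current docs; my_keras_model and x_batch are placeholders.

```python
# Minimal sketch of TruLens's cross-framework usage pattern.
# (Names follow the launch-era docs; verify against current documentation.)
from trulens.nn.models import get_model_wrapper
from trulens.nn.attribution import IntegratedGradients

# Wrap a Keras, TensorFlow, or PyTorch model behind one uniform interface.
wrapped = get_model_wrapper(my_keras_model)  # my_keras_model: placeholder model

# The same attribution call works no matter which framework `wrapped` hides.
ig = IntegratedGradients(wrapped)
attributions = ig.attributions(x_batch)      # x_batch: placeholder input batch
```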

The library provides a coherent, consistent approach to explaining deep neural networks, drawing on published research. It natively supports internal explanations that surface important concepts learned by network units, e.g. showing which visual concepts within an image a facial recognition model uses to identify people, or which ones a radiology diagnostic model uses to identify medical conditions.
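
That internal-explanation workflow might look like the hypothetical sketch below, which computes attributions at an internal layer of the network rather than at its input. The class names (InternalInfluence, Cut, OutputCut, MaxClassQoI, PointDoi) follow the launch-era TruLens documentation and are assumptions to check against the current API; the layer name 'block4_conv3' is a placeholder.

```python
# Hypothetical sketch: influence of an internal layer's units on the
# predicted class (names per the launch-era TruLens docs; unverified).
from trulens.nn.attribution import InternalInfluence
from trulens.nn.slices import Cut, OutputCut
from trulens.nn.quantities import MaxClassQoI
from trulens.nn.distributions import PointDoi

infl = InternalInfluence(
    wrapped,                             # model wrapper from get_model_wrapper
    (Cut('block4_conv3'), OutputCut()),  # placeholder internal layer -> model output
    MaxClassQoI(),                       # quantity of interest: the top class score
    PointDoi(),                          # distribution of interest: the input point itself
)
unit_attributions = infl.attributions(x_batch)  # per-unit importance scores
```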

The library draws on a series of published academic papers. A key set of ideas stems from the paper Influence-Directed Explanations for Deep Convolutional Networks, authored by the creators of the library at Carnegie Mellon University. The library also supports other popular explainability techniques created by the research community, including Saliency Maps, Integrated Gradients, and SmoothGrad, which are widely used in computer vision and natural language processing use cases.
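
For reference, Integrated Gradients is compact enough to express directly in a deep learning framework. The sketch below implements the published definition (a Riemann-sum approximation of the path integral of gradients from a baseline to the input) in plain PyTorch; it is an independent illustration of the technique, not the TruLens API, and model and target are placeholders.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Integrated Gradients: attribute the target class score to input
    features by integrating gradients along the straight-line path from
    a baseline to the input, approximated with a Riemann sum."""
    if baseline is None:
        baseline = torch.zeros_like(x)         # common default: all-zeros baseline
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps + 1)[1:]:
        # Interpolated point on the path from baseline to input.
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[:, target].sum()  # assumes [batch, classes] logits
        grad, = torch.autograd.grad(score, point)
        total_grads += grad
    # Completeness axiom: attributions sum to roughly F(x) - F(baseline).
    return (x - baseline) * total_grads / steps
```

For an image classifier, the result has the same shape as the input, giving a per-pixel attribution map for the chosen class.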

TruLens has already been used across a wide range of real-world applications to explain deep learning models. Use cases for neural network models include:

  • Computer vision: identifying a particular person, animal, or object in a series of photos; categorizing types of damage for insurance claims; or reviewing medical images
  • Natural language processing: detecting malicious speech, analyzing social media posts, powering predictive text, or driving smart assistants
  • Forecasting: combining multiple inputs, including text and numerical data, to forecast future events, such as financial outcome probabilities
  • Personalized recommendations: using past behavior to predict a user’s interest in other products

Explainability drives AI quality in the lab and in real-world use

TruLens explains precisely how these models arrive at their outputs, which allows developers to better understand and refine their models during development, as well as to assess ongoing performance and fix models once they are deployed in the real world.

“TruLens reflects more than eight years of explainability research that this team has developed, both at Carnegie Mellon University and at TruEra,” said Anupam Datta, co-founder, President, and Chief Scientist, TruEra. “This means that it starts as a robust, targeted solution with a strong lineage. There is also a team of deeply knowledgeable people standing by to help developers as they explore the use of TruLens. We look forward to building an active developer community around TruLens.”

“Explainability for neural-network-based models is at the forefront of AI quality initiatives for machine learning,” said Matt Fredrikson, Associate Professor, Carnegie Mellon University. “While I expect intense academic interest in TruLens, there are also forward-thinking companies that already have business use cases involving machine learning for image recognition, text analytics, and so on. TruLens would be a strong addition for data science teams that want to ensure the effectiveness and transparency of their models, not only in the development phase but also in production.”

“Image recognition and text recognition machine learning models are both in high demand and met with considerable consumer wariness, due to highly publicized stories about errors and possible misuse,” said Shayak Sen, co-founder and CTO, TruEra. “The recent European Commission regulations specifically listed cautions around machine learning models and how they deal with personal data or images. So there is a huge need for explainability for these types of models, to ensure that they are effective, but also compliant and easily explained to a concerned public. We feel strongly about the ethical use of AI, and wanted to make TruLens freely available to the world to help ensure responsible adoption of AI for uses like image recognition.”
