
Arthur Releases the First NLP Model Monitoring Solution To Serve Soaring Enterprise Adoption

In its mission to enable companies to have greater visibility into their AI models, Arthur releases the first complete monitoring solution for NLP models

Arthur’s new NLP feature set includes performance monitoring for model inputs/outputs, bias detection, and token-level explainability features

The NLP feature set is model and platform-agnostic, allowing enterprises to integrate the solution into any AI stack

Natural language processing is quickly becoming one of the most widely adopted machine learning technologies in the enterprise, used in everything from customer support chatbots to automated document processing systems. But organizations often struggle to find the right tools to monitor these models—until now. Arthur, the machine learning model monitoring company, is releasing a suite of new tools and features for monitoring natural language processing models.



The Arthur platform now offers advanced performance monitoring for NLP models, including data drift tracking (one of the most pernicious threats to sustained model performance), bias detection, and prediction-level model explainability. Monitoring NLP models for data drift involves comparing the statistical similarity of new input documents to the documents used to train the model. The Arthur platform monitors NLP models for data drift and automatically alerts you when your input documents or output text drift beyond pre-configured thresholds.
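The statistical comparison described above can be sketched in a few lines. This is an illustrative toy, not Arthur's implementation (their drift metrics are not disclosed here): it assumes a simple token-frequency representation of each corpus and scores drift with KL divergence; the function names and the threshold value are made up for the example.

```python
from collections import Counter
import math


def token_distribution(docs):
    """Normalized token-frequency distribution over a corpus of strings."""
    counts = Counter(token for doc in docs for token in doc.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}


def kl_divergence(p, q, eps=1e-9):
    """KL(p || q), smoothing tokens that never appear in the reference."""
    return sum(pv * math.log(pv / q.get(tok, eps)) for tok, pv in p.items())


def check_drift(reference_docs, new_docs, threshold=0.5):
    """Flag drift when the live corpus diverges from the training corpus."""
    p = token_distribution(new_docs)
    q = token_distribution(reference_docs)
    return kl_divergence(p, q) > threshold
```

A production system would use richer document embeddings and calibrated thresholds, but the shape of the check is the same: drift detection reduces to comparing a reference distribution against the live one and alerting past a threshold.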

Arthur now also offers bias detection capabilities for NLP models, allowing data science teams to uncover differences in accuracy and other performance measures across different subgroups to identify, and fix, unfair model bias. The Arthur platform offers performance-bias analysis for natural language and tabular models, as well as the ability to partition by multiple attributes at a time, giving you more granular insight into potential biases.


The Arthur team has also released a new set of explainability tools for NLP models, providing token-level insights for language models. Organizations can now understand which specific words within a document contributed most to a given prediction, even for black-box models.
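One model-agnostic way to produce the token-level insights described above is leave-one-out occlusion: re-score the document with each token removed and measure how much the predicted class score drops. This is an illustrative technique, not necessarily Arthur's method; `predict_fn` stands in for any black-box model that maps a token list to class scores.

```python
def token_attributions(predict_fn, tokens, target_class):
    """Leave-one-out attribution for a black-box text classifier.

    Returns (token, score-drop) pairs: how much the target-class score
    falls when that token is occluded. Larger drops mean the token
    contributed more to the prediction.
    """
    base = predict_fn(tokens)[target_class]
    attributions = []
    for i, token in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        attributions.append((token, base - predict_fn(reduced)[target_class]))
    return attributions
```

Because the only requirement is the ability to call the model, this style of attribution works even when the model's internals are inaccessible, which is exactly the black-box case the announcement highlights.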

Arthur’s customers, including Fortune 100 companies like Humana and AI-driven startups like Expel and Truebill, are using the platform to ensure that they can catch and fix any issues with their production AI systems before they become billion-dollar problems. With Arthur’s new NLP monitoring capabilities, the platform can now offer advanced support for an entirely new class of models that are rapidly becoming fixtures in the enterprise.

