
Striveworks Partners with Neural Magic to Add Fast GPU-less Model Deployment Options in Chariot MLOps Platform

Striveworks and Neural Magic have announced a partnership. Striveworks, a pioneer in responsible MLOps for national security and other highly regulated domains, will integrate Neural Magic’s core offerings into the training and model services of its Chariot MLOps platform.

“This is a really exciting integration for Striveworks and our customers,” said Eric Korman, Striveworks’ Chief Science Officer. “Chariot’s flexible deployment means customers can access its ML capabilities where they need it, including in austere environments where GPUs are scarce. This partnership with Neural Magic offers those customers the ability to exclusively use CPUs to achieve necessary inference speeds. And cloud-deployed customers will see model serving costs decrease as well.”


Chariot is an end-to-end MLOps software platform: a factory floor where data scientists, analysts, subject matter experts (SMEs), and others can develop, train, deploy, monitor, retrain, and redeploy custom models and workflows across a variety of datasets and data sources. GPUs are typically necessary to deploy models with operationally relevant inference times; this integration will give Chariot users a no-code/low-code way to deploy those models to CPUs while achieving GPU-like speeds.


Combining Neural Magic’s SparseML libraries and DeepSparse Inference Runtime with Chariot will yield machine learning models that are smaller, equally accurate, and many times more performant than models served on expensive hardware-accelerated compute platforms.
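At its core, the sparsification that libraries like SparseML apply is weight pruning: zeroing out low-magnitude weights so the model is smaller and cheaper to execute on a CPU. The sketch below illustrates unstructured magnitude pruning in plain NumPy; it is an illustrative example of the general technique, not SparseML's actual API, and the function name is hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest |weight|
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# A pruned weight matrix has far fewer nonzeros to store and multiply,
# which is what lets a sparsity-aware CPU runtime approach GPU speeds.
w = np.array([[0.9, -0.05, 0.4],
              [0.01, -0.8, 0.02]])
sparse_w = magnitude_prune(w, sparsity=0.5)  # half the entries zeroed
```

In practice, pruning is done gradually during training or fine-tuning, with the remaining weights retrained so accuracy is preserved; a one-shot prune like this sketch is only the simplest form of the idea.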


“Our goal at Neural Magic is to allow people to run machine learning workloads wherever they want to, without the need for specialized hardware,” said Jay Marshall, VP of Business Development at Neural Magic. “Our partnership with Striveworks helps accelerate this goal by providing great performance at a lower cost on readily available CPUs on the Striveworks platform.”


