
Finch Computing Accelerates its Natural Language Processing Capabilities

Leverages Just-Released AWS Inf2 Accelerator Technology for Production Workloads of Deep Learning NLP Models

Finch Computing, a developer of natural language processing technology used throughout the federal government and by financial firms, content aggregators and other private-sector organizations, announced at the AWS re:Invent Conference that it is using the just-released AWS Inf2 instances to power production workloads of its deep learning NLP models.


Finch Computing creates software that reads and understands text like a human. Its product portfolio includes the real-time NLP solution Finch for Text®, the dashboard solution Finch Analyst®, and a suite of data-as-a-service products under the FinchDaaS umbrella. The company is a pioneer in state-of-the-art NLP and entity-driven intelligence capabilities such as text summarization and entity relationship discovery.

“Each of these capabilities requires using large deep learning models, and performing them at scale on real-time, global data feeds requires a massive amount of computing power,” Finch Computing President Scott Lightner said.

“Using traditional CPU and GPU computing for these tasks quickly becomes cost-prohibitive,” Finch Computing Chief Architect Franz Weckesser added. “With Inf1 instances on production NLP workloads, we were able to achieve 80% cost savings over GPUs. AWS’s new Inf2 platform provides greater performance and allows us to move more deep learning models to the platform faster. We already have entity relationship extraction running on Inf2, and entity coreference resolution and text summarization are next.”
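
The article does not describe Finch's deployment in detail, but as a rough, hypothetical sketch of the kind of workflow the quote refers to, a PyTorch NLP model can be compiled for Inf2 instances with the AWS Neuron SDK's torch-neuronx tracer. The model name, input shapes, and file names below are illustrative placeholders, not Finch's actual pipeline.

```python
import torch
import torch_neuronx
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical off-the-shelf NER model; Finch's production models are proprietary.
MODEL_NAME = "dslim/bert-base-NER"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, torchscript=True)
model.eval()

# Fixed-shape example input: Neuron compiles for static tensor shapes.
text = "Finch Computing announced Inf2 support at AWS re:Invent."
enc = tokenizer(text, return_tensors="pt", padding="max_length", max_length=128)
example = (enc["input_ids"], enc["attention_mask"])

# Compile the model for Inferentia2 NeuronCores; the result is a TorchScript
# module that runs on an inf2 instance.
neuron_model = torch_neuronx.trace(model, example)
neuron_model.save("ner_neuron.pt")

# Inference: load the compiled artifact and run it like any TorchScript model.
loaded = torch.jit.load("ner_neuron.pt")
logits = loaded(*example)[0]
predicted_entity_ids = logits.argmax(dim=-1)
```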

