Artificial Intelligence | News | Insights | AiThority

OmniML Together with Intel Unlocks True Potential of Hardware-Efficient AI

OmniML, in collaboration with Intel, has delivered hardware-efficient AI on the latest 4th Gen Intel Xeon processor family, drastically accelerating language model performance

OmniML, an enterprise artificial intelligence (AI) software company, announced a new strategic partnership with Intel to accelerate the development and deployment of AI applications for enterprises of all sizes. The two companies will collaborate on community and customer growth opportunities via the Intel Disruptor Initiative to provide greater access to OmniML’s pioneering software platform.


OmniML’s software platform: Unlocking the true potential of AI on Intel hardware

AI has become ingrained in many people’s lives, from helping us drive more safely to automating mundane tasks and providing better security. However, getting responsible, accurate, and efficient AI applications to work in production is still a major challenge for most organizations. One major reason is the growing gap between machine learning (ML) model training and ML model inferencing, which makes it difficult to design models that fully utilize the available resources on inference hardware.

To get all the components running smoothly, the ML model design and the underlying hardware need to work in sync to deliver superior performance. OmniML and Intel have teamed up to bridge the gap between model training and inferencing by incorporating hardware-efficient AI development from the outset.


To kick off this collaboration, OmniML demonstrated superior performance for one of the most popular language models on Intel platforms. Using its Omnimizer platform on 4th Gen Intel Xeon Scalable processors, with integrated acceleration via Intel Advanced Matrix Extensions (Intel AMX) technology, OmniML achieved more than a 10x speedup in words processed per second over a multilingual DistilBERT baseline.
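The article does not describe how the words-per-second figure was measured. As a rough illustration only, the sketch below shows how such a throughput metric is typically computed: time repeated inference passes over a batch of sentences and divide the total word count by elapsed time. The `words_per_second` helper and the stand-in model function are hypothetical; a real benchmark would substitute an actual DistilBERT inference call (e.g., running in bfloat16 on AMX-capable hardware).

```python
import time

def words_per_second(infer, sentences, warmup=2, iters=10):
    """Measure throughput of a text-processing callable in words/second.

    `infer` stands in for a model forward pass (e.g., DistilBERT inference);
    warm-up iterations are excluded from timing to stabilize the measurement.
    """
    for _ in range(warmup):
        infer(sentences)
    total_words = sum(len(s.split()) for s in sentences) * iters
    start = time.perf_counter()
    for _ in range(iters):
        infer(sentences)
    elapsed = time.perf_counter() - start
    return total_words / elapsed

# Hypothetical stand-in for a model: any callable over a batch of sentences.
def baseline_infer(batch):
    return [s.lower() for s in batch]

sentences = [
    "Intel AMX accelerates matrix multiplication on Xeon processors",
    "OmniML optimizes models for target inference hardware",
]
throughput = words_per_second(baseline_infer, sentences)
```

Comparing the throughput of an optimized model against a baseline measured the same way yields the speedup factor cited above.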


“Intel is one of the most forward-looking semiconductor companies in the world. OmniML’s strengths lie in our deep understanding of ML model design, optimization, and hardware-aware deployment approach. By bringing together OmniML’s Omnimizer ML platform to work in sync with the latest Intel Xeon processor, we have achieved truly amazing performance results, starting with DistilBERT and expanding to larger language models shortly.” – Di Wu, OmniML Co-Founder and CEO.

“By collaborating with OmniML, we bring together their expertise in ML model design and optimization with Intel’s pioneering processor technology,” said Arijit Bandyopadhyay, CTO – Enterprise Analytics & AI, Head of Strategy – Enterprise & Cloud, Data Platforms Group at Intel Corporation. “Utilizing the AI features built into the new 4th Gen Intel Xeon Scalable processor, OmniML can offer amazing AI performances to help organizations deliver reliable and leading edge products. We are excited about this collaboration and how we can help more customers accelerate the adoption of AI technology.”


 [To share your insights with us, please write to sghosh@martechseries.com] 
