
Untether AI Increases Developer Velocity and Adds High-Performance Compute Flow to the imAIgine Software Development Kit

Open, flexible kernel library enables quick iterations of neural network functions;
High-performance compute flow allows development of non-neural network applications such as linear algebra, signal processing, and simulation acceleration

Untether AI, the leader in at-memory computation for artificial intelligence (AI) workloads, announced the availability of the imAIgine Software Development Kit (SDK) version 22.12. The imAIgine SDK provides an automated path to running neural networks on Untether AI’s runAI devices and tsunAImi accelerator cards, with push-button quantization, optimization, physical allocation, and multi-chip partitioning. This release dramatically improves the speed at which developers can create and deploy neural networks or high-performance compute workloads, saving months of development time.
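To make the "push-button quantization" step concrete, here is a minimal, illustrative sketch of symmetric per-tensor int8 quantization, the kind of transformation such an automated flow applies to trained weights. This is not the imAIgine SDK API; all names here are assumptions for illustration.

```python
# Illustrative sketch only (not imAIgine SDK code): symmetric per-tensor
# int8 post-training quantization of trained float weights.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 values plus one scale factor."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Per-element reconstruction error is bounded by half the scale step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

A production flow layers calibration, per-channel scales, and accuracy checks on top of this basic idea; the sketch shows only the core rounding-and-scaling step.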



Increasing Developer Velocity for Custom Neural Networks

“There has been an explosion of neural networks over the last several years,” said Arun Iyengar, CEO of Untether AI. “Keeping up with the support of these new, innovative networks requires an open, flexible tool flow, and with the 22.12 release of the imAIgine SDK we’ve made the necessary improvements to allow customers to quickly and easily add support without requiring Untether AI assistance.”

A key innovation with this release is the introduction of flexible kernels, which can automatically adapt to different input and output shapes of neural network layers. Additionally, Untether AI is providing its customers with the source code to the kernels to provide examples of code optimized for at-memory compute. Developers can modify these kernels and register them with the imAIgine compiler so that they can be selected by the compiler in the automatic lowering process. In this manner, customers are free to self-support their neural network development. The imAIgine SDK provides the low-level kernel compiler, code profiler, and cycle-accurate simulator to provide instant feedback to the developer on the performance of their custom kernels.
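The shape of this workflow can be sketched in miniature. The snippet below is a conceptual analogy only, not the imAIgine SDK's actual API: it shows the idea of a "flexible kernel" that adapts to whatever layer shape it receives, registered so that a compiler's lowering pass can select it by operator name. The registry and decorator names are assumptions.

```python
# Conceptual analogy only -- not the real imAIgine registration API.
# A flexible kernel handles any input shape; the registry lets a
# hypothetical lowering pass pick kernels by operator name.
import numpy as np

KERNEL_REGISTRY = {}  # hypothetical mapping: operator name -> kernel

def register_kernel(op_name):
    """Decorator that records a kernel so the compiler can select it."""
    def wrap(fn):
        KERNEL_REGISTRY[op_name] = fn
        return fn
    return wrap

@register_kernel("relu")
def relu_kernel(x: np.ndarray) -> np.ndarray:
    # Shape-agnostic: works for tensors of any rank or size.
    return np.maximum(x, 0)

def lower(op_name, *tensors):
    """Toy stand-in for the compiler's automatic lowering step."""
    return KERNEL_REGISTRY[op_name](*tensors)

out = lower("relu", np.array([[-1.0, 2.0], [3.0, -4.0]]))
```

In the real SDK, the registered kernel would be low-level at-memory compute code rather than NumPy, and the profiler and cycle-accurate simulator would report its performance; the structure of "write once, register, let the compiler select" is the point here.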


Introducing the High-Performance Compute Flow

“Customers are seeing the energy-centric benefits of Untether AI’s at-memory compute architecture in other, non-AI applications,” said Mr. Iyengar. “High-performance simulation, signal processing and linear algebra acceleration are a few of the applications that our customers are requesting.”

In response, the 22.12 release introduces a high-performance compute (HPC) design flow in the imAIgine SDK for runAI200 devices. The runAI200 devices have 511 memory banks, each with its own RISC processor and a two-dimensional array of 512 at-memory processing elements arranged as a single-instruction multiple-data (SIMD) architecture. With the HPC flow, customers can directly develop “bare metal” kernels for the RISC processors and processing elements in the runAI200 devices. Users can then manually place the kernels in any topology on the memory banks and use pre-defined code for bank-to-bank data transmission. The code profiler tool within the imAIgine SDK shows exactly how the code is running, identifying compute bottlenecks and data-transmission congestion, which can then be rectified by duplicating kernels and repositioning them within the runAI200 spatial architecture.
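The banked-SIMD execution model described above can be modeled in a few lines. The sketch below is not device code: it simulates data partitioned across "banks," with every bank executing the same operation on its local slice before a final bank-to-bank reduction. The bank and lane counts are deliberately small; real runAI200 devices have 511 banks of 512 processing elements each.

```python
# Hedged model (not runAI200 device code): data stays partitioned across
# banks; each bank applies the same instruction to its local slice (SIMD),
# and partial results are combined with a bank-to-bank reduction.
import numpy as np

N_BANKS = 8  # illustrative; real devices have 511 memory banks

def banked_dot(a: np.ndarray, b: np.ndarray) -> float:
    """Dot product computed as per-bank SIMD partials, then reduced."""
    chunk = len(a) // N_BANKS
    partials = []
    for bank in range(N_BANKS):              # each bank sees only its slice
        lo, hi = bank * chunk, (bank + 1) * chunk
        partials.append(np.dot(a[lo:hi], b[lo:hi]))  # same op in every bank
    return float(np.sum(partials))           # bank-to-bank reduction step

a = np.arange(32, dtype=np.float64)
b = np.ones(32)
assert banked_dot(a, b) == a.sum()  # 0+1+...+31 = 496
```

On real hardware the placement of slices onto banks and the routing of the reduction are exactly what the HPC flow's manual placement and profiler feedback help the developer tune.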

Reducing the Learning Curve

Whether using the neural network flow or the HPC flow, Untether AI provides online and downloadable documentation for all of the imAIgine SDK’s tools and procedures to create, quantize, compile, and run neural networks or low-level kernel code on the runAI200 devices. Untether AI also offers a live, instructor-led training program that includes tutorials and coding examples.


