Untether AI Unveils Its Second-Generation At-Memory Compute Architecture at HOT CHIPS 2022
speedAI architecture delivers 2 PetaFlops of performance at 30 TeraFlops/W of energy efficiency
Challenges of AI Inference Acceleration
AI is increasingly being deployed across a variety of markets, from financial technology, smart cities, and retail to natural language processing, autonomous vehicles, and scientific applications. The variety of neural network architectures has exploded, as has compute demand, driving up the energy consumed by AI workloads. These demanding applications also require increasing levels of accuracy to ensure safety and quality of results. This combination of flexibility, performance, energy efficiency, and accuracy necessitates a new approach to AI acceleration, which Untether AI delivers with its speedAI devices.
“The merits of at-memory compute have been proven with the first-generation runAI device, and the second-generation speedAI architecture enhances the energy efficiency, throughput, accuracy, and scalability of our offering,” said Arun Iyengar, CEO of Untether AI. “speedAI devices offer a combination of capabilities that is unmatched by any other inference offering in the marketplace.”
Energy efficiency drives performance
Because at-memory compute is significantly more energy efficient than traditional von Neumann architectures, more TFlops can be performed within a given power envelope. With the introduction of the runAI devices in 2020, Untether AI set a new energy efficiency mark of 8 TOPs/W for the INT8 datatype. The speedAI architecture dramatically improves upon that, delivering 30 TFlops/W. This energy efficiency is a product of the second-generation at-memory compute architecture, over 1,400 optimized RISC-V processors with custom instructions, energy-efficient dataflow, and the adoption of a new FP8 datatype, all of which help quadruple efficiency compared to the previous-generation runAI device. The first member of the family, the speedAI240 device, provides 2 PetaFlops of FP8 performance and 1 PetaFlop of BF16 performance. This translates into industry-leading performance and efficiency on neural networks like BERT-base, which speedAI240 can run at over 750 queries per second per watt (qps/W), 15x greater than the current state of the art from leading GPUs.
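As a rough back-of-the-envelope reading of those headline figures (our arithmetic, not a vendor specification), dividing peak FP8 throughput by the quoted efficiency gives an implied compute power, assuming both peaks are achieved simultaneously:

```python
# Figures quoted in the announcement; treating both as simultaneous
# peaks is our simplifying assumption, not vendor data.
peak_fp8_tflops = 2000.0        # 2 PetaFlops = 2000 TFlops (FP8)
efficiency_tflops_per_w = 30.0  # 30 TFlops/W

implied_watts = peak_fp8_tflops / efficiency_tflops_per_w
print(f"Implied power at FP8 peak: {implied_watts:.1f} W")  # ~66.7 W
```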
Second-generation memory bank: Designed for flexible, efficient AI acceleration
Each memory bank of the speedAI architecture has 512 processing elements with direct attachment to dedicated SRAM. These processing elements support INT4, FP8, INT8, and BF16 datatypes, along with zero-detect circuitry for energy conservation and support for 2:1 structured sparsity. The processing elements are arranged in 8 rows of 64; each row has its own dedicated row controller and hardwired reduce functionality, allowing flexibility in programming and efficient computation of transformer network functions such as Softmax and LayerNorm. The rows are managed by two RISC-V processors with over 20 custom instructions designed for inference acceleration. The flexibility of the memory bank allows it to adapt to a variety of neural network architectures, including convolutional, transformer, and recommendation networks as well as linear algebra models.
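The row-oriented layout can be pictured with a tiny indexing model (the function name and flat-index mapping are our illustration, not Untether AI's programming model):

```python
# Illustrative model of the described bank layout: 512 processing
# elements arranged as 8 rows x 64 PEs, each row with its own
# row controller. The mapping below is hypothetical.
ROWS, PES_PER_ROW = 8, 64

def pe_coords(pe_index: int) -> tuple[int, int]:
    """Map a flat PE index (0..511) to (row, position-in-row)."""
    assert 0 <= pe_index < ROWS * PES_PER_ROW
    return divmod(pe_index, PES_PER_ROW)

# PE 0 is row 0, slot 0; PE 511 is the last slot of the last row.
print(pe_coords(0), pe_coords(130), pe_coords(511))
```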
FP8: The new datatype for accurate inference acceleration
In the search for energy efficiency, Untether AI's research determined that two different FP8 formats provided the best mix of precision, range, and efficiency. A 4-mantissa version (FP8p for “precision”) and a 3-mantissa version (FP8r for “range”) provided the best accuracy and throughput for inference across a variety of different networks. For both convolutional networks like ResNet-50 and transformer networks like BERT-Base, Untether AI's implementation of FP8 results in less than one tenth of one percent of accuracy loss compared to the BF16 datatype, with a fourfold increase in throughput and energy efficiency.
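The precision-versus-range trade-off between the two formats can be illustrated with a simplified rounding sketch. This is our own illustration, not Untether AI's implementation: it ignores exponent-range clamping, subnormals, and special values, and only the mantissa widths come from the announcement. Note also that per-element rounding error is not the same thing as end-to-end network accuracy loss.

```python
import math

def quantize(x: float, man_bits: int) -> float:
    """Round x to 1 + man_bits significant binary digits (nearest value).
    Simplified: no exponent clamping, subnormals, infinities, or NaNs."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))  # unbiased exponent of x
    scale = 2.0 ** (e - man_bits)      # weight of the last mantissa bit
    return math.copysign(round(abs(x) / scale) * scale, x)

# More mantissa bits -> finer steps between representable values.
# FP8p keeps 4 mantissa bits; FP8r keeps 3 (freeing a bit for range).
print(quantize(0.3, 4))  # 0.296875  (relative error ~1.0%)
print(quantize(0.3, 3))  # 0.3125    (relative error ~4.2%)
```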
Scalability for large language models
The speedAI240 device is designed to scale to large models. The memory architecture is multi-level, with 238MB of SRAM dedicated to the processing elements offering 1 petabyte/s of memory bandwidth, four 1MB scratchpads, and two 64-bit-wide ports of LPDDR5 providing up to 32GB of external DRAM. Host and chip-to-chip connectivity is provided by high-speed PCI-Express Gen5 interfaces.
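One way to read the on-chip bandwidth figure (a ratio we derive ourselves, not a vendor specification) is as bytes of SRAM bandwidth available per peak FP8 operation:

```python
# Both figures are from the announcement; the ratio is our derivation.
sram_bandwidth_gb_per_s = 1_000_000.0  # 1 petabyte/s, expressed in GB/s
peak_fp8_gflops = 2_000_000.0          # 2 PetaFlops, expressed in GFlops

bytes_per_flop = sram_bandwidth_gb_per_s / peak_fp8_gflops
print(f"{bytes_per_flop} bytes of SRAM bandwidth per FP8 op at peak")
```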
The imAIgine Software Development Kit Supports speedAI Family
The Untether AI imAIgine™ Software Development Kit (SDK) provides a path to running networks at high performance, with push-button quantization, optimization, physical allocation, and multi-chip partitioning. The imAIgine SDK also provides an extensive visualization toolkit, a cycle-accurate simulator, and an easily integrated runtime API, and is available now.