Habana Labs Announces The World’s Highest Performance AI Inference Processor
Emerging from stealth mode, Habana Labs introduces its production-ready Goya™ HL-1000 processor
Habana Labs, Ltd., today announced it is officially out of stealth mode and is sampling its first AI processor to select customers. A PCIe card based on its Goya HL-1000 processor delivers 15,000 images/second throughput on the ResNet-50 inference benchmark, with 1.3-millisecond latency, while consuming only 100 watts of power. Habana Labs’ AI processors offer one to three orders of magnitude better performance than solutions commonly deployed in data centers today.
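The quoted figures can be sanity-checked with simple arithmetic. By Little's law, throughput times latency gives the number of images in flight, and throughput divided by power gives energy efficiency. This is a back-of-the-envelope sketch using only the numbers in the announcement; it implies nothing about Goya's internal batching.

```python
# Back-of-the-envelope check of the quoted ResNet-50 numbers.
# Little's law: concurrency = throughput x latency.
throughput = 15_000    # images per second (quoted)
latency = 1.3e-3       # seconds (quoted)
power = 100            # watts (quoted)

in_flight = throughput * latency   # images concurrently in the pipeline
per_joule = throughput / power     # images processed per joule

print(round(in_flight, 1))  # -> 19.5 images in flight
print(per_joule)            # -> 150.0 images per joule
```

In other words, the quoted throughput and latency are consistent with roughly 20 images being processed concurrently, at about 150 images per joule of energy.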
Built to handle a wide range of AI inference workloads such as image recognition, neural machine translation, sentiment analysis, recommender systems and many other applications, Habana Labs’ Goya platform has been designed from the ground up for deep learning inference. It incorporates a fully programmable Tensor Processing Core (TPC™) along with development tools, libraries and a compiler that collectively deliver a comprehensive, high-performance and power-efficient platform.
“Habana Labs has assembled a top-notch team with the goal of transforming the way AI processing is done in the cloud, data centers and other emerging applications. This product milestone is especially outstanding considering the company was just founded in 2016,” said Eitan Medina, Chief Business Officer. “We continue to focus on building a successful, enduring AI processor company to serve the high performance, fast growing AI space over the long run.”
“In the thirty years that I have been involved with teams that delivered the most sophisticated VLSI devices, I have very rarely seen such a high level of execution. The Goya silicon that we received, less than a year from its conception, has been rigorously tested and is ready for production. This amazing achievement, coupled with the platforms that Habana Labs will deliver in the coming quarters, will enable our customers to lead the AI revolution,” said Avigdor Willenz, Habana Labs’ lead investor and Chairman of its Board of Directors.
Habana Labs’ SynapseAI™ software stack analyzes the trained model it receives as input and optimizes it for efficient inference on the Goya processor. The software includes a rich kernel library, and its toolchain is open, allowing customers to add proprietary kernels. It interfaces seamlessly with popular deep learning frameworks and formats such as TensorFlow and ONNX.
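The open-toolchain pattern described above can be sketched in a few lines: a stock kernel library that customers extend with their own proprietary kernels, which the compiler then dispatches to by op type. All names below are invented for illustration; Habana's actual SynapseAI interfaces are not public in this announcement.

```python
# Toy sketch of an extensible kernel library (illustrative only --
# not Habana's actual SynapseAI API).
KERNEL_LIBRARY = {}

def register_kernel(op_type):
    """Register an implementation for a given graph op type."""
    def wrap(fn):
        KERNEL_LIBRARY[op_type] = fn
        return fn
    return wrap

@register_kernel("Relu")      # a stock kernel shipped with the library
def relu(x):
    return [max(v, 0.0) for v in x]

@register_kernel("Scale2x")   # a customer-added proprietary kernel
def scale2x(x):
    return [2.0 * v for v in x]

# The compiler dispatches each node of the trained model's graph
# to the matching kernel:
print(KERNEL_LIBRARY["Relu"]([-1.0, 2.0]))    # -> [0.0, 2.0]
print(KERNEL_LIBRARY["Scale2x"]([1.0, 2.0]))  # -> [2.0, 4.0]
```

The point of the pattern is that stock and customer kernels share one dispatch table, so a proprietary operator plugs in without modifying the compiler itself.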
Habana Labs is showcasing a Goya inference processor card in a live server, running multiple neural-network topologies, at the AI Hardware Summit on September 18 – 19, 2018, in Mountain View, CA.
Habana Labs plans to sample its first Gaudi™ training processor in the second quarter of 2019. Gaudi has a 2 Tbps (terabits per second) interface per device, and its training performance scales linearly to thousands of processors.