New Linley Group Study Finds Edge Fueling Next Wave in AI Chip Market
Custom AI Architectures Challenge CPUs and GPUs
Deep-learning accelerator (DLA) chips, also known as artificial intelligence (AI) processors, continue to proliferate to meet rising demand. Adoption of deep-learning applications in data centers and automotive markets has been substantial, but the past year has seen more robust growth in edge devices and embedded (IoT) systems. With new entrants and products emerging at a rapid pace, the challenge is to separate the leaders from the laggards in a chip market that has now topped $4 billion. A new report from The Linley Group, “A Guide to Processors for Deep Learning,” provides clear guidance on this dynamic market with analysis of deep-learning accelerators for artificial intelligence, neural networks, and vision processing for inference and training.
AI acceleration is quickly spreading from the cloud to the edge, appearing in many deep-learning applications, particularly in client devices such as high-end smartphones, voice assistants, smart doorbells, and surveillance cameras. Adding AI engines to these products significantly increases their capabilities, bolsters privacy, and adds value by differentiating them from competitors. As the technology matures and demand increases, it will eventually find its way into lower-cost products. Over the past year, edge devices have emerged as the highest-volume application for AI-enhanced processors.
“Many new companies are starting to address these applications. Most use innovative architectures to improve performance and power efficiency, presenting viable alternatives to traditional CPUs and GPUs for AI,” said Linley Gwennap, principal analyst with The Linley Group. “Because no single processor is suited to all applications, some vendors are developing diverse sets of products to capture a greater share of the market. We’ve analyzed these various architectures and products to determine which will win over time.”
The comprehensive report features more than 40 different vendors of AI chips. It provides detailed technical coverage of announced deep-learning accelerator chips from AMD, Cerebras, Graphcore, Groq, Gyrfalcon, Horizon Robotics, Intel (including former Altera, Habana, Mobileye, Movidius, and Nervana technologies), Mythic, Nvidia (including Tegra and Tesla products), Wave Computing, and Xilinx. Other chapters cover Google’s TPU family of ASICs and Tesla’s autonomous-driving ASIC. It also includes shorter profiles of numerous other vendors developing AI chips of all sorts, including large companies such as Marvell and Toshiba, many startups including BrainChip, Hailo, and Syntiant, and cloud-service vendors such as Alibaba and Amazon.