Mobileye's Self-Driving Secret? 200PB of Data
Powerful computer vision tech and natural language models turn the industry's leading dataset into an AV training gold mine.
Mobileye is sitting on a virtual treasure trove of driving data: some 200 petabytes' worth. When combined with Mobileye's state-of-the-art computer vision technology and extremely capable natural language understanding (NLU) models, the dataset can deliver thousands of results within seconds, even for incidents that fall into the "long tail" of rare conditions and scenarios. This helps Mobileye's autonomous vehicles and computer vision system handle edge cases and thereby achieve the very high mean time between failure (MTBF) targeted for self-driving vehicles.
“Data and the infrastructure in place to harness it is the hidden complexity of autonomous driving. Mobileye has spent 25 years collecting and analyzing what we believe to be the industry’s leading database of real-world and simulated driving experience, setting Mobileye apart by enabling highly capable AV solutions that meet the high bar for mean time between failure.”
― Prof. Amnon Shashua, Mobileye president and chief executive officer
How It Works: Mobileye's database – believed to be the world's largest automotive dataset – comprises more than 200 petabytes of driving footage, equivalent to 16 million 1-minute driving clips from 25 years of real-world driving. Those 200 petabytes are split between Amazon Web Services (AWS) and on-premises systems, and the sheer size of the dataset makes Mobileye one of AWS's largest customers globally by volume stored.
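As a back-of-envelope check, the headline figures can be sanity-tested with simple arithmetic. This assumes decimal petabytes (10^15 bytes), which the article does not specify:

```python
# Rough arithmetic on the article's numbers: 200 PB across
# 16 million 1-minute clips. Illustrative only, not Mobileye internals.
PB = 10**15                        # assume decimal petabyte
total_bytes = 200 * PB
clips = 16_000_000                 # 1-minute clips

bytes_per_clip = total_bytes / clips
hours_total = clips / 60           # 16M minutes of footage, in hours

print(f"{bytes_per_clip / 1e9:.1f} GB per 1-minute clip")  # 12.5 GB
print(f"{hours_total:,.0f} hours of footage")              # 266,667 hours
```

Roughly 12.5 GB per minute of footage is far larger than a single compressed video stream, which is consistent with multi-camera, multi-sensor capture rather than plain video.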
Large-scale data labeling is at the heart of building the powerful computer vision engines needed for autonomous driving. Mobileye's rich and relevant dataset is annotated both automatically and manually by a team of more than 2,500 specialized annotators. The compute engine relies on 500,000 peak CPU cores in the AWS cloud to crunch 50 million datasets monthly – the equivalent of 100 petabytes processed every month, covering 500,000 hours of driving.
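The processing figures imply substantial sustained throughput. A quick sketch of what the quoted numbers work out to (again assuming decimal petabytes and a 30-day month; these are derived estimates, not published Mobileye metrics):

```python
# Derived estimates from the article's processing figures.
PB = 10**15
monthly_bytes = 100 * PB            # 100 PB processed per month
driving_hours = 500_000             # hours of driving covered per month
seconds_per_month = 30 * 24 * 3600  # assume a 30-day month

per_hour_gb = monthly_bytes / driving_hours / 1e9         # GB per driving hour
aggregate_gbps = monthly_bytes / seconds_per_month / 1e9  # sustained GB/s

print(f"{per_hour_gb:.0f} GB processed per hour of driving")   # 200 GB
print(f"{aggregate_gbps:.1f} GB/s sustained aggregate throughput")
```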
Why It Matters: Data is only valuable if you can make sense of it and put it to use. This requires deep comprehension of natural language along with state-of-the-art computer vision, Mobileye’s long-standing strength.
Every AV player faces the "long tail" problem, in which a self-driving vehicle encounters something it has not seen or experienced before. The long tail spans vast amounts of data, but many companies lack the tools to make sense of it effectively. Mobileye's state-of-the-art computer vision technology, combined with extremely capable NLU models, lets it query the dataset and return thousands of long-tail results within seconds. Mobileye then uses these results to train its computer vision system, making it even more capable and dramatically accelerating the development cycle.
What Is Included: Mobileye's team uses an in-house search engine database containing millions of images, video clips and scenarios. Queries range from "tractor covered in snow" to "traffic light in low sun," all collected by Mobileye and feeding its algorithms. (See sample images).
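As a rough illustration of this kind of free-text scenario retrieval, the toy sketch below ranks annotated clips against a text query using bag-of-words cosine similarity. This is a stand-in for the idea only, not Mobileye's system, which relies on far more capable NLU and computer vision models; all clip IDs and annotations here are invented:

```python
# Toy natural-language search over an annotated clip index.
# Hypothetical data; cosine similarity over bag-of-words vectors.
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical annotated clips: (clip ID, free-text annotation).
CLIP_INDEX = [
    ("clip_0001", "tractor covered in snow on rural road"),
    ("clip_0002", "traffic light in low sun at dusk"),
    ("clip_0003", "pedestrian crossing in heavy rain"),
    ("clip_0004", "snow plow blocking highway lane"),
]

def search(query: str, k: int = 2):
    """Return the top-k clip IDs ranked by annotation similarity."""
    q = vectorize(query)
    scored = [(cosine(q, vectorize(text)), cid) for cid, text in CLIP_INDEX]
    scored.sort(reverse=True)
    return [cid for score, cid in scored[:k] if score > 0]

print(search("tractor in snow"))  # ['clip_0001', 'clip_0004']
```

A production system would replace the bag-of-words vectors with learned text and image embeddings so that, for example, "snow-covered farm vehicle" still retrieves the tractor clip.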
More Context: With access to the industry's highest-quality data and the talent required to put it to use, Mobileye's driving policy can make sound, informed decisions deterministically. This approach removes the uncertainty of artificial intelligence-based decisions and yields a statistically high mean time between failure. At the same time, the dataset hastens the development cycle, bringing the lifesaving promise of AV technology to reality more quickly.