Autonomous vehicle simulation platform to integrate diverse and complex datasets for testing and validation in the virtual world
Autonomous driving is more than just replacing the human driver; it’s about creating an AI driver that’s far safer than a human.
According to RAND Corporation, to drive even 20 percent better than a human requires 11 billion miles of validation. That translates to more than 500 years of nonstop driving in the real world with a fleet of 100 cars — an impossible task.
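A quick back-of-the-envelope check shows how that figure translates into centuries of driving. The average speed below is an illustrative assumption, not a number from the article:

```python
# Sanity check of the validation-mileage arithmetic.
# Assumption (not stated in the article): the fleet averages ~25 mph
# around the clock, a plausible mixed urban/highway figure.
TOTAL_MILES = 11_000_000_000  # validation miles cited from RAND
FLEET_SIZE = 100              # cars driving nonstop
AVG_SPEED_MPH = 25            # assumed average speed

miles_per_car = TOTAL_MILES / FLEET_SIZE
hours_per_car = miles_per_car / AVG_SPEED_MPH
years_per_car = hours_per_car / (24 * 365)

print(round(years_per_car))  # just over 500 years of nonstop driving
```

Even generous assumptions about average speed leave the real-world fleet centuries short, which is the gap simulation is meant to close.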
Simulation presents a powerful solution to what has been an insurmountable obstacle. By tapping into the virtual world, developers can safely and accurately test and validate autonomous driving hardware and software.
“The continuous integration process of simulation allows our engineers to have tremendous productivity,” said NVIDIA founder and CEO Jensen Huang on stage at GTC Japan. “Simulation of artificial intelligence systems are vital to success.”
However, for simulation to be an effective tool for developing safe self-driving technology, it must faithfully represent the variety and unpredictability of the real world.
That’s why NVIDIA announced today it is opening up the DRIVE Constellation simulation platform to work with partners to integrate their world models, vehicle models and traffic scenarios.
By incorporating a variety of partners, DRIVE Constellation will become even more comprehensive, enabling diverse and complex testing elements.
Key Ingredients for Effective Simulation
A simulated test environment is more than just a virtual car on a virtual road. It requires model building as intensive as that done for films, and as detailed and accurate as the blueprints for the city roads and highways the car will eventually drive on.
And not only does this world need to look realistic, it must also obey the laws of physics. DRIVE Sim is designed to virtually test any potential environment and driving situation, with the ability to ingest world, vehicle and traffic scenario models.
- World SIM model:
Rome wasn’t built in a day, and neither are virtual cities. For example, to simulate testing in San Francisco, developers must start with a map. Then, that map is populated with buildings, trees and other landmarks — every inch of the city must be represented to create a testing environment just as rigorous as the real world. This virtual world must also simulate real-world conditions, from lighting to weather. A vehicle can drive from a clear sunny morning in Mountain View, Calif., through dense fog in San Francisco without ever leaving the simulation platform.
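The environment parameters described above — location, lighting, weather — can be pictured as a scenario configuration. The names below are purely illustrative, not part of any actual DRIVE Sim interface:

```python
from dataclasses import dataclass

# Illustrative sketch only -- these fields are hypothetical and are
# not drawn from the real DRIVE Sim API.
@dataclass
class WorldConfig:
    city: str             # which mapped environment to load
    time_of_day: float    # hours (0-24), drives sun position and lighting
    weather: str          # e.g. "clear", "rain", "dense_fog"
    visibility_m: float   # effective visibility distance in meters

# One run can carry a virtual vehicle across conditions without
# ever leaving the simulator:
route = [
    WorldConfig("Mountain View", time_of_day=9.0,
                weather="clear", visibility_m=10_000.0),
    WorldConfig("San Francisco", time_of_day=9.5,
                weather="dense_fog", visibility_m=150.0),
]
for leg in route:
    print(f"{leg.city}: {leg.weather}, visibility {leg.visibility_m} m")
```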
- Sensor and vehicle SIM models:
Unlike humans, self-driving cars rely on more than just visual information to see the world around them. Radar and lidar sensors complement camera sensors, feeding the vehicle the data it needs to make decisions. A comprehensive simulation platform should mimic these data feeds, testing how the algorithms react to various sensor inputs.
The virtual car must also be able to behave in simulation as it would in the real world. Actions like braking, accelerating onto a highway or driving over a bumpy road should exhibit the same vehicle dynamics as if they were actually happening to the vehicle.
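To see why matching real physics matters, consider even a simple maneuver like braking. The sketch below is textbook kinematics (stopping distance = v²/2a), not NVIDIA’s actual vehicle-dynamics model, and the deceleration figures are illustrative assumptions:

```python
def braking_distance_m(speed_mps: float, decel_mps2: float = 7.0) -> float:
    """Distance to stop from a given speed under constant deceleration.

    Uses the kinematic relation d = v^2 / (2a). The default 7 m/s^2 is a
    typical hard-braking figure on dry asphalt -- an illustrative
    assumption, not a real vehicle-dynamics model.
    """
    return speed_mps ** 2 / (2 * decel_mps2)

# Highway speed, roughly 65 mph (29 m/s), on a dry road:
print(round(braking_distance_m(29.0), 1))
# On a slick road the achievable deceleration drops, so the same
# braking input must produce a much longer stop in simulation too:
print(round(braking_distance_m(29.0, decel_mps2=3.5), 1))
```

A simulator whose virtual car stopped in the same distance on dry and icy roads would validate nothing, which is why faithful vehicle dynamics are a core ingredient.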
- Traffic and scenario models:
To create a realistic near-accident situation, developers must first observe and recreate examples from the real world, then generate numerous variations of each by changing the weather, lighting and road conditions in the simulator. Variation is absolutely necessary for robust validation of the self-driving hardware and software.
These scenarios must also be true to the traffic etiquette of each testing environment — for example, a simulation of driving in Pittsburgh must include cars turning left at a four-way intersection before oncoming traffic passes, a maneuver known as the “Pittsburgh left.”
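At its core, generating the scenario variations described above is a parameter sweep: one observed near-accident multiplied across every combination of conditions. The sketch below is a hypothetical illustration of that idea, not the actual platform’s API:

```python
from itertools import product

# Hypothetical scenario sweep -- parameter names are illustrative.
weathers = ["clear", "rain", "fog"]
lightings = ["noon", "dusk", "night"]
surfaces = ["dry", "wet", "icy"]

# Cross every condition to turn a single recorded event into a
# full grid of validation cases.
scenarios = [
    {"weather": w, "lighting": l, "surface": s}
    for w, l, s in product(weathers, lightings, surfaces)
]

print(len(scenarios))  # 27 distinct cases from one observed event
```

Each added axis (traffic density, pedestrian behavior, local etiquette like the Pittsburgh left) multiplies the grid again, which is why simulation scales where real-world test fleets cannot.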