
Why Complete Autonomy Requires High Compute, Low Latency, and Low Power

The transition to autonomous vehicle (AV) driving is well under way, with major OEMs striving to equip their vehicles with self-driving features on the path to full driving autonomy.

Tesla is a notable example of Level 2 autonomous driving, which automates steering and speed but still requires a significant amount of driver attention. The ultimate goal, however, is full autonomy, where the car can navigate safely and efficiently with no driver intervention.

To achieve this, compute efficiency will play a significant role in determining which automakers succeed in the race to autonomy.

The Need for “Near-Perfect” Perception

It is essential that a vehicle can see in all directions. This multi-directional awareness, combined with the ability to process environmental information, is fundamental to collision avoidance, lane keeping and changing, and maintaining appropriate speeds in various driving conditions, among other tasks.

In addition to environmental understanding, AV perception must be robust in all weather conditions so the vehicle can safely navigate through rain, sleet, snow, and similar hazards. This matters for two reasons: braking distance and reaction time. Stopping distance increases as roads become wet or icy.

In addition, vehicles have significantly less time to react than usual in low-visibility situations such as fog and dust.

Challenge No. 1: Compute Capacity for a Deluge of Data

Automakers have fitted vehicles with a plethora of sensory devices in an attempt to mimic human vision and generate “near-perfect” perception.

Vehicles may be equipped with several cameras, LiDARs, and radars. Usually, the radar or LiDAR component acts as an additional source of information that complements the camera data. However, high-resolution cameras can recognize small objects at long distances, which reduces the need for numerous costly sensory components.

Like humans, cars must understand what they see to function properly. For this, the car uses AI-based perception processing to make sense of the incoming visual data. For instance, there might be a tree branch blocking the road 50 meters away or a car on the highway a few feet away. The camera captures the scene, then artificial intelligence software analyzes it to not only detect but also correctly recognize the objects.
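
As a rough illustration of this detect-and-recognize step, the sketch below runs an off-the-shelf object detector on a single captured frame. It uses a generic pretrained torchvision model and a hypothetical image file purely for illustration; production AV stacks run purpose-built networks on dedicated hardware.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Off-the-shelf detector, used here only to illustrate detect-and-recognize.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # "camera_frame.jpg" is a hypothetical stand-in for one frame from a vehicle camera.
    frame = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))

    with torch.no_grad():
        detections = model([frame])[0]   # dict with "boxes", "labels", "scores"

    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if score > 0.5:                  # keep only confident detections
            print(f"class {int(label)} at {box.tolist()} (score {score:.2f})")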

Once perception processing is complete, higher-level AV systems, like path planning, come into play. These systems are necessary to avoid obstacles and prevent collisions.

Deterministic high-performance computing is of paramount importance due to the plethora of incoming sensory data and associated perception processing. For instance, if a truck had eight 8-megapixel cameras positioned around the vehicle, the raw image data produced per hour at a 30 fps rate would be roughly 6.9 terabytes (assuming one byte per pixel).
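
As a quick sanity check on that figure, here is the arithmetic, a rough sketch in which the camera count, frame rate, and one-byte-per-pixel assumption are illustrative; real sensor bit depths vary.

    # Rough arithmetic for the camera data-rate example (assumptions: 8 cameras,
    # 8 megapixels per frame, 30 fps, 1 byte of raw data per pixel).
    cameras, pixels_per_frame, fps, bytes_per_pixel = 8, 8e6, 30, 1
    bytes_per_hour = cameras * pixels_per_frame * fps * bytes_per_pixel * 3600
    print(f"{bytes_per_hour / 1e12:.1f} TB of raw image data per hour")  # ~6.9 TB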

Concurrently, each radar or LiDAR component will generate its own data stream.

It is impractical to send such a large amount of data to the cloud, process it there, and then send the results back to the vehicle in time to perform the required actions. The resulting excessive latency would prevent the car from reacting in time. The required computations must be performed locally for the vehicle to understand its surroundings immediately and respond quickly to any situation.

However, successful local execution requires overcoming additional crucial challenges, starting with latency.

Challenge No. 2: Latency & Jitter

Interpreting incoming visual data is highly computationally demanding.

The AV must execute path planning and decision making by processing all the visual data from a camera in a given set of frames. Since there are multiple processing steps from data capture to the final step of path planning, perception processing should have very low latency to leave ample time for the vehicle to react.

Additionally, higher-level path planning functions require stable inputs to perform their tasks efficiently. This necessitates low or no jitter in perception processing, so that inputs to motion control arrive on the same cadence for every frame.

The issue with jitter is that its unpredictability confuses higher-level systems, such as path planning, because they are unable to accurately and rapidly determine what to do. A deterministic response at consistent intervals is extremely important.

Actionable intelligence from perception processing must be delivered with minimal jitter. Without it, the motion control systems will receive inputs at wildly varying intervals, reducing the effectiveness of self-driving.
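
To make the latency and jitter budget concrete, the sketch below times a stand-in perception step against the roughly 33 ms available per frame at 30 fps. The process_frame function is a hypothetical placeholder, not a real perception pipeline.

    import statistics
    import time

    FRAME_BUDGET_MS = 1000 / 30          # ~33.3 ms available per frame at 30 fps

    def process_frame(frame_id):
        # Hypothetical placeholder for the real perception work (detection, tracking, ...).
        time.sleep(0.01)

    latencies_ms = []
    for frame_id in range(300):          # simulate ten seconds of frames
        start = time.perf_counter()
        process_frame(frame_id)
        latencies_ms.append((time.perf_counter() - start) * 1000)

    print(f"mean latency: {statistics.mean(latencies_ms):.1f} ms (budget {FRAME_BUDGET_MS:.1f} ms)")
    print(f"jitter (std dev): {statistics.stdev(latencies_ms):.2f} ms")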

Challenge No. 3: Power Consumption

Another issue that must be considered is power consumption, a critical topic with the emergence of electric vehicles (EVs).

As an example, consider an electric car that is driven six hours a day and has a 60 kilowatt-hour battery. If the AV compute system draws 1,000 watts, then six hours of driving will consume 6 kilowatt-hours. In other words, self-driving alone will use up 10% of the car’s battery capacity. The driver will need to stop to recharge more frequently, which contributes to range anxiety.
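
The arithmetic behind that example, with the assumed figures spelled out, is shown below.

    # Assumed figures from the example: 60 kWh battery, 1 kW compute draw, 6 h of driving.
    battery_kwh, compute_kw, hours = 60, 1.0, 6
    compute_energy_kwh = compute_kw * hours              # 6 kWh consumed by the AV computer
    print(f"compute uses {compute_energy_kwh:.0f} kWh, "
          f"or {compute_energy_kwh / battery_kwh:.0%} of the battery")  # 10%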

Inefficiencies of Existing Solutions

Traditionally, autonomous vehicles have used general-purpose CPUs and GPUs, which are not ideally suited to driving computations from a price and power-consumption standpoint.

Purpose-built, high-compute technologies with low power, low latency, and low jitter are required to process numerous streams of high-resolution, fast-moving camera feeds in real time, detecting objects hundreds of meters away while operating under any environmental condition.

Low power consumption can only be attained through a combination of techniques, including novel computational approaches, using on-chip memory to the greatest extent possible to minimize power-draining trips to external memory, and highly optimized convolution acceleration engines.
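
To see why keeping data on-chip matters so much, the rough sketch below compares memory power using commonly cited per-access energy figures; the exact numbers vary widely by process node and memory size, and the access count is a hypothetical workload.

    # Approximate, commonly cited per-access energies for a 32-bit word; real values
    # depend heavily on process node, memory size, and access pattern.
    DRAM_ACCESS_PJ = 640       # fetch from external DRAM
    SRAM_ACCESS_PJ = 5         # fetch from small on-chip SRAM

    accesses_per_frame = 1e9   # hypothetical weight/activation fetches per camera frame
    fps = 30

    for name, picojoules in (("external DRAM", DRAM_ACCESS_PJ), ("on-chip SRAM", SRAM_ACCESS_PJ)):
        watts = accesses_per_frame * fps * picojoules * 1e-12
        print(f"memory power if every fetch goes to {name}: {watts:.2f} W")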

We’re in a Pivotal Period

The automotive industry has made significant inroads in the area of autonomy. However, AV development will truly pick up speed when the barriers of compute capacity, latency, and power consumption are overcome. Purpose-built compute solutions are paving the way to realizing this mission. We are well on the road to achieving driving autonomy, and the future is much closer than perceived.
