
Event-Based Vision Approaches Commercialization, Says IDTechEx

The increasing adoption of computational image analysis is creating opportunities for many new types of image sensors. These offer a diverse range of capabilities, from dramatically lowering the cost of short-wave infrared (SWIR) imaging to enabling large-area thin-film flexible photodetectors. IDTechEx’s new report, titled “Emerging Image Sensor Technologies 2021-2031: Applications and Markets”, comprehensively explores the market for emerging image sensor technologies.

One of the most innovative and commercially promising of these technologies is event-based vision, a bio-inspired method of acquiring visual data that promises multiple advantages, including reduced data transfer/processing and greater dynamic range. Indeed, these merits are such that IDTechEx forecasts the market for event-based vision sensor chips alone growing from its largely pre-revenue status today to $20 million per year over the next 10 years. Furthermore, much of the value is likely to be captured by the software enabled by event-based vision hardware, making the total market considerably larger.


What is event-based vision?

Event-based vision, also known as dynamic vision sensing (DVS), is a method of scene acquisition that mirrors how the human retina and brain acquire information: by recording changes as they occur rather than capturing the entire frame at specified intervals. The approach is targeted entirely at machine vision applications and is not intended to replace cameras or conventional image sensors.

In conventional video, an entire image is recorded at a pre-assigned interval, set by the frame rate. For machine vision applications this means that a great deal of unnecessary data is acquired, while the most dynamic parts of the scene are often under-sampled. Event-based vision sensors resolve this issue by detecting relative intensity changes rather than capturing frames. Each pixel reports asynchronously, delivering a time-stamped signal whenever the change in illumination intensity it sees crosses a set threshold. Figure 1 illustrates the difference between conventional frame-based sensing and event-based sensing for a single pixel.
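To make that contrast concrete, the following is a minimal sketch of how a single event-based pixel might behave, assuming a simple log-intensity contrast threshold; the class names, threshold value, and sample data are illustrative assumptions rather than details of any particular sensor.

```python
# Minimal sketch of a single event-based (DVS-style) pixel, for illustration only.
# The threshold value and class names are illustrative assumptions, not taken from
# any particular sensor's datasheet.

import math
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # time at which the contrast threshold was crossed (seconds)
    polarity: int      # +1 for a brightness increase, -1 for a decrease

class EventPixel:
    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold        # log-intensity contrast threshold
        self.ref_log_intensity = None     # log intensity at the last event

    def sample(self, timestamp: float, intensity: float) -> list[Event]:
        """Feed one intensity sample; return any events it triggers."""
        log_i = math.log(max(intensity, 1e-6))
        if self.ref_log_intensity is None:
            self.ref_log_intensity = log_i
            return []
        events = []
        # Emit one event per threshold crossing, updating the reference each time,
        # so a large step produces several time-stamped events.
        while abs(log_i - self.ref_log_intensity) >= self.threshold:
            polarity = 1 if log_i > self.ref_log_intensity else -1
            self.ref_log_intensity += polarity * self.threshold
            events.append(Event(timestamp, polarity))
        return events

# A static scene produces no output at all, while a brightness step produces events.
pixel = EventPixel()
for t, intensity in [(0.00, 100.0), (0.01, 100.0), (0.02, 100.0), (0.03, 180.0)]:
    for event in pixel.sample(t, intensity):
        print(event)
```

In this sketch, the three identical samples generate nothing, while the brightness step at t = 0.03 s produces a small burst of positive-polarity events, which is exactly the per-pixel behavior the frame-based approach cannot reproduce without wasting samples on the static period.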


Motivation

Event-based vision offers multiple significant advantages over conventional frame-based imaging. Arguably the most significant is that, by detecting only changes, subsequent data processing can be much simpler: irrelevant static areas of the image are ignored, so far less data needs to be transferred and processed.
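As a rough back-of-envelope illustration of that data reduction, the sketch below compares data rates using assumed resolution, frame rate, event size, and scene activity; none of these numbers come from the IDTechEx report.

```python
# Back-of-envelope comparison of data volumes, using illustrative numbers
# (sensor resolution, frame rate, event rate and event size are assumptions).

width, height, frame_rate = 640, 480, 30          # VGA frame-based camera
bytes_per_pixel = 1                               # 8-bit greyscale
frame_data_rate = width * height * bytes_per_pixel * frame_rate   # bytes per second

changed_pixel_fraction = 0.02                     # only 2% of the scene is changing
events_per_second = width * height * changed_pixel_fraction * frame_rate
bytes_per_event = 8                               # pixel address + timestamp + polarity
event_data_rate = events_per_second * bytes_per_event

print(f"frame-based: {frame_data_rate / 1e6:.1f} MB/s")   # frame-based: 9.2 MB/s
print(f"event-based: {event_data_rate / 1e6:.1f} MB/s")   # event-based: 1.5 MB/s
```

Under these assumptions the event stream is several times smaller than the equivalent frame stream, and the gap widens further for largely static scenes.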

Furthermore, event-based vision offers a higher dynamic range, since individual pixels report a relative (percentage) change in intensity rather than an absolute measurement of light intensity, so there is far less risk of saturation. A final benefit is greater temporal resolution, since each pixel delivers time-stamped signals the moment a change occurs.
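The dynamic range point can be sketched with illustrative numbers (not figures from the report): the same 30% contrast step is detectable in both dim and bright conditions via a relative-change threshold, whereas an 8-bit absolute reading clips in the bright case.

```python
# Illustrative comparison: an 8-bit frame-based pixel clips once the scene exceeds
# its full-scale value, while a relative-change pixel responds to the same 30%
# contrast step at any absolute brightness. All numbers are assumptions.

def frame_pixel_reading(intensity: float, full_scale: float = 255.0) -> float:
    """Absolute measurement: saturates at the sensor's full-scale value."""
    return min(intensity, full_scale)

def relative_change(before: float, after: float) -> float:
    """Fractional change, as used by an event-based pixel's contrast threshold."""
    return (after - before) / before

for label, before in [("dim scene", 10.0), ("bright scene", 10_000.0)]:
    after = before * 1.3   # same 30% contrast step in both cases
    print(
        f"{label}: relative change = {relative_change(before, after):+.0%}, "
        f"8-bit reading goes {frame_pixel_reading(before):.0f} -> {frame_pixel_reading(after):.0f}"
    )
# dim scene: relative change = +30%, 8-bit reading goes 10 -> 13
# bright scene: relative change = +30%, 8-bit reading goes 255 -> 255 (saturated)
```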


Target applications

Event-based vision is best suited to recording rapidly changing scenes that require immediate data processing, since the volume of data produced is much lower. Applications that demand high temporal resolution or high dynamic range are an especially good fit.

IDTechEx therefore perceives the most promising applications as collision avoidance and navigation for autonomous vehicles/ADAS and unmanned aerial vehicles (drones). These markets have huge potential but will require substantial software development and data collection to fully interpret event-based vision data. As such, IDTechEx believes that smaller markets with much more predictable input data, such as iris-tracking for AR/VR goggles and laser beam profiling, will see the earliest adoption of event-based vision.

