Artificial Intelligence | News | Insights | AiThority

TinyML Computer Vision Is Turning Into Reality With microNPUs (µNPUs)

Ubiquitous ML-based vision processing at the edge is advancing as hardware costs fall, compute capability rises, and new methodologies make it easier to train and deploy models. Together, these trends lower the barriers to adoption and drive increased use of computer vision AI at the edge.



Computer vision (CV) technology is at an inflection point, with major trends converging to enable what has been a cloud technology to become ubiquitous in tiny edge AI devices. Technology advancements are enabling this cloud-centric AI technology to extend to the edge, and new developments will make AI vision at the edge pervasive.

There are three major technological trends enabling this evolution. New, lean neural network algorithms fit the memory space and compute power of tiny devices. New silicon architectures are offering orders of magnitude more efficiency for neural network processing than conventional microcontrollers (MCUs). And AI frameworks for smaller microprocessors are maturing, reducing barriers to developing tiny machine learning (ML) implementations at the edge (tinyML).
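One reason lean networks now fit MCU-class devices is weight quantization. As a rough illustration (a sketch with hypothetical layer shapes, not drawn from any shipping model or framework), int8 weights take a quarter of the storage of float32 weights:

```python
# Sketch: memory footprint of a small CNN in float32 vs int8.
# The layer shapes below are hypothetical examples.

def param_bytes(num_params: int, bytes_per_weight: int) -> int:
    """Total weight storage for a set of parameters."""
    return num_params * bytes_per_weight

# Hypothetical tiny vision model: three conv layers plus one dense layer.
layer_params = [
    3 * 3 * 1 * 8,      # conv1: 3x3 kernel, 1 -> 8 channels
    3 * 3 * 8 * 16,     # conv2: 3x3 kernel, 8 -> 16 channels
    3 * 3 * 16 * 32,    # conv3: 3x3 kernel, 16 -> 32 channels
    32 * 10,            # dense: 32 features -> 10 classes
]

total = sum(layer_params)
float32_kib = param_bytes(total, 4) / 1024
int8_kib = param_bytes(total, 1) / 1024

print(f"params={total}, float32={float32_kib:.1f} KiB, int8={int8_kib:.1f} KiB")
```

The 4x reduction from float32 to int8 (plus the cheaper integer arithmetic it enables) is a large part of why such models can live in the SRAM and flash budgets of milliwatt-scale devices.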

As all these elements come together, tiny processors at milliwatt scale can have powerful neural processing units that execute extremely efficient convolutional neural networks (CNNs)—the ML architecture most common for vision processing—leveraging a mature and easy-to-use development tool chain. This will enable exciting new use cases across just about every aspect of our lives.
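The "extremely efficient" CNN variants mentioned above often rely on depthwise separable convolutions (the MobileNet-style factorization) in place of standard convolutions. A minimal parameter-count comparison, with assumed example channel sizes:

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution layer."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k conv followed by a 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 32 input channels, 64 output channels.
std = standard_conv_params(3, 32, 64)
sep = depthwise_separable_params(3, 32, 64)
print(std, sep, round(std / sep, 1))  # the separable form is ~8x smaller here
```

Savings of this order, repeated across every layer, are what make vision CNNs tractable on microNPU-class silicon.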

The promise of CV at the edge


Digital image processing—as it used to be called—is used for applications ranging from semiconductor manufacturing and inspection, to advanced driver assistance systems (ADAS) features such as lane-departure warning and blind-spot detection, to image beautification and manipulation on mobile devices. And looking ahead, CV technology at the edge is enabling the next level of human-machine interfaces (HMIs).

HMIs have evolved significantly in the last decade. On top of traditional interfaces like the keyboard and mouse, we now have touch displays, fingerprint readers, facial recognition systems, and voice command capabilities. While clearly improving the user experience, these methods have one other attribute in common—they all react to user actions. The next level of HMI will be devices that understand users and their environment via contextual awareness.

Context-aware devices sense not only their users, but also the environment in which they are operating, all in order to make better decisions toward more useful automated interactions. For example, a laptop visually senses when a user is attentive and can adapt its behavior and power policy accordingly. This is already being enabled by Synaptics’ Emza Visual Sense technology, which OEMs can use to optimize power by adaptively dimming the display when a user is not watching it, reducing display energy consumption (figure 1). By tracking on-lookers’ eyes (on-looker detection), the technology can also enhance security by alerting the user and hiding the screen content until the coast is clear.
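The dimming behavior described above can be sketched as a simple policy driven by a per-tick attention signal. Everything here—the interface, the state names, and the timing thresholds—is a hypothetical illustration, not Synaptics' actual Emza Visual Sense API:

```python
# Sketch of an adaptive-dimming policy driven by a presence/attention signal.
# Thresholds and the update() interface are assumed values for illustration.

class DimmingPolicy:
    DIM_AFTER_S = 10   # dim once the user has looked away this long (assumed)
    OFF_AFTER_S = 60   # blank the display after this long (assumed)

    def __init__(self):
        self.away_seconds = 0

    def update(self, user_attentive: bool) -> str:
        """Advance one 1-second tick and return the display state."""
        if user_attentive:
            self.away_seconds = 0
            return "full"
        self.away_seconds += 1
        if self.away_seconds >= self.OFF_AFTER_S:
            return "off"
        if self.away_seconds >= self.DIM_AFTER_S:
            return "dimmed"
        return "full"

policy = DimmingPolicy()
# 3 s of attention, then 12 s of looking away: display ends up dimmed.
states = [policy.update(attentive) for attentive in [True] * 3 + [False] * 12]
print(states[-1])
```

In a real device the `user_attentive` input would come from the low-power vision sensor, and the policy would run alongside OS power management rather than as a standalone loop.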

Another example: a smart TV set senses if someone is watching and from where, then adapts the image quality and sound accordingly. It can automatically turn off to save power when no one is there. Or an air-conditioning system optimizes power and air flow according to room occupancy to save energy costs. These and other examples of smart energy utilization in buildings are becoming even more financially important with hybrid home-office work models.

There are also endless use cases for visual sensing in industrial fields, ranging from object detection for safety compliance (e.g., restricted zones, safe passages, protective gear enforcement) up to anomaly detection for manufacturing process control. In agritech, crop inspection and status and quality monitoring enabled by CV technologies are all critical.

Whether it’s in laptops, consumer electronics, smart building sensors or industrial environments, this ambient computing capability is enabled when tiny and affordable microprocessors, tiny neural networks, and optimized AI frameworks make devices more intelligent and power efficient.


