Artificial Intelligence | News | Insights | AiThority

WiMi Creates Digital Image Processing Software Based on Visual saliency and Channel Attention Mechanism

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider dedicated to algorithmic research in imaging and image processing, today announced the successful development of a holographic digital image processing software system based on visual saliency and channel attention mechanisms. The software improves the image processing performance of intelligent holographic systems and has already been applied across several fields and industries, including face recognition, AR/VR, 3D reconstruction, smart healthcare, smart cities, medical devices, industrial inspection, smart agriculture, intelligent robots, machine vision, intelligent security equipment, and autonomous driving.


The attention mechanism is a data processing method in machine learning that is widely used across different types of tasks, such as natural language processing, image recognition, and speech recognition. Channel attention mechanisms and visual saliency can effectively improve the efficiency of image processing. During image processing, the channel attention mechanism uses known features to select the most appropriate channels from which to extract the information of interest. Visual saliency analyses known features and extracts salient regions (i.e., the critical regions of human interest) from an image through intelligent algorithms that simulate the characteristics of human vision.
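The article does not disclose WiMi's actual saliency algorithm, but the idea of extracting "critical regions of human interest" can be illustrated with a minimal center-surround contrast sketch: a pixel is treated as salient when its intensity differs strongly from its local neighbourhood, mimicking how the eye is drawn to regions that stand out. The function name and window size below are illustrative assumptions, not part of WiMi's system.

```python
import numpy as np

def intensity_saliency(image, k=3):
    """Toy visual-saliency sketch (assumed, not WiMi's method): a pixel
    is salient when its intensity differs from the local mean over a
    (2k+1) x (2k+1) neighbourhood."""
    h, w = image.shape
    padded = np.pad(image, k, mode="edge")
    sal = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            # Center-surround contrast: |pixel - neighbourhood mean|
            window = padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
            sal[i, j] = abs(image[i, j] - window.mean())
    # Normalise to [0, 1] so a simple threshold picks the salient regions
    return sal / sal.max() if sal.max() > 0 else sal

# A bright patch on a dark background should dominate the saliency map.
img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0
sal = intensity_saliency(img)
print(sal[8, 8] > sal[0, 0])  # patch centre is more salient than the corner
```

Real saliency models add colour, orientation, and multi-scale channels on top of this contrast principle, but the core "stand out from the surround" computation is the same.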

Attention mechanisms can be beneficial in many tasks, such as image classification, object detection, semantic segmentation, video understanding, face recognition, person re-identification, action recognition, few-shot learning, medical image processing, image generation, pose estimation, super-resolution, 3D vision, multi-modal tasks, and self-supervised learning. Attention mechanisms essentially mirror how people observe the outside world: people first pay more attention to certain crucial local details and then combine information from different regions to form an overall impression of what they are observing. In the same way, the attention mechanism shifts the computer's attention to the most essential parts of the input.



The channel attention mechanism is usually based on the SE (Squeeze-and-Excitation) Block, a channel-based attention model that estimates the importance of each feature channel and then enhances or suppresses different channels depending on the task. In computer vision, the channel attention mechanism learns a separate weight for each channel, while the weight is shared across the spatial plane of that channel. Channel-domain attention therefore typically applies global average pooling to the information within each channel, at the cost of ignoring local spatial information inside the channel.

After convolution, the Squeeze module performs global average pooling (GAP) to compress the spatial dimensions of the features, so that each two-dimensional feature map is reduced to a single real number while the number of feature channels remains unchanged. The Excitation module then uses a two-layer bottleneck structure (dimensionality reduction followed by restoration), built from fully connected layers and a Sigmoid function, to generate a weight for each feature channel. The learned channel weights capture the correlations between feature channels. Finally, the weights are multiplied with the original feature map to produce the re-weighted output. Applying this mechanism to several benchmark models yields significant performance gains at only a slight increase in computational cost. As a general design idea, it can be added to any existing network and has strong practical value.
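The squeeze, excitation, and scale steps described above can be sketched in a few lines of numpy. This is a minimal illustration of the standard SE Block computation, not WiMi's implementation; the reduction ratio, weight shapes, and random inputs are assumptions for the example.

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation sketch over a (C, H, W) feature map.

    w1: (C//r, C) reduction weights; w2: (C, C//r) restoration weights
    (biases omitted for brevity)."""
    # Squeeze: global average pooling, each 2D map becomes one real number
    z = feature_map.mean(axis=(1, 2))               # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> Sigmoid
    s = np.maximum(w1 @ z, 0.0)                     # reduce to C//r
    weights = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # back to C, in (0, 1)
    # Scale: re-weight each channel of the original feature map
    return feature_map * weights[:, None, None]

# Toy usage: 4 channels, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1
w2 = rng.standard_normal((4, 2)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (4, 8, 8), same shape, channels re-weighted
```

Because the output has the same shape as the input, the block can be dropped into an existing network after any convolutional stage, which is why the article describes it as a general design idea with a slight computational overhead.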

The channel attention mechanism can improve system performance by weighting features according to their importance in the input, simulating how human vision analyses and understands complex scenes. Attention mechanisms can be implemented in a variety of ways, and WiMi's R&D team is conducting in-depth research in this area to improve the ability of its holographic image processing systems to capture the global information of holograms, improve image processing accuracy, increase computational efficiency, and reduce power consumption.


