
WiMi Hologram Cloud Develops CNN-based Image Fusion Algorithm System to Promote Innovation

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) technology provider, announced the development of a convolutional neural network (CNN)-based image fusion algorithm system. Applying convolutional neural networks to image fusion has clear advantages: it improves the feature extraction and feature assignment stages of image fusion and enhances the quality of the fused images.


Image fusion is the processing and combination of two or more images acquired by different sensors. By exploiting the complementary information in the source images, fusion maximizes image quality and generates content-rich, more visually informative fused images that can then be analyzed and processed further. A CNN is a typical deep-learning model: it learns feature-representation mechanisms at different levels of abstraction from signal or image data. A CNN extracts features from the input image by learning filters that produce a set of feature maps at each level, and each unit or coefficient in a feature map is called a neuron. Feature maps in adjacent levels are generally connected by different computational operations: convolution, activation functions, and pooling.
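
As an illustration of how convolution, an activation function, and pooling connect feature maps between adjacent levels, here is a minimal two-level feature extractor sketched in PyTorch. The framework, layer sizes, and filter counts are assumptions for illustration only and are not taken from WiMi's system.

```python
# Illustrative sketch (not WiMi's implementation): a two-level feature
# extractor showing how convolution, an activation function, and pooling
# connect feature maps between adjacent levels.
import torch
import torch.nn as nn

class TwoLevelFeatureExtractor(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Level 1: learned filters produce a set of feature maps;
        # each element of a map corresponds to one "neuron".
        self.level1 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),   # pooling downscales the maps
        )
        # Level 2: deeper filters operate on the level-1 feature maps.
        self.level2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.level2(self.level1(x))

# Example: a single-channel 128x128 source image yields 32 feature maps
# at 32x32 resolution after two convolution/activation/pooling stages.
features = TwoLevelFeatureExtractor()(torch.randn(1, 1, 128, 128))
print(features.shape)  # torch.Size([1, 32, 32, 32])
```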

The key advantage of WiMi’s CNN-based image fusion algorithm is that it maximizes the extraction of useful information from the source images and fuses that information into a single high-quality result.

The system first acquires the images to be fused and preprocesses them. The preprocessed images are then fed into the convolutional neural network for training. The system extracts the image fusion features, uses an optimal thresholding method to segment those features, and fuses the corresponding regions of the different images to produce the final fusion result.
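
A simplified Python sketch of this pipeline is shown below. The network, preprocessing steps, and thresholding criterion WiMi uses are not disclosed; here an Otsu-style threshold (a common "optimal thresholding" choice) segments a per-pixel activity map assumed to be derived from CNN features, and the segmented regions select pixels from the two source images.

```python
# Simplified, hypothetical sketch of the described pipeline.
import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Return the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def fuse(img_a: np.ndarray, img_b: np.ndarray,
         act_a: np.ndarray, act_b: np.ndarray) -> np.ndarray:
    """Fuse two registered images using per-pixel activity maps
    (assumed here to be CNN feature magnitudes upsampled to image size)."""
    diff = act_a - act_b
    t = otsu_threshold(diff)
    # Regions where image A is judged more informative take A's pixels,
    # the remaining regions take B's pixels.
    return np.where(diff > t, img_a, img_b)
```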



A complete CNN is a multi-layer structure comprising an input layer, convolutional layers, pooling layers, and a fully connected layer. The convolutional layer is the most critical part: it contains multiple neural network nodes that extract the features used for image fusion. The pooling layer downscales the image fusion features to obtain a new set of feature maps, and the network then iterates over its weights to train and learn. Before performing the final fusion, the system segments the image into different regions: the images to be fused are divided into regions by the optimal segmentation threshold, and the corresponding regions are fused to output the image fusion result.
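
For illustration, the layer stack named above and one weight-update iteration might look like the following PyTorch sketch. The input size, output head, loss, and training data are placeholders, since the article does not specify how the network is trained.

```python
# Hypothetical sketch of the named layer stack plus one training iteration
# that updates the weights; sizes and the loss are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                            # pooling layer downscales maps
    nn.Flatten(),
    nn.Linear(8 * 64 * 64, 2),                  # fully connected layer
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One weight-update iteration on a dummy batch of 128x128 inputs.
x = torch.randn(4, 1, 128, 128)
target = torch.randint(0, 2, (4,))
loss = criterion(model(x), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```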

Images processed by this system show significantly higher clarity and brightness, an improved signal-to-noise ratio, and higher overall image quality, yielding better visual results. The system has clear advantages over traditional image fusion techniques.


