Artificial Intelligence | News | Insights | AiThority

WiMi Hologram Cloud Built An XR-based HCI System to Provide Multi-view Fusion Solutions

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) technology provider, announced the development of a method and system for human-computer interaction (HCI) based on XR technology that enhances the user experience and lets the user change viewpoints at different locations. The convergence of visual interaction technologies enables an immersive experience that transitions seamlessly between the virtual and real worlds.


The XR-based HCI method and system collects the user’s location and observation perspective in order to acquire a first-person perspective image, a secondary perspective image, and the configuration method and fusion mode between images. The observation perspective can be first-person or third-person. Based on the user’s observation perspective and location, the system constructs a multi-view fusion image, providing different viewpoint images according to the configuration method and fusion mode; these viewpoint images include first-person perspective images and secondary perspective images. While capturing the observation perspective, the system also captures user commands, both voice and action commands, based on the user’s location, and derives interaction tasks from them. The viewpoint image is then acquired according to the task type of the interaction task, which indicates the user’s imaging requirements for that image; from those imaging requirements the system determines the configuration method and the fusion mode.
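The pipeline described above — collect location and perspective, capture a command, derive an interaction task, and map the task type to a configuration method and fusion mode — can be sketched as follows. This is a minimal illustrative sketch, not WiMi’s implementation; the class names, task types, and configuration values are assumptions made for clarity.

```python
from dataclasses import dataclass
from enum import Enum


class Perspective(Enum):
    FIRST_PERSON = "first-person"
    THIRD_PERSON = "third-person"


@dataclass
class InteractionContext:
    location: tuple        # user's position in the XR scene (hypothetical format)
    viewpoint: Perspective  # user's observation perspective
    command: str            # voice or action command, as text


def acquire_fusion_config(ctx: InteractionContext) -> dict:
    """Derive an interaction task from the user's command, then map the
    task type (which stands in for the user's imaging requirements) to a
    configuration method and fusion mode. Both task types and the
    config table are illustrative placeholders."""
    task_type = "navigation" if "move" in ctx.command else "inspection"
    config_table = {
        "navigation": {"method": "overlay", "fusion": "blend"},
        "inspection": {"method": "side-by-side", "fusion": "stitch"},
    }
    return {"task": task_type, **config_table[task_type]}
```

A usage example: a voice command such as “move forward” would be classified as a navigation task and yield an overlay configuration with a blend fusion mode under these assumptions.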

In acquiring the interaction task, the system selects viewpoints as follows. When the task demands high observation and perception of near space but little of far space, the first-person perspective is the primary viewpoint and the multi-view fusion image is the primary-view image. When the task demands average perception of both near and far space, either the first-person or the third-person perspective serves as the primary viewpoint, and the multi-view fusion image is the primary-view image or the secondary perspective image. When the task demands high perception of both near and far space, both the first-person and third-person perspectives are used, and the multi-view fusion image is a fusion of the primary-view and secondary-view images.



When the interaction task demands low perception of near space but high perception of far space, the third-person perspective is the primary viewpoint and the multi-view fusion image is the secondary perspective image. When the task demands general perception of both near and far space, the system obtains the secondary perspective image corresponding to the first edge of the primary perspective image and stitches it with the primary perspective image to generate the multi-view fusion image; the user can then switch between the primary and secondary perspective images through user commands.
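The four viewpoint-selection cases described above amount to a small decision table keyed on the required perception of near and far space. The sketch below encodes that table; the string labels for perception levels, viewpoints, and fusion images are assumptions chosen to mirror the announcement’s wording, not an actual API.

```python
def select_view(near: str, far: str) -> tuple:
    """Map near-space / far-space perception requirements
    ("high", "average", or "low") to a (primary viewpoint,
    fusion image) pair, following the four cases described
    in the announcement. Illustrative sketch only."""
    if near == "high" and far == "low":
        # High near-space perception: first-person view dominates.
        return ("first-person", "primary-view image")
    if near == "low" and far == "high":
        # High far-space perception: third-person view dominates.
        return ("third-person", "secondary-view image")
    if near == "high" and far == "high":
        # Both spaces matter: fuse primary and secondary views.
        return ("first-and-third-person", "fused primary+secondary image")
    # Average perception in both spaces: either view may be primary,
    # and the secondary image is stitched along the primary image's edge.
    return ("first-or-third-person", "stitched primary/secondary image")
```

Under these assumptions, a task needing close-up hand interaction would resolve to the first-person case, while a task scanning a distant scene would resolve to the third-person case.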

Read More: The Practical Applications of AI in Workplace

