WiMi Hologram Cloud Develops A Human-Robot Interaction System Based on Machine Learning Algorithms

WiMi Hologram Cloud Inc., a leading global Hologram Augmented Reality (AR) Technology provider, announced the development of a human-robot interaction (HRI) system based on machine learning algorithms. The multi-modal system fuses voice and gestures, converting the user's voice and gestures into commands the robot can execute. Gestures are well suited to HRI because they express rich semantics and are easy to recognize, while voice interaction based on natural language understanding is the most direct and convenient channel.

The HRI system, based on machine learning algorithms, uses a combination of gestures and voice to control the robot. Voice serves as the natural, primary interaction method, while gestures complement voice to improve the accuracy of commands. Combining the two modalities reduces the disadvantages of using either alone and makes communication between humans and robots more natural, efficient, and accurate.
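
The announcement does not describe the fusion logic itself, but the general idea can be illustrated with a minimal sketch: a hypothetical rule table maps a recognized voice intent, optionally refined by a sufficiently confident gesture, to a robot command. The labels, threshold, and COMMAND_TABLE below are illustrative assumptions, not WiMi's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModalityResult:
    """Output of a single recognizer (speech or gesture)."""
    label: str         # e.g. "go", "stop", "point_left"
    confidence: float  # recognizer confidence in [0, 1]

# Hypothetical mapping from (voice intent, gesture) pairs to robot commands.
COMMAND_TABLE = {
    ("go", "point_left"): "move_left",
    ("go", "point_right"): "move_right",
    ("go", None): "move_forward",
    ("stop", None): "halt",
}

def fuse_commands(voice: ModalityResult,
                  gesture: Optional[ModalityResult],
                  gesture_threshold: float = 0.6) -> str:
    """Combine a voice intent with an optional gesture cue.

    Voice carries the primary intent; a sufficiently confident gesture
    refines it (for example, by supplying a direction), mirroring the idea
    of gestures complementing speech to improve command accuracy.
    """
    gesture_label = None
    if gesture is not None and gesture.confidence >= gesture_threshold:
        gesture_label = gesture.label
    # Fall back to the voice-only command if the fused pair is unknown.
    return COMMAND_TABLE.get((voice.label, gesture_label),
                             COMMAND_TABLE.get((voice.label, None), "ignore"))

if __name__ == "__main__":
    voice = ModalityResult(label="go", confidence=0.92)
    gesture = ModalityResult(label="point_left", confidence=0.81)
    print(fuse_commands(voice, gesture))  # -> "move_left"
```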

Through voice interaction, the robot understands what people say and communicates with humans with emotion, giving the human-robot dialogue system humanized, intelligent interaction characteristics. Gesture recognition is based on the motion trajectory of the human hand: as a gesture changes, its trajectory is mapped to images or syllables, forming specific meanings or words that express thoughts vividly and allow the robot to understand and interact with human language.
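
As an illustration of trajectory-based gesture recognition (not WiMi's actual method), the following sketch resamples a 2-D hand trajectory to a fixed length and assigns the label of the nearest stored template. The templates and the observed trajectory are made-up toy data.

```python
import numpy as np

def resample(trajectory: np.ndarray, n_points: int = 16) -> np.ndarray:
    """Resample an (x, y) hand trajectory to a fixed number of points by
    linear interpolation along the path, so gestures of different speeds
    and lengths become comparable."""
    deltas = np.diff(trajectory, axis=0)
    seg_len = np.linalg.norm(deltas, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, cum[-1], n_points)
    xs = np.interp(targets, cum, trajectory[:, 0])
    ys = np.interp(targets, cum, trajectory[:, 1])
    return np.stack([xs, ys], axis=1)

def classify_gesture(trajectory: np.ndarray,
                     templates: dict[str, np.ndarray]) -> str:
    """Assign the gesture label of the nearest stored template."""
    sample = resample(trajectory)
    best_label, best_dist = "unknown", float("inf")
    for label, template in templates.items():
        dist = np.linalg.norm(sample - resample(template))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

if __name__ == "__main__":
    # Hypothetical templates: a horizontal swipe and a vertical raise.
    templates = {
        "swipe_right": np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]]),
        "raise_hand": np.array([[0.0, 0.0], [0.0, 0.5], [0.0, 1.0]]),
    }
    observed = np.array([[0.0, 0.05], [0.4, 0.02], [0.9, 0.0]])
    print(classify_gesture(observed, templates))  # -> "swipe_right"
```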

As HRI gradually matures and voice emotion recognition enters everyday life, the need for machine intelligence to understand human emotions has become more urgent.

WiMi's HRI system provides a faster, more efficient, and more diverse interaction experience by fusing multi-modal perceptual information. Gestures and voice are used for real-time parallel interaction, and visual and voice information are associated and shared in real time throughout the interaction. The multiple interaction modes complement each other to form a complete interaction system, leading HRI toward greater intelligence and humanization and helping to build a harmonious, natural human-robot environment.
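
One hedged way to picture real-time parallel interaction with shared information: two stand-in recognizer threads publish events to a common queue, and a fusion loop associates voice and gesture events that arrive within a short time window. The workers, window, and timings below are assumptions for illustration only.

```python
import queue
import threading
import time

# Shared queue through which the two recognizers publish their results,
# so visual and voice information can be associated during the interaction.
events: "queue.Queue[tuple[str, str, float]]" = queue.Queue()

def voice_worker() -> None:
    """Stand-in for a speech recognizer publishing intents."""
    for intent in ("go", "stop"):
        time.sleep(0.10)
        events.put(("voice", intent, time.time()))

def gesture_worker() -> None:
    """Stand-in for a gesture recognizer publishing labels."""
    for label in ("point_left", "raise_hand"):
        time.sleep(0.12)
        events.put(("gesture", label, time.time()))

def fusion_loop(window: float = 0.3, duration: float = 1.0) -> None:
    """Associate voice and gesture events that arrive within `window`
    seconds of each other and emit a fused command."""
    pending: dict[str, tuple[str, float]] = {}
    deadline = time.time() + duration
    while time.time() < deadline:
        try:
            modality, label, ts = events.get(timeout=0.05)
        except queue.Empty:
            continue
        other = "gesture" if modality == "voice" else "voice"
        if other in pending and ts - pending[other][1] <= window:
            print(f"fused command: {pending[other][0]} + {label}")
            del pending[other]
        else:
            pending[modality] = (label, ts)

if __name__ == "__main__":
    workers = [threading.Thread(target=voice_worker),
               threading.Thread(target=gesture_worker)]
    for w in workers:
        w.start()
    fusion_loop()
    for w in workers:
        w.join()
```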
