
WiMi Developed An Application Scenario-based Digital Human Gesture Generation Algorithm System

WiMi Hologram Cloud, a leading global Hologram Augmented Reality ("AR") technology provider, announced that it is developing an application scenario-based digital human gesture generation algorithm system. The system allows a digital human to generate different actions in response to environmental changes. Not every sentence a digital human speaks must be accompanied by a gesture, and the same sentence can call for different motions in different scenarios. The application scenarios of the digital human therefore need to be designed in greater depth.
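
The press release does not give implementation details; as a minimal illustration of scenario-dependent gesture selection, the Python sketch below assumes a hypothetical lookup table SCENARIO_GESTURES and a choose_gesture helper, showing how the same communicative intent can map to different motions in different scenarios, or to no gesture at all.

# Hypothetical mapping: (scenario, intent) -> gesture name, absent entries mean no gesture.
SCENARIO_GESTURES = {
    ("news_anchor", "greeting"): "nod",
    ("retail_assistant", "greeting"): "wave",
    ("retail_assistant", "directions"): "point",
}

def choose_gesture(scenario: str, intent: str):
    """Return a gesture for this scenario and intent, or None when none is required."""
    return SCENARIO_GESTURES.get((scenario, intent))

print(choose_gesture("news_anchor", "greeting"))       # nod
print(choose_gesture("retail_assistant", "greeting"))  # wave: same intent, different scenario
print(choose_gesture("news_anchor", "directions"))     # None: no gesture attached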


The system takes the abstract communication intention derived through natural language processing and maps the text to a classification of gesture semantics, constructing a semantic classification model. Because gestures assist and enhance semantic expression, the system first judges whether a gesture is required at all; where this is ambiguous, the gesture is treated as optional. The system then runs statistics over corpora built for different scenarios, analyzing the patterns of the actions in each corpus and their correspondence with semantics. Finally, it collates the semantic actions in the corpus, maps among emotion, gesture metaphor semantics, the original text, and a quantitative gesture description language, and constructs the classification model.
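
WiMi does not disclose how the semantic classification model is built. A minimal sketch of the idea, assuming a tiny hypothetical scenario corpus that labels utterances with gesture semantic categories (including a no_gesture class for sentences that need no gesture), could use a standard text classifier such as TF-IDF features with logistic regression:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical scenario-specific corpus: utterance -> gesture semantic category.
corpus = [
    ("welcome to our store", "greeting_wave"),
    ("please look over there", "deictic_point"),
    ("the numbers grew sharply this quarter", "metaphoric_rise"),
    ("the report is ready", "no_gesture"),
]
texts, labels = zip(*corpus)

# Map text to a gesture semantic category; "no_gesture" covers sentences without gestures.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Predicted gesture semantic category for a new utterance.
print(clf.predict(["sales grew sharply this year"]))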

The mapping from semantics to a quantitative description of gestures is a one-to-many classification problem, so the system trains on a different corpus for each application scenario. For the multiple gestures that fall under the same metaphorical gesture semantic subcategory, the system uses natural language processing to semantically match each gesture's descriptive text against the input text and selects the best-matching gesture. For the communication intention, the system uses natural language understanding to classify the text into emotion and gesture metaphor semantics, associates that classification with the original text, and finally generates the gesture.
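
The announcement does not specify the matching technique. One simple way to realize this one-to-many selection, assuming each candidate gesture in a semantic subcategory carries a short descriptive text (the candidates dict below is hypothetical), is to rank candidates by textual similarity to the input:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical candidates within one metaphorical gesture semantic subcategory.
candidates = {
    "arms_spread_wide": "emphasising a very large amount or scale",
    "hand_rising_steps": "describing gradual growth step by step",
    "sharp_upward_sweep": "highlighting a sudden rapid increase",
}

def select_gesture(input_text: str) -> str:
    """Pick the candidate whose gesture text best matches the input text."""
    names = list(candidates)
    vec = TfidfVectorizer()
    matrix = vec.fit_transform([input_text] + [candidates[n] for n in names])
    sims = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return names[sims.argmax()]

print(select_gesture("revenue jumped suddenly and very fast"))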


Gestural interaction greatly enhances the emotional expressiveness of digital humans. WiMi's automatic gesture generation algorithm uses a semantic classification method for metaphorical gestures to build a quantitative gesture description language, making gesture semantics computable in a quantitative way. WiMi also proposes a method for creating emotional corpora for different application scenarios and has constructed an emotion-rich corpus, providing a database for research on digital human gesture generation algorithms.
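
The quantitative gesture description language and the corpus format are not published. Purely as a hypothetical sketch, a corpus entry pairing scenario, emotion, original text, gesture semantics, and a quantitative gesture description might look like this (all field names and values are assumptions, not WiMi's schema):

from dataclasses import dataclass

@dataclass
class GestureDescription:
    """Hypothetical quantitative description of a single gesture."""
    name: str
    hand_shape: str     # e.g. "open_palm", "index_point"
    amplitude: float    # normalised movement range, 0.0 to 1.0
    speed: float        # normalised speed, 0.0 to 1.0
    repetitions: int = 1

@dataclass
class CorpusEntry:
    """One record in a scenario-specific, emotion-annotated corpus."""
    scenario: str           # e.g. "retail_assistant"
    text: str               # original utterance
    emotion: str            # e.g. "enthusiastic"
    semantic_category: str  # metaphorical gesture semantics label
    gesture: GestureDescription

entry = CorpusEntry(
    scenario="retail_assistant",
    text="welcome, great to see you again",
    emotion="enthusiastic",
    semantic_category="greeting_wave",
    gesture=GestureDescription("wave", "open_palm", amplitude=0.8, speed=0.6),
)
print(entry.gesture.amplitude)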

The theory and technology behind digital humans are becoming increasingly mature, and their scope of application is expanding. Digital humans have already been applied in many industries, such as finance, transportation, logistics, retail, and manufacturing, helping these industries achieve digital and intelligent transformation.


