Artificial Intelligence | News | Insights | AiThority

GIST Scientists Develop Model that Adjusts Videogame Difficulty Based on Player Emotions

The novel approach will help create a better gaming experience for all types of players

Appropriately balancing a videogame’s difficulty is essential to provide players with a pleasant experience. In a recent study, Korean scientists developed a novel approach for dynamic difficulty adjustment where the players’ emotions are estimated using in-game data, and the difficulty level is tweaked accordingly to maximize player satisfaction. Their efforts could contribute to balancing the difficulty of games and making them more appealing to all types of players.

Difficulty is a tough aspect to balance in video games. Some players relish a challenge, whereas others prefer an easy experience. To make this balancing easier, most developers use ‘dynamic difficulty adjustment (DDA),’ which adjusts the difficulty of a game in real time according to player performance. For example, if a player’s performance exceeds the developer’s expectations for a given difficulty level, the game’s DDA agent can automatically raise the difficulty to increase the challenge. Though useful, this strategy is limited in that it accounts only for player performance, not for how much fun players are actually having.
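Conventional performance-based DDA can be sketched in a few lines. The rule below, including its win-rate thresholds and function name, is an illustrative assumption rather than any specific game's implementation:

```python
# Minimal sketch of performance-based dynamic difficulty adjustment (DDA).
# The thresholds and names here are illustrative assumptions, not taken
# from any particular game or from the study described in the article.

def adjust_difficulty(difficulty: int, win_rate: float,
                      lower: float = 0.4, upper: float = 0.6) -> int:
    """Raise difficulty when the player wins too often, lower it when
    they lose too often; otherwise leave it unchanged."""
    if win_rate > upper:
        return difficulty + 1
    if win_rate < lower and difficulty > 0:
        return difficulty - 1
    return difficulty

# A player winning 80% of recent rounds gets a harder opponent:
print(adjust_difficulty(3, 0.8))  # -> 4
print(adjust_difficulty(3, 0.2))  # -> 2
print(adjust_difficulty(3, 0.5))  # -> 3
```

Note that the rule sees only outcomes (the win rate), which is exactly the limitation the researchers set out to address: two players with the same win rate may be enjoying the game very differently.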


In a recent study published in Expert Systems With Applications, a research team from the Gwangju Institute of Science and Technology in Korea decided to put a twist on the DDA approach. Instead of focusing on the player’s performance, they developed DDA agents that adjusted the game’s difficulty to maximize one of four different aspects related to a player’s satisfaction: challenge, competence, flow, and valence. The DDA agents were trained via machine learning using data gathered from actual human players, who played a fighting game against various artificial intelligences (AIs) and then answered a questionnaire about their experience.


Using an algorithm called Monte-Carlo tree search, each DDA agent employed actual game data and simulated data to tune the opposing AI’s fighting style in a way that maximized a specific emotion, or ‘affective state.’ “One advantage of our approach over other emotion-centered methods is that it does not rely on external sensors, such as electroencephalography,” comments Associate Professor Kyung-Joong Kim, who led the study. “Once trained, our model can estimate player states using in-game features only.”
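The affect-driven selection described above can be sketched roughly as follows. This is a hedged illustration, not the study's code: the affect predictor and the game model are stand-ins (the study trains its predictor on questionnaire data and uses Monte-Carlo tree search rather than the flat rollout search shown here):

```python
import random

def predict_affect(features: dict) -> float:
    """Stand-in for a learned model mapping in-game features to an
    estimated affective-state score in [0, 1]. Here we simply assume
    the target state (e.g. 'flow') peaks when the match is close."""
    return 1.0 - abs(features["player_hp"] - features["opponent_hp"]) / 100.0

def simulate(style: str, rng: random.Random) -> dict:
    """Stand-in game model: rolls out one round against an opponent AI
    with the given fighting style and returns end-of-round features."""
    damage_dealt = {"defensive": 20, "balanced": 45, "aggressive": 70}[style]
    return {
        "player_hp": max(0, 100 - damage_dealt + rng.randint(-10, 10)),
        "opponent_hp": max(0, 50 + rng.randint(-10, 10)),
    }

def choose_style(styles, rollouts: int = 50, seed: int = 0) -> str:
    """Pick the opponent style whose simulated rounds maximize the
    predicted affective state, averaged over several rollouts."""
    rng = random.Random(seed)
    scores = {
        s: sum(predict_affect(simulate(s, rng))
               for _ in range(rollouts)) / rollouts
        for s in styles
    }
    return max(scores, key=scores.get)

print(choose_style(["defensive", "balanced", "aggressive"]))
```

The key point mirrored from the article is that everything the agent needs, both the simulated rollouts and the features fed to the affect predictor, comes from in-game data alone; no external sensors are required.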


The team verified, through an experiment with 20 volunteers, that the proposed DDA agents could produce AIs that improved the players’ overall experience, regardless of their preferences. This marks the first time affective states have been incorporated directly into DDA agents, an approach that could prove useful for commercial games. “Commercial game companies already have huge amounts of player data. They can exploit these data to model the players and solve various issues related to game balancing using our approach,” remarks Associate Professor Kim. Notably, the technique also holds potential for other fields that can be ‘gamified,’ such as healthcare, exercise, and education.

