What’s the Next Game AI Will Solve?
More and more, computers are beating us at our own games. Among the AI-led defeats that have seen the most coverage are chess, Go and shogi (Japanese chess) — but the computers are just getting started.
In June 2018, OpenAI announced that its AI can now beat the top 1% of amateurs in Dota 2 5v5 match-ups. The Dota 2 effort is significant in that video games bring new challenges to game-playing AIs. Not only is map information obscured from the bots, but there are thousands of potential actions at any given time.
AI has similarly beaten pros at Texas Hold'em poker, another game that involves incomplete information. These types of wins represent a leap forward from AI's success in "perfect information" games like chess, where everything you need to decide your next move is right in front of you.
So what’s next for the AIs?
Maybe – Checkers
Checkers may seem like the poor man's chess, but it's a game with an almost frightening number of possibilities — think roughly 500 billion billion (5 × 10²⁰) board positions, with endgame databases covering 39 trillion of them. And while computers have worked out "perfect play" — checkers was fully solved in 2007 — the problem is that the game, when played perfectly by both sides, always ends in a draw. An AI against an AI, then, or an AI against a checkers grandmaster who plays a flawless game, will end up in a tie. AI might have "solved" checkers, but the result won't be a win.
No – Bridge
Your grandmother's favorite game, bridge, uniquely combines logic and math with cryptic contract bids between partners. These bids are designed to communicate information to a teammate while misleading the opposing team. While AIs are powerful enough to handle the math side of bridge, they lack the ability to interpret the complex communication between partners.
Maybe – Backgammon
Surely if an AI can solve chess, it can solve backgammon, right? Not necessarily. Backgammon combines skill with chance, meaning that even the most brilliant AI won’t necessarily win all of the time. Efforts pitting computers against human backgammon players date back to the 70s, with the first computer “win” against a human being recorded in 1979. But while backgammon AIs have advanced the game and have achieved a level of play almost on par with top-ranked human players, the element of chance involved, along with the highly analytical endgame, puts total AI victory out of reach.
No – Mahjong
The Chinese tile-based game Mahjong poses similar challenges to both Texas Hold'em and bridge. Not only does it feature four players, but turns can be "skipped," hand value varies and draws are random. Good Mahjong players need to be able to evaluate their own and their opponents' hands and decide when to meld or discard tiles. Part of this is mathematical, and part is based on "tells" from other players. While a computer can handle the math of the game, the elements of luck and communication put Mahjong out of reach of today's AIs.
Maybe – Crosswords
Crosswords might seem like the kind of constraint satisfaction problem that computers love, but they're more complex than that. Cryptic clues, wordplay and cultural references make solving crosswords more than just a matter of looking a word up in a dictionary or thesaurus. While computers have performed well on straightforward crosswords, they struggle when the clues involve lateral thinking. Take the high-achieving computer that failed miserably when the clues were "spoonerized" — that is, when the initial sounds of two words were swapped. The humans handled the challenge, but the computer lacked the problem-solving skills to parse the clues.
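To see why spoonerisms trip computers up, here is a toy sketch (an illustration of the idea, not the crossword AI described above) that swaps the leading letters of two words. Note that it operates on letters, while real spoonerisms swap *sounds* — which is exactly the gap that makes the transformation hard for a machine to recognize and reverse.

```python
def leading_consonants(word: str) -> tuple[str, str]:
    """Split a word into its leading consonant cluster and the rest."""
    vowels = "aeiou"
    for i, ch in enumerate(word):
        if ch in vowels:
            return word[:i], word[i:]
    return word, ""  # no vowel found

def spoonerize(a: str, b: str) -> tuple[str, str]:
    """Swap the leading consonant clusters of two words (letter-based)."""
    head_a, tail_a = leading_consonants(a)
    head_b, tail_b = leading_consonants(b)
    return head_b + tail_a, head_a + tail_b

print(spoonerize("bar", "fight"))   # ('far', 'bight')
print(spoonerize("dear", "queen"))  # ('qear', 'dueen') — letters, not sounds
```

The second example shows the problem: a human hears "dear queen" become "queer dean," but a letter-level rule produces nonsense, because the swap happens in phonology, not spelling.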
Maybe – Ms. Pac-Man
In 2017, gaming forums lit up with the news that a perfect score in Ms. Pac-Man had been achieved, and by an AI no less. Given that the game has a whopping 10⁷⁷ states, this was thought to be a near impossibility. But a closer look at the code behind the AI showed that the bot hadn't actually "learned" how to win Ms. Pac-Man: it had just been hard-coded to avoid ghosts and chase after pills and fruit. Not only that, but the problem-solving aspect was farmed out to no fewer than 150 AI sub-agents all working on the project in parallel. The results were still impressive, but not everyone is convinced that this counts as a true win for AI.
Yes – StarCraft 2
Like Dota 2, StarCraft 2 is an incomplete information game with a partially obscured map. A hugely popular game with a user base of highly skilled players, it's been a target for the AIs of companies including Google (DeepMind) and Facebook (CherryPi). But until recently the bots haven't been up to the task, despite being able to move faster and control more units at once: think up to 19,000 actions a minute. That's changing, though. A partnership between DeepMind and StarCraft creator Blizzard Entertainment designed to accelerate AI research bodes well for the AIs. At the time of writing, DeepMind's AlphaStar AI had just been reported as beating StarCraft 2 pro Grzegorz Komincz in a 5-round match-up, suggesting that AI might be about to take humanity's crown at yet another game.
No – Dungeons & Dragons
It might not be the prototypical board game, but the roleplaying icon D&D is a unique beast. Unlike most games, the conditions for success aren’t as clear: you want to beat the bad guy, but you also want to tell a great story along the way. Open-ended, creative, collaborative and story-focused, D&D represents a very human type of intelligence that AI is a long way off emulating.
Experts say that AIs will be able to beat us at everything by 2060, but until then we’ve got the upper hand – especially when it comes to tasks involving creativity, collaboration and communication. AIs might be able to crunch numbers or condense 3,500 years of game play into 19 days, but humans still outperform them at tasks that require more than sheer computing power.