AI Beat Humans at Chess and Go, but There’s Still One Strategy Game It Can’t Conquer


2017 was a big year for AI in the gaming world: Elon Musk’s OpenAI beat one of the world’s best players at Dota 2, an extremely complex online eSports game; AlphaGo beat the world’s number-one player in a three-game match of the ancient Chinese strategy game Go; and Libratus beat four top professional poker players in a 20-day tournament, finishing up $1.7 million in chips. These AI triumphs have produced a frenzy of excitement over the ability of machines to learn through experience. And it’s no wonder: these games have long been held up as pinnacles of the human capacity for strategic thinking. In fact, John McCarthy, who coined the term artificial intelligence in the mid-1950s, referred to chess as the “Drosophila of AI,” a reference to the significance of the fruit fly to the study of genetics.

Since IBM’s Deep Blue beat the chess world champion in 1997, AI has been making steady progress toward mastering human strategy games. But a strange paradox has gone largely unnoticed: while AI is adept at rapidly mastering traditional board and card games such as chess, poker, and Go, it is still less skilled than humans when it comes to complex computer games.

In other words, computers can master everything but computer games. The significance of this goes beyond human hubris: it indicates that, at least for now, intuition, trickery, and strategizing against multiple opponents at once remain uniquely human strengths, and ones worth understanding if we really want machines to learn them.

NewtonX interviewed former executives at EA, Scopely, Zynga, and RockYou!, as well as two pioneering IBM scientists who worked on Deep Blue, to understand how AI is applied to human games, what it has recently achieved, and which limitations remain.

Why Computers Can’t Yet Handle Multiplayer

Elon Musk took to Twitter to brag about OpenAI conquering Dota 2, a multiplayer battle arena game, saying, “OpenAI first ever to defeat world’s best players in competitive eSports. Vastly more complex than traditional board games like chess & Go.” While this is technically true, the bot was playing a restricted 1v1 version of the game; the standard 5v5 format is dramatically more complex than games like chess or Go, and it remains unconquered.

AI has not conquered multiplayer computer games for several reasons, notably:

1. Not all potential moves from opponents are visible

Unlike chess, where all of the pieces are laid out and visible from the outset, computer games often conceal elements until they appear as a plot twist. In Dota 2, for example, players can stand out of sight and ambush the AI player. This makes planning for all possible outcomes difficult, and handling novel situations gracefully is one of AI’s biggest weaknesses relative to humans. (A toy illustration of this hidden-information problem follows this list.)

2. They require predicting and preempting multiple opponent moves

There’s a reason OpenAI only learned how to play 1v1: the character it played had just three skills, which emphasized precision and positioning (two things AI excels at). Add in four more players, each with unique skills, the ability to hide, and the ability to engage in trickery, and an AI bot ends up falling to humans.
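To make the hidden-information problem concrete, here is a minimal Python sketch of a “fog of war” observation: the agent sees only cells within a fixed vision radius, so an opponent standing outside that radius is simply missing from its input. The grid size, vision radius, and cell encodings are all invented for illustration.

```python
import numpy as np

GRID = 10    # hypothetical 10x10 map
VISION = 2   # agent sees a 5x5 window around its own position

def observe(world, agent_pos):
    """Return the agent's view: cells outside its vision are masked as -1."""
    view = np.full_like(world, -1)
    r, c = agent_pos
    r0, r1 = max(0, r - VISION), min(GRID, r + VISION + 1)
    c0, c1 = max(0, c - VISION), min(GRID, c + VISION + 1)
    view[r0:r1, c0:c1] = world[r0:r1, c0:c1]
    return view

world = np.zeros((GRID, GRID), dtype=int)
world[9, 9] = 2          # opponent lurking in a far corner (code 2 = enemy)
agent = (1, 1)

obs = observe(world, agent)
print((obs == 2).any())  # False: the opponent exists but is invisible
```

Any planner that reasons only over `obs` behaves as though the opponent does not exist, which is exactly the opening a human player exploits.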

Self-Play: How Long Before AI Does Learn These Skills?

While we humans can currently pat ourselves on the back for being better than computers at games like World of Warcraft, our assurance will likely be short-lived. A wide-ranging NewtonX survey predicted that AI will outperform humans at a range of activities over the next ten years, including translating languages, editing high school essays, driving vehicles, checking out customers in retail settings, and performing administrative tasks in medical settings. If AI can replace drivers in the next decade, it will likely also be able to best a human in a video game.


A former executive at RockYou! predicted that AI will soon develop games on its own, and that by 2028 game engines will be almost exclusively designed by machines. In fact, researchers at the Georgia Institute of Technology are developing AI that can recreate a game engine by watching gameplay and studying individual frames. Importantly, that does not signal the end of human involvement: to develop a game engine, the AI still needs to watch comprehensive videos of the game in action, which means humans will still develop the storylines and concepts. And currently, AI has trouble building game engines in which action occurs offscreen, out of the player’s sight.
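To give a flavor of what “learning an engine by watching frames” means, here is a toy Python sketch, not the Georgia Tech system: it locates a sprite in two consecutive frames and induces a movement rule from the difference. Frame sizes and sprite codes are made up for the example.

```python
import numpy as np

def locate(frame, sprite_id):
    """Return (row, col) of the first pixel belonging to the sprite."""
    rows, cols = np.where(frame == sprite_id)
    return int(rows[0]), int(cols[0])

# Two consecutive observed frames of gameplay (code 1 = the sprite).
frame_t  = np.zeros((8, 8), dtype=int); frame_t[2, 1] = 1   # sprite at (2, 1)
frame_t1 = np.zeros((8, 8), dtype=int); frame_t1[2, 3] = 1  # sprite at (2, 3)

(r0, c0), (r1, c1) = locate(frame_t, 1), locate(frame_t1, 1)
velocity = (r1 - r0, c1 - c0)   # induced rule: the sprite moves (0, +2) per frame

# Use the induced rule to predict where the sprite appears next.
predicted = (r1 + velocity[0], c1 + velocity[1])
print(velocity, predicted)      # (0, 2) (2, 5)
```

The toy also hints at why offscreen action is hard: once the sprite leaves the visible frame, there is no pixel evidence left from which to induce a rule.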

An interesting question, though, is whether AI could conceivably “learn” creativity. After all, OpenAI’s bot and AlphaGo taught themselves to play by watching and playing thousands upon thousands of games. As machines learn from an ever larger corpus of human creativity, innovation, and development, they could spot patterns in how we innovate and begin to create and advance technology in similar ways.

A NewtonX gaming expert (former executive at Scopely) put it this way: “As AI gets better and better at mastering games, it will also improve at helping us develop game engines — which is a win-win for everyone involved. As in all industries today, this may have an effect on employment, but my two cents is that there’s always room for human involvement.”

Companies Are Giving AI Back to Consumers

Some gaming companies are crowdsourcing AI development. Blizzard, for instance, teamed up with DeepMind to open StarCraft II to outside developers, allowing anyone to build an AI agent and play it against other user-built agents. This lets Blizzard and DeepMind test agents on specific tasks and see where certain agents beat others.

A former executive at EA explained that this environment is useful for AI researchers across applications: “This provides a playground where AI agents can battle each other. There are very few forums where it’s safe to do this, and it’s super useful because it allows us to see how AIs interact with one another within a complex rule system.”


Blizzard has also released hundreds of thousands of replays from StarCraft II that allow agents to learn through imitation. Agents can learn how to perform individual skills on “feature layers” — isolated elements of the game. As more and more agents learn different features, they can advance to tasks that are easy for humans, but difficult for AI, such as developing responses to the unknown.
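As a rough illustration of this replay-based learning, here is a generic behavioral-cloning sketch in Python: each replay step is treated as an (observation, action) pair, and a simple classifier is fit to predict the human’s action from flattened feature-layer observations. This is a toy on synthetic data, not DeepMind’s actual pipeline; all sizes and the data itself are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STEPS, N_FEATURES, N_ACTIONS = 1000, 64, 4           # hypothetical sizes
obs = rng.normal(size=(N_STEPS, N_FEATURES))           # flattened feature layers
true_w = rng.normal(size=(N_FEATURES, N_ACTIONS))
acts = (obs @ true_w).argmax(axis=1)                   # stand-in "human" actions

# Softmax regression trained by gradient descent on cross-entropy loss.
W = np.zeros((N_FEATURES, N_ACTIONS))
for _ in range(200):
    logits = obs @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(N_STEPS), acts] -= 1.0                 # gradient of loss w.r.t. logits
    W -= 0.1 * obs.T @ p / N_STEPS

accuracy = ((obs @ W).argmax(axis=1) == acts).mean()
print(f"imitation accuracy on the replay data: {accuracy:.2f}")
```

Real agents swap the linear model for a deep network and the synthetic arrays for actual replay observations, but the structure of the learning problem is the same: predict what the human did, given what the human saw.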

As the NewtonX expert noted, these environments will allow us to see AI strengths, weaknesses, learning patterns, and interaction habits develop in a safe and controlled environment. This information will not only allow us to improve AI efficacy in game settings, but will also yield invaluable information about how AI learns and grows in other applications.

The data and insights in this article are sourced from NewtonX experts. For the purposes of this blog, we keep our experts anonymous and ensure that no confidential data or information has been disclosed. Experts are a mix of industry consultants and previous employees of the companies referenced.
