The Limitations of Artificial Intelligence in Gaming

Sunday, July 24, 2016


Gaming has long been a proving ground for artificial intelligence. From chess to poker to DeepMind's recent work on Go and video games, it seems as though AI dominates any game it faces. But could this impression be wrong?

Artificial Intelligence (AI) has come a long way since its inception in 1955, when John McCarthy coined the term. Early AI programs were built on systems that focused on searching and learning; where AI has really struggled is in mastering problem solving and intuition.

Garry Kasparov vs. Deep Blue

A great environment in which to push the advancement of AI is gaming: the classic, offline board, card and puzzle games that we humans have enjoyed for centuries, as well as video games.

The first time AI took on a human game was in 1949, when Arthur Samuel, now considered one of the pioneers of machine learning, began teaching a program to play checkers. Samuel's program was only outclassed in the '70s. Since then, a number of games from across the world, from poker to Go, have gotten the AI treatment.

Backgammon, which stretches back to around 3,000 BC, and chess, which boasts a 1,500-year history originating in India, have both been pitted against AI. It's the old meeting the new, and there have been some promising advancements, yet these breakthroughs are not always what they seem.

The AI that Took On Chess - And Almost Won

One of the first examples of AI to take on a human gaming champion, and win, was Deep Blue. The first version of this chess-playing AI was created by IBM in 1996 and played against Garry Kasparov. With 200 processors, Deep Blue could calculate 50 billion positions in three minutes, and it still lost!

IBM went back to the drawing board and advanced Deep Blue's capabilities. The 1997 version could calculate 200 million moves a second, and it beat Kasparov. At the time this caused havoc across the world; there were real concerns that machines could take over humanity's place at the top of the intellectual food chain.

However, it was soon realized that AI still had many limitations. Deep Blue is great at playing chess, and chess is a game of complete information: all the possible moves are right there on the board. Although you can't always predict what your opponent will do, you can work on probability and reaction. Take Deep Blue out of the arena it knows, and it would fail to adapt. Still, it was a great start for AI, and one that has led to many more game-playing programs.
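The brute-force approach behind complete-information engines like Deep Blue can be sketched with plain minimax search, shown here on a much smaller toy game (take 1 or 2 stones; whoever takes the last stone wins) rather than chess. This is an illustration of the principle only, not IBM's actual code, which adds deep positional evaluation and pruning:

```python
# Minimax: assume both players play perfectly, score every reachable
# position, and pick the move leading to the best guaranteed outcome.
# Toy game: players alternately take 1 or 2 stones; taking the last wins.

def minimax(stones, maximizing):
    """Score from the maximizing player's view: +1 = win, -1 = loss."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing)
              for m in (1, 2) if m <= stones]
    return max(scores) if maximizing else min(scores)

# Piles that are multiples of 3 are lost for the player to move.
print(minimax(3, True), minimax(4, True))  # -1 1
```

Deep Blue's edge was doing exactly this kind of exhaustive lookahead, only over chess positions and at 200 million moves a second.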

The AI that Solved Poker - Almost

Differing from chess, poker is a game of incomplete information, meaning the player is never aware of all the facts. You don't know what is in your opponent's hand until they are forced to show you, and even that doesn't always happen. This creates many challenges for a human player, and it's a challenge AI tries to solve with algorithms.

Cepheus, a poker bot created by a research team at the University of Alberta, was built to play heads-up limit Texas Hold'em, where two players go head to head. The team claims it has "solved poker", but in reality it has only weakly solved the game: a perfect opponent could still win, on average, up to 0.000986 big blinds per game against Cepheus. To completely solve the game, that figure would need to be exactly zero. Still, for all practical purposes this bot is unbeatable.
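Cepheus was trained with counterfactual regret minimization (CFR+). Its core building block, regret matching, can be sketched on a far smaller game. The example below uses rock-paper-scissors against a fixed, rock-heavy opponent; the opponent distribution and iteration count are illustrative, and this is a toy sketch of the idea, not the actual poker solver:

```python
import random

ROCK, PAPER, SCISSORS = 0, 1, 2
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[me][opponent]

def regret_matching(regrets):
    """Turn accumulated positive regrets into a mixed strategy."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    return [p / total for p in positives] if total else [1 / 3] * 3

def train(iterations, opponent, seed=0):
    rng = random.Random(seed)
    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        strategy = regret_matching(regrets)
        for a in range(3):
            strategy_sum[a] += strategy[a]
        my = rng.choices(range(3), weights=strategy)[0]
        opp = rng.choices(range(3), weights=opponent)[0]
        for a in range(3):
            # Regret: how much better action a would have done than my play.
            regrets[a] += PAYOFF[a][opp] - PAYOFF[my][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy

avg = train(20_000, opponent=[0.6, 0.2, 0.2])
# Against a rock-heavy opponent, the average strategy shifts toward paper.
```

In full CFR this regret-matching step runs at every decision point in the poker game tree, and the averaged strategy converges toward an equilibrium, which is what drives Cepheus's tiny exploitability figure.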

You may think there are no limitations here, that AI has done it. However, a good poker player will tell you that sometimes there is no right or wrong answer; you have to go with your intuition. If you're analytical, you might call 70% of the time and fold the other 30%. Unlike a human, Cepheus can't read a situation, and instead uses a random number generator to pick among its weighted options. This makes it as unpredictable as a human player, but it doesn't mean it is as good as a person, nor that it has truly solved the game.
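The random generator described above implements what game theory calls a mixed strategy: fixed action probabilities, with a random draw deciding each individual hand. A minimal sketch, using the illustrative 70/30 call-fold split from the paragraph above:

```python
import random

def sample_action(strategy, rng):
    """strategy: dict mapping action -> probability (summing to 1)."""
    actions = list(strategy)
    weights = [strategy[a] for a in actions]
    return rng.choices(actions, weights=weights)[0]

mixed = {"call": 0.7, "fold": 0.3}
rng = random.Random(42)
counts = {"call": 0, "fold": 0}
for _ in range(10_000):
    counts[sample_action(mixed, rng)] += 1
# Over many hands the frequencies approach 70/30, yet any single
# decision stays unpredictable -- exactly the property the bot needs.
```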

Cepheus' biggest limitation is that it does not understand tilt, where a poker player can gain an advantage over another player who is emotional, confused or frustrated. Nice try Cepheus - but you've not got a royal flush yet!

Cepheus Poker AI

The AI That Almost Beat The Game Of Go

Invented in China 2,500 years ago, Go is claimed by many to be the most complex non-computer game out there. The huge set of possibilities in this game has made it very difficult for computers to work out. That was until Google stepped in with AlphaGo, part of the company's DeepMind project.

AlphaGo

Google started with 150,000 games of Go, studying them to find patterns. AlphaGo then played games against earlier versions of itself, creating a "policy network" that played a good game. By playing against its own policy network, the program gained a good estimate of which positions were winning ones. AlphaGo could then assign a probability valuation to each position; this valuation, combined with serious computing power to search possible moves, made for a formidable Go player, one which beat the world champion thrice.

Again, it seems like the AI has nailed it. However, AlphaGo's policy network and value probability actually led to its downfall. In life, outcomes are not equal when it comes to their stakes; it doesn't always matter only whether you win or lose, but by how much. AlphaGo plays each game of Go independently of the others, not looking forward to subsequent successes or failures, often making leisurely moves. In game four against Grandmaster Lee, Lee made an unexpected move and AlphaGo played badly against it, as it assumes its opponent will always make optimal moves; the program then failed to notice its mistake for many moves afterwards. Of course, the increased processing power of a computer means it can beat a person in the right conditions, but the computer cannot think beyond those boundaries.

The AI that can Beat you at some Atari Games

Another piece of AI from the Google think tank can beat you at your favorite computer games, 49 Atari video games to be precise! This may not sound as impressive as winning at poker or Go, but this AI taught itself to play.

Google's DeepMind AI uses a typically human technique to facilitate the AI's learning process: positive reinforcement, through what was dubbed Q-learning. When it beats a high score or goes on to a new level, it is rewarded. Through this system, DeepMind's agent played better than previous methods in 43 of the games, and matched or beat a professional human games tester in 29 of them.
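The reward-driven update at the heart of Q-learning can be sketched on a toy problem. DeepMind's agent approximates the Q table with a deep neural network reading raw pixels; this minimal tabular version (the states, reward and constants are illustrative) shows only the update rule itself:

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D walk: states 0..4, reward 1 at state 4."""
    rng = random.Random(seed)
    n_states = 5
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: nudge Q toward reward + discounted
            # best future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training the agent prefers "right" in every non-terminal state.
```

Reward only arrives at the goal, yet the discounted update propagates it backwards through the table, which is how high scores and new levels act as reinforcement for the Atari agent.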

This new approach is exciting in two ways: one, it shows great adaptability; and two, this method of combining traditional computer learning with biologically inspired practices could be the future of AI.

Although this is all impressive stuff, DeepMind couldn't master every game. In fact, it failed when it came to Ms. Pac-Man, Private Eye and Montezuma's Revenge, because the AI still isn't advanced enough to plan even a few seconds ahead.

These advances are certainly impressive, but interpretations of these news items by people who lack technical knowledge can create confusion and exaggeration. No, there is no AI that can play poker better than today's top pros across the board. But yes, AI has come a really long way, and some of the practical implications of these advancements for us as people are exciting. However, we don't think that the machines are ready to take over just yet; they still need their human overlords to contain and guide their learning - for now.

By 33rd Square