A computer program modeled on the human brain has recently learned how to play 49 classic Atari games.
This is an impressive feat on its own, but what’s even more interesting is that the program was better than a professional human player in more than half of the games. The computer program comes from the geniuses at Google DeepMind and is, according to the research team, the first system that has learned to master a wide range of complex tasks rather than one specific task.
It’s no surprise that major advancements in AI are occurring. Technology companies have been investing heavily in machine learning research, and just last year Google purchased DeepMind Technologies for a reported £400m - that’s no small amount of money.
In a study published in the journal Nature, Dr Demis Hassabis, vice-president of engineering at DeepMind, says: “Up until now, self-learning systems have only been used for relatively simple problems. For the first time, we have used it in a perceptually rich environment to complete tasks that are very challenging to humans.”
This isn’t the first time a machine has excelled at a game. Back in 1997, IBM’s Deep Blue, a chess-playing computer, beat Garry Kasparov, the world champion at the time.
The difference between Deep Blue and the DeepMind machine, however, is that Deep Blue was pre-programmed with instructions and algorithms that told it what to do in specific scenarios - it didn’t learn anything or gain a skill.
DeepMind’s computer program, on the other hand, is armed with only the most basic information before it is given a videogame to play. Dr Hassabis explained: “The only information we gave the system was the raw pixels on the screen and the idea that it had to get a high score. And everything else it had to figure out by itself.”
The team gave the DeepMind machine 49 different games to play, each one varying in playstyle. The games ranged from classics like Space Invaders and Pong to boxing and tennis games and the 3D-racing challenge Enduro. In 29 of them, it was either on par with or better than a human player. Interestingly, for Video Pinball, Boxing, and Breakout, the machine performed far better than the human professionals, but it struggled with Pac-Man, Private Eye, and Montezuma’s Revenge.
“On the face of it, it looks trivial in the sense that these are games from the 80s and you can write solutions to these games quite easily. What is not trivial is to have one single system that can learn from the pixels, as perceptual inputs, what to do,” said Dr Hassabis.
“The same system can play 49 different games out of the box without any pre-programming. You literally give it a new game, a new screen and it figures out after a few hours of gameplay what to do.”
This type of advanced AI research is the latest development in what is now referred to as “deep learning”. Scientists are creating complex programs that, much like the human brain, can take in large amounts of varied data, such as images or sounds, and extract useful information or patterns to understand what is actually happening.
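To give a flavour of the trial-and-error learning described above, here is a minimal sketch of tabular Q-learning on a made-up toy problem (an agent walking right along a five-square corridor to reach a goal). This is only an illustration of the general idea - learn which action is best in each situation purely from a reward signal - and not DeepMind’s actual code: their system replaced the lookup table below with a deep neural network reading raw screen pixels, and the “score” here is our invented reward of 1 for reaching the goal.

```python
import random

# Hypothetical toy world: states 0..4 on a line; reaching state 4 earns reward 1.
N_STATES = 5
ACTIONS = [-1, +1]                      # move left or move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

# Q-table: estimated long-term value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Advance the toy world one move; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Mostly act greedily, but sometimes explore a random action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: which move the agent now prefers in each state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

After a couple of hundred practice episodes the agent has figured out, from rewards alone, that moving right is best in every state - the same principle, scaled up enormously, that lets DeepMind’s system work out Space Invaders from pixels and a score.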
What does this mean for us gamers? Perhaps the AI in our games will finally stop noobing about or maybe we’ll be able to interact with NPCs in a far more complex way - we aren’t really sure. We’ll have to wait and see.