Computer teaches itself to become a chess master
Chess grandmasters have long played second fiddle to computer programs that calculate millions of moves a second. Now an artificially intelligent machine has taught itself, in three days, to play the game better than most humans — and it is set to get even better.
The chess program, named Giraffe, could be applied to other board games, which it would quickly learn well enough to beat most human opponents, its designer, Matthew Lai, claimed. Giraffe was said to have reached a rating equal to that of an international master, a title held by the top 2.2 per cent of players, after spending only 72 hours playing against itself.
The program attained an equivalent World Chess Federation rating of 2,400, according to Mr Lai, of Imperial College London. To be anointed a grandmaster, players must typically maintain a rating above 2,500. Magnus Carlsen, the current world No 1, has a rating of 2,853. The 100th ranked player in the world has a rating of 2,653.
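The gap between these ratings can be made concrete with the standard Elo expected-score formula (a general property of the rating system, not anything specific to Giraffe): a player rated Ra is expected to score E = 1 / (1 + 10^((Rb − Ra) / 400)) of the points against a player rated Rb. A minimal sketch:

```python
# Standard Elo expected-score formula: a player rated `ra` is expected to
# score this fraction of the available points against a player rated `rb`.
def expected_score(ra: float, rb: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

# A 2,400-rated player (Giraffe's reported level) facing the world No 1
# at 2,853 would be expected to score only about 7 per cent of the points.
print(round(expected_score(2400, 2853), 3))
```

Equal ratings give an expected score of exactly 0.5, and each 400-point gap roughly multiplies the stronger player's odds by ten.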
Giraffe is believed to be the first working chess AI to formulate its moves with an artificial “neural network”, a system that mimics cognition in the human brain. This differs from the approach taken by leading chess programs, which employ raw computing power to crunch millions of moves at once.
Mr Lai said that humans and traditional chess computers played the game very differently. “Humans are much more selective in which branches of the game tree to explore,” he said.
“Computers, on the other hand, rely on brute force to explore as many continuations as possible, even ones that will be immediately thrown out by any skilled human.”
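The contrast Mr Lai describes can be sketched on a toy game tree (this is an illustration of the general idea, not Giraffe's actual code): a full-width minimax search visits every continuation, while a selective search ranks moves with a cheap evaluation and explores only the most promising branches.

```python
# Toy sketch, not Giraffe's code: a game tree is a nested list whose leaves
# are integer scores. Brute-force minimax visits every node; a "human-style"
# selective search expands only the most promising moves at each position.

def static_eval(node):
    """Cheap heuristic guess at a position's value, with no deep search."""
    if isinstance(node, int):
        return node
    return sum(static_eval(c) for c in node) / len(node)

def full_width(node, maximize, visited):
    """Brute force: explore every continuation, Deep Blue-style."""
    visited.append(node)
    if isinstance(node, int):
        return node
    scores = [full_width(c, not maximize, visited) for c in node]
    return max(scores) if maximize else min(scores)

def selective(node, maximize, visited, width=2):
    """Selective: rank moves with the cheap evaluation and search only the
    top `width` branches, discarding the rest without deep analysis."""
    visited.append(node)
    if isinstance(node, int):
        return node
    ranked = sorted(node, key=static_eval, reverse=maximize)
    scores = [selective(c, not maximize, visited, width) for c in ranked[:width]]
    return max(scores) if maximize else min(scores)

# A small hand-made tree; the scores are arbitrary illustration values.
tree = [[3, [5, 1], 4], [[2, 8], 6, 0], [7, [1, 9], 2]]

bf, sel = [], []
full_width(tree, True, bf)
selective(tree, True, sel)
print(len(bf), len(sel))  # the selective search visits far fewer nodes
```

On this tiny tree the selective search visits 11 nodes against full-width's 19, and the gap widens exponentially with depth. The trade-off is also visible: pruning on a cheap evaluation can occasionally discard the objectively best line, which is why the quality of the evaluation, the part Giraffe learns with a neural network, matters so much.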
In the most famous chess match to pit a human against a machine, Deep Blue, an IBM supercomputer, beat Garry Kasparov, the then world No 1, in a six-game series in 1997. Researchers estimated that Deep Blue, which had 480 processors, could search 200 million moves per second. Kasparov was said to have been able to search between three and five moves per second.
However, he and Deep Blue played with roughly the same skill. This, Mr Lai said, is because the human brain is more efficient at rejecting bad moves as it moves towards a conclusion. He developed Giraffe to combine computer processing power with the efficiency of human thinking.
Although the new program is a highly effective player, it is not yet as strong as the most sophisticated chess software, which is so advanced that humans have no realistic chance of beating it. In Mr Lai’s testing, Stockfish 5, one of the top systems, attained a rating of 3,387.
Mr Lai said that Giraffe's positional understanding was at least comparable to that of the world's top chess computers. This was “remarkable”, he said, because they were “carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters”.
He suggested that his results might underestimate Giraffe’s power, because it was likely that the test moves he used had already been written into the leading chess programs.
It was also possible that Giraffe knew about patterns that had not yet been studied by humans, he said.