How AI has changed chess theory
People are losing to artificial intelligence on their own territory: computers already win at chess, go, poker, and even Dota 2. We have compiled a brief overview of these confrontations and tried to figure out what applied tasks game algorithms may solve in the future.
In 1914, the Spanish engineer and mathematician Leonardo Torres y Quevedo, who invented one of the first radio control systems, introduced a chess automaton. It was primitive and could play only the endgame, the final stage of the game, but none of the masters of the time could beat the Torres machine.
The First World War began that same year, and the Second World War later halted further progress. The next critical stage came only in 1955, when the term "artificial intelligence" was coined by the American scientist John McCarthy. Three years later, he created the Lisp programming language, which became the dominant language of early AI work.
In 1956, another engineer, Arthur Samuel, created the world's first self-learning checkers program. Samuel chose checkers for its simple rules, which nonetheless demand real strategy. The program was trained on ordinary game guides sold in stores, which described hundreds of games with good and bad moves. Three years later, Samuel introduced the concept of machine learning.
Interesting fact: in 1966, Joseph Weizenbaum introduced Eliza, the first-ever chatbot, which could hold a conversation in English. Weizenbaum built it to simulate a session with a psychotherapist, deliberately choosing a situation in which much depends on the ability to listen and pick out what matters in the interlocutor's remarks, something the computers of that time could not do. He wanted to show how unnatural communication between a person and a computer would be. Yet during testing it turned out that people felt genuine emotions in conversation with Eliza, as with a full-fledged interlocutor.
People's first losses
In 1985, Carnegie Mellon University began developing ChipTest, a chess-playing computer. In 1988, IBM joined the project, and the prototype was renamed Deep Thought. A year later, they decided to put it to the test and invited Garry Kasparov, who easily won both games.
In 1995, IBM introduced Deep Thought II, later renamed Deep Blue after the company's nickname, Big Blue. A year later came the first match between Kasparov and the improved computer. The human prevailed: of six games, Kasparov won three and lost one, and two ended in draws.
A year later, in May 1997, a much-improved Deep Blue scored two wins in the rematch, lost once, and drew three times, becoming the first computer to defeat a reigning world chess champion in a match.
In the early 2000s, computers consistently beat world champions, and chess became the first game in which people lost to computers.
AI for challenging games
Artificial intelligence developers started looking for a new challenge in more complex and unpredictable games. After Deep Blue's victory, an astrophysicist from Princeton University said that "it will take 100 years before a computer can beat a person at go, maybe even more." Researchers accepted the challenge and began building machines for this game, whose rules are simple but tough to master.
The first computers that could compete with humans appeared only in the 2010s. In 2014, Google DeepMind introduced the AlphaGo algorithm, which for two years competed with people on roughly equal terms, winning its first significant victory in October 2015 by defeating the European champion.
A year later, a user named Master appeared on Tygem, a popular Asian server where world champions also play. In a few days, Master played 60 matches without a single defeat, provoking outrage and suspicions of foul play. On January 4, 2017, Google revealed that an improved version of AlphaGo had been hiding behind the nickname all along.
In May 2017, AlphaGo, the same version that became famous online as Master, faced Ke Jie, the world's top-ranked go player, and won all three games. In October, Google DeepMind released a version even more powerful than Master: AlphaGo Zero taught itself without any human involvement, simply by playing against itself endlessly. After 21 days it reached Master's level; after 40 it was better than every previous version.
AlphaZero, an even more powerful variant of AlphaGo Zero, was released in December 2017. It surpassed its predecessor in eight hours, simultaneously reaching grandmaster level in chess. So go became the second game in which people could no longer win.
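Stripped to its essentials, the self-play idea behind AlphaGo Zero is just an agent that improves by playing both sides of a game against itself. Below is a minimal sketch of that loop, using tabular value learning on single-pile Nim instead of a deep network on go; the game choice, function names, and hyperparameters are illustrative, not DeepMind's:

```python
import random

ACTIONS = (1, 2, 3)  # stones a player may remove per turn

def train_self_play(episodes=50000, alpha=0.5, epsilon=0.2, seed=0):
    """One agent plays both sides of single-pile Nim (take 1-3 stones;
    whoever takes the last stone wins) and learns from the outcomes."""
    rng = random.Random(seed)
    Q = {}  # (stones_left, action) -> value for the player to move
    for _ in range(episodes):
        stones = rng.randint(2, 15)
        history = []  # (state, action) pairs, players alternating
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < epsilon:            # explore sometimes
                a = rng.choice(legal)
            else:                                 # otherwise play greedily
                a = max(legal, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who made the last move wins; walking the game
        # backwards, the reward flips sign at every ply.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    """Greedy move from the learned values."""
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))
```

After training, the agent rediscovers the classic Nim strategy of leaving the opponent a multiple of four stones, with no human knowledge beyond the rules, which is the same principle AlphaGo Zero demonstrated at vastly greater scale.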
Go and chess obey strict rules, so training an artificial intelligence on them is a matter of time. But there are games in which the human factor comes to the fore. Poker, for example, is essentially a psychological game built on emotions, non-verbal communication, and the ability to bluff and to recognize a bluff.
In 2017, after more than ten years of attempts and failures, two teams independently developed AI models capable of beating poker professionals. The University of Alberta presented DeepStack, a neural network with a synthetic form of intuition, and researchers from the already familiar Carnegie Mellon University showed Libratus. Over 20 days, Libratus played 120,000 hands against professionals, who gathered every evening to discuss possible loopholes and flaws in its play. The system, in turn, was refined after every game day, improving its results.
Libratus won $1.7 million (in virtual chips) from the professionals in less than a month. One participant described the experience this way: "It's like playing with someone who sees all your cards. I'm not accusing the neural network of unfair play. It's just that good."
Elon Musk and Dota 2
In 2015, Elon Musk and Sam Altman, president of Y Combinator, founded OpenAI to create open and friendly artificial intelligence.
In 2017, as part of an experiment, the team decided to train their neural network on Dota 2, a game in which two teams of five players fight each other using combinations drawn from more than a hundred heroes. Each hero has its own set of abilities, and players can collect items to strengthen their character. It is one of the biggest games in modern esports.
In two weeks, the neural network trained itself well enough to defeat several of the world's best players in one-on-one mode, and its creators are now preparing a version for the main format, five-on-five.
Stockfish
Stockfish is an open-source chess engine whose development dates back to 2008.
The engine's original developers are Tord Romstad, Marco Costalba, Joona Kiiski, and Gary Linscott. About a hundred more programmers (126, to be precise) have contributed over nine years of active development.
The engine supports 32-bit and 64-bit modes. For the last six or seven years, the fight for the computer chess championship has mostly been a rivalry between the Komodo and Stockfish engines, with the battle going back and forth.
In 2014 (Season 6), Stockfish took the championship title, beating Komodo 35.5:28.5.
At the end of the same year (Season 7), Komodo took revenge.
In Season 8 (November 2015), there was another match, and Komodo came out ahead again.
In Season 9 (December 2016), Stockfish beat Houdini in the final: 17 wins, 8 losses, and 75 draws.
2016 was a breakthrough year in the engine's history. Today Stockfish is, in many respects, ahead of its longtime rivals, Komodo and Houdini.
Stockfish's success is largely due to its distribution policy: once an improvement has been found and tested, the developers publish a new version for open testing.
The engine's code is also notably clean, with no serious bugs remaining.
Stockfish occupies second place in the CCRL rating list.
Stockfish has twenty difficulty levels.
Since the engine's practical strength vastly surpasses that of any human, including every world champion in history, its style only makes sense to evaluate in comparison with other top engines.
For example, it is believed that, in contrast to Komodo, whose strength is positional play, Stockfish puts more emphasis on tactics.
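Whatever their stylistic differences, Komodo, Houdini, and Stockfish all rest on the same core: a search that assumes both sides play the best moves the evaluation function can see. Here is a minimal sketch of that idea, negamax with alpha-beta pruning over an abstract game tree; the real Stockfish search adds iterative deepening, transposition tables, and many pruning heuristics on top, and the toy tree and function names below are purely illustrative:

```python
def negamax(node, depth, alpha, beta, evaluate, children):
    """Negamax with alpha-beta pruning: a position's value is the
    negation of the best value the opponent can reach from it."""
    kids = children(node)
    if depth == 0 or not kids:
        # evaluate() scores a position from the side-to-move's viewpoint
        return evaluate(node)
    best = float("-inf")
    for child in kids:
        score = -negamax(child, depth - 1, -beta, -alpha, evaluate, children)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # the opponent would avoid this line: prune the rest
    return best

# A toy two-ply tree: the root player picks a branch, the opponent replies.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF_SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}  # root player's viewpoint

def root_value():
    """Best score the root player can force: max(min(3, 5), min(2, 9)) = 3."""
    return negamax("root", 2, float("-inf"), float("inf"),
                   lambda n: LEAF_SCORES[n],
                   lambda n: TREE.get(n, []))
```

Swapping `evaluate` for a function that scores material and king safety, and `children` for a legal-move generator, turns this sketch into the skeleton of a real engine; a "tactical" versus "positional" style is largely a matter of how deep the search goes and what the evaluation rewards.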
At the beginning of 2018, algorithms from Alibaba and Microsoft surpassed humans on a reading-comprehension test.
In March 2018, a small robot solved a Rubik's Cube in 0.38 seconds. The human record is 4.69 seconds.
In May 2018, artificial intelligence began recognizing skin cancer better than human doctors.
According to a survey of more than 350 artificial-intelligence experts, algorithms will soon be able to beat us at any game. Within 10 years they will learn to drive better than we do, and by 2050 they will perform surgery more accurately than we can.
Having created neural networks that reach superhuman ability in games within days, researchers are now trying to find uses for them in real life. Google DeepMind is applying AlphaGo Zero's techniques to protein-folding research, hoping to find cures for Alzheimer's and Parkinson's diseases.
"Our ultimate goal is to use breakthroughs like AlphaGo to solve all kinds of pressing problems in the real world," says Demis Hassabis, CEO of the company. "If such algorithms can be applied in other situations, such as, for example, studying protein folding, reducing energy consumption, or creating new revolutionary materials, then this will greatly advance all of humanity and have a positive impact on our lives."