# The Singularity Is Here In Chess

*Chess machines can already beat all humans*

Kenneth Regan, our own Ken, is, among many other things, an international chess master. He combines a strong understanding of theory, of computing, and of chess, a combination that is almost unique in the world and has led him to work on many interesting questions concerning chess.

Today I am proud to announce that Ken’s work on chess is highlighted in the New York Times Tuesday Science section.

## Chess Cheating

Players can use computers to cheat at chess today—even at the world championship level. It is believed that programs running on laptops can play well above the level of any human player today, and the gap is growing as the programs get better and the laptops get faster and gain more memory. A player was even caught red-handed consulting a program on his smartphone during the final round of last June’s German Championship.

Of course tournament officials are always on guard to see if anything is amiss. But some cheating is alleged to have escaped human surveillance. This is where Ken enters the fray.

## Beyond Cheating

Ken’s work started as a method to discover indirectly whether or not a player cheated by using a computer. The central idea was simple: look for more agreement between the player’s moves and the program’s choices than would be statistically reasonable. But it is not that simple. Comparing human play with machine play raised many questions that went well beyond the narrow goal of detecting cheating.
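The flavor of such a test can be shown in a few lines. This is only a toy sketch of the idea, not Ken’s actual method, and all of the numbers below (match counts, the expected agreement rate) are invented for illustration:

```python
import math

def matching_zscore(matches, total_moves, p_expected):
    """Z-score for how far a player's rate of matching the engine's
    top choice exceeds the rate expected for a player of that strength,
    using a simple binomial (normal-approximation) model."""
    mean = total_moves * p_expected
    var = total_moves * p_expected * (1 - p_expected)
    return (matches - mean) / math.sqrt(var)

# Hypothetical numbers: a player whose rating predicts ~55% agreement
# with the engine matches on 180 of 250 analyzed moves.
z = matching_zscore(180, 250, 0.55)
# z is roughly 5.4 standard deviations above expectation, which in this
# toy model would be far more agreement than chance allows.
```

Of course the real difficulty, as the post notes, is estimating the expected agreement rate for a player of a given strength in the first place—which is exactly what Ken’s model addresses.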

What Ken and his co-authors have done in the last few years is study the record of chess games, not just with the goal of cheating detection, but to better understand how humans make complex decisions under pressure. This work is discussed in the Times article and of course in technical papers that Ken has written with his co-authors Guy Haworth, Giuseppe DiFatta, and Bartlomiej Macieja. Here are the two most recent papers:

- K. Regan and G. Haworth, “Intrinsic Chess Ratings.” Proceedings of AAAI 2011, San Francisco, August 2011.
- K. Regan, B. Macieja and G. Haworth, “Understanding Distributions of Chess Performances,” to appear in the proceedings of the 13th ICGA conference on Advances in Computer Games, Tilburg, Netherlands, November 2011.

I will just summarize what Ken has done by quoting him:

> I have created a statistical model of decision making in chess. As input the model takes (a) deep-and-broad computer analysis of chess positions, and (b) parameters modeling the skill level and attributes of a human (or cyber) player. It outputs (c) estimated probabilities for every move in every position by such a player, and (d) projected error bars on associated statistics, which appear to be within a factor of 1.15 (best-move choice frequency) to 1.4 (error frequency) of “true” in field tests done to date. The former error-bar test was first reported on our blog here. This work is based on over 20 million pages (figuring 2K/page) of data on human decision-making under pressure. It is significant that it is all from actual competition, not a simulation where there can be nagging doubt about true incentives, as in some economics/social-choice studies. Student helpers and I have written Perl code to collate the data and C++ code to analyze it, almost 200 dense single-spaced pages of code.
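To give a flavor of what such a model computes, here is a minimal toy with the same inputs and outputs: per-move evaluation losses in, per-move probabilities out. The parameter names echo the “skill level and attributes” idea, but the particular functional form and the numbers are my own stand-ins, not Ken’s published model:

```python
import math

def move_probabilities(deltas, s=0.1, c=2.0):
    """Toy per-move probability model.

    deltas: evaluation loss (in pawns) of each legal move relative to
            the engine's best move (0.0 for the best move itself).
    s, c:   made-up player parameters; roughly, s is how much a small
            eval loss deters the player and c is how sharply the weight
            falls off for larger losses.
    """
    weights = [math.exp(-((d / s) ** c)) for d in deltas]
    total = sum(weights)
    return [w / total for w in weights]

# A position with four legal moves: the best move, then moves losing
# 0.05, 0.3, and 1.0 pawns.  The model assigns the best move the highest
# probability, with near-misses still quite likely.
probs = move_probabilities([0.0, 0.05, 0.3, 1.0])
```

Fitting the player parameters to a game record, and then asking how surprising the observed moves are under the fitted model, is the shape of the cheating-detection application.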

In my opinion Ken’s work is quite non-trivial and could have applications to other human decision problems. In any event, Ken has been working on it for the last several years, and now it is good enough to be used in actual cheating cases and more, as the Times article notes. Way to go, Ken.

## Open Problems

Will a time come soon when we will “cheat” in doing mathematics by using a computer?

“Will a time come soon when we will “cheat” in doing mathematics by using a computer?”

Aren’t we there already? At the problem-solving level, we use tools such as Maxima, MATLAB, and Mathematica to do things faster (and I am not talking about numerical analysis per se, but more about letting the computer do the algebra/calculus for us). And there are also the people who work in automated theorem proving. :)

Does it matter if we “cheat” a little, if that leads to new insights?

Moral issues associated with medical “cheating” are complex… but as a purely factual matter, it is beyond doubt that everyone in the STEM, chess, and athletic communities increasingly competes with / collaborates with / teaches / learns from / is evaluated by colleagues who are doping.

I see your point. Thanks for the nice reference.

A review of the literature and a summary of ethical guidance for physician-patient decision-making is here. But what are a teacher’s responsibilities to students who inquire: “Many other students dope … should I / must I dope too?” These issues are far tougher for drug enhancement than for computer enhancement.

Was it Kasparov who suggested generalizing Chess by randomizing the placement of pieces in the first row? There are also variants on 10×10 boards, “fairy” chess pieces like the Knightrider, Grasshopper, Camel, etc. Do you believe that moving to these variants and destroying known openings and endgames would put humans ahead of computers again? Or are computers getting good enough (or at least fast enough) to handle these variants as well?

“Do you believe that moving to these variants and destroying known openings and endgames would put humans ahead of computers again?”

In my opinion, humans would be on top this way … but only for a little while, until the chess programmers catch up with those too.

It is an interesting question whether humans do better in the long run in these randomized games.

One advantage of a computer would be simply avoiding rookie mistakes, like letting your Grasshopper and your Camel get forked by a Goblin. And humans would be deprived of opening libraries and end-game analysis just like the computers.

Although I guess if the rules varied wildly enough, then computers might start having a lot of trouble just with the static evaluation of positions, as in Go.

Bobby Fischer

Thanks everyone. I’m on record at Hans Bodlaender’s great Chess Variants website as favoring a “non-random” version of Fischer’s chess, between it and David Bronstein’s proposal, here. I believe something like this will need to be adopted by 2040 or 2050.

Computers already seem to be equally imposing at Fischer’s chess. My one attempt so far to make a chess-like game that favors human-type thinking is to start each side off with 8 “tandem pawns”. A tandem pawn moves and captures like an ordinary pawn, and can also decouple itself by moving just one of the two pawns in it. Pawns cannot (re-)couple back into tandems—thus for every standard chess position, the game rules are the same. I’m considering also having one special “Rocket Move” where on any square in ranks 2-6 (not just the 2nd rank), one pawn of a tandem may shoot forward 2 squares while the other one “recoils” backwards one square, even promoting if the tandem is on the 6th rank. This would allow having pawns on the first rank, without disturbing the property of being a “conservative extension” with regard to legal positions in standard chess.

The idea is to emphasize pawn structure and make plans longer-term, which I take to favor humans.
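To pin down one reading of the tandem-pawn rules, here is a minimal sketch of the forward (non-capturing) moves available from a single square. The move names, the tuple encoding, and the choice to omit double-steps for tandems are my assumptions; captures, blockers, promotion, and the speculative Rocket Move are left out:

```python
def forward_moves(rank, stack):
    """Non-capturing forward moves for White pawns on one square.

    stack == 2 means a coupled 'tandem pawn'; stack == 1 is an ordinary
    pawn.  Each move is (kind, pawns_arriving_on_new_square,
    pawns_left_behind).  This is only one reading of the rules above:
    a tandem moves as a unit like a pawn, or decouples by advancing
    just one of its two pawns; pawns never re-couple.
    """
    moves = []
    if stack == 2:
        moves.append(("advance as a tandem", 2, 0))
        moves.append(("decouple: one pawn advances", 1, 1))
    if stack == 1:
        moves.append(("advance", 1, 0))
        if rank == 2:
            moves.append(("double-step", 1, 0))
    return moves
```

Note that once every tandem has decoupled, the move generator reduces to the ordinary pawn rules, matching the “conservative extension” property.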

That sounds fun! Have you played it much? Does it feel strategically very different from conventional chess?

Have you looked into Arimaa?

Not personally (Arimaa), beyond one article describing the rules and strategy, though I know it was conceived expressly to be a game where humans could compete with computers. Do we?

If you want to have a chance against computers just switch to Go (baduk, weichi).

I’m okay with all that, because playing politics is the greatest cheating in human history!

Actually, I personally have already “cheated” in doing mathematics, if it is cheating at all. In a nutshell, I had a conjecture that some function on a high-dimensional combinatorial space is smooth in the sense that from any point we can reach a global minimum by monotonically decreasing the function. The conjecture came from a computer simulation: whatever the starting point, a random walk forced not to increase the function always ended in a global minimum.

More interestingly, when we wanted to prove that the conjecture is true, we asked the computer to show such monotonically decreasing paths for some cases that we had found the hardest. By analyzing these paths we got the clue for how to prove the conjecture, which is now a theorem.

Is that cheating?
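The kind of experiment described can be mimicked in a few lines. The bit-string space and the counting function below are invented stand-ins for the (unstated) combinatorial space, chosen so that the conjectured behavior trivially holds:

```python
import random

def non_increasing_walk(f, neighbors, start, steps=1000):
    """Random walk that never increases f: from the current point,
    move to a uniformly chosen neighbor with f no larger than here."""
    x = start
    for _ in range(steps):
        options = [y for y in neighbors(x) if f(y) <= f(x)]
        if not options:
            break  # stuck: no non-increasing move exists
        x = random.choice(options)
    return x

# Toy space: bit-strings under single-bit flips, with f = number of 1s,
# so the unique global minimum is the all-zeros string.
f = sum
def neighbors(x):
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

end = non_increasing_walk(f, neighbors, (1, 0, 1, 1, 0, 1))
# Here every allowed step strictly decreases f, so the walk always
# reaches (0, 0, 0, 0, 0, 0); on a real conjecture one would run many
# random starts and check whether any walk gets stuck above the minimum.
```

Whether a walk can get stuck at a non-minimal point is exactly what such a simulation probes.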

Chess is a game of perfect information, which means that an optimal strategy exists. This is in contrast with automated theorem proving, which is not, in the general case, a game of perfect information.

In mathematics, what is mainly considered “cheating” is not having a computer produce a proof, but rather trying to understand “what is going on” without proving anything, based on heuristic arguments, numerical calculations, other computer experimentation, and even setting up an array of conjectures and meta-conjectures. This is a source of tension between pure and applied mathematics, and also at times with other scientific areas that use mathematical formalism.

Congratulations, Ken, on this recognition. Using statistics to detect various kinds of cheating, and identifying imperfect human decision making (which is a sort of Turing test in reverse), are very exciting areas, and so, of course, is chess, both human and computerized.

@Christopher:

That wasn’t Kasparov. That was Fischer.

@rjlipton,

Regarding mathematicians cheating with computers: isn’t the use of Mathematica and/or Maple considered as such?

Also, Shalosh B. Ekhad has been publishing papers with Doron Zeilberger for some time already.

Why not let players use computers? This ups the ante in professional chess to force the top players to develop/acquire better software/hardware to improve their game if they want to remain competitive. This favors people like Ken who deeply understand both the game and the algorithmics of chess. Maybe chess will become like auto racing with the player/driver controlling a computer/car that gets money from sponsors.

Regarding variations that supposedly favor humans, I think machines can come up to speed on those much faster than humans. Humans acquire an advantage from meta-level intuition which derives from experience, but that only comes with time. It also reaches a limit (Kasparov?) imposed by our biology. In other words, machines become competent faster and eventually surpass humans, but there’s a sweet spot when humans tend to dominate.

Allowing players to use computers would take away one of the pleasures a chess spectator has: seeing an expert player’s eyes bulge after blundering away a piece or the whole game. A computer simply isn’t going to forget that a piece is en prise, or fail to see a knight fork a few ply ahead because the position is complex. Watching humans + computers play chess is basically the same as watching computers play chess… mostly interesting only to chess programmers.

Regarding the “singularity” in doing mathematics and science, here is a recent article about it (featuring Zeilberger and Gowers on the mathematics side), which also discusses a program, Eureqa, for automatically building mathematical models of scientific phenomena: http://www.haaretz.com/weekend/magazine/an-israeli-professor-s-eureqa-moment-1.410881

Eureqa is cute but hardly brings us any closer to the singularity.

Regarding the chess singularity, just this week Chessbase continued that magazine’s longstanding tradition of breaking uniquely interesting stories by reporting the solving of the King’s Gambit (a famous chess opening). As the article reports, plans are now underway to use the spare cycles of Google’s server farm to solve chess completely during the coming year, similar to the way Jonathan Schaeffer and his colleagues recently solved the game of checkers with the search engine called Chinook. Google expects the search to be complete by April 1, 2013. Kudos to author Vasik Rajlich for his incredible account.

Heh, I just read your link and was fooled by it. A very clever April Fools’ joke (as is explained in a later Chessbase article).

We’ve already been using computers to find proofs: http://en.wikipedia.org/wiki/Four_color_theorem

hey Dick and Ken, when can we expect you guys to write on the late Turing award winner and his work?

latest*

We congratulated him at the end of this post, and have discussed doing more.

I do not think much thought should be given to computers being better than humans in chess. A computer does one particular task well, and all computers are invented by humans.

Also, we don’t seem to mind that machines can run faster than Usain Bolt, or that gorillas can beat up any American football player. So why worry if something else is better than humans at a particular task?