
Magnus and the Turkey Grinder

December 8, 2016

With part II of our “When Data Serves Turkey” post

Baku Olympiad source—note similarity to this

Magnus Carlsen last week retained his title of World Chess Champion. His match against challenger Sergey Karjakin had finished 6–6 after twelve games at “Standard” time controls, but he prevailed 3–1 in a four-game tiebreak series at “Rapid” time controls. Each game took an hour or more under a budget of 25 minutes per player plus 10 extra seconds for each move played.

Today we congratulate Carlsen and give the second half of our post on large data being anomalous.

According to my “Intrinsic Performance Ratings” (IPRs), Carlsen played the tiebreak games as trenchantly as he played the standard games. I measure his IPR for them at 2835, though with wider two-sigma error bars ±250 than the 2835 ±135 which I measured for the twelve standard games. Karjakin, however, played the rapid games at a clip of 2315 ±340, significantly below his mark of 2890 ±125 for the regular match. The combined mark was 2575 ±215, against 2865 ±90 for the match. It must be said that of course faster chess should register lower IPR values. My preliminary study of the famous Melody Amber tournaments, whose Rapid sections had closely similar time controls, finds an overall dropoff slightly over 200 Elo points. Thus the combined mark was close to the expected 2610 obtained by subtracting that dropoff from the average of Carlsen’s 2853 rating and Karjakin’s 2772. That Carlsen beat his own 2650 expectation, modulo the error bars, remains the story.

Carlsen finished the last rapid game in style. See if you can find White’s winning move—which is in fact the only move that avoids losing:


The win that mattered most, though, was on Thanksgiving Day when Carlsen tied up the standard match 5–5 with a 75-move war of attrition. The site has named it the “Turkey Grinder” game. On this note we resume talking about some bones to pick over “Big Data.”

The Logistic Relationship

If you viewed the match on the official Agon match website, you saw a slider bar giving the probability {p} for one side or the other to win. Or rather—since draws were factored in—the {p} stands for the points expectation, which is the probability of winning plus half the probability of drawing. This is computed as a function {p = f(x)} of the value {x} of the position from the player’s side. The beautiful fact—which we have discussed before in connection with a 2012 paper by Amir Ban—is that {f} is an almost perfect logistic curve. Here is the plot for all available (AA) games at standard time controls in the years 2006–2015 with both players within 10 Elo points of the Elo 2000 level:


The “SF7d00” means that the chess program Stockfish 7 was run in Multi-PV mode to a variable depth between 20 and 30 ply. My scripts now balance the total number of positions searched so that endgame positions with fewer pieces are searched deeper. “LREG2” means the generalized logistic curve with two parameters. Using Wikipedia’s notation, I start with

\displaystyle  f(x) = A + \frac{K - A}{1 + e^{-Bx}}

and fix {K = 1 - A} to symmetrize. Then {A} is basically the chance of throwing away a completely winning game—and by symmetry, of winning a desperately lost game.
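In code, the symmetrized two-parameter curve is a one-liner (a sketch; the names `f`, `A`, `B` follow the formulas above):

```python
import math

def f(x, A, B):
    """Generalized logistic with K = 1 - A, which makes the curve
    symmetric about (0, 0.5): f(x) + f(-x) = 1 for any A and B."""
    return A + (1.0 - 2.0 * A) / (1.0 + math.exp(-B * x))
```

By construction {f(0) = 0.5} regardless of {A}: a dead-even position gives each side half a point in expectation.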

Chess programs—commonly called engines—output values in discrete units of {0.01} called centipawns (cp). Internally they may have higher precision but their outputs under the standard UCI protocol are always whole numbers of cp which are converted to decimal for display. They often used to regard {+10} or {+20} as the value for checkmate, but {+1000} has become standard. I still use {\pm 10} as cutoffs and divide the {x}-axis into “slots”

\displaystyle  -10, -9.99, -9.98, \dots, -0.02,-0.01,0.00,+0.01,+0.02,\dots,+9.99,+10.

Positions of value beyond the {\pm 10} cutoff belong to the end slots. Under a symmetry option, a position of value {x} goes into both the slot {x} for the player to move and the slot {-x} for the opponent. This is used to counteract the “drift” phenomenon discovered in this paper with my students that the player to move has a 2–3% lower expectation across all values—evidently because that player has the first opportunity to commit a game-chilling blunder.
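A minimal sketch of the slotting scheme just described (the function names are mine, not from my scripts; engine outputs arrive as whole centipawns, so no rounding is needed):

```python
def to_slot(x, cutoff=10.0):
    """Clamp a value to the ±cutoff range; positions beyond the
    cutoff belong to the end slots."""
    return max(-cutoff, min(cutoff, x))

def symmetrized(x):
    """Under the symmetry option, a position of value x contributes to
    slot x for the player enjoying it and slot -x for the opponent."""
    s = to_slot(x)
    return [s, -s]
```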

The “b100” means that adjacent slots with fewer than 100 moves are grouped together into one “bucket” whose {x} value is the weighted average of those slots. Larger slots are single buckets rather than divided into buckets of 100. The end slots and zero (when included) are single buckets regardless of size. Finally, the number after “sk” for “skedasticity” determines how buckets are weighted in the regression as I discuss further on.
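The “b100” grouping can be sketched like so. This is a simplification: it sweeps left to right and merges any under-filled leftover into the last bucket, whereas the runs described here come in from both ends.

```python
def bucketize(slots, min_size=100):
    """Group adjacent slots holding fewer than min_size moves.
    `slots` is a list of (x, count, score) triples in x order; a merged
    bucket's x is the count-weighted average of the slots it absorbs."""
    buckets, acc = [], []
    for slot in slots:
        acc.append(slot)
        if sum(n for _, n, _ in acc) >= min_size:
            total = sum(n for _, n, _ in acc)
            xbar = sum(x * n for x, n, _ in acc) / total
            buckets.append((xbar, total, sum(s for _, _, s in acc)))
            acc = []
    if acc and buckets:  # leftover under-filled slots join the last bucket
        xbar0, n0, s0 = buckets.pop()
        n1 = sum(n for _, n, _ in acc)
        xbar = (xbar0 * n0 + sum(x * n for x, n, _ in acc)) / (n0 + n1)
        buckets.append((xbar, n0 + n1, s0 + sum(s for _, _, s in acc)))
    return buckets
```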

The {y}-value of a bucket is the sum of wins plus half of draws by the player enjoying the value {x} (whose turn it might or might not be to move), divided by the size of the bucket. This is regressed to find the {A,B} most closely giving {y = f(x)}. The slope at zero is {f'(0) = \frac{1}{4}B(1-2A)}. The quantity

\displaystyle  p_1 = A + \frac{1 - 2A}{1 + e^{-B}}

gives the expectation when {1.00} ahead—figuratively the handicap at odds of a pawn. Note how close this is to 70% for players rated 2000.
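Both quantities are one-liners given the fitted parameters. A sketch: plugging in {A \approx 0} and the Stockfish value {B = 0.8921} reported below gives {p_1 \approx 0.709}, the near-70% figure.

```python
import math

def slope_at_zero(A, B):
    """Derivative of the symmetrized logistic f at x = 0."""
    return 0.25 * B * (1.0 - 2.0 * A)

def pawn_odds(A, B):
    """p1 = f(1): points expectation when 1.00 (a pawn) ahead."""
    return A + (1.0 - 2.0 * A) / (1.0 + math.exp(-B))
```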

The Law and the Scales

The fit is amazingly good—even after allowing that the {R^2} value, so astronomically close to {1}, is benefiting from the correlation between positions from the same game, many having similar values. Not only does it give the logistic relationship the status of a natural law (along lines we have discussed), but also Ban argues that chess programs must conform to it in order to maximize the predictive power of the values they output, which transmutes into playing strength. The robustness of this law is shown by this figure from the above-linked paper—being rated higher or lower than one’s opponent simply shifts the curve left or right:


This is one of several reasons why my main training set controls by limiting to games between evenly-rated players. (The plots are asymmetric in the tail because they grouped buckets from {-10} up to {+10} rather than come in from both ends as the present ones do.)

Most relevant to our goal, the {B} value determines the scale by which increases in value {x} translate into greater expectation, more directly than quantities like {p_1} or {f'(0)} do. Put simplistically, if a program values a queen at 10 rather than 9, its outputs inflate by 10/9, so one might expect its “{B}” to shrink by the factor 9/10. Early versions of Stockfish were notorious for their inflated scale. The goal is to put all chess programs on a common scale by mapping all their values to points expectations—and Ban’s dictum says this should be possible. By putting sundry versions of Stockfish and Komodo and Houdini (which placed 2nd to Stockfish in the just-concluded ninth TCEC championship) on the same scale as my earlier base program Rybka 3, I should be able to carry over my model’s trained equations to them in a simple and direct manner. Here is the plot for Komodo 10’s evaluations of the same 100,000+ game positions:


The fit is just as fine. The {A} values are small and equal to within {0.0006} so they can be dismissed. The {B} values are {0.9330} for Komodo against {0.8921} for Stockfish, giving a ratio of about {1.046}. The evaluations for 70% expectation, {0.9619} for Komodo and {1.0072} for Stockfish, have almost the same ratio {1.047} to three decimal places. So we should be able to multiply Komodo’s values by 1.046 and plug them into statistical tests derived using Stockfish, right?
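The proposed conversion is just a rescaling of the {x}-axis by the ratio of fitted {B} values (a sketch; it is exact only to the extent that {A \approx 0} for both engines):

```python
def convert(value, b_from, b_to):
    """Map an evaluation from one engine's fitted scale to another's so
    that the implied expectation matches: f_to(convert(x)) = f_from(x)
    when both A values are zero."""
    return value * (b_from / b_to)
```

Converting a Komodo value ({B = 0.9330}) to the Stockfish scale ({B = 0.8921}) multiplies it by about 1.046, the factor in the text.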

The error bars of {\pm 0.0184} on Komodo’s {B}, which are two-sigma (a little north of “95% confidence”), give some pause because they have {\pm 2\%} wiggle. This may seem small, but recall the also-great fit of the linear regression from (scaled) player error to Elo rating in the previous post. Under that correspondence, 2% error translates to 2 Elo points for every 100 below perfection—call that 3400. For Carlsen and Karjakin flanking 2800 that means only {\pm 12} Elo but grows to {\pm 28} for 2000-level players. Here is a footnote on how the “bootstrap” results corroborate these error bars and another data pitfall they helped avoid.
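The rule of thumb in this paragraph is linear and easy to sketch (the 3400 “perfection” ceiling and the per-100 scaling are the figures quoted above):

```python
def elo_wiggle(rating, pct_error, ceiling=3400):
    """A pct_error-percent scale wiggle translates to pct_error Elo
    points of uncertainty per 100 rating points below the ceiling."""
    return pct_error * (ceiling - rating) / 100.0
```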

But wait a second. This error-bar caveat is treating Komodo’s {B} as independent from Stockfish’s {B}. Surely they are completely systematically related. Thus one should just be able to plug one into the other with the conversion factor and get the same proportions everywhere, right? The data is huge and both the logistic and ASD-to-Elo regressions this touches on have {R^2 \gg 0.99} and the force of natural law. At least the “wiggle” can’t possibly be worse than these error bars say, can it?

Revenge of the Turkeys

Here are side-by-side comparison graphs with Stockfish and Komodo on the same set of positions played by players within 10 Elo points of 1750.


Now the Komodo {B} is lower. Here is a plot of the {B}-values for Komodo and Stockfish over all rating levels, together with the Komodo/Stockfish ratio:


The ratio waddles between 0.96 and 1.06 with a quick jag back to parity for the 2700+ elite players. An uncertainty of ±0.05 bespeaks a gap of 5 Elo points for every 100 under perfection, which makes a considerable 70-point difference for Elo-2000 players.

Well, we can try clumping the data into huger piles. I threw out data below 1600 and the 2800 endpoint—which has lots of Carlsen but currently excludes Karjakin since his 2772 is below 2780. I combined blocks of four levels at 1600–1750, 1800–1950, up to 2600–2750, and quadrupled the bucket size to match. Here is the plot for 2200–2350, with a move-weighted average of 2268:


With over 500,000 data points, mirrored to over a million, can one imagine a more perfect fit to a logistic curve? With Stockfish the {R^2} value even prints as unity. And yet, this is arguably the worst offender in the plot of {B} over these six piles:


The point for 2600–2750 goes down. It is plotted at 2645 since there are far more 2600-level than 2700-level players, and it must be said that the 2400–2550 pile has its center 2488 north of 2475 because the 2550 level includes all years whereas the 2000–2500 range starts in the year 2006. But the data point for 2200–2350 is smack in the middle of this range. Why is it so askew that neither regression line comes anywhere near the error bars for the data taken with the respective engine?

A Check on Just Gobbling Data

Getting a fixed value for the ratio is vital to putting engines on a common scale that works for all players. The above is anything but—and I haven’t even told what happens when Rybka and Houdini enter the picture. It feels like the engines diverge not based on their evaluation scales alone but on the differences in their values for inferior moves that human players tend to make, differences that per the part-I post correspond almost perfectly to rating. Given Amir Ban’s stated imperative to conform any program’s values to a logistic scale in order to maximize its playing strength, and the incredible fit of such a scale at all individual rating levels, how can this be?

I get similar wonkiness when I try to tune the ratio internally in my model, for instance to equalize IPRs produced with Komodo and Stockfish versions to those based on Rybka 3. There is also an imperative to corroborate results obtained via one engine in my cheating tests by executing the same process with test data from a different engine. This has been analogized to the ‘A’ and ‘B’ samples in doping tests for cycling, though those are taken at the same time and processed with the same “lab engine.”

I had hoped—indeed expected—that a stable conversion factor would enable the desirable goal of using the same model equations for both tests. I’ve become convinced this year that instead it will need voluminous separate training on separate data for each engine and engine version. A hint of why comes from just looking at the last pair of Komodo and Stockfish plots. All runs skip the bucket for an exact 0.00 value, which by symmetry always maps to 0.50. Its absence leaves a gap in Komodo’s plot, meaning that Komodo’s neighboring nonzero values carry more weight of imbalance in the players’ prospects than do 0.01 or -0.02 etc. coming from Stockfish. The data has 48,693 values of 0.00 given by Komodo 10 to only 43,176 given by Stockfish 7. Whereas, Komodo has only 42,350 values in the adjacent ranges -0.10 to -0.01 and +0.01 to +0.10 (before symmetrizing) to 47,768 by Stockfish. The divergence in plot results may be amplified by the “firewall at zero” phenomenon I observed last January. The logistic curves are dandy but don’t show the cardinalities of buckets, nor other higher-moment effects.

In the meantime I’ve been using conservative ratios for the other engines relative to Rybka. For example, my IPRs computed in such manner with Komodo 10 are:

  • Regular match: Carlsen 2765 ±170, Karjakin 2815 ±165, combined 2790 ±120.

  • Tiebreaks: Carlsen 2675 ±365, Karjakin 2245 ±360, combined 2460 ±260.

These are all lower by 70–100 points than the values I gave using Rybka. Critics of the regular match games in particular might agree more with these than with my higher official numbers, but this needs to be said: When I computed the Rybka-based IPR for the aggregate of moves in all world championship matches since FIDE’s adoption of Elo ratings in 1971, and compared it with the move-weighted average of the Elo ratings of the players at the time of each match, the two figures agreed within 2 Elo points. Similarly weighting the IPRs for each match in my compendium gives almost the same accuracy.

That buttresses my particular model, but the present trouble happens before the data even gets to my model. Not even the scaling stage discussed in the last post is involved here. This throws up a raw existential question.

Much of data analytics is about “extracting the signal from the noise” when there is initially a lot of noise. Multiple layers of standard filters are applied to isolate phenomena. But here we are talking about raw data—no filters. All we have observed are the smooth linear correspondence between chess rating and average loss of position value and the even more perfect logistic relation between position value and win/draw/loss frequency. All we did was combine these two relations. The question is:

How did I manage to extract so much noise from such nearly-perfect signals?

Open Problems

Can you see an explanation for this wonkiness in my large data? What caveats for big-data analytics does it bespeak?

The chess answer is that Carlsen played 50.Qh6+!! and Karjakin instantly resigned, seeing that 50…Kxh6 allows 51.Rh8 mate, while after 50…gxh6 the other Rook drives home with 51.Rxf7 mate.
