
Coins on a Chessboard

February 21, 2016


A 35-year-old puzzle with extras

[Photo of Maxim Kontsevich; DeLong Lecture Series source]

Maxim Kontsevich has established deep connections between algebraic geometry and mechanisms of physics. He won a Fields Medal in 1998 for “contributions to four problems of geometry” and recently won one of the inaugural Breakthrough Prizes in Mathematics, having earlier won the corresponding prize in Fundamental Physics. He divides his time between the University of Miami and the Institut des Hautes Études Scientifiques (IHES), which boasted Alexander Grothendieck at its inception. His IHES bio says that he is known for

…drawing on the systematic utilization of known deformations of algebraic structures and the introduction of new ones that have been revealed to pertain to many other questions where there were no a-priori connections.

Today we discuss a much lighter topic he thought up as a teenager that uses coins on a chessboard.

The puzzle uses only coins—or pebbles or pawns—no other chess pieces. The game starts with one or more coins in squares near the lower-left corner, which we’ll call square {a1}. The only move in the game is that if a coin is on a square—say {b4}—and the squares {c4} to its right and {b5} one step forward are empty, then the coin can be removed from {b4} and new coins placed on the squares {b5} and {c4}. Thus each coin “clones itself” into two while vacating the square it was on. Variously the board may be limited to {8 \times 8} or {n \times n} or allowed to be infinite up and/or rightward.
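To make the rule concrete, here is a minimal Python sketch of a single cloning move. The function name clone_move and the coordinate convention, (column, row) pairs with {a1} = (0, 0), are our own choices, not part of the puzzle’s statement.

```python
def clone_move(board, cell):
    """Apply the cloning move at `cell`.  `board` is a set of occupied
    (col, row) squares with a1 = (0, 0).  The coin on `cell` is removed and
    clones appear one square to the right and one square up, both of which
    must currently be empty."""
    x, y = cell
    right, up = (x + 1, y), (x, y + 1)
    if cell not in board:
        raise ValueError("no coin on the source square")
    if right in board or up in board:
        raise ValueError("a target square is occupied")
    return (board - {cell}) | {right, up}
```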

A subset {P} of cells including {a1} is designated a “prison,” and initially one or more cells in {P} (and perhaps some outside {P}) are occupied to make an initial configuration {C_0}. The game is won if moves are made to reach a configuration {C} in which all cells in {P} are empty. Figuratively speaking, the prisoners inside {P} have broken out—their clones have escaped. The question is:

Given {P} and {C_0}, can the game be won? And what is the complexity of deciding this?

Some Escapes

The simplest nonempty prison is {P = \{a1\}} and {C_0 = \{a1\}}. The coin escapes in one move. So let’s next take {C_0 = \{a1,a2,b1\}} so that its movement is initially blocked. Then it can escape in four moves:

\displaystyle  b1,b2,a2,a1.

We need only give a move’s source square. At the end there are coins on {a2,a3,b1,b2,b3,c1,c2}. We might go on to try to clear {a2} and {b1} but let’s ask about that later. Go back to {C_0 = \{a1\}} only, but let’s widen the prison instead to {P = \{a1,a2,b1\}}. After the first move {a1} we need to clear the new coins on {a2} and {b1}, and their moves block each other at {b2}. But after the moves {b1,b2,a2} we win.
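Both escapes can be replayed and checked mechanically with the clone_move sketch above; the square names below are just shorthand for our (column, row) coordinates.

```python
A1, A2, A3 = (0, 0), (0, 1), (0, 2)
B1, B2, B3 = (1, 0), (1, 1), (1, 2)
C1, C2     = (2, 0), (2, 1)

# C_0 = {a1, a2, b1} escaping the prison P = {a1}:
board = {A1, A2, B1}
for src in (B1, B2, A2, A1):
    board = clone_move(board, src)
assert board == {A2, A3, B1, B2, B3, C1, C2}   # the seven squares listed above

# A single coin escaping the prison P = {a1, a2, b1}:
board = {A1}
for src in (A1, B1, B2, A2):
    board = clone_move(board, src)
assert not board & {A1, A2, B1}                # the prison is clear
```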

Now let’s make it harder by widening the prison to include {b2} too. We can still win in four more moves: {b3,c3,c2,b2}. Using a pretty applet designed by Seth Weiss of Lincoln-Sudbury Regional High School in Massachusetts, we have this configuration:


[Figure: ChessPebbleConfig, the resulting configuration drawn in the applet]
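Continuing in the same Python sketch, the four further moves clear the widened prison as claimed:

```python
B4, C3, C4 = (1, 3), (2, 2), (2, 3)
D2, D3     = (3, 1), (3, 2)

board = {A3, B2, B3, C1, C2}            # where the single-coin escape left off
for src in (B3, C3, C2, B2):
    board = clone_move(board, src)
assert board == {A3, B3, B4, C1, C2, C3, C4, D2, D3}
assert not board & {A1, A2, B1, B2}     # the widened prison is also clear
```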


The next question to ask is, can we enlarge our cleared space one cell further? Or are too many clones stepping on each other’s toes? This turns out to be equivalent to asking, can we win the case {C_0 = \{a1\}}, {P = \{a1,a2,b1,b2,a3\}}, perhaps also adding {c1} to {P}?

A Proof Idea

The most common starting setup in recent sources of this puzzle uses {P = C_0 = \{a1,a2,b1\}}. Can the clones win? This is featured in a video by Zvezdelina Stankova of Mills College and UC Berkeley for the Berkeley Math Circle, which she founded.

Here is the starting configuration together with the idea of assigning a weight to each square. The weight is {1/2^d} where {d} is the distance to the lower-left corner in horizontal plus vertical steps.


[Figure: ChessPebbleWeights, the starting configuration with the weight {1/2^d} written on each square]


The three coins have total weight {2}, the corner contributing {1} and the others {1/2} each. The key idea is:

Every move by a coin preserves the total weight of the configuration {C}.

Indeed, a coin at distance {d} has weight {1/2^d}, and its two clones, each at distance {d+1}, together weigh {2 \cdot \frac{1}{2^{d+1}} = \frac{1}{2^d}}. If and when we clear the prison {P}, the coins outside it must have total weight {2}. Now the total weight of the infinite board is {4}. We can verify this by noting that the bottom row is a geometric series summing to {2}, and each further row has half the weight in each square, so the total is

\displaystyle  2 + 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 4.

The upshot is that {P} itself has weight {1 + \frac{1}{2} + \frac{1}{2} = 2}, so the whole region outside {P} contributes only weight {2}, and any finite set of coins outside it must have total weight strictly less than {2}. It is hence impossible to clear {P}. This is a neat idea.
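A quick numerical check of the invariant, in the same Python sketch as before (the weight function is our own naming):

```python
from fractions import Fraction

def weight(board):
    """Total weight of a configuration: square (x, y) counts 1/2^(x+y)."""
    return sum(Fraction(1, 2 ** (x + y)) for (x, y) in board)

board = {A1, A2, B1}                    # P = C_0 = {a1, a2, b1}, total weight 2
assert weight(board) == 2
for src in (B1, B2, A2, A1):            # the escape played earlier
    board = clone_move(board, src)
    assert weight(board) == 2           # each move preserves the total weight
```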

Now try applying this idea to the cases in the last section. Kontsevich originally contributed the puzzle to the 1981 Soviet “Tournament of the Towns” math competition with the easier case of a single coin in the corner and {P} having the ten cells within distance {3}. The coin has weight {1} but {P} totals {3\frac{1}{4}}, leaving less than {1} outside. This made the “no” answer fine for a timed question. The above case {P = \{a1,a2,a3,b1,b2,c1\}}, however, totals only {2\frac{3}{4}}. So maybe it is possible—or maybe it becomes possible if we don’t require clearing {c1}? We’ll leave that to you, our readers, with unlimited time.
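The same weight function and coordinate shorthand reproduce the prison totals quoted here:

```python
# Kontsevich's original prison: the ten cells within distance 3 of the corner.
original = {(x, y) for x in range(4) for y in range(4) if x + y <= 3}
assert weight(original) == Fraction(13, 4)                   # 3 1/4, leaving < 1 outside

# The six-cell prison discussed above.
assert weight({A1, A2, A3, B1, B2, C1}) == Fraction(11, 4)   # only 2 3/4
```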

A Surprising Algorithm

One further idea that makes playing the game less dependent on strategy is that we can allow moves that pile up more than one coin on a square, provided we later clear all but one coin from that square. Any legal configuration {C} reached this way can also be reached by strictly legal moves played in a different order.

This yields the idea of a “diagonal sweep”: given {C_0}—or any configuration {C}—go out by diagonals, starting at the corner, until the last diagonal that intersects a cell in {P}. For each nonempty cell on each diagonal, play these extended pile-on moves until the cell is empty if it belongs to {P}, or until it holds just one coin if it does not. Stop after finishing the last diagonal, even though the resulting configuration {C'} will generally be illegal, with some squares holding multiple coins.
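Here is a minimal Python sketch of one sweep as we read this description, keeping coin multiplicities in a Counter; we deliberately leave out the telltale property mentioned below so as not to spoil the puzzle.

```python
from collections import Counter

def diagonal_sweep(config, prison):
    """One pile-on sweep.  `config` is an iterable of occupied (col, row)
    squares; `prison` is the set P.  Diagonals d = 0, 1, ... are processed up
    to the last one meeting P: a cell in P is emptied, a cell outside P is
    reduced to at most one coin, and every removed coin deposits one clone to
    the right and one clone above, possibly piling several coins on a square.
    Returns the (generally illegal) multiset C' as a Counter."""
    board = Counter(config)
    last = max(x + y for (x, y) in prison)
    for d in range(last + 1):
        for x in range(d + 1):
            cell = (x, d - x)
            n = board[cell]
            moves = n if cell in prison else max(n - 1, 0)
            if moves:
                board[cell] -= moves
                board[(x + 1, d - x)] += moves   # clones to the right
                board[(x, d - x + 1)] += moves   # clones above
    return +board                                # drop zero-count squares
```

For instance, diagonal_sweep({(0, 0)}, original) runs the sweep for a single corner coin against Kontsevich’s ten-cell prison, using the prison set defined in the earlier snippet.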

A single sweep can be executed in time at worst quadratic in the size of {P} (assuming {P} is connected). Amazingly, a simple property of {C'} decides whether the game {(P,C_0)} is winnable. So the decision problem “can you win?” has a polynomial-time algorithm.

Can you work out what this telltale property is? Can you apply it to answer the cases left open above?

If you are stumped you can always consult a 1995 paper by Fan Chung, Ron Graham, John Morrison, and Andrew Odlyzko which proved this result and much more. A 2010 paper by Qiang Zhen and Charles Knessl explores more of the beautiful numerical results and problems associated with this puzzle.

Now With Real Coins

We at GLL always try to offer something new, and this is where we will finally justify calling the markers coins. Let a coin showing heads move as above, but also have the option of cloning itself into three coins showing tails, filling the cell one diagonal step to the northeast as well as the usual cells one step up and one step to the right. A coin showing tails can clone itself into two tails coins as before, but can also clone into two heads coins, each a knight’s move to the northeast. The knight moves can jump over obstacles as in chess, but for strictly legal play the landing squares must be empty.
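Here is a hedged Python sketch of the new move options, under our reading that a standard move keeps a coin’s side while the triple move flips heads to tails and the knight move flips tails to heads; the example below is consistent with this reading, and the names MOVES and variant_move are ours.

```python
# Clone patterns for the heads/tails variant: (target offsets, side of the clones).
MOVES = {
    'H': [([(1, 0), (0, 1)], 'H'),             # standard move (clones stay heads)
          ([(1, 0), (0, 1), (1, 1)], 'T')],    # heads -> three tails, incl. diagonal NE
    'T': [([(1, 0), (0, 1)], 'T'),             # standard move (clones stay tails)
          ([(1, 2), (2, 1)], 'H')],            # tails -> two heads, knight's moves NE
}

def variant_move(board, cell, option):
    """Apply clone option `option` (0 or 1) at `cell`.  `board` maps occupied
    (col, row) squares to 'H' or 'T'.  For strictly legal play every target
    square must be empty; returns the new board, or None if the move is illegal."""
    side = board.get(cell)
    if side is None:
        return None
    offsets, new_side = MOVES[side][option]
    x, y = cell
    targets = [(x + dx, y + dy) for (dx, dy) in offsets]
    if any(t in board for t in targets):
        return None
    new_board = {sq: s for sq, s in board.items() if sq != cell}
    new_board.update({t: new_side for t in targets})
    return new_board
```

With this sketch, the opening move of the example below is variant_move({(0, 0): 'H'}, (0, 0), 1), which yields tails on {a2}, {b2}, and {b1}.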

For example, a starting heads coin at {a1} can become tails at {a2,b2,b1}. Then we can replace {a2} by heads at {b4} and {c3}, and {b2} can jump over to give heads at {c4} and {d3}. The tails coin at {b1} is blocked from its knight’s move because {c3} is occupied. However, we can play four standard heads-coin moves to clear {c3}, and finally {b1} clones to occupy {c3} and {d2} with all coins heads again. The resulting configuration {C} is:


[Figure: KnightPebbles, the final all-heads configuration]


Thus the new rules have enabled us to win Kontsevich’s original impossible case. This was possible because the new rules do not conserve weight. The configuration {C} has total weight

\displaystyle  \frac{3}{16} + \frac{2}{32} + \frac{3}{64} + \frac{2}{128} = \frac{5}{16} = 0.3125,

so it went down. However, the bottom row and leftmost column can never be used again. Hence it is equivalent to shift {C} one cell diagonally to the southwest. Then its weight is multiplied by {4}, giving {1.25}. Devising weighting rules to handle these multiple options and factors seems challenging but feasible.
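For the record, the arithmetic checks out exactly:

```python
from fractions import Fraction

final_weight = (3 * Fraction(1, 16) + 2 * Fraction(1, 32)
                + 3 * Fraction(1, 64) + 2 * Fraction(1, 128))
assert final_weight == Fraction(5, 16)     # 0.3125, down from the starting weight 1
assert 4 * final_weight == Fraction(5, 4)  # 1.25 after the southwest shift
```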

I’ve just thought of this variant, and am not taking time to research it beyond some sanity checks before posting. So the field is all open for you, our valued readers. Does it allow every finite region {P} to be cleared? If so, what tightening of the rules would make the decision problem nontrivial again? Could it be undecidable? Similar questions for John Conway’s game of Life are undecidable. The games we’ve been discussing here are more similar to evolutions of two-state one-dimensional cellular automata but again our variants seem to be different.

Open Problems

Do you find our new heads/tails version of the chessboard pebbling game interesting? Is its decision problem nontrivial, and is there an efficient algorithm for it?

Update 2/27/16: See the reader comment below showing that not all regions can be cleared, so the decision problem is nontrivial.

[Update: changed “known for” to “drawing on” and added the second prize in the intro.]

7 Comments
  1. February 22, 2016 2:20 am

    A small comment: Kontsevich is winner of TWO inaugural Breakthrough Prizes, one in Fundamental Physics (2012) and one in Mathematics (2015).

    • February 22, 2016 9:07 am

      Ah—added, thanks. Incidentally we plan on mentioning two other chessboard+coin puzzles; they would have been too long for this post.

  2. February 22, 2016 11:01 am

    Dear Professor Regan,

    We aim to show the answer to the P versus NP problem. Our principal argument is based on a technique that has been used throughout history in both formal mathematical and philosophical reasoning, as well as informal debate: reductio ad absurdum. Reductio ad absurdum is a common form of argument which seeks to demonstrate that a statement is true by showing that a false, untenable, or absurd result follows from its denial, or in turn to demonstrate that a statement is false by showing that a false, untenable, or absurd result follows from its acceptance.

    We start assuming the hypothesis of P = NP for the sake of contradiction. The contradiction is supported by a Theorem that states if P = NP, then the problem SUCCINCT HAMILTON PATH would be in P.

    The problem SUCCINCT HAMILTON PATH is defined as follows:

    A succinct representation of a graph with n nodes, where n = 2^{b} is a power of two, is a Boolean circuit C with 2 * b input gates. The graph represented by C, denoted G_{C}, is defined as follows: The nodes of G_{C} are {1, 2, …, n}. And (i, j) is an edge of G_{C} if and only if C accepts the binary representations of the b-bit integers i, j as inputs. Since the number n is not a b-bit integer, we represent it as the b-bit integer 0. The problem SUCCINCT HAMILTON PATH is now this: Given the succinct representation C of a graph G_{C} with n nodes, does G_{C} have a Hamilton path? The problem SUCCINCT HAMILTON PATH is NEXP-complete.

    In our Theorem, we take an arbitrary succinct representation C of a graph G_{C} with n nodes, where n = 2^{b} is a power of two and C will be a Boolean circuit of 2 * b input gates. Interestingly, if C is a “yes” instance of SUCCINCT HAMILTON PATH, then there will be a linear order Q on the nodes of G_{C}, that is, a binary relationship isomorphic to “<” on the nodes of G_{C}, such that consecutive nodes are connected in G_{C}.

    This linear order Q must require several things:

    * All distinct nodes of G_{C} are comparable by Q,
    * next, Q must be transitive but not reflexive,
    * and finally, any two consecutive nodes in Q must be adjacent in G_{C}.

    Any binary relationship Q that has these properties must be a linear order, any two consecutive elements of which are adjacent in G_{C}, that is, it must be a Hamilton path.

    On the other hand, the linear order Q can be actually represented as a graph G_{Q}. In this way, the succinct representation C_{Q} of the graph G_{Q} will represent the linear order Q too. We can define a polynomially balanced relation R_{Q}, where for all succinct representation C of a graph: There is another Boolean circuit C_{Q} that will represent a linear order Q on the nodes of G_{C} such that (C, C_{Q}) is in R_{Q} if and only if C is in SUCCINCT HAMILTON PATH.

    It is easy to see that the graphs G_{C} and G_{Q} will comply with |G_{Q}| < |G_{C}|^{3} when (C, C_{Q}) is in R_{Q}, since both graphs would have the same number of nodes and G_{C} would contain a Hamilton path. Consequently, we obtain the same property for their succinct representations, that is, C_{Q} should be polynomially bounded by C. Indeed, the Boolean circuits C and C_{Q} will be exponentially more succinct than G_{C} and G_{Q} respectively. Hence, if C_{Q} were not polynomially bounded by C when (C, C_{Q}) is in R_{Q}, then the graph G_{Q} would not be polynomially bounded by G_{C}: but we have just mentioned this cannot be true.

    We finally show R_{Q} would be polynomially decidable if P = NP. For this purpose, we use the existential second-order logic expressions, used to express this graph-theoretic property (the existence of a Hamilton path), that are described in Papadimitriou’s Computational Complexity book: Chapter “5.7 SECOND-ORDER LOGIC”, Example “5.12”, page 115. Indeed, we just simply replace those expressions with Universal quantifiers into equivalent Boolean tautologies.

    Certainly, a language L is in class NP if there is a polynomially decidable and polynomially balanced relation R such that L = {x: (x, y) in R for some y}. In this way, we show that SUCCINCT HAMILTON PATH would be in NP if we assume our hypothesis, because the relation R_{Q} would be polynomially decidable and polynomially balanced when P = NP. Moreover, since P would be equal to NP, we obtain that SUCCINCT HAMILTON PATH would be in P too.

    But, we already know that if P = NP, then EXP = NEXP. Since SUCCINCT HAMILTON PATH is NEXP-complete, it would then be EXP-complete, because the completeness of both classes uses the polynomial-time reduction. But, if some EXP-complete problem is in P, then P should be equal to EXP, because P and EXP are closed under reductions and P is a subset of EXP. However, as a result of the Hierarchy Theorem the class P cannot be equal to EXP. To sum up, we obtain a contradiction under the assumption that P = NP, and thus, we can claim that P is not equal to NP as a direct consequence of applying the Reductio ad absurdum rule.

    You could see the details in paper "Is P equal to NP?":

    https://hal.archives-ouvertes.fr/hal-01270398/document

    Best Regards,
    Frank Vega

  3. o0o
    February 23, 2016 10:39 am

    Kontsevich’s original rule is an identity in Napier’s Location Arithmetic (in a suitable basis): if a position is taken to represent a number in location arithmetic, the rule leaves the quantity unchanged.

    https://en.wikipedia.org/wiki/Location_arithmetic

    I wonder if this is what inspired the puzzle.

  4. February 26, 2016 12:57 pm

    Not every P can be cleared. Weight each square (1/2)^(d/2), where d is the distance from the lower left corner, and ignore whether a coin is heads or tails. Now, no move decreases the total weight, so starting with too much weight in P leads to an impossible problem.

    • February 27, 2016 11:26 pm

      Nice idea to slow down the growth of weight while keeping it finite. I’ve added an update. One can go on to explore exactly which finite regions can’t be cleared. This validates my sanity-check impression from playing on after the last diagram shown. Thanks!

Trackbacks

  1. One Flip Sends Many Bits | Gödel's Lost Letter and P=NP
