# Separating Words: Decoding a Paper

*A clever trick on combining automata*

John Robson, who goes by Mike, has worked on various problems, including what is still the best result on separating words, the topic we discussed the other day. Ken first knew him for his proof that checkers is EXPTIME-complete and similar hardness results for chess and Go.

Today I want to talk about his theorem that any two words can be separated by an automaton with relatively few states.

In his famous paper from 1989, he proved an upper bound on the *Separating Word Problem*. This is the question: Given two distinct strings $x$ and $y$, how many states does a deterministic finite automaton need in order to accept $x$ and reject $y$? His theorem is:

Theorem 1 (Robson’s Theorem). Suppose that $x$ and $y$ are distinct strings of length $n$. Then there is an automaton with at most $O(n^{2/5}\log^{3/5} n)$ states that accepts $x$ and rejects $y$.

The story of his result is involved. For starters, it is still the best upper bound after almost three decades. Impressive. Another issue is that a web search does not quickly, at least for me, find a PDF of the original paper. I tried to find it and could not. More recent papers on the separating word problem reference his 1989 paper, but they do not explain how he proves it.

Recall the problem of separating words is: Given two distinct words of length $n$, is there a deterministic finite automaton that accepts one and rejects the other? And the machine has as few states as possible. Thus his theorem shows that the number of states grows at most roughly like the square root of $n$.
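
To get a concrete feel for the problem, here is a tiny brute-force search, my own illustration rather than anything from Robson's paper, that finds the minimum number of DFA states needed to separate two short words:

```python
from itertools import product

def accepts(delta, accept, word):
    """Run a DFA given as (transition dict, accepting set) from state 0."""
    state = 0
    for c in word:
        state = delta[(state, c)]
    return state in accept

def min_separating_dfa_size(x, y, max_states=4):
    """Exhaustive search: smallest number of DFA states accepting exactly
    one of x, y. Feasible only for very short words."""
    alphabet = sorted(set(x) | set(y))
    for k in range(1, max_states + 1):
        keys = [(s, c) for s in range(k) for c in alphabet]
        for trans in product(range(k), repeat=len(keys)):
            delta = dict(zip(keys, trans))
            for bits in product([False, True], repeat=k):
                accept = {s for s in range(k) if bits[s]}
                if accepts(delta, accept, x) != accepts(delta, accept, y):
                    return k
    return None

print(min_separating_dfa_size("01", "10"))  # -> 2
```

One state can never work (both words end in the same state), so two is optimal here; the search space grows far too fast for this to say anything about long words, which is exactly why bounds like Robson's matter.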

I did finally track the paper down. The trouble for me is the paper is encrypted. Well, not exactly, but the version I did find is a poor copy of the original. Here is an example to show what I mean:

[ An Example ]

So the task of decoding the proof is a challenge. A challenge, but a rewarding one.

## A Cool Trick

Robson’s proof uses two insights. The first is that he uses some basic string-ology, that is, some basic facts about strings. For example, he uses the fact that a non-periodic string cannot overlap itself too much.

He also uses a clever trick for simulating two deterministic machines for the price of one. In general this is not possible, and it is related to deep questions about automata that we have discussed here before. Robson shows that it can be done in a special but important case.

Let me explain. Suppose that $w$ is a string. We can easily design an automaton that accepts an input $x$ if and only if $x$ is the string $w$. The machine will have on the order of the length of $w$ states. So far quite simple.

Now suppose that we have a string $S$ of length $n$ and wish to find a particular occurrence of the pattern $w$ in $S$. We assume that there are $k$ occurrences of $w$ in $S$. The task is to construct an automaton that accepts at the end of the $j$-th copy of $w$. Robson shows that this can be done by an automaton that has order $\ell + k$ states, up to logarithmic factors.

Here $\ell$ is the length of the string $w$.

This is a simple, clever, and quite useful observation. Clever indeed. The obvious automaton that can do this would seem to require a cartesian product of two machines. This would imply that it would require

$$\ell \times k$$

states. Note the times operator rather than addition. Thus Robson’s trick is a huge improvement.

Here is how he does this.

## His Trick

Robson uses a clever trick in his proof of the main lemma. Let’s work through an example with a short string, say $w = 010$. The goal is to see if there is a copy of this string starting at a position that is a multiple of $3$.

The machine starts in state $s_0$ and tries to find the correct string as input. If it does, then it reaches the accepting state $s_3$. If while doing this it gets a wrong input, then it switches to states $d_i$ that have stopped looking for the input $w$. After seeing three inputs the machine reaches $d_3$ and then moves back to the start state.

[ The automaton ]
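
The transitions above can be sketched as a short simulation; the pattern `w = "010"` and modulus `p = 3` are stand-in values for the example, chosen so the pattern length equals the modulus and the input splits into blocks:

```python
def find_multiples(w, p, text):
    """Report end positions of copies of w that start at positions divisible
    by p. Simplification: len(w) == p, so the input splits into blocks.
    States: ("s", i) = first i symbols of the current block match w;
            ("d", i) = a mismatch happened, i symbols of the block consumed."""
    assert len(w) == p
    state = ("s", 0)
    hits = []
    for pos, c in enumerate(text):
        kind, i = state
        if kind == "s" and c == w[i]:
            state = ("s", i + 1)
        else:
            state = ("d", i + 1)
        kind, i = state
        if i == p:                    # a full block has been read
            if kind == "s":
                hits.append(pos + 1)  # accepting state s_p: copy of w ends here
            state = ("s", 0)          # d_p is identified with the start state
    return hits

print(find_multiples("010", 3, "010110010"))  # -> [3, 9]
```

The point of the `d` states is that after a mismatch the machine does not track the pattern at all; it only counts up to the next block boundary, which is what keeps the state count additive rather than multiplicative.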

## The Lemmas

We will now outline the proof in some detail.

### Hashing

The first lemma is a simple fact about hashing.

Lemma 2. Suppose that $k_1 \neq k_2$ and $k_1, k_2 \leq n$. Then all but at most $\log_2 n$ primes $p$ satisfy $k_1 \not\equiv k_2 \pmod{p}$.

*Proof:* Consider the quantity $k_2 - k_1$ for $k_1$ not equal to $k_2$. Call a prime *bad* if it divides this quantity. Since the quantity is a nonzero integer of absolute value at most $n$, it can be divisible by at most $\log_2 n$ primes. So there are at most $\log_2 n$ bad primes in total.
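
The lemma is easy to check numerically; `bad_primes` and the sample numbers below are my own illustration:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i in range(n + 1) if sieve[i]]

def bad_primes(k1, k2, limit):
    """Primes p <= limit that fail to separate k1 and k2, i.e. p | (k2 - k1)."""
    return [p for p in primes_up_to(limit) if (k2 - k1) % p == 0]

# A nonzero difference of absolute value <= n has at most log2(n) prime
# divisors, so all but O(log n) primes separate any fixed pair k1 != k2.
print(bad_primes(123456, 654321, 100))  # -> [3, 5, 47]
```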

### Strings

We need some definitions about strings. Let $\ell(x)$ be the length of the string $x$. Also write $\mathrm{occ}(w, S)$ for the number of occurrences of $w$ in $S$.

A string $u$ has the *period* $p$ provided

$$u_i = u_{i+p}$$

for all $i$ so that $u_{i+p}$ is defined. A string is *periodic* provided it has a period that is less than half its length. Note, the shorter the period the more the string is really “periodic”: for example, the string

$$010101010101$$

is more “periodic” than

$$001001001001$$

Lemma 3. For any string $u$, either $u0$ or $u1$ is not periodic.

*Proof:* Suppose that $ub$ is periodic with period $p$, where $b$ is a single character. Let the length of $ub$ equal $\ell$. So by definition, $p < \ell/2$. Then

$$(ub)_i = (ub)_{i+p}$$

for $1 \le i \le \ell - p$. Since $p \ge 1$, the index $\ell - p$ is at most $\ell - 1$, so it lands inside $u$. So it follows that

$$b = (ub)_\ell = (ub)_{\ell - p} = u_{\ell - p}.$$

This shows that $u0$ and $u1$ cannot both be periodic: if $u0$ had period $p$ and $u1$ had period $q$, both less than $\ell/2$, then $u$ itself would have periods $p$ and $q$ with $p + q \le \ell - 1 + \gcd(p,q)$, so by the Fine and Wilf theorem $u$ would also have period $g = \gcd(p,q)$. But $\ell - p \equiv \ell - q \pmod{g}$ and both indices lie inside $u$, so $0 = u_{\ell - p} = u_{\ell - q} = 1$: two values for the same symbol, a contradiction.

Lemma 4. Suppose that $w$ is not a periodic string. Then the number of copies of $w$ in a string $S$ is upper bounded by $2n/\ell + 1$, where $n = \ell(S)$ and $\ell = \ell(w)$.

*Proof:* The claim follows once we prove that no two copies of $w$ in $S$ can overlap in more than $\ell/2$ positions, where $\ell$ is the length of $w$: the starting positions of successive copies are then at least $\ell/2$ apart, which immediately implies the lemma.

If $w$ has two copies in $S$ that overlap, then clearly

$$w_i = w_{i+d}$$

for some $d \ge 1$ and all $i$ in the range $1 \le i \le \ell - d$. This says that $w$ has the period $d$. Since $w$ is not periodic it follows that $d \ge \ell/2$. This implies that the overlap of the two copies of $w$ has length $\ell - d$, which is at most $\ell/2$. Thus we have shown that they cannot overlap too much.

### Main Lemma

Say an automaton *finds* the $j$-th occurrence of $w$ in $S$ provided it enters a special state exactly after scanning the last bit of this occurrence.

Lemma 5. Let $S$ be a string of length $n$ and let $w$ be a non-periodic string of length $\ell$. Then there is an automaton with at most $\tilde{O}(\ell + n/\ell)$ states that can find the $j$-th occurrence of $w$ in $S$.

Here $\tilde{O}(\cdot)$ allows factors that are fixed powers of $\log n$. This lemma is the main insight of Robson and will be proved later.

## The Main Theorem

The following is a slightly weaker version of Robson’s theorem. I am still confused a bit about his stronger theorem, to be honest.

Theorem 6 (Robson’s Theorem, weak form). Suppose that $x$ and $y$ are distinct strings of length $n$. Then there is an automaton with at most $\tilde{O}(\sqrt{n})$ states that accepts $x$ and rejects $y$.

*Proof:* Since $x$ and $y$ are distinct, we can assume that $x$ starts with the prefix $u0$ and $y$ starts with the prefix $u1$ for some string $u$. If the length of $u$ is less than order $\sqrt{n}$, the theorem is trivial. Just construct an automaton that accepts the strings beginning with $u0$ and rejects those beginning with $u1$.

So we can assume that $u = vw$ for some strings $v$ and $w$, where the latter is order $\sqrt{n}$ in length. By Lemma 3 we can assume that $\alpha = w0$ is not periodic (otherwise swap the roles of $x$ and $y$). So by Lemma 4 we get that the number of occurrences of $\alpha$ in $x$ is $O(\sqrt{n})$.

Then by Lemma 5 we are done: an automaton with $\tilde{O}(\sqrt{n})$ states finds the occurrence of $\alpha$ that ends just after the prefix $u$, and so accepts $x$ while rejecting $y$.

## Proof of Main Lemma

*Proof:* Let $S$ have length $n$ and let $w$ be a non-periodic string of length $\ell$. Also let $m = \mathrm{occ}(w, S)$. By the overlap lemma it follows that $m$ is bounded by $O(n/\ell)$.

Let $w$ occur at locations

$$k_1 < k_2 < \cdots < k_m.$$

Suppose that we are to construct a machine that finds the $j$-th copy of $w$. By the hashing lemma there is a prime $p = \tilde{O}(n/\ell)$ so that

$$k_i \equiv k_j \pmod{p}$$

if and only if $i = j$. Note we can also assume that $p > \ell$.

Let’s argue the special case where $k_j$ is $0$ modulo $p$. If it is congruent to another value, the same argument can be used. This follows by having the machine initially skip a fixed amount of the input and then do the same as in the congruent-to-$0$ case.

The automaton has states $s_0, \dots, s_\ell$ and $d_i$ for $i = 1, \dots, p$. The machine starts in state $s_0$ and tries to get to the accepting state $s_\ell$. The transitions include:

$$s_i \xrightarrow{\;w_{i+1}\;} s_{i+1} \quad \text{for } 0 \le i < \ell.$$

This means that the machine keeps checking the input to see if it is scanning a copy of $w$. If it gets all the way to the accepting state $s_\ell$, then it stops.

Further transitions are:

$$s_i \xrightarrow{\;b\;} d_{i+1} \quad \text{for } b \neq w_{i+1},$$

and

$$d_i \xrightarrow{\;b\;} d_{i+1} \quad \text{for any } b.$$

The second group means that if a wrong input happens, then the machine moves to the $d$ states and simply counts positions modulo $p$. Finally, the state $d_p$ resets and starts the search again: it is identified with the start state $s_0$, so no epsilon move is actually needed and the machine stays deterministic.

Clearly this has the required number of states, $O(\ell + p) = \tilde{O}(\ell + n/\ell)$, and it operates correctly.
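
Putting the pieces together, here is a sketch in code of the whole construction as I read it; the function name and the offline choice of $p$ and $r$ are my framing, not Robson's notation:

```python
def find_jth_occurrence_end(S, w, j):
    """Simulate the main-lemma machine: return the position just after the
    j-th occurrence of w in S (1-indexed), or None.
    The prime p and residue r are chosen offline, knowing S; the scanning
    loop below is the automaton itself, and it remembers only the current
    position mod p plus how far into w the current candidate has matched,
    i.e. O(p + len(w)) states instead of O(p * len(w))."""
    ell = len(w)
    ends = [i + ell for i in range(len(S) - ell + 1) if S[i:i + ell] == w]
    if j > len(ends):
        return None
    k = ends[j - 1]                       # end position of the target copy

    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

    # smallest prime p > len(w) with k not congruent to any other end mod p
    p = ell + 1
    while not (is_prime(p) and all(e % p != k % p for e in ends if e != k)):
        p += 1
    r = k % p

    progress = None                       # None: not inside a candidate window
    for pos, c in enumerate(S):
        if pos % p == (r - ell) % p:      # a candidate window starts here
            progress = 0
        if progress is not None:
            if c == w[progress]:
                progress += 1
                if progress == ell:       # the special "found" state
                    return pos + 1
            else:
                progress = None           # mismatch: idle until next window
    return None
```

Because $p > \ell$, candidate windows never overlap, and by the choice of $p$ the only occurrence of $w$ whose end is congruent to $r$ modulo $p$ is the $j$-th one, so the first completed match is the right one.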

## Open Problems

The open problem is: Can the SWP be solved with a better bound? The lower bound is still order $\log n$. So the gap is exponential.

[Added “Mike”, some typo fixes]

1. After separating u0 and u1, there appears to be one more step.

Let the states reached after reading u0, u1 be $s$ and $t$, and consider the first time that M reaches $t$ after reading u0; let x be the string remaining in S at this point, and let y be the string remaining in T after u1. Then x and y have different lengths and they are now separated by composing with a DFA of size $O(\log n)$.

2. Here is a tight example for the prefix separation lemma. Let $f(w)$ denote the size of the smallest DFA which separates $w$ from its prefixes, that is, on reading $w$, reaches a state not visited before.

Then the two lemmas together state that .

This is tight for , where .

Regards,

Aravind.

Dear N.R. Aravind:

Thanks so much. Will ponder this, but sounds good. Again thanks. I also added “latex” to try and get it typeset. Hope that is okay with you.

Let me know if not.

Best

Dick

A really nice trick!

I think there might be a little bug in the proof. It can happen that alpha1 appears in T, in a different position than in S, but such that it collides with the original position modulo p. In that case, the automaton would accept both S and T. There is an easy fix: it suffices to add all appearances of alpha1 in T to the list of k_i’s we don’t want to collide with.

Btw, I don’t really get the last step in the proof of Lemma 3. Could someone please explain?

Dear Adam:

Thanks. The fix certainly works. The last step of the not overlap too much lemma is just this: the argument shows that there are two values for the same symbol. This is the key.

Best

dick

To be honest, that didn’t help me, but I looked at the original paper (ScienceDirect has a reasonably good looking copy), and now I see how to prove the lemma. Still, I think the last chain of inequalities in the blog version of the proof has some typo. I cannot figure out what the intended meaning was, but it is not true that l-p \leq l/2; it’s quite the opposite.

Robson also had a paper in 1999 about separating words with CFGs (with Currie, Petersen and Shallit).

We got an O(sqrt(n)*log(n)) upper bound using a different technique that may or may not be known. I’m very excited about it and hope to share more details sometime this week. Even though the bound is worse, I think the approach is neat. 🙂

Hi there! I’m very excited that we found a potentially different approach that leads to an O(sqrt(n)*log(n)) upper bound. It’s possible that it could be related to one of Robson’s works. Or, maybe there is some way to combine the approaches to get a better upper bound. Any thoughts are greatly appreciated.

Here is the approach:

————–

PART 1: Prime numbers

Lemma 1: Let positive integers k_1, k_2, and n such that k_1 <= n and k_2 <= n be given. There exists a constant c_1 (independent of n) such that if k_1 != k_2, then there exists a prime number p <= c_1 * log(n) such that k_1 != k_2 mod p.

This is stated in: https://cs.uwaterloo.ca/~shallit/Talks/hawaii2.pdf

As far as I can tell, this follows by estimates on the Chebyshev function combined with the Chinese remainder theorem.

————–

PART 2: Periodic sums of binary strings

Let a binary string x of length n be given. Let integers r and d such that 0 <= r < d <= n be given.

Definition: The (r, d)-periodic sum of x is the sum of all bits of the form x_{r + i*d} where i is a nonnegative integer.

A binary string can be characterized by its periodic sums. In particular, observe that for all binary strings x and y of the same length, we have that x = y if and only if x and y have the same periodic sums.

Further, we prove that it's sufficient to look at periodic sums with small values of d to check if x = y.

Lemma 2: Let a natural number n be given. There exists a constant c_2 (independent of n) such that for all binary strings x and y of length n, we have that x = y if and only if:

For all (r, d) such that d <= c_2 * sqrt(n), the (r, d)-periodic sum of x equals the (r, d)-periodic sum of y.

To see this, first turn each periodic sum of x into an equation. There are n unknowns (one unknown for each bit of x) and the sum is known. We can associate with each equation a binary vector where the coordinates from the vector are the coefficients of the unknowns from the equation.

The hope is that if we have enough equations, then the system has a unique solution which is just the binary string x. This happens when the associated vectors span R^n.

Thanks to an awesome answer on stack exchange, we know that there exists some c_2 such that for (r, d) where d <= c_2 * sqrt(n), the associated vectors are sufficient to span R^n. ( See here: https://mathoverflow.net/questions/343355/do-the-following-binary-vectors-span-mathbbrn ).

As a result, x is characterized by the periodic sums with d <= c_2 * sqrt(n).

————–

PART 3: The upper bound for separating words

Theorem: Let binary strings x and y of length n be given. If x != y, then there exists a DFA with O(sqrt(n)*log(n)) states that accepts x and rejects y.

By Lemma 2, there is some (r, d) such that the (r, d)-periodic sum of x != the (r, d)-periodic sum of y and d <= c_2 * sqrt(n).

Further, since the periodic sums are bounded by n, by Lemma 1, there is some prime p such that the (r, d)-periodic sum of x != the (r, d)-periodic sum of y mod p and p <= c_1 * log(n).

Therefore, we can build a DFA with O(d * p) = O(sqrt(n)*log(n)) states that computes the (r, d)-periodic sum of the input string mod p to differentiate x and y.
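
As a sketch of this plan (the function names are mine, and the demo searches upward for p directly instead of invoking the Lemma 1 bound):

```python
def periodic_sum(x, r, d):
    """(r, d)-periodic sum: the sum of the bits x[r], x[r+d], x[r+2d], ..."""
    return sum(int(b) for b in x[r::d])

def dfa_periodic_sum_mod(x, r, d, p):
    """Simulate a DFA with d * p states, one per (position mod d, sum mod p),
    that computes the (r, d)-periodic sum of the input mod p."""
    pos_mod, s = 0, 0
    for b in x:
        if pos_mod == r:          # this position contributes to the sum
            s = (s + int(b)) % p
        pos_mod = (pos_mod + 1) % d
    return s

def separate(x, y):
    """Find (r, d, p) so that the small DFA above tells x and y apart."""
    n = len(x)
    for d in range(1, n + 1):
        for r in range(d):
            sx, sy = periodic_sum(x, r, d), periodic_sum(y, r, d)
            if sx != sy:
                p = 2
                while sx % p == sy % p:   # some small p must miss sx - sy
                    p += 1
                return r, d, p
    return None

print(separate("0011", "0101"))  # -> (0, 2, 2)
```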

————–

I hope to write this up more cleanly in latex and am open to any ideas, suggestions, or corrections. Thank you!

Hi, Mike—odd that these were held up in the mod queue. Let me know whether this one supersedes the previous one. Dick and I may be able to talk about it over the weekend.

Thank you very much! Sounds great.

Yes, if you can delete my previous post, that would be great. It had a few typos in it. 🙂

I just found out that this method is already known! Please see this paper: “Separating Words by Occurrences of Subwords” by Vyalyi and Gimadeev.

Sorry, this version had some typos. I posted an update with the corrections.

If possible, I would suggest deleting this duplicated post. Thank you!

Upper bound improved to n^{1/3}: https://arxiv.org/abs/2007.12097

This is great! I am excited to take a look. Thanks for sharing. 🙂

A tiny mistake in Lemma 3: since p is at most l/2, we have that l – p is at *least* l/2, not at *most* l/2. Fortunately it’s easy to fix: since p is at least 1, we have that l – p is at most l – 1, so we are still indexing into u.

Thank you very much. Most likely it was a change of indexing between drafts. I have also made a curly ell from that point on: Lemmas 3 and 4 and the main proof. I checked once with my eyes but can’t say I’m sure I caught them all. One idea for something like MathDeck would be to enable a search for mathematical uses of the plain l as opposed to occurrences within words, or even abbreviations such as “l.” for liters.

A regex-aware search for (in Vim syntax) `\<l\>` will miss most of the occurrences that you don’t want. It’ll still give you false positives, like on “l.”, but it should be easier to deal with manually.

My Vim regex got eaten, probably because it uses less-than-greater-than signs. In Perl syntax, it’s `\bl\b`.