
Does the Distributive Law Work?

February 17, 2009


A novel proof of termination


Larry Snyder is a professor of Computer Science and Engineering at the University of Washington in Seattle. His main interest has always been and still is in the area of parallel computing. I first began to work with Larry
when we both arrived at Yale University. While we have not worked together for years, it was always a great pleasure to work with him. He has a terrific sense of humor that makes working together pure pleasure.

We joined the Yale University Computer Science department in 1973–yes 1973–that is not a typo for 1993. He was an assistant professor and I was a “Gibbs Instructor”. I do not think they still have this type of position but it had the advantage of less teaching and a small travel budget, and the disadvantage of only being for two years.

As new faculty we were interested in the graduate student examination process. One day at lunch we asked our colleagues how it worked. We were told the general structure and then told that Alan Perlis had made up one of the exam questions. His question was so “hard” that not a single graduate student got even partial credit. I was immediately interested after we were told the question. I would like to say that we immediately solved it; after all, we were now faculty, not graduate students. But it took us a couple of hours before we got the problem solved. I think the problem and our solution are pretty neat and thought I would share them with you.

The Problem

Imagine that you have a binary tree with variables A,B,C,\dots at the leaves and the operators + and \times at the internal nodes. Clearly, such a tree represents an arithmetic expression. For example, (A+C) \times (D + E) is the tree in the figure below. (Yes, I skipped B, just to keep you on your toes.)

[Figure: the binary tree for (A+C) \times (D+E)]

The question is: suppose we expand out the tree using the distributive law. That is, we can replace X \times (Y + Z) by (X \times Y) + (X \times Z) for any subtrees X,Y,Z. We think of this as the rule:
       X \times (Y+Z) \to (X \times Y) + (X \times Z)
and the corresponding rule:
       (Y+Z) \times X \to (Y \times X) + (Z \times X).

Of course these are just the famous distributive laws of arithmetic. Note that we can apply these rules anywhere in the tree, and we keep applying them until there is no place left to apply them. Perlis’s question was: prove that for any tree the rule applications eventually stop. Clearly, this must be true: if we start expanding out an arithmetic tree, it must eventually become “all expanded”. We have known this since grade school: every time we expanded some expression, it eventually became one that could not be expanded any further. Clearly. The rub is proving this.
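
To make the rewriting process concrete, here is a minimal sketch in Python; it is not from the original post, and the tuple representation and the names distribute_once and expand are my own. It applies one distributive rewrite at a time and repeats until no rule applies.

    # Trees are nested tuples: a leaf is a variable name such as "A"; an
    # internal node is ("+", left, right) or ("*", left, right).

    def distribute_once(t):
        """Apply one distributive rewrite somewhere in t, if possible.

        Returns (new_tree, True) if a rule fired, or (t, False) if none applies.
        """
        if isinstance(t, str):                  # a leaf: nothing to rewrite
            return t, False
        op, left, right = t
        # X * (Y + Z)  ->  (X * Y) + (X * Z)
        if op == "*" and isinstance(right, tuple) and right[0] == "+":
            _, y, z = right
            return ("+", ("*", left, y), ("*", left, z)), True
        # (Y + Z) * X  ->  (Y * X) + (Z * X)
        if op == "*" and isinstance(left, tuple) and left[0] == "+":
            _, y, z = left
            return ("+", ("*", y, right), ("*", z, right)), True
        # Otherwise try to rewrite inside one of the subtrees.
        new_left, applied = distribute_once(left)
        if applied:
            return (op, new_left, right), True
        new_right, applied = distribute_once(right)
        return (op, left, new_right), applied

    def expand(t):
        """Keep applying the distributive law until no rule applies."""
        applied = True
        while applied:
            t, applied = distribute_once(t)
        return t

    # (A + C) * (D + E) expands to (A*D + C*D) + (A*E + C*E).
    print(expand(("*", ("+", "A", "C"), ("+", "D", "E"))))

Running this on the example produces a tree with four \times nodes where the input had only one, a first hint of why simple symbol counts will not serve as a decreasing measure.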

The reason it is not obvious is that the usual way to prove that such a process halts is to show that some measure is decreasing, and thus eventually the process must stop. More precisely, we would define a measure of “complexity” of an arithmetic tree. The measure should have two properties. First, it should be a natural number, i.e. a number 0,1,2,3,4,\dots. Second, each time we apply the distributive law the measure must decrease by at least one. If we can find such a measure, then the process must stop.

The difficulty is that many (most?) natural measures do not work. The main issue is that when we apply the distributive law, say,
       X \times (Y+Z) \to (X \times Y) + (X \times Z),
we create two copies of the subtree X. This rules out lots of simple but natural measures. For example, the number of \times symbols never decreases and can increase: applying the rule to A \times (B+C) turns one \times into two. Indeed, most natural measures fail to work.

If you wish to solve this on your own, then stop reading here and try to prove that this process does indeed halt.

Our Solution

Our solution is a general method that took advantage of one key property of the distributive law that you may have missed. What struck me immediately about the distributive law is this: it is not an arbitrary tree re-writing rule. It is a rule that preserves the value of the tree when viewed as an expression. It took Larry and me a while to figure out a proof that exploited this insight. Here is the proof:

Suppose that T is some arithmetic tree. Make two observations about applying the distributive law. First, every application of the law preserves the value of the tree. Second, each application strictly increases the size \mid T \mid of the tree, where the size of a tree is just the number of vertices. Now replace each leaf of the tree with the number 2; this is equivalent to replacing all the variables with the value 2. Now assume that the distributive law can be applied to the tree T forever. Then there is an infinite sequence of trees T=T_1,T_2,\dots where T_{i+1} is the result of some application of the distributive law to T_i. We will show that this contradicts the two observations.

At the start the tree T_1 has some value, call it V \ge 1. Since every application of the distributive law preserves the value, each tree T_i has value V. However, we claim that this is impossible. A binary tree of bounded depth has bounded size, so as the trees keep growing there must eventually be an index i such that T_i has a root-to-leaf path of length at least V. By the choice of the value 2 at the leaves, each internal node on that path has value at least 2 more than its child on the path: a + node adds at least 2, and a \times node at least doubles a value that is already at least 2. Hence the value of T_i is at least 2V, which exceeds V. This contradiction shows that the distributive law applications must eventually stop.
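
As a sanity check, here is another small Python sketch (again mine, not from the post), using the same tuple representation as before but with every leaf set to the number 2. Each printed line shows one tree in the sequence T_1, T_2, \dots.

    def size(t):
        """Number of vertices in the tree."""
        return 1 if isinstance(t, int) else 1 + size(t[1]) + size(t[2])

    def value(t):
        """Value of the tree when every leaf is a number."""
        if isinstance(t, int):
            return t
        op, left, right = t
        return value(left) + value(right) if op == "+" else value(left) * value(right)

    def step(t):
        """Apply one distributive rewrite if possible; return (tree, applied)."""
        if isinstance(t, int):
            return t, False
        op, left, right = t
        if op == "*" and isinstance(right, tuple) and right[0] == "+":
            return ("+", ("*", left, right[1]), ("*", left, right[2])), True
        if op == "*" and isinstance(left, tuple) and left[0] == "+":
            return ("+", ("*", left[1], right), ("*", left[2], right)), True
        new_left, applied = step(left)
        if applied:
            return (op, new_left, right), True
        new_right, applied = step(right)
        return (op, left, new_right), applied

    # Every leaf is the number 2, as in the proof; the expression is
    # (2 + 2) * ((2 + 2) * (2 + 2)), whose value is 64.
    t = ("*", ("+", 2, 2), ("*", ("+", 2, 2), ("+", 2, 2)))
    applied = True
    while applied:
        print("value =", value(t), " size =", size(t))
        t, applied = step(t)

In this run the printed value stays at 64 on every line while the size strictly grows, which is exactly the tension the argument plays against itself.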

Open Problems

The method that we used to show that the distributive law always stops can be generalized to many other tree re-writing rules. The key insight is that the rules must preserve some value. We did some work on this many years ago, but I believe that the method could be used for a much wider range of problems. Perhaps our technique could help solve some “halting problems” that arise in modern computation questions.

Comments
  1. March 6, 2009 7:45 pm

    Very clever!

    Here’s another attack on the problem that curiously uses what at first seemed our enemy: the fact that copies of subtrees are created when we apply our rule. First note that applying either distributive transform does not change the height of the tree. This means there is some bound (2^height) on the number of possible leaf nodes, i.e., variables, in our tree. Now, since every subtree must contain at least one leaf (at least two, actually), and we are copying a subtree, each step must increase the number of leaf nodes. Because of the bound, we can’t keep doing this forever.

    • rjlipton
      March 7, 2009 8:43 am

      Thanks…rjlipton (dick)

      The problem is that what you say about height is wrong. It does go up in many cases. Consider A*(B+C), where the depth is depth(A) + 1; we assume A is the biggest subtree. Then, after the rule is applied we get A*B + A*C, and now the depth is depth(A) + 2. So it has gone up. Another way to see this is to think of the rules as expanding out the expressions. Clearly, as we expand, the expression can grow in size. The point of the problem is that eventually it stops growing.
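
      To see this concretely, here is a tiny Python check (a sketch added here, not part of the reply), taking A to be a small product so that depth(A) = 1:

          def depth(t):
              """Length of the longest root-to-leaf path; a leaf has depth 0."""
              return 0 if isinstance(t, str) else 1 + max(depth(t[1]), depth(t[2]))

          A = ("*", "P", "Q")                           # depth(A) = 1
          before = ("*", A, ("+", "B", "C"))            # A * (B + C)
          after = ("+", ("*", A, "B"), ("*", A, "C"))   # A*B + A*C
          print(depth(before), depth(after))            # prints: 2 3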

  2. March 9, 2009 4:06 pm

    Yes, I see, you’re right of course. Thanks for the reply. Quite amazing that the fact that expanding an arithmetic expression always terminates is not as obvious as it would seem. Very interesting entry.

  3. Diego
    December 15, 2009 8:01 am

    There’s an easy way of proving this, using the “polynomial interpretation method.”
    Interpret the function symbols over {2,3,…} as:
    +(x,y) = 2x + y + 1,
    *(x,y) = xy.
    It’s easy to check that for each rule, using this interpretation, lhs > rhs.

    A good reference for this is “Term Rewriting and All That,” by Baader & Nipkow.
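
    A quick brute-force check of the claim (a Python sketch added here, not part of the original comment; plus and times are just the two interpretations written as functions):

        # Interpretations of + and * over {2, 3, ...}, as in the comment above.
        def plus(x, y):
            return 2 * x + y + 1

        def times(x, y):
            return x * y

        for x in range(2, 20):
            for y in range(2, 20):
                for z in range(2, 20):
                    # X * (Y + Z)  ->  (X * Y) + (X * Z)
                    assert times(x, plus(y, z)) > plus(times(x, y), times(x, z))
                    # (Y + Z) * X  ->  (Y * X) + (Z * X)
                    assert times(plus(y, z), x) > plus(times(y, x), times(z, x))
        print("both rules strictly decrease under the interpretation")

    Algebraically, the left-hand side exceeds the right-hand side by exactly x - 1 \ge 1 in both rules, so the decrease holds for all x, y, z \ge 2, not only for the range checked.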

    • rjlipton
      December 16, 2009 7:17 pm

      I think our method is easy too.

  4. Pavel Panchekha
    December 28, 2009 2:32 pm

    How about the following solution?

    We take our tree, and write it out, parenthesized (with the standard math precedence rules). So (* (+ A B) C) would be (A + B) * C, and (+ (* A B) C) would be A * B + C (note: no parentheses needed).

    All right, now we look at what the distributive law does. First of all, we have to generalize it to the case (A + B + C) * D — we can do this just by considering a tree rewriting rule into a “variadic+” node. Not hard, doable uniquely. Then, we look at the number of parentheses before and after a distributive law rewrite. Since all of the children of the + node that we’re distributing over must be * nodes (otherwise, we’d combine them into a variadic +), we’re lowering the number of parentheses by some amount.

    OK, that’s our monotonic invariant, hit it with a PLNN, done.

    • rjlipton
      December 28, 2009 3:02 pm

      The D gets repeated—no? Will it not make many repeats occur worst case?

