
More “O” Abuse

February 13, 2009


Lower bounds on VLSI


Bob Sedgewick is a best-selling author of computer science textbooks. His most famous book is entitled simply “Algorithms,” and countless undergrads have used it as their introduction to computing. He was also the first chair of the Department of Computer Science at Princeton University. As chair he helped design the current Computer Science building, and he had the builders encode a “secret” in the bricks on the outside of the building: they spell out the phrase P=NP? Clearly, P=NP is an important problem. What other problems are “encoded” onto the face of buildings?

VLSI

In the 1980’s Bob and I worked on proving lower bounds for VLSI; we were part of a group of researchers that proved similar results. Recall that VLSI stands for “Very Large Scale Integration,” i.e., silicon chips. All the results, ours and theirs, had the same form: there are restrictions on the size and speed of any integrated circuit chip that computes a given function. They were of the form: if a chip computes \dots, then
            AT^{2} > \Omega(n^{2})
where A is the area of the chip and T is the time that the chip takes to do the computation. Note that the results prove a “space-time” tradeoff: either the chip has to be big or the chip has to be slow. Increasing the chip’s speed, i.e., making T smaller, forces the chip to become bigger.
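
To get a feel for the tradeoff, here is a back-of-the-envelope sketch in Python; the constant c in it is purely illustrative, not the constant from any actual proof. Halving T forces the minimum area up by a factor of four:

    # The constant c below is purely illustrative.
    def min_area(n, T, c=1.0):
        """Smallest area A consistent with A * T**2 >= c * n**2."""
        return c * n ** 2 / T ** 2

    n = 32
    for T in (16, 8, 4):              # halving the time...
        print(T, min_area(n, T))      # ...quadruples the minimum area: 4, 16, 64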

The way that these results are proved is interesting, since they use communication-complexity-type arguments. Today communication complexity is a major area of theory, and our work was among its first applications. While we did not invent the method, we were the first to introduce nondeterministic communication complexity. I always thought that it was neat, and a bit surprising, that nondeterminism played a role in proving lower bounds on chips. After all, chips are real devices that are deterministic, yet our proofs required nondeterminism. In any event, we were the first to define this type of communication complexity; since then nondeterministic complexity has been studied heavily for its own sake.
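
For a concrete taste of nondeterministic communication complexity, here is the standard textbook example, not the specific argument from our chip bounds: two parties holding n-bit strings x and y can verify that x \ne y by guessing a single index where the strings differ, so non-equality has nondeterministic complexity of roughly \log n bits. A minimal Python sketch:

    # Textbook nondeterministic protocol for NEQ (not the chip argument):
    # a certificate is an index i; Alice sends (i, x[i]), about log n + 1 bits,
    # and Bob accepts iff his bit y[i] differs.
    def neq_accepts(x, y):
        """True iff some certificate (an index i with x[i] != y[i]) exists."""
        assert len(x) == len(y)
        return any(x[i] != y[i] for i in range(len(x)))  # "there exists a guess"

    print(neq_accepts("1010", "1010"))  # False: no certificate can work
    print(neq_accepts("1010", "1110"))  # True: i = 1 is a witness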

The Abuse

What does this have to do with abuse of O-notation, you might ask? At the time chip manufacturers made chips that computed the product of two n-bit integers, i.e., multiplication. The n was typically small, with n=32 common, yet the chips, while fast, were always quite large. Bob and I wondered if the results we had proved could shed some light on this: did the chips have to be big in order to be fast, just as our results implied? So one day Bob and I started to work out the exact constant that was hidden in the proof that for multiplication,
            AT^{2} > \Omega(n^{2}).
Our hope was that while n=32 is not large, our methods would still say something non-trivial about the size and performance of actual chips. Luckily, the proofs were constructive, so getting a handle on the hidden constant was not too difficult. After a couple of hours at the blackboard we got the answer:
            AT^{2} > cn^{2}
for a known c. The trouble was that when we plugged in n=32, or even n=1,000, the bound on the size of the chip was a “joke”: the methods showed, at best, that the chips had to be about the size of a single transistor. This was terrible, since we had expected something much better. But we checked our calculations carefully, and there was no error. The constant c was extremely small. Tiny. The bounds we and many others had worked hard to prove said nothing about real computations. Nothing. Bob and I stopped working on these bounds.
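
The arithmetic is easy to reproduce. The constant below is invented for illustration, since the exact value is not recorded here, but it conveys the effect: with a tiny c the lower bound on AT^{2} is negligible for any realistic n.

    # Hypothetical constant; the exact c from the proof is not given here.
    c = 1e-6
    for n in (32, 1000):
        print(n, c * n ** 2)   # lower bounds of about 0.001 and 1.0: negligible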

Open Problems

A challenge to the theory community is to check the constants in both upper and lower bound theorems, not just those about chip size. In pure mathematics the exact constants for many results are not really important. However, in an area like computer science, which is grounded in practice, the bounds matter. It is not enough to say that we can show that something is larger than cn^{2} for some c. It makes a huge difference whether the c is large, medium, small, or tiny. This difference is one that we currently ignore at best; I think that is a kind of intellectual dishonesty we must avoid.
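
As a toy illustration with made-up constants: a \sqrt{n} bound with a constant of one million is worse than a plain n bound until n passes a trillion.

    import math

    # Made-up constants: a "better" O(sqrt(n)) bound with a huge constant
    # loses to a plain n bound until n reaches (c_sqrt / c_lin)**2.
    def crossover(c_sqrt=1e6, c_lin=1.0):
        """Smallest n at which c_sqrt * sqrt(n) <= c_lin * n."""
        return (c_sqrt / c_lin) ** 2

    print(crossover())                   # 1e12: a trillion
    n = 10 ** 6
    print(1e6 * math.sqrt(n), n)         # at n = 10^6 the sqrt bound is 1000x worse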

Finally, one could view this whole area as a potential opportunity. If we spend the time and energy to understand the exact behavior of our results, we may be led to new methods and ideas that allow realistic constants. I certainly hope that this happens.

6 Comments
  1. March 25, 2009 12:53 am

    As a newcomer, I have to say I love your blog!

My friend and I always joke that any two numbers are “within an O(1) factor” of each other. For example:

    “Hey, I just played XYZ game and scored 37 my first time.”
    “Not bad, my highest score is 10,235,792.”
    “Ehh whatever … just a constant factor.”

    I think he’d enjoy this post.

2. Peden
    March 25, 2009 11:34 am

    I’m the “friend” from above, and he was right in that I did enjoy this post.

    It seems to me that part of the intellectual source of “abuses” is that O-notation is really about rates of growth, not magnitudes. We are constantly tempted to translate things into the realm of magnitudes because that’s more real to us as human beings, but doing so requires a different kind of analysis (like the one you did to tease out the value of ‘c’).

If we use O-notation to make statements about rates of growth only, I think we are less likely to make these kinds of mistakes. Of course, you can still say something about relative magnitudes for very large n, but without the constant you have no idea how large that n really is. I would argue then that it is only through practical experience that we know the traveling salesman problem is intractable. NP problems in general are not universally intractable.

    I’m also curious how you translated the bounds given by your proof into the physical world. Where do the units come in? The speed of light perhaps?

• rjlipton
      March 25, 2009 12:25 pm

I do not disagree that O-notation is extremely useful. My concern is just that we must be careful. For example, there are many results that prove \sqrt{n} bounds, but if the hidden constant is huge, these may be worse than an n bound.

