
The Role of Amplifiers in Science

August 10, 2009


The ability to amplify signals, objects, and mathematical properties plays a key role in all of science.


Andy Yao is a Turing Award winner, a longtime friend, a co-author, and one of the great theorists of all time. He has solved many open problems and helped create many important new areas of complexity theory.

Today I want to talk about one of Andy’s famous results, the XOR Lemma, while placing it in a larger context. Throughout science one of the recurrent themes is the power of methods that are able to amplify something: the “thing” can be a physical object or a signal, or can be a mathematical property. In all cases, amplification methods often lead to great progress and create new insights.

Andy spent many years, as I did, at the computer science department at Princeton. One of his great traits was the ability to focus on one problem, usually until he solved it. Many of us try to think in parallel about many things at once, but in my opinion, Andy is a serial processor. A very powerful serial processor.

I once went, years ago, to see Andy about a new result of mine: the fact that the permanent has a random self-reduction. I told him the result and added that I felt it could be important. He listened politely, checked his calendar, and told me that he would be available at a certain time ten days hence. That’s the way he operated: focused and steady.

You could also guess that this was his style just from looking at his desk. Typically it either had nothing on it or perhaps at most one sheet of paper. One sheet. Some desks in the department looked like a bomb had recently gone off and scattered papers everywhere; others were less disordered. But Andy’s was special.

One December the graduate students came to me to ask about trying to create some fun event for the upcoming holiday party. I came up with the “whose-desk-is-it?” contest. They took an aerial photograph of each faculty member’s desk. At the party each guest was given a sheet, and their task was to match the picture of the desk with the faculty member. It was a silly game that we all seemed to enjoy, and besides, the winner got a fun prize. I believe that out of almost one hundred entries no one missed matching Andy’s desk to Andy.

Let’s turn now to the main topic, amplifiers, in the spirit of Yao’s XOR Lemma.

Physical Amplifiers

There are countless examples of amplification techniques. Here are some that work with physical systems:

{\bullet} The lever. This is the familiar system that can be used to amplify a force. The famous Archimedes of Syracuse once said, “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.”

{\bullet} The microscope. The first microscope was apparently made in the late 1500’s in the Netherlands. Hans Lippershey and the father-and-son spectacle makers Hans and Zacharias Janssen are usually credited with the invention. One repeating theme in all these amplifier inventions is that it is often hard to be sure who invented them first. This is true of older amplifiers, like the microscope, and as you will see of more modern ones too. I am not sure why this is the case; perhaps someone can shed some light on why?

{\bullet} The telescope. Like the microscope, the telescope also seems to have come from the Netherlands, in the early part of the 1600’s. The great Galileo Galilei did not invent the telescope, but he improved it, and he was perhaps the first to really understand its value. The inventors are usually listed as Hans Lippershey, Zacharias Janssen, and Jacob Metius.

{\bullet} The relay. Joseph Henry invented the first relay in 1835. It is an example of an amplifier because a small current can control a larger one. You probably all know how a basic relay works: a small current flows through a coil that moves an arm; when the arm moves, it completes another circuit, and a larger current can then flow.

{\bullet} The vacuum tube. Lee De Forest is credited with creating the basic vacuum tube, the triode, in 1907. He added a grid that could be charged negatively or positively, and in this way a small signal on the grid could control the larger current flowing across the tube. In his version there was not a true vacuum in the tube; that improvement was made later by scientists at GE.

{\bullet} The transistor. This is perhaps the most important of all amplifiers, without which the entire base of modern technology would be impossible. Try building an iPhone with vacuum tubes. The first transistor grew out of experiments begun on 17 November 1947 by John Bardeen and Walter Brattain at AT&T Bell Labs. William Shockley quickly saw the vast potential and is often viewed as the father of the transistor.

Physical and Algorithmic Amplifiers

{\bullet} The PCR method. Kary Mullis won the Nobel Prize in 1993 for his discovery of this method, which allows the amplification of even a single strand of DNA. It is an amazing result for several reasons. Of course, without PCR modern TV shows like CSI would be impossible. Even more important, PCR is one of the central tools in all of biotechnology.

There is some controversy over who invented what and when. I have no insider information, but I have an opinion. What makes PCR so different for the biologists is that PCR needed no new biology. From a theory point of view, what Mullis did was use existing ideas to create a new “algorithm.” PCR uses basic facts about DNA that were long known: the double strands break apart when heated; they tend to pair up A-T and G-C when cooled; and there is an extension process that uses an enzyme called DNA polymerase, which was discovered by Arthur Kornberg, who won the Nobel Prize in 1959.

Mullis’s contribution, I believe, was putting together the pieces to form the PCR algorithm. An analogy for us is any famous algorithm. Typically, such an algorithm uses only operations that any computer can perform, but their order is critical. Moreover, understanding their effect is what makes the algorithm new and exciting.
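
To make the algorithmic view concrete, here is a minimal sketch, in Python, of PCR seen purely as a doubling procedure. It ignores all of the chemistry; the function name pcr_amplify and the efficiency parameter are just illustrative choices of mine, not anything from Mullis.

```python
# Toy sketch of PCR viewed as an algorithm (illustration only, not the
# chemistry): each cycle denatures the double strands, anneals primers,
# and extends them, roughly doubling the number of copies of the target.

def pcr_amplify(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Estimate copy count after `cycles` rounds of denature/anneal/extend.

    `efficiency` is the fraction of strands successfully copied per cycle
    (1.0 means perfect doubling). All names here are illustrative.
    """
    copies = float(initial_copies)
    for _ in range(cycles):
        # Denature: strands separate; anneal: primers bind; extend: the
        # polymerase copies each template, so the population grows by
        # a factor of (1 + efficiency).
        copies *= (1.0 + efficiency)
    return copies

if __name__ == "__main__":
    # Even a single starting molecule becomes about 10^9 copies after 30 cycles.
    print(pcr_amplify(initial_copies=1, cycles=30))                   # 2^30
    print(pcr_amplify(initial_copies=1, cycles=30, efficiency=0.8))   # imperfect cycles
```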

Algorithmic Amplifiers

{\bullet} The XOR lemma. This is the famous result of Yao that shows how to amplify the hardness of a boolean function. Roughly, his result shows that if {f} is a boolean function that is mildly hard to predict, in the sense that no small circuit can compute it correctly on much more than a {1-\delta} fraction of inputs, then

\displaystyle  F(x_{1},\dots,x_{t}) = f(x_{1}) \oplus \cdots \oplus f(x_{t})

is almost completely unpredictable by small circuits, provided {t} is large enough. Like the other amplifiers this one also has an interesting history. The first published proofs were by others—no doubt Andy had a proof, but the published details came later.
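
The lemma itself is about circuits, and I will not state it precisely here. But a toy calculation shows the phenomenon it exploits: if each coordinate can only be guessed with a small advantage over {1/2}, and the guessing errors happen to be independent, then the advantage in guessing the XOR decays exponentially with {t}. Here is a small Python simulation of that piling-up effect; it only illustrates the intuition and is not a proof of the lemma, and the function names are mine.

```python
# A toy simulation in the spirit of the XOR lemma (not the lemma itself,
# which is about small circuits): if each bit is guessed correctly with
# probability 1/2 + eps, independently across coordinates, then the
# advantage in guessing the XOR of t bits decays exponentially in t.

import random

def xor_advantage(eps: float, t: int, trials: int = 200_000) -> float:
    """Monte Carlo estimate of the advantage in predicting the XOR of t bits,
    when each bit is guessed correctly (independently) with prob 1/2 + eps."""
    correct = 0
    for _ in range(trials):
        # The XOR guess is right iff an even number of coordinate guesses err.
        errors = sum(random.random() >= 0.5 + eps for _ in range(t))
        correct += (errors % 2 == 0)
    return correct / trials - 0.5

if __name__ == "__main__":
    for t in (1, 2, 4, 8):
        est = xor_advantage(eps=0.2, t=t)
        exact = 0.5 * (2 * 0.2) ** t      # piling-up lemma: 2^(t-1) * eps^t
        print(f"t={t}: simulated advantage {est:.4f}, exact {exact:.4f}")
```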

{\bullet} The parallel repetition method. This is the famous result of Ran Raz showing that the probability of winning all {n} copies of a two-prover game played in parallel decreases exponentially in {n}, provided the original game cannot be won with certainty. This result also has an interesting history. Is there something about amplifiers that makes for complex history?

The result was first stated by Lance Fortnow and Mike Sipser, who thought that playing in parallel was the same as playing serially. It was quickly realized that this was not obvious, and the search began for a proof. After many partial results, including some work I did with Anne Condon and Jin-Yi Cai on a special case, Ran resolved the whole problem. His proof is quite hard, and simpler proofs have since been found.
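
For the record, the modern quantitative form of the theorem is roughly the following, and I am hedging on the constants: if the game {G} has value {1-\varepsilon < 1} and the answers have length {s}, then

\displaystyle  \mathrm{val}(G^{\otimes n}) \le \left(1 - \varepsilon^{c}\right)^{\Omega(n/s)}

for some small universal constant {c}. The decay is indeed exponential in {n}; the difficulty of the proof is hidden in the dependence on {\varepsilon} and {s}.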

{\bullet} The matrix block trick. This is a folk idea—I believe—that can be used in various contexts to amplify the effect of some theorem about matrices. Here is a simple example: suppose that {A} is an {n \times n} matrix whose permanent you want to compute. Create the block-diagonal matrix {B = A \oplus \cdots \oplus A} consisting of {n} copies of {A} as diagonal blocks: here {n=3},

\displaystyle  B = \left( {\begin{array}{ccc} A & 0 & 0 \\ 0 & A & 0 \\ 0 & 0 & A \\ \end{array} } \right)

Then it is easy to see that the permanent of {B} is the permanent of {A} raised to the {n^{th}} power. This shows that if there is a way to approximately compute the value for {B}, then there is a way to get a much better approximation to the value for {A}: taking the {n^{th}} root of a multiplicative approximation to the permanent of {B} shrinks the relative error by a factor of roughly {n}. This trick can be used for many other matrix problems.
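
Here is a quick numerical sanity check of the identity, using a brute-force permanent. The helper names are mine, and the matrices are kept tiny since the brute force is exponential.

```python
# A quick check of the block trick (illustration only): the permanent of
# a block-diagonal matrix built from k copies of A equals per(A)^k.

from itertools import permutations

def permanent(M):
    """Brute-force permanent of a square matrix given as a list of lists."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += prod
    return total

def block_diagonal(A, copies):
    """Block-diagonal matrix with `copies` copies of A on the diagonal."""
    n = len(A)
    size = n * copies
    B = [[0] * size for _ in range(size)]
    for c in range(copies):
        for i in range(n):
            for j in range(n):
                B[c * n + i][c * n + j] = A[i][j]
    return B

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]          # per(A) = 1*4 + 2*3 = 10
    k = 3
    B = block_diagonal(A, k)
    print(permanent(B), permanent(A) ** k)   # both print 1000
```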

{\bullet} The repetition method. This is perhaps the most basic amplification method: in order to increase something, just do the same thing many times. This works for error-correcting codes, for example. Sending the message many times increases the probability of correctly recovering all the bits of the message. One of the problems is that it is not very efficient, but it is very useful precisely because it relies on no structure of the problem.

Another nice example of the method is amplifying the correctness probability of a BPP computation. If the computation has probability {2/3} of being correct, then taking the majority vote of {t} repeats of the computation, with independent coin flips, has an error probability that is at most {2^{-ct}} for some constant {c>0}. One of the major thrusts in this area, of course, is to do better than this simple repetition method.
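
A tiny simulation makes the decay visible. This is only a sketch of the repetition-plus-majority-vote idea, not tied to any particular BPP algorithm; a single “run” is just a biased coin that is correct with probability {2/3}, and the function names are mine.

```python
# A minimal simulation of amplification by repetition and majority vote:
# a single run is correct with probability 2/3; a majority over t
# independent runs is wrong only exponentially rarely in t.

import random

def majority_is_wrong(p_correct: float, t: int) -> bool:
    """One experiment: run t independent trials and take a majority vote."""
    correct_votes = sum(random.random() < p_correct for _ in range(t))
    return correct_votes <= t / 2   # majority wrong (ties count as wrong)

def error_rate(p_correct: float, t: int, experiments: int = 100_000) -> float:
    wrong = sum(majority_is_wrong(p_correct, t) for _ in range(experiments))
    return wrong / experiments

if __name__ == "__main__":
    for t in (1, 5, 15, 45):
        print(f"t={t:3d}: estimated error {error_rate(2/3, t):.5f}")
```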

{\bullet} The tensor power trick. This is a trick that is explained beautifully in Terry Tao’s great blog. See his post for the details, but let me try to explain what is going on. The trick—he calls it that—is to amplify an inequality and eventually get a better one. He uses the following transformation: take a function {f: X \rightarrow \mathbb{C}} and replace it by

\displaystyle  f^{\otimes n}(x_{1},\dots,x_{n}) = f(x_{1}) \cdots f(x_{n}).

He shows that using this transformation can often improve an inequality. In his hands this simple amplification idea can work wonders. Take a look at his post. Look at all of his posts.
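
To give the flavor, here is a cartoon of the simplest instance of the trick, abstracted from Tao’s post; the quantities {Q} and {R} are placeholders of mine. Suppose {Q(f)} and {R(f)} are nonnegative and multiplicative under the transformation above, so {Q(f^{\otimes n}) = Q(f)^{n}} and {R(f^{\otimes n}) = R(f)^{n}}, and suppose we have managed to prove {Q(f) \le C\, R(f)} for all {f}, with some constant {C \ge 1}. Applying the inequality to {f^{\otimes n}} gives

\displaystyle  Q(f)^{n} = Q(f^{\otimes n}) \le C\, R(f^{\otimes n}) = C\, R(f)^{n},

so {Q(f) \le C^{1/n} R(f)}; letting {n \rightarrow \infty} kills the constant and yields the stronger inequality {Q(f) \le R(f)}.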

{\bullet} Direct products. I have to stop somewhere, so I will stop with a recent paper by Russell Impagliazzo, Valentine Kabanets, and Avi Wigderson on “Direct Product Testing” from STOC 2009. This is really another type of repetition amplification; note the similarity to the last idea. The direct product code of a function {f} gives its values on all possible {k}-tuples: it consists of the tuples {(f(x_{1}),\dots,f(x_{k}))} for all {(x_{1},\dots,x_{k})}. This encoding is used to amplify hardness in many constructions. They prove some beautiful new results on how to test how close a given set of {k}-tuples is to a direct product.
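
As a toy illustration, and nothing more, here is the direct product encoding in Python together with a naive consistency check between two overlapping tuples. The actual test analyzed by Impagliazzo, Kabanets, and Wigderson is more subtle; the function names below are mine.

```python
# A toy illustration of the direct product encoding (not the actual test
# from the STOC 2009 paper): the k-wise direct product of f lists f's
# values on every k-tuple, and a naive check compares two tuples that
# share some coordinates.

from itertools import product

def direct_product(f, domain, k):
    """Return the k-wise direct product code of f as a dictionary:
    each k-tuple of inputs maps to the k-tuple of claimed outputs."""
    return {xs: tuple(f(x) for x in xs) for xs in product(domain, repeat=k)}

def consistent(code, tuple_a, tuple_b):
    """Naive check: wherever the two input tuples share an input,
    the claimed outputs must agree."""
    out_a, out_b = code[tuple_a], code[tuple_b]
    for i, x in enumerate(tuple_a):
        for j, y in enumerate(tuple_b):
            if x == y and out_a[i] != out_b[j]:
                return False
    return True

if __name__ == "__main__":
    f = lambda x: x % 2                              # a stand-in boolean function
    code = direct_product(f, domain=range(4), k=3)
    print(consistent(code, (0, 1, 2), (2, 3, 0)))    # True for an honest code
```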

Open Problems

One simple open problem is to create a list of methods that fall under this umbrella. I have listed only some of the amplification methods. I did not, for example, discuss the zig-zag graph product or other graph products. What are some others that I should have included? Please help.

The other problem is to create new methods that are currently missing. Here are a couple of potential ones:

  1. Can we make a PCR-like amplifier for other types of molecules? In particular, what about an amplifier for proteins? I do not believe that one is known. There are very indirect methods to make more of a protein, but that is not an amplifier. If I am right, there is a Nobel lurking here—but proteins have none of the neat properties of DNA, so it probably is a very tough problem.
  2. One of the top open problems today in computational game theory is finding good approximations to various game/economic problems. One approach, perhaps, is to try to create an amplifier here. Suppose that {G} is a bi-matrix game. Can we construct another game {H} so that an approximate strategy for {H} yields an even better strategy for {G}? One must be a bit careful, since there are hardness results; the amplifier must not conflict with them.
  3. Another of my favorite problems is the edit distance problem. There are many very good approximation methods for this famous problem. Can we build an amplifier here that could help improve these results? More precisely, given two strings {x} and {y} can we construct new strings {x'} and {y'} so that a good approximation to the edit distance between {x'} and {y'} yields an even better one between {x} and {y}?
Comments
  1. Tony (August 10, 2009):

    Things like boosting and co-training in machine learning also seem like a type of amplification. Given some weak learners, can they be combined to form a strong learner?

    • rjlipton: Good example.

  2. Someone (August 11, 2009):

    The content of the blog is great. It seems though that you are able to write faster than I am able to understand things…

    • rjlipton: What a kind comment.

  3. August 11, 2009:

    Here’s another one: Chernoff’s bounding method. Unsatisfied with Markov’s inequality, {\Pr(X \ge t\,\mathbb{E}[X]) \le 1/t}? Then try studying the variable {Y = e^{cX}} instead.


  4. asterix (April 18, 2010):

    Here’s another example of amplification at work:

    http://online.wsj.com/article/SB119871820846351717.html

    A free link to the above:

    http://www.realestatejournal.com/buysell/mortgages/20071228-ng.html
