A Lazy Approach To Problem Solving
Solve the right problem, not the given problem
Leon Redbone is a singer and guitarist specializing in interpretations of early 20th-century music—including jazz, blues, and Tin Pan Alley classics. With his own unique style, he has an almost cult-like following—to which I belong. One of his classic renditions is of the song “Lazy Bones” written back in 1933 by Johnny Mercer and Hoagy Carmichael.
Today, inspired by the song, I would like to suggest that we actively be more lazy—that we be lazy when it comes to problem solving.
The central thesis is: Do not blindly solve a problem by working on the problem. Instead ask, is the problem the real fundamental question? Or is there a more basic—and hopefully easier to solve—problem that needs to be solved?
The idea taken to its extreme would be to not solve the problem at all, but continue on as if the problem had been solved. Only solve the problem when you are forced to. This is related to the notion in programming language theory called lazy evaluation. Its history is a bit convoluted. The notion was introduced into the mother of all languages, the lambda calculus, by Christopher Wadsworth in 1971, then for other programming languages in two independent papers in 1976 by Peter Henderson and Jim Morris, and by Daniel Friedman and David Wise.
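In programming terms, lazy evaluation delays a computation until its value is actually demanded. Here is a minimal sketch in Python, using a hand-rolled "thunk"; this is my own illustration, not code from any of the papers above, and lazy languages in the Wadsworth tradition do this implicitly:

```python
# A tiny illustration of lazy evaluation: wrap a computation in a thunk
# and force it only if (and when) its value is actually needed.

def expensive():
    print("computing...")
    return 42

class Thunk:
    def __init__(self, f):
        self.f, self.done, self.value = f, False, None

    def force(self):
        # Run the suspended computation at most once, then cache it.
        if not self.done:
            self.value, self.done = self.f(), True
        return self.value

t = Thunk(expensive)   # nothing has been computed yet
print(t.force())       # prints "computing..." and then 42
print(t.force())       # cached: prints just 42
```

If the value is never forced, the expensive work is never done at all, which is exactly the attitude advocated here.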
In order to explain the notion I will give three examples. The first is almost trivial, the second is almost impossible, and the third is just right; at least, the third seems to be interesting and new.
Read on to see what we are talking about. To me it sounds a bit like the story of Goldilocks and the Three Bears. I recall that one bed was too hard, one too soft, and one was just right. Here one example is too easy, one is too hard, and one is approachable.
I have discussed the issue of problem solving and talked about the “A to B to C fallacy.” There I also talked about solving the right problem. Here is the promised trivial example, which hopefully will help demonstrate the idea of being lazy. Imagine that I present to you a large-degree polynomial
$$p(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0,$$
which I promise has only real roots. That restriction is not necessary, but will make the example easier to explain. If I ask you for the roots of this polynomial, you may sigh. The problem of finding the roots is in polynomial time, provided one needs only to be exponentially close, but the algorithm is far from a one-liner. If you had to write the code for it you would not be happy.
The lazy approach is to immediately say “sure, I have the roots.” Just pretend that you have them and call them $r_1, \dots, r_n$. Then wait until we actually need to determine something about the roots. For example, if I ask for the norm of the roots as a vector you can smile. This is a symmetric function of the roots, and therefore can be obtained from the coefficients of the polynomial without computing the individual roots. In this case the lazy approach works extremely well.
Note, this is based on the formula
$$\sum_{i} r_i^2 = \Big(\sum_{i} r_i\Big)^2 - 2\sum_{i<j} r_i r_j,$$
together with $\sum_{i} r_i$ and $\sum_{i<j} r_i r_j$ being actual coefficients of the polynomial, up to an easy-to-determine sign. In this case being lazy was a huge win.
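To make the example concrete, here is a small sketch in Python. The helper `poly_from_roots` is mine, used only to set up a test polynomial; the point is that the squared norm of the roots falls out of the two top coefficients of a monic polynomial, with no root-finding at all:

```python
import math

# Sketch of the lazy root-norm trick for a monic polynomial with real roots.
# poly_from_roots is a hypothetical helper used only to build the example.

def poly_from_roots(roots):
    """Coefficients c[0..n] (ascending powers, c[n] = 1) of prod (x - r)."""
    coeffs = [1.0]
    for r in roots:
        new = [0.0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k + 1] += c       # contribution of x * (c x^k)
            new[k] -= r * c       # contribution of -r * (c x^k)
        coeffs = new
    return coeffs

roots = [1.0, -2.0, 3.5, 0.5]
coeffs = poly_from_roots(roots)
n = len(roots)

# sum r_i^2 = (sum r_i)^2 - 2 * sum_{i<j} r_i r_j, and both symmetric
# functions are coefficients of the monic polynomial up to sign:
e1 = -coeffs[n - 1]               # sum of the roots
e2 = coeffs[n - 2]                # sum of the pairwise products
lazy_norm = math.sqrt(e1 * e1 - 2.0 * e2)

direct_norm = math.sqrt(sum(r * r for r in roots))
assert abs(lazy_norm - direct_norm) < 1e-9   # same answer, no root-finding
```

Of course here we built the polynomial from its roots, which defeats the purpose; in the real scenario only the coefficients are given, and the lazy computation on the last few lines is all that is ever needed.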
The above example, while trivial, is one that I have thought about as a motivation for trying to be lazy with other more important problems. One of the central problems in biology is discovering the structure of a protein, the protein folding problem. The standard dogma is that proteins fold into a 3-dimensional structure, and that this structure holds the key for understanding how the protein works.
However, there is some recent disagreement that we will discuss another day. Unfortunately, discovering the 3-dimensional structure is still a challenging and difficult problem.
I have often asked biologists, why do they want this structure? One paper raises this very question and answers it this way:
- Structure is more easily conserved than sequence.
- Similar folds often share similar function.
- Remote similarities may only be detectable at structure level.
Another reason for their interest in finding folded structures is that they hold the key for where small molecules can “dock” and in this way turn off the protein. If the protein causes a disease, then stopping its function can be a cure. Essentially, these small molecules are drugs. The idea is if we can determine the structure, then we can in principle find which small molecules can dock, and thus we can find new drugs.
The lazy problem-solving approach here says: Given a protein, do not determine its structure, but rather determine which small molecules can dock. I have discussed this over the years with my colleagues in biology. The usual answer is, with an amazed look: “can you do that?” Unfortunately, today I cannot. But perhaps, a big perhaps, perhaps we can determine potential molecules that can dock without the computational effort of finding the structure.
You can see, I hope, the analogy with the root-finding problem.
Do not find the roots (the structure); instead find the docking molecules (the norm of the roots).
I doubt it will be as simple for proteins as for polynomials, but I do not believe that this is an avenue that people are currently pursuing. If the lazy way can be followed up, then here is a new approach to the whole drug discovery problem.
Let’s turn to a problem that lies between the previous two. It does not seem trivial, but it does seem possible. The idea is that when we solve a system
$$Ax = b$$
for $x$, it is likely that we do not really want $x$ itself. We want some information about $x$. For example, we might wish to know the norm of $x$. The question we plan to address is: are there algorithms that can do this faster than solving the entire linear system?
The answer is yes, provided we can use pre-processing. In particular we can prove the following theorem:
Theorem 1 Suppose that $A$ is an $n$ by $n$ invertible matrix over the reals. Let $\epsilon > 0$ be fixed, and let $S$ (a set of indices) be fixed. Then there is a randomized polynomial time algorithm that, given the vector $b$, determines
$$\sum_{i \in S} x_i^2$$
up to a multiplicative error of $1 + \epsilon$, where $x$ is the solution to $Ax = b$. The algorithm does require a pre-processing step that is given $A$, $\epsilon$, and $S$ in advance.
Note that this theorem is exactly analogous to the polynomial example. We can compute partial information about the solution without finding all of $x$. I think this result, whose proof I will explain in the near future, captures the lazy problem-solving theme nicely. It avoids solving the full linear system as long as the matrix stays fixed. I would like to see if there are some realistic applications of this theorem. The question is about the prevalence of linear-system problems that have three critical properties:
- The matrix $A$ stays the same, but the right-hand side, $b$, changes.
- Only partial information about $x$, the solution, is required.
- There is no (obvious) way to invert $A$ faster than the stated pre-processing time.
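One way to see why pre-processing can help, though this is only my own back-of-the-envelope sketch and certainly not the algorithm behind the theorem: when $S$ is small, pre-compute the rows of $A^{-1}$ indexed by $S$ once; each new right-hand side $b$ then costs only $O(|S|\,n)$ work instead of a full solve. All helper names below are hypothetical:

```python
# Sketch: answer "sum of x_i^2 over i in S" for many b's, for one fixed A.
# This illustrates the three properties above; it is not the theorem's proof.

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting (dense, O(n^3))."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def preprocess(A, S):
    """Row i of A^{-1} is the solution y of A^T y = e_i; compute them once."""
    n = len(A)
    At = [[A[r][c] for r in range(n)] for c in range(n)]  # transpose of A
    return {i: solve(At, [1.0 if j == i else 0.0 for j in range(n)]) for i in S}

def partial_norm_sq(rows, b):
    """sum_{i in S} x_i^2, computed without ever forming the full solution x."""
    return sum(sum(ri * bi for ri, bi in zip(row, b)) ** 2 for row in rows.values())

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
S = {0, 2}
rows = preprocess(A, S)        # expensive, but done once for fixed A and S
b = [1.0, 2.0, 3.0]            # the right-hand side may change freely
lazy = partial_norm_sq(rows, b)
direct = sum(solve(A, b)[i] ** 2 for i in S)
assert abs(lazy - direct) < 1e-9
```

The per-query saving is only interesting when $|S|$ is small; the theorem is evidently stronger, since it handles the error parameter $\epsilon$ with randomization.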
I would also like to add that a similar theorem can be proved when other functions of $x$ are needed. I do not yet know what information about $x$ can be computed efficiently.
The song “Lazy Bones” is also a brilliant example of problem solving. The legend, which appears to be true, is that it happened as follows. According to Carmichael, in an interview, Mercer came into Carmichael’s apartment in New York one day and:
[He] saw Hoagy “snoozin” on his couch. Mercer said, “Hoag, I’m gonna write a song called ‘Lazy Bones’.” Carmichael said, “Well, let’s get at it.” They went over to Hoagy’s piano, Johnny said the first line and Hoagy started playing a melody. The song was done in twenty minutes.
Perhaps we can solve more problems with this let’s-get-it-done-now attitude. The next time you see a colleague asleep on a couch, say “let’s solve it,” and go to the computer.