# Problems not Algorithms

*Clear statements of problems are critical*

*A problem well stated is a problem half solved.* Charles F. Kettering

William Kahan is a famous numerical analyst. He won the Turing Award for his work on making computers do arithmetic correctly. It may sound simple, but it was really difficult and important work. The key is that numbers are represented on computers by a fixed number of bits. The fact that the numbers have a fixed number of bits makes even simple operations like addition tricky.

My two favorite stories about him concern his tremendous candor. The first is about his views on “fuzzy sets” and the second is on “who should get tenure”. I will only tell the first one—to protect the innocent and the guilty. When I first arrived at the Computer Science Department at Berkeley, the faculty decided to have a new series of lectures that fall. The plan was to have short lectures by each faculty member of the department—in this way new graduate students would learn each faculty member’s research area.

One day Professor Zadeh was presenting his area of research—an area that he created called “fuzzy sets”. Fuzzy sets were then, and still are today, a controversial area. Some researchers do not think much of it. However, the area is immensely popular with many others. There are countless conferences, books, and journals devoted completely to this area. Kahan was in the audience while Zadeh was speaking. Finally, at some point Kahan could take it no more. He stood up, and Zadeh asked him what his question was. Kahan stated in the most eloquent manner that it might be okay to work on fuzzy sets in the privacy of your own basement (after all, this was Berkeley), but there was no excuse for exposing young minds to this “stuff”—his term was stronger. We all were shocked. For a few seconds no one spoke. I wondered how in the world Zadeh could respond. Zadeh finally said, “thank you for your comments”, and went on with the rest of his talk, as if nothing had happened. The next year the faculty talks were cancelled.

William Kahan is one of the most articulate speakers I have ever had the pleasure to talk with. Well, with him you tended to listen more than talk *with*. But in any case, he once told me a striking story. It demonstrated what I mean about problems, not algorithms. In the 1960s there was a major quest in his field of research: to find algorithms that could solve high degree polynomial equations. Engineers came to computer scientists like Kahan asking for better and faster ways to solve their problems.

Finally, one day Kahan asked why the engineers needed to solve their problem. What he really wanted to know was, “where are all these equations coming from that you want us to solve?” They replied, “well, we have these symmetric real matrices. We compute the characteristic polynomials and want their roots.” Kahan was aghast. What they really wanted, of course, were the eigenvalues of the matrix. Since the matrix is symmetric, these are always real. Moreover, the fact that they arise from a matrix allows many much more stable and more efficient methods for solving the problem. Numerical researchers then started to work on the eigenvalue problem rather than the root-finding problem.
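The contrast is visible even in the 2×2 case. Here is a minimal sketch (the function names are my own, purely for illustration): the direct formula for the eigenvalues of a symmetric matrix is manifestly real, while the characteristic-polynomial route is exactly the detour the engineers were taking.

```python
import math

def sym2x2_eigenvalues(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]], computed directly.
    The radius term is a square root of (a-c)^2/4 + b^2, which is never
    negative -- so the eigenvalues of a real symmetric matrix are always real."""
    half_trace = (a + c) / 2.0
    radius = math.hypot((a - c) / 2.0, b)  # hypot is numerically careful
    return half_trace + radius, half_trace - radius

def char_poly_roots(a, b, c):
    """The detour: roots of the characteristic polynomial
    t^2 - (a + c) t + (a c - b^2). Same answer here, but in general the
    polynomial route is less stable and hides the symmetric structure."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0
```

For [[2, 1], [1, 2]] both routes give the eigenvalues 3 and 1; the point is that the first function never even forms the polynomial.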

The point of the story is simple, Kahan told me. Never just answer a question. You must find out what the real problem is and solve that. Solving the problem that you are asked may or may not be the right thing to do.

## The “A to B to C mistake”

Let’s get back to problems. They are key. Not algorithms. Problems must come first. Think of algorithms as solutions. In everyday experience it makes no sense to talk about the solution to a crossword puzzle without the clues.

Consider sports. Imagine a football game. It’s late in the last quarter—there are only three seconds left in the game. Your favorite team (mine is the NY Giants) is down 4 points. Their *problem* is to score a touchdown. A field goal is only worth 3 points and they would still lose. So they need a touchdown. That’s the problem. There are many possible solutions, but the problem is to somehow score a touchdown, not to run a particular play. It does not matter whether they run or throw the ball, as long as it gets into the end zone for a touchdown.

It’s tempting not to state problems by themselves. We often include a solution, or even part of the solution, as part of the problem. This is a disaster. The reason it’s a disaster is that by not stating the problem as simply as possible, we take potential solutions off the table. The reason we often do include part of the solution is that we think we are being helpful. If we knew a solution there would be no problem. However, if we think we have part of a solution, it is tempting to change the problem. When we add a partial solution we effectively constrain the solver. We may constrain so much that there is no solution at all. Or we may constrain so that the only solution is a poor one, or one that is hard to find.

I call this phenomenon the “A to B to C mistake”. Imagine that you wish to get from some location A to another location C. That’s your problem. You might ask a friend to help you go from A to C. That’s great. That’s exactly the right approach: ask the question that you really need to have solved. This is the best approach to actually getting the best answer.

The “A to B to C mistake” is like “a little knowledge is dangerous”, a slight misquote of Alexander Pope. Suppose that you think you know that B is on the way from A to C. Then, you might be tempted to ask your friend how to get from A to B and then from B to C. This is a different question. You are actually not at all interested in going to B. Since you believe that any solution must go through B, you have restricted the possible solutions. This is a potential disaster. You may have restricted your friend in a way that eliminates a great solution that does not go through B.
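The route metaphor can be made literal with a toy example. The sketch below (the map is hypothetical, invented only to illustrate the point) finds shortest paths in an unweighted graph by breadth-first search, and shows that insisting on passing through B yields a strictly worse answer than asking for A to C directly.

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    """Length (in hops) of a shortest path via BFS; None if unreachable."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            return dist[u]
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

# A made-up map: there is a direct-ish route A -> D -> C,
# but the "obvious" waypoint B sits on a longer road.
graph = {
    "A": ["B", "D"],
    "D": ["C"],
    "B": ["E"],
    "E": ["C"],
}

direct = shortest_path_len(graph, "A", "C")        # A -> D -> C: 2 hops
via_b = (shortest_path_len(graph, "A", "B")
         + shortest_path_len(graph, "B", "C"))     # A -> B -> E -> C: 3 hops
```

Asking the A-to-B and B-to-C questions separately gives 3 hops; asking the real question gives 2. The constrained version can never do better, and here it does worse.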

As an example, consider the “protein folding problem”: take the sequence of amino acids that forms a particular protein and predict its 3-dimensional structure, that is, how the sequence of acids “folds” up in 3-space. This is a well known computational problem. The best known algorithms for it are extremely inefficient; some computations take months on the world’s fastest computers. In the following discussion, don’t worry if the concepts of protein, amino acid, or folding are new to you. They are not necessary to understand the example. I claim that the question of discovering the three-dimensional structure of a protein is another example of the “A to B to C mistake”. I have asked my biology colleagues, “do you *really* need the structure of the protein? Or are you interested in something else?” After some discussion they usually agree that they do not need the exact structure. What they want to know is, for example, where drugs can “dock”. This is critical to finding drugs that can treat diseases—often a way to stop a disease is to block the action of a critical protein.

The exact 3-dimensional structure of a protein does supply this information—if we know the structure we can, in principle, find docking locations. However, I claim this is exactly the “A to B to C mistake”. If they really need to know where drugs can dock on a protein, then they should ask that question. Do not assume that we must first go through the intermediate point of finding the 3-dimensional structure to find the docking places. I am not saying that I have discovered a way to compute the docking places *without* the structure. What I am saying is that by asking the “wrong” question, biologists are potentially restricting solutions. Perhaps there is an easy algorithm that can find docking locations without computing the structure. It is possible.

This is probably not the right place to go on about this issue, but my point is that it is critical in computer science to pose the right problem. Let’s not make the mistake of asking the wrong question. Please do not make the “A to B to C mistake”.

## Open Problems

I think the general challenge for the theory community is to try to find out what the real problem is and not make the “A to B to C mistake”. We should, especially when we are working on cross-disciplinary problems, try hard to find out what the real problems are. If we can do this, then we may be able to greatly enhance the impact that we have.

Let me end with one concrete, simple example. A famous result is that the Discrete Fourier Transform (DFT) on *n* points can be computed in time *O(n log n)*. This result has revolutionized signal processing, among many other fields. But we rarely want to compute the value of *one* transform. Almost always we want to compute the transforms of many different sets of points. Suppose that we wanted to do this for *m* different sets of *n* points. Is there a way to compute all *m* DFTs in total time better than *O(mn log n)*? Could we somehow share work across the transforms and do better? Why not?
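For concreteness, here is a sketch of the two single-transform algorithms: the *O(n²)* evaluation straight from the definition, and the *O(n log n)* Cooley–Tukey recursion (a textbook version, not anything specific to the open problem). Batch-computing *m* transforms today simply repeats this *m* times; the question above is whether the batch as a whole can be done faster.

```python
import cmath

def naive_dft(x):
    """Direct O(n^2) evaluation of the DFT definition."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def fft(x):
    """Recursive Cooley-Tukey FFT, O(n log n); n must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed points
    odd = fft(x[1::2])    # transform of odd-indexed points
    out = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

Running both on the same input, say `[1, 2, 3, 4]`, gives matching results; the first entry is the plain sum of the inputs, here 10.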

*It isn’t that they can’t see the solution. It is that they can’t see the problem.* G.K. Chesterton



Great post, as is the one on big iron that led me here.

When I was a grad student, I was involved in organizing the US Defense Department MUC evaluations of information extraction (IE) technology. (IE systems use natural language processing to extract pieces of information from text and fill out database records, or more elaborate structures.) Since I was designing the tasks and effectiveness measures, I would constantly ask the DOD people what they were using IE for. Since I didn’t have a security clearance, they wouldn’t tell me.

Finally I found one person who described an actual IE application. The information was extracted from news stories and used to populate relational database records, so that users could search for news stories using SQL. They had never heard of text retrieval systems!

This reminds me of Vapnik’s principle: “When solving a problem of interest, do not solve a more general problem as an intermediate step.”

It’s amazing that you call this the A to B to C mistake, because, back in university, we used the same phrase to describe the behaviour of one of our friends. This guy wanted to do C, then thought that a good intermediate step was B, and then would ask everybody how to do B. The problem is that he had a very weird way of choosing B. He was notorious for confusing people.

One of the most common examples of “A to B to C mistake” in complexity theory is to try to prove something as hard as P≠NP or “quantum computers aren’t feasible”, while in fact your main concern is to convince yourself that RSA cryptography is secure. Security shouldn’t always be presented in the media as the best – if not only – selling point of complexity theory. Security doesn’t need a proof of P≠NP. The absence of a proof of P=NP is more than enough.

I think it is also true that if P=NP then security does need correctness of the algorithm.