Teaching Helps Research
How teaching interacts with research
Tim Budd is a computer scientist who works mostly in programming theory and practice. He was once a graduate student of mine, and did his thesis work on an APL compiler. Ken recalls that when he asked a certain undergraduate classmate in 1980 whether anyone had written a compiler for APL, the friend replied “A compiler for APL? That would be worth a Nobel Prize!” Well, we have missed for another year on our recurring prediction that a person we recognize as a computer scientist will win a Nobel Prize, though this year's prize in chemistry came close.
Today I want to talk about the relationship between teaching basic material and doing research.
Budd comes to mind for several reasons: for one, he is a dedicated and excellent teacher. He has written a large number of well-received textbooks on many topics related to programming; see this. On his home page he has an amusing quote:
Whenever I teach a course there always comes a moment as I’m reading the textbook where I say “you know, I think I could write a better textbook than this”. The unfortunate thing is that this is true even when I am using my own book!
I am not such a great teacher, but I try to make the material special for my students. One interesting principle that I discovered years ago is the close relationship between teaching and research. In my experience, teaching even basic material can lead to interesting research results. I will cite one ancient example of mine, a recent one of mine, and one of Ken's. For a wonderful example of this phenomenon, see this: the story of how Rao Kosaraju discovered a new algorithm for finding strongly connected components in directed graphs. It happened because he was teaching basic graph algorithms; I will not spoil the rest, so read about it there.
I think the connection between teaching and research is even more relevant these days because of the rise of on-line teaching, with the possibility that teaching could change radically in the future. One possible future, among many, is that we will no longer be teaching basic material. Our undergrads and perhaps even grad students will all be using some type of on-line system to gain the basic knowledge they need. This may or may not happen, but it does raise the issue of the connection between teaching and research. So let's discuss some examples.
Budd was a student of mine when I was at Yale. His PhD really came about because I happened to be teaching introductory programming based on APL. I have discussed this before in detail:
When I had to learn APL in order to teach the beginning class I decided to start a project on building a better implementation. The standard implementation of the language was as an interpreter, and one of my Yale Ph.D. students, Tim Budd, eventually wrote a compiler for APL. We also proved in modern terms a theorem about the one-liners of the language. Each one-liner could be implemented in logspace; this was not completely trivial, since the operators were so powerful. Perhaps another time I will discuss this work in more detail. For now see Tim’s book on his APL compiler.
Tim’s book has a simple title: An APL Compiler. There wasn’t one before. Today there is another prominent one, called APEX. There is also a translator to the C# programming language, whose intro page declares:
Code in APL – ship in C#: The Dream becomes a reality.
Is this a dream? Worthy of a Nobel Prize? Well, what happens in computer science is that both machines and software methods change. On the pipelined machines of the 1990s, on-the-fly compilation of segments of a program often became just as good as whole-program compilation.
However, our other work on APL functions being in logspace might possibly be more useful now with the advent of streaming computation models and the MapReduce framework. If they can be run in log-many passes by memoryless finite-state automata, according to Ken's vector-machine model, then they would be in an even more restricted streaming class, not just logspace.
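As a loose toy illustration, and not Ken's actual vector-machine model, one can picture an APL one-liner such as `+/2×⍳n` as a cascade of streaming passes in which the map stages keep no state between items and only the final reduction holds a logarithmically sized accumulator:

```python
def iota(n):
    """Stream 1, 2, ..., n, like APL's iota."""
    return range(1, n + 1)

def times2(stream):
    """A map pass: keeps no state between items."""
    for v in stream:
        yield 2 * v

def plus_reduce(stream):
    """The only stage with state: an accumulator of O(log n) bits."""
    total = 0
    for v in stream:
        total += v
    return total

# Evaluates the one-liner  +/ 2 × ⍳ n  as one streaming cascade.
print(plus_reduce(times2(iota(10))))  # 110
```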
A Poor Person’s PCP
The following arose this last week as I tried to make up my complexity class midterm. Consider the following problem, call it $\mathsf{3SAT}'$: a set of clauses $C_1,\dots,C_m$ is in this set provided at least $m-1$ of the clauses are simultaneously satisfiable. The following is an easy theorem—it was on my exam.
Theorem: The problem $\mathsf{3SAT}'$ is NP-complete.
The proof is easy. Let's allow arbitrary clauses, since it is easy to convert them into 3SAT format. Then add to the clauses $C_1,\dots,C_m$ the new pair of unit clauses $\{x\}$ and $\{\bar{x}\}$, where $x$ is a fresh variable.

Clearly if $C = C_1 \wedge \cdots \wedge C_m$ is satisfiable, then all but one of the enlarged set of clauses can be satisfied. Also, since every assignment must falsify one of $\{x\}$ and $\{\bar{x}\}$, if the enlarged set has an assignment that handles all but one clause, then every clause of $C$ must be satisfied.
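To make this concrete, here is a small Python sketch (my own illustration, not from the exam; the helper names are made up) that builds the padded instance and brute-forces the claim on a tiny example:

```python
from itertools import product

def max_satisfiable(clauses, num_vars):
    """Return the maximum number of clauses any assignment satisfies.
    A clause is a list of nonzero ints: +i means variable i, -i its negation."""
    best = 0
    for bits in product([False, True], repeat=num_vars):
        sat = sum(any((lit > 0) == bits[abs(lit) - 1] for lit in cl)
                  for cl in clauses)
        best = max(best, sat)
    return best

def pad_with_contradiction(clauses, num_vars):
    """Add unit clauses {x} and {not x} on a fresh variable x."""
    x = num_vars + 1
    return clauses + [[x], [-x]], num_vars + 1

# C is satisfiable iff all but one clause of the padded instance is.
C = [[1, 2], [-1, 2], [1, -2], [-1, -2]]   # unsatisfiable over 2 variables
padded, n = pad_with_contradiction(C, 2)
print(max_satisfiable(C, 2) == len(C))                # False: C unsatisfiable
print(max_satisfiable(padded, n) >= len(padded) - 1)  # False as well
```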
This is the solution my students found. I had in mind a different method, which none found. Take the clauses $C$ and create a new copy $C'$ that is identical except every variable $x$ used in $C$ is changed to a fresh variable $x'$. Then look at the set of clauses $C \cup C'$. The key is that if $C$ is satisfiable, then clearly so is all of $C \cup C'$. And if all but at most one of the clauses of $C \cup C'$ are satisfied, then clearly either $C$ or $C'$ is fully satisfied; since they are copies, $C$ is satisfiable either way.
What I liked about this latter idea is that it really uses a repetition code to prove the theorem: just state the problem twice and it becomes more error tolerant. This is the same as repeating a message to make it tolerant of errors. Note that we could map $C$ to a series of clause sets

$$C^{(1)}, C^{(2)}, \dots, C^{(k)},$$

where each $C^{(i)}$ is the same set of clauses on fresh variables. Then we would have either: (i) there is an assignment that makes all the clauses true, or (ii) every assignment leaves at least $k$ clauses false.
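Here is a hedged sketch of the $k$-copy map, reusing `max_satisfiable` and the example `C` from the sketch above; the name `replicate` is my own, and $k = 2$ recovers the two-copy construction:

```python
def replicate(clauses, num_vars, k):
    """Map C to k disjoint copies C^(1), ..., C^(k) on fresh variables.
    Copy i renames variable v to v + (i-1)*num_vars."""
    out = []
    for i in range(k):
        shift = i * num_vars
        out += [[lit + shift if lit > 0 else lit - shift for lit in cl]
                for cl in clauses]
    return out, k * num_vars

# If C is unsatisfiable, every assignment falsifies at least one
# clause in each copy, hence at least k clauses overall.
copies, n = replicate(C, 2, 3)
print(max_satisfiable(copies, n) <= len(copies) - 3)  # True for the C above
```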
One interesting issue is: could we directly follow this path to get really high error tolerance based just on a use of error-correcting codes? Of course the known proofs of the PCP Theorem use error-correction ideas, but they do not use a direct link to an error-correcting code. Can this be done? I recall Sanjeev Arora once saying to me that he believed the PCP Theorem really was a result about error correction, and that it should have a “pure” coding-theory proof.
Ken was once lecturing in an advanced graduate course on Leslie Valiant's theorem on the polynomial-time universality of the determinant. He followed a proof in a survey paper by Joachim von zur Gathen showing that, given an arithmetic formula $\phi$ of size $s$, one can construct a matrix $M$, of dimension linear in $s$ and whose entries are variables and constants, such that $\det(M) = \phi$. Ken told the class that this linear size expansion wasn't optimal but was easier for exposition. One of his students, Hong Liu, worked out a construction that even beats the previous best-known size when one defines the size of a formula carefully, and is indeed optimal. Moreover it is likewise easy to present, as shown by what became his short joint paper with Ken in the journal Information Processing Letters.
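As a tiny illustration of what universality means here (my own toy gadgets, not the von zur Gathen or Liu-Regan constructions), products and sums of variables can each be written as small determinants, and such gadgets compose to handle larger formulas:

```python
from sympy import Matrix, symbols

x, y = symbols('x y')

# A product as a determinant: the diagonal matrix diag(x, y).
prod_gadget = Matrix([[x, 0],
                      [0, y]])
print(prod_gadget.det())   # x*y

# A sum as a determinant: det([[x, 1], [-y, 1]]) = x*1 - 1*(-y) = x + y.
sum_gadget = Matrix([[x, 1],
                     [-y, 1]])
print(sum_gadget.det())    # x + y
```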
Do you have other stories of connecting teaching to research?