The World is Digital
One of the biggest insights of computer science is that the world is digital.
Alan Kay is a brilliant computer scientist who has won the Turing Award, along with many other honors. He is famous for creating the Dynabook concept, the idea that became the laptop, and for creating many programming languages, such as Smalltalk.
Today I want to talk about one of the biggest results in all of Computer Science. No, it is not the Halting Problem, it is not the P=NP question, it is not that Linear Programming is in P, nor is it any of the many other beautiful results of theory. It is, in my opinion, the realization that the world is digital (WID).
This summer I was at an NSF workshop on Computational Thinking, an idea championed by Jeannette Wing. During the workshop I said:
In my opinion, one of the key insights in all of Computer Science is that the world is digital.
I also added that this was a relatively recent result, and it has had a tremendous impact on the world.
As soon as I said this, Bill Wulf and Alan Kay both jumped all over me, and continued their comments into the next coffee break. They must have thought that I was crazy or worse—perhaps they still do. Their position was that WID had been known forever—back to the Greeks—therefore it was not new, nor was it a deep insight. I disagreed. It’s hard to argue with the past president of the National Academy of Engineering, Bill; and with a Turing Award winner, Alan. But I tried. I did not get them to budge, not at all.
I still think I am right. I will explain why and I hope that you might agree with me, but either way let me know what you think.
The World is Digital
The more original a discovery, the more obvious it seems afterwards—Arthur Koestler, journalist, novelist, social philosopher, and science writer.
The fact that all things can be represented by sequences of zeroes and ones sounds like a simple idea, but I claim that it is deep and profound.
Look around you. You may see, as I do, beautiful trees outside in the backyard. You may hear wonderful sounds: birds chirping; music playing, whether classical or rock or hip hop or another genre. You may be reading a book, or watching a show on an HD-TV and seeing images of people that are almost lifelike.
Yet no matter how beautiful the images are, no matter how complex the sounds are, they all can be reduced to sequences of bits. The world truly is digital. In a sense, 0 and 1 are computer science's basic atomic particles. Information consists of arrangements of 0's and 1's. Nothing else is needed. Computer science differs from modern physics here: while physics uses hundreds of "basic particles" to build the world, we need only two.
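To make the claim concrete, here is a tiny sketch in Python; the particular string, color, and sample value are invented for the illustration, but the reduction to bits is the whole story:

```python
# Each kind of data below ends up as nothing more than a sequence of bits.
text = "tree"                      # some text
text_bits = ''.join(format(b, '08b') for b in text.encode('utf-8'))

pixel = (34, 139, 34)              # one green dot of a digital photo (R, G, B)
pixel_bits = ''.join(format(c, '08b') for c in pixel)

sample = 20000                     # one 16-bit sample of a birdsong recording
sample_bits = format(sample, '016b')

print(text_bits)    # 01110100011100100110010101100101
print(pixel_bits)   # 001000101000101100100010
print(sample_bits)  # 0100111000100000
```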
I believe that the discovery that all things can be viewed as digital is as revolutionary as the notion that the world is made of atomic particles. But the WID took half a century longer to discover than even the simplest atomic particles.
Electrons, for example, were discovered by Joseph Thomson in 1897, long before we knew that the WID. Some may scoff and say that it's obvious that the WID. But I think I can argue that it is not so. Yes, particle physics is a deep theory; however, Computer Science's insight that everything can be represented digitally is profound.
Who Knew What When?
One way to argue, I believe, that WID is profound is that it is a recent discovery. I believe that if it was “obvious,” then it would have been known for a long time.
The Ancient Greeks: Greats like Euclid did not know the WID. In all of his famous Elements, there is not one section on boolean functions.
Yet, Alan and Bill said to me that the ancient Greeks already knew that the WID. Of course, the Greeks knew that the square root of 2 was not a rational number. Somehow, Alan and Bill felt that this showed that they knew that the WID. I think the Greeks knew many wonderful things, but that they were in the dark with respect to the WID—at least that is my opinion.
A kind of atomic theory was known, without proof, to the Greeks. That the world is made of discrete atoms is not the same, I would argue, as the WID. They did not make the key jump: that is, they did not realize that any object of any kind could be represented as a binary string. It seems obvious to us today, but it was not centuries ago.
The Classic Mathematicians: The great mathematicians such as Augustin Cauchy, Joseph Fourier, Carl Gauss, David Hilbert, Pierre-Simon Laplace, and Isaac Newton did not know the WID. They mainly studied continuous systems, and when they did work on discrete systems, they worked with the integers. But I do not believe they ever thought about boolean functions, nor do I think that they knew the WID.
Lewis Carroll: Greats like Lewis Carroll—Charles Dodgson—did not know that the WID. While he is famous for his Alice In Wonderland series of books, during his later years he spent a great deal of time working on symbolic logic. He was especially interested in the theory of syllogisms: a set of premises that lead to a conclusion. Here is one of his examples:
Babies are illogical
Nobody is despised who can manage a crocodile
Illogical persons are despised
He then concluded that
Babies cannot manage crocodiles
Lewis Carroll used a version of Venn diagrams to help his readers understand his logical examples; these diagrams are named after the mathematician John Venn, who introduced them in 1880. Carroll worked out a method of squares within squares that worked for several letters at once—where a letter is a "proposition."
What I find interesting is that Carroll did not see that he was working with boolean logic. His examples and notation are fairly complex, and show, I believe, again that the central nature of bits and boolean values was not understood by him or by his contemporaries.
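Expressed with boolean values, the syllogism above is just a chain of implications. Here is a minimal sketch (the variable names are my own shorthand, not Carroll's) that checks the conclusion by trying every truth assignment:

```python
from itertools import product

# Shorthand propositions: b = "is a baby", i = "is illogical",
# d = "is despised", m = "can manage a crocodile".
# Premises: b -> i, m -> not d, i -> d.
# Claimed conclusion: b -> not m, i.e. babies cannot manage crocodiles.
def implies(p, q):
    return (not p) or q

conclusion_holds = all(
    implies(b, not m)
    for b, i, d, m in product([False, True], repeat=4)
    if implies(b, i) and implies(m, not d) and implies(i, d)
)
print(conclusion_holds)  # True: every assignment satisfying the premises satisfies the conclusion
```

Four boolean variables and sixteen truth assignments capture the whole example; Carroll's squares within squares were doing essentially this bookkeeping, just without the bits.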
Gödel: Greats like Kurt Gödel did not know the WID. If he had, he would not have needed to invent his elaborate scheme for numbering proofs. If he had realized that proofs were only a finite collection of symbols, he could have assigned each symbol a unique binary string. That’s all he needed to do: no primes, no fancy facts about primes, just binary strings. Indeed, I never present his complex numbering system when I teach Gödel’s Theorem; I just explain that proofs and formulas are binary strings.
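To see how little machinery is needed, here is a minimal sketch of the binary-string alternative; the toy alphabet and the code width are my own assumptions, not Gödel's actual symbol set:

```python
# Give every symbol of the formal language a fixed-width binary code and
# concatenate the codes, so each formula or proof becomes one binary string.
SYMBOLS = ['0', 'S', '+', '*', '=', '(', ')', 'x', '~', '&', 'v', '->', 'A', 'E']
WIDTH = 4                                   # 2^4 = 16 codes cover this toy alphabet
CODE = {s: format(i, f'0{WIDTH}b') for i, s in enumerate(SYMBOLS)}

def encode(symbols):
    """Map a sequence of symbols to a single binary string."""
    return ''.join(CODE[s] for s in symbols)

def decode(bits):
    """Recover the symbols; this works because every code has the same width."""
    rev = {v: k for k, v in CODE.items()}
    return [rev[bits[i:i + WIDTH]] for i in range(0, len(bits), WIDTH)]

print(encode(['S', '0', '=', 'S', '0']))    # 00010000010000010000
print(decode('00010000010000010000'))       # ['S', '0', '=', 'S', '0']
```

No primes and no unique-factorization facts are needed; distinct formulas get distinct strings simply because the fixed-width codes can always be cut back apart.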
Markov: Greats like the Russian mathematician Andrey Markov did not know the WID. He invented a model of computation that he called "normal algorithms." Unlike Turing Machines, his model was based on string rewriting rules. I remember reading his 1950 book on his theory of algorithms. One of the striking parts of the book, to me, was the introduction. His model needed to restrict itself to a finite set of symbols—otherwise, his rewriting rules would not be finite. Today, since we know that the WID, this is obviously no issue. However, when he wrote his book he was concerned about the restriction to a finite alphabet. He talked, at length, about the infinite number of ways that one might be able to write a script letter such as an "a." But he reasoned that while there might be an infinite number of script letter "a"s, the restriction to a fixed-size alphabet was just fine.
That Markov felt, in 1950, that he had to discuss the restriction to a finite alphabet shows me that he was unaware that the WID. Today if I had to argue why a finite alphabet suffices to describe any script letter, I would point out that we can just represent the letter as a digital image. Let the image be N by N pixels, where N is some huge number—if N is large enough, I would argue that if a human can tell two script letters apart, then they would have different digital representations.
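A small made-up illustration of that argument: two variants of a letter, each rasterized into a tiny grid of bits. If a human can tell them apart, some pixel must differ, so the bit strings differ too.

```python
# Two hand-made 4x4 rasterizations of a script letter; the grids are invented
# purely to illustrate the argument, not taken from any real font.
variant_one = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
]
variant_two = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 1, 1],   # one extra inked pixel: a visible flourish
    [0, 1, 1, 1],
]

def as_bits(grid):
    """Flatten a pixel grid into one binary string."""
    return ''.join(str(bit) for row in grid for bit in row)

print(as_bits(variant_one))                          # 0110100110010111
print(as_bits(variant_two))                          # 0110100110110111
print(as_bits(variant_one) != as_bits(variant_two))  # True: distinct letters, distinct strings
```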
How I Came to Know the World is Digital
I remember when it finally began to dawn on me that the world is digital. I would like to say that I always knew, or at least that it came to me in a flash one day. But that would be a lie. It took me years to realize it. It’s one of those ideas or notions that came slowly to me. However, once I saw it, I wondered how I could have not always known it.
At Carnegie Mellon, where I got my Ph.D., we were never taught that the WID. My professors, like Albert Meyer and Alan Perlis, never told me. Meyer was then, and is now, a brilliant researcher—he never told me the WID. Perlis was, at the time, the chair of the entire department of Computer Science, yet he never told me either.
When I was an assistant professor of Computer Science at Yale University in 1976, I got my first inkling that the world could be viewed as digital. I was on the phone with a friend, Bob Sedgewick, who at the time was an assistant professor at Brown University. He was telling me how a newly purchased printer worked.
Now you have to go back 30 years to recall that printers were once very different from the beautiful laser/ink-jet printers of today. In those days a printer was a typewriter on steroids. In case you have never used one, here is how a typewriter worked. There was a keyboard, much like on today's laptops. But when you pressed a key, a piece of metal hit an ink ribbon and then hit your paper. In this way each letter was printed.
Printers did the same—again each letter had a piece of metal that was pressed onto the paper. The user was limited to the font and symbols which were built into the printer. Some printers had clever arrangements, so the printer could go fast. Some had the metal letters arranged on wheels; others had them arranged in even more complex configurations. In any event, typewriters and printers used the impact of a piece of metal onto an ink ribbon to leave a mark on paper. The only difference between printers and typewriters was speed.
Back to my story: Bob was telling me about a new kind of printer that used dots to represent each letter or symbol on the page. He explained how the page was coded as an array of 0's and 1's. Where a 1 occurred, the printer placed ink on the page; a 0 instructed the printer to leave that spot blank. The way the ink was placed onto the paper was not laser or ink-jet based, by the way. This early printer used a more primitive technology; however, the page was represented as an array of bits—it was digital.
I have to say, I was surprised. I knew about Gödel numbers. I knew about binary numbers. I knew a lot of computer science—I thought. However, I somehow was surprised that the printer was just placing bits onto a piece of paper. I did not see that printed words and images could be represented solely by dots. Really.
What Bob explained was that he had written a program that created an array of 0's and 1's. Then his program sent the array to the printer, which created a printed page. He explained, further, that the program could create text or images or anything in this manner. This was a major shift from pieces of metal striking a ribbon and putting ink onto a page.
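Here is a rough modern sketch of the kind of thing Bob's program was doing; the tiny bitmap and the console standing in for the printer are my own inventions, not his actual code:

```python
# The page (here, a single letter) is just an array of 0's and 1's; the
# "printer" puts ink wherever it sees a 1 and leaves 0's blank.
LETTER_R = [                    # a made-up 7x5 dot-matrix rendering of 'R'
    [1, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
]

def print_bitmap(bitmap, ink='#', blank=' '):
    """Render an array of bits the way a dot printer would: ink on 1, nothing on 0."""
    for row in bitmap:
        print(''.join(ink if bit else blank for bit in row))

print_bitmap(LETTER_R)
```

Nothing in print_bitmap cares whether the bits came from a font, a photograph, or a hand-drawn squiggle; any array of bits prints equally well.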
Typewriters and printers had been limited to print only the images that were built into the metal keys or balls. Now with the digital approach of using bits as a way to control the printing process, a whole new vista opened up. Any picture, any image, any font could, in principle, be represented, or printed, without changing the hardware.
This was a huge breakthrough.
The Chicken and The Egg Argument
I always show these posts to Subruk before I post them; otherwise, they would be a mess. This time he raised an interesting point, and we decided that he should make it himself, so that I would not change my thoughts to conform to his insight. I am doing this partly because I do not completely agree with his point, and partly because I think it is a great insight that he deserves to make in his own words. The plan is that he will add a comment to this post once it is out there for you to read.
I would, however, like to address an issue that is related to his point, but please read his comment.
Mathematics is driven by practice and also drives practice. There are many examples of deep mathematics created to solve very practical problems. For example, the theory of Fourier Series was driven by the need to solve certain differential equations. Many other examples come to mind: geometry for the construction of temples, calculus for astronomy, probability theory for life insurance tables, and on and on.
On the other hand, many times mathematics has been ahead of practice. Godfrey Hardy was actually proud that his favorite area, number theory, was pure—that it had no possible applications. Of course we can only speculate what he would think of modern cryptography’s use of number theory as a basis of encryption. Other areas of mathematics were also developed because they were beautiful. Many examples come to mind: abstract measure theory, set theory, algebraic geometry, infinite dimensional geometry—I am sure the list could go on and on. Sometimes these areas have eventually been used to solve real problems, and sometimes they remain beautiful areas that seem to be without any practical application.
My point is simple. Even before computers existed mathematicians could have worked on binary strings, on boolean functions, and on many of the areas that we work on today. They did not, in my opinion, because they did not understand the power of bits; they did not know that the WID.
A final point. If mathematicians had worked on boolean functions years ago, one wonders where theory would be today. Curiously, the difficult foundational issues that plagued early analysis would have been moot: early mathematicians struggled with the basic question of "what is a real number?", a question that work on boolean problems would have avoided entirely.
Is the “world is digital” one of the great insights of Computer Science? Or do you agree with Kay and Wulf?