# It Takes Guts To Do Research

*Or rather to find bold paths when opportunity gives only part of a map*

Robert Oppenheimer was one of the great physicists of the last century. He is most famous, of course, for his leadership of the Manhattan Project—the project that created the first atomic bomb.

Today I want to talk about research and why it requires a certain amount of courage, a certain amount of nerve, or just plain “guts.”

I am currently reading the book *Robert Oppenheimer: A Life Inside The Center* by Ray Monk. It was a Father’s Day gift from my dear wife, and I found time to start reading it this weekend. The book is over 800 pages.

It starts a bit slow, but eventually becomes a pretty good read. The author loves details, so if you read it be prepared for minute bits of information about Oppenheimer. The book could have used, in my opinion, one more edit: several sentences are repeated exactly. I guess a good sentence is hard to write, but using it more than once is annoying. Anyway, it is an interesting take on this part of history.

One piece of information is a comment by the author on Oppenheimer’s famous quote after seeing the test of the first atomic bomb. He is quoted as saying:

Now I am become death, the destroyer of worlds

The quote is from the Hindu scripture, the Bhagavad Gita. It is, however, not the usual translation: Oppenheimer renders as “death” a Sanskrit word that is usually translated as “time.”

## Research and Guts

Oppenheimer was unique: brilliant by all accounts, a great leader, yet naive in many ways. Whether we should be happy that he helped create the first atom bombs is unclear. The world might be better if they did not exist, or it might be worse. I will leave that debate for others.

What I learned from the book, especially the first half, is that Oppenheimer, for all his brilliance, missed many discoveries that could easily have yielded him a Nobel Prize in Physics. He was doing his initial research during the boom times of the 1920s. The field of quantum mechanics was being discovered by many who did win Nobels. Yet Oppenheimer missed several chances to capture one.

For instance, Harold Urey, whose career path also led to the atomic bomb project, co-authored with Arthur Ruark one of the first English textbooks on quantum mechanics. He went on to discover deuterium, for which he won his Nobel Prize, and to produce heavy water; he made the leap to look for deuterium even before the official discovery of the neutron in 1932. Carl Anderson understood and jumped on Paul Dirac’s prediction of the positron in 1932, knowing to look for its traces in cosmic rays, and won his Nobel in 1936, the same year he also discovered the muon.

The question that arises is: why did Oppenheimer miss? The short answer is that he lacked the ability to take the leap, to take the chance, to have the guts to believe in his own research. I will shortly give a series of examples of findings that he could have made, and almost did make.

I think there is an important lesson for all of us here. Sometimes we need simply to believe in what we are doing. Without this conviction we may miss breakthroughs; without this assurance in our ideas we may miss the “big” insight.

## Some Examples From Oppenheimer

In the spring of 1930 Oppenheimer published a paper, “On the Theory of Electrons and Protons,” as a letter to the editor in *Physical Review*. In the letter, Oppenheimer showed that an earlier paper of Dirac’s about “holes” had to be wrong. He showed that Dirac’s belief that these holes were protons was way off: the holes had to be filled by particles with the mass of an electron, not the mass of a proton, which is about 2,000 times larger.
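For scale, a quick sanity check with standard values (the numbers are not from Oppenheimer’s letter, just the textbook masses): the proton really is roughly 2,000 times heavier than the electron,

```latex
\frac{m_p}{m_e} \;\approx\; \frac{938.3~\mathrm{MeV}/c^2}{0.511~\mathrm{MeV}/c^2} \;\approx\; 1836,
```

so identifying Dirac’s holes with protons meant assigning them a mass about three orders of magnitude too large for the symmetry that Dirac’s equation demands.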

Thus Oppenheimer really had proved the necessity of the positron. Yet he failed to make this claim. Rather, he believed that this showed that Dirac’s basic equation, and hence his whole theory, was wrong.

Dirac continued to argue that these “holes” were real particles. And when the existence of positrons was confirmed by Anderson’s experiments, Dirac shared the credit and the glory.

Oppenheimer and Milton Plesset wrote another paper in 1933, entitled “On the Production of Positive Electrons,” again as a letter to *Physical Review*. They used Dirac’s theory to calculate the frequency of pair production. But again they claimed it worked only for low enough energies; again Oppenheimer believed that Dirac’s theory was lacking at high enough energies. Abraham Pais describes this as a “fundamental observation,” which would later be studied as *showering*—and prove to be of great importance.

Oppenheimer later worked with Wendell Furry on a new extension of Dirac’s theory of electrodynamics. Dirac himself had no response to this new long paper. Later, in 1934, Oppenheimer got Dirac to visit him at Caltech. There he persuaded Dirac to listen to a fifteen-minute presentation by graduate students on the paper, giving the first attempt to deal with cancelable infinities in quantum field theory by a process later formalized as *renormalization*. After the presentation the students braced themselves for tough questions from the famous Dirac. He asked only one:

Where is the post office?

One of the graduate students, Robert Serber, obligingly drove him there, and tried asking again for comments. He was cut down by Dirac’s reply,

I cannot do two things at once.

Although renormalization is now recognized as having roots in papers Serber would write a few years later, the larger point is that Oppenheimer and his group missed a chance to energize quantum field theory before World War II. To be sure, the underlying mathematics is so famously abstruse that Dirac himself doubted it as late as 1975, saying:

I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small—not neglecting it just because it is infinitely great and you do not want it!

Among the first of a younger generation to put renormalization on a firm footing was Kenneth Wilson, who passed away just last week. He strikingly connected the abstract idea of the *renormalization group* to demonstrable properties of phase transitions in macroscopic materials, and won the Nobel for this in 1982.

## Can We Learn From Why?

Isidor Rabi wrote about “why men of Oppenheimer’s gifts do not discover everything worth discovering.” As quoted in the biography of Oppenheimer by Abraham Pais, Rabi noted that “Oppenheimer worked diligently,” “was very often on the track of the solutions,” had impeccable “taste in the selection of the questions”—“and yet as in the case of quantum electrodynamics the definite solutions came from others.” Rabi first pinned it on Oppenheimer’s immersion in Eastern mysticism, but went on to say something more applicable:

He was insufficiently confident of the power of the intellectual tools he already possessed and did not drive his thought to the very end because he felt instinctively that new ideas and new methods were necessary to go further than he and his students had already gone. Some may call it a lack of faith, but in my opinion it was more a turning away from the hard, crude methods of theoretical physics into a mystical realm of broad intuition.

I think we can learn a lot by replacing “physics” with “computer science” in this quotation. But perhaps it is best for me to ask you, the readers, for possible examples.

## Open Problems

Perhaps some of you can share an example of your own.

[Wendell Fuzzy→Furry, other format tweaks]

> After the presentation the students braced themselves for tough questions from the famous Dirac. He asked only one:

>

> Where is the post office?

>

> One of the graduate students, Robert Serber, obligingly drove him there, and tried asking again for comments. He was cut down by Dirac’s reply,

>

> I cannot do two things at once.

I feel a little embarrassed to say this, but I don’t understand this anecdote at all. Why is Dirac going to the post office? What is the thing he is already doing? What did this have to do with Oppenheimer?

The chilling embarrassment is exactly the point. Basically R.O. missed a chance to connect; we infer he could have been more forceful defending the new ideas. It would have been hard, however—we quote Dirac himself 40 years later still not getting it.

I take the point of the anecdote to be that you should press on with your ideas even if (some of) the established names don’t get it.

[The issue, of course, is that this is exactly the reasoning that pushes “cranks” to pursue unfruitful or wrong ideas for years, all the while feeling ignored. So it is a double-edged sword…]

“Democracy is the worst form of government except for all those others that have been tried.” (Winston Churchill)

I’m not sure that Dirac’s behaviour should be assumed to be a deliberate snub. He was famously awkward in his interactions with people, and this anecdote is normally taken as being an example of his eccentricity. (I can, by the way, highly recommend Graham Farmelo’s biography of Dirac, “The Strangest Man”)

We’re not hinting it was—rather that there was (to quote another movie line) “a failure to communicate”, and that Oppenheimer’s group lost the opportunity. Maybe not a good enough “elevator pitch”. Mind you it was a tough task, given our quote of Dirac still not “getting it” in 1975.

On a bit of a tangent but for the sake of completeness.

There might be at least one paper of his that, according to quite a few physicists (as reported in “American Prometheus”), was worthy of a Nobel Prize had he lived longer. He wrote a paper with H. Snyder in 1939 titled “On Continued Gravitational Contraction” (http://www.phys.huji.ac.il/~barak_kol/Courses/Black-holes/reading-papers/OppSnyder.pdf). This paper essentially worked out that black holes had to exist. It is said that it ran quite against the current of the time and wasn’t really noticed until the 60s, when the concept of a black hole was taken seriously and people started thinking that such objects might even exist. It is considered his most important work and perhaps the one (according to many colleagues) that was Nobel worthy.

Also, since the article mentions A. Pais: he once reported that J.R.O. didn’t regard his work on gravitational collapse as his most important (he instead chose the “positron” work cited in the above article). Freeman Dyson actually has a few essays in which he expresses frustration that Oppenheimer didn’t take seriously what he himself (with Snyder) had worked out from General Relativity, and even thought that the solution was ugly and hence must not be as important. This is also an important lesson in my opinion.

There was another article that I meant to link, other than the one written with Snyder: http://www.mpia-hd.mpg.de/home/fendt/Lehre/Vorlesung_CO/1939_oppenheimer_volkoff.pdf Also, one essay by Dyson (mentioned above) is actually available for view on Google Books. Can be found here.

Thanks! I believe it is consensus understanding that if black hole evaporation is ever observed in some concrete sense, then Hawking will win a Nobel for it.

No mention of his Communist connections and activities? His accomplishments are exaggerated by commie sympathizers.

We’ve just seen a similar sort of example in number theory, but there it was everyone but one who missed it. Almost everyone thought that fundamentally new sieves would be necessary to have any hope of proving that there were infinitely many pairs of primes with bounded gap. Zhang had the arrogance and insight to see that this could be handled by more or less current techniques.

This is inaccurate. The surprising breakthrough was GPY, who achieved the first bound whatsoever regarding prime pairs: it was logarithmic, and only 16 assuming a certain deep conjecture. It was obvious to everybody at the time that improving GPY, or making tiny progress on the conjecture, might be all that was needed to get finite gaps.

Really? I agree that GPY was a big breakthrough, but it doesn’t seem that it was at all obvious that further improvement was likely to get that sort of bound. I certainly don’t get the impression that anyone thought that improving on the Elliott–Halberstam conjecture to any sufficient extent was going to happen anytime soon, and moreover, that’s not quite how Zhang’s approach works. I’m a number theorist, although sieve theory isn’t my area; I had previously read GPY and certainly didn’t get the impression from other people in the field that this was likely to work anytime soon. Do you have any evidence to back this claim up? It sounds like hindsight bias.

Also, while we’re at it, note that the GPY result isn’t just logarithmic, but in fact better than logarithmic.

I refer you to the Soundararajan BAMS survey and Mathematical Reviews on GPY. Both expressed optimism regarding improvements, and MR explicitly mentioned “near future”. Compare also with Motohashi-Pintz on the smoothed GPY sieve, where they came up with a more precise, and as they said, more approachable, conjecture that would give finite gaps. And that proved to be correct.

While we’re at it, GPY I was logarithmic, GPY II was better than logarithmic. (You’re not quibbling over o vs O, I hope.) GPY I, of course, was the breakthrough.
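For reference, the bounds under discussion, stated from memory of the standard formulations (with $p_n$ the $n$-th prime): GPY I is the $o(\log)$ result, GPY II is the better-than-logarithmic improvement, and the Elliott–Halberstam conjecture gives an absolute bound.

```latex
\liminf_{n\to\infty} \frac{p_{n+1}-p_n}{\log p_n} = 0
\qquad\text{(GPY I)}
```
```latex
\liminf_{n\to\infty} \frac{p_{n+1}-p_n}{(\log p_n)^{1/2}(\log\log p_n)^{2}} < \infty
\qquad\text{(GPY II)}
```
```latex
\liminf_{n\to\infty}\,(p_{n+1}-p_n) \le 16
\qquad\text{(under Elliott–Halberstam)}
```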

*per angusta ad augusta*

Interesting article. There seem to be several mental factors, other than pure competence, needed to succeed in research.

One comment: in Sanskrit, ‘kaala’, which literally means ‘time’, is also used in the sense of “my time has come…”, in which ‘time’ means ‘end’ or death.

Thanks. We use “Time” that way to announce the end of examinations :-). My copy of Srīmad Bhagavad Gītā Bhāṣya (1983, Dr. A.G. Krishna Warrier, tr.) says “(The Blessed Lord said:) Dominant Time am I, wreaking the dissolution of the world…” IMHO it’s superposed between two senses of how “the marshaled warriors on both sides will cease to be”: immediately in battle or ultimately in time.

Jon: add an “l” and the Latin becomes, “to reach Caesar, serve lobster!”

Anderson was unfamiliar with Dirac’s predictions. He was about the third or fourth person to observe positrons, actually, but rather than dismissing them as noise (say, a stray electron entering from below) he took them seriously.

One can speculate that Oppenheimer’s pampered upbringing left him without the ultimate passion, or “guts” as you call it. There are numerous examples of the greats turning into softies in old age, and Oppenheimer took a short cut.

It’s also possible that Oppenheimer would never have won the Prize regardless. I could believe the Nobel committee would not want to offend Blackett, whom Oppenheimer tried to poison while a graduate student. And Blackett outlived Oppenheimer.

Dirac’s comments on renormalization are highly inaccurate. To be sure, physicists in general spent decades talking nonsense about just what was going on, whether they were in favor of it or against it. In brief, complaining about these infinities for being infinite is like complaining that 0·∞ or 0/0 can equal some derivative. The proper complaint is that we don’t know the correct notion of “derivative” to apply.
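To spell out the analogy (a standard calculus fact, added only for illustration): a derivative is formally a “0/0” expression that only a limit renders meaningful,

```latex
f'(x) \;=\; \lim_{h\to 0}\,\frac{f(x+h)-f(x)}{h}, \qquad \text{naively } \frac{0}{0}.
```

On this view, the complaint about renormalization is that we have the “0/0” but have not yet identified the analogue of the limit.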

it’s strange, and in my opinion somewhat naive [to say the least], that the QM researchers working on the atom bomb never had much/any discussion of their moral qualms over what they were creating. oppenheimer personified this. oppenheimer was the perfect cog in an earlier version of the Military Industrial Complex [originating in the 50s out of WWII], which we see materialized in a particularly massive/sprawling form in our lifetime. they were like the 1.5M workers with secret clearance today who basically are not “authorized” to question their own role in the system, the Matrix. in this way Oppenheimer and Snowden are sort of like polar opposites, and Snowden comes off as more human than the at-times robot-like Oppenheimer for having a human conscience that compelled him to take action.

another natural “nearby” person to contrast Oppenheimer with is von Neumann.

as for nobel prizes, the history is quite funky and few are aware of it. nobel made his fortune creating/mass-producing dynamite, and there was some idea at the time [iirc] that dynamite might be so powerful that nations might be averse to using it in wars, and that it could thereby lead to peace. and then the atom bomb is a strange parallel of that thinking. some of the atom bomb scientists seemed to argue that if they created a weapon so terribly/horribly powerful, then maybe it wouldn’t be used.

to me it is somewhat of a commentary on the blind spots/failures of governments and private industry to truly reward scientific innovation that the most important/prestigious scientific prize, the Nobel Prize, was privately funded to begin with. [is it now state sponsored? not sure.]

The Nobel Foundation uses money earned yearly from the investment of Nobel’s money (the very one made from dynamite) to fund the prizes.

Reblogged this on Room 196, Hilbert's Hotel and commented:

Couldn’t agree more!

On a semi-serious, semi-funny note, he made the mistake of assuming something was wrong because of his scientific intuition. If only he had followed this line of research and published it as a conditional result!

Apart from making a complexity joke, the above is serious as well. If there is doubt, I believe conditional results, even if you do not believe the condition, are the way to go (unless, of course, you can settle the condition). As many complexity examples demonstrate, deriving conditional results can increase or lower our confidence in the condition used, and even open the way for new techniques to prove or disprove it.

I read Monk’s biography a little while ago and can also recommend it. Maybe Oppenheimer was not enough of a technician (i.e., mathematician) to work out the details of a new theory, to stick to one problem, to do the really hard work (if one compares him, e.g., with Dirac). It is mentioned that several of his papers contained technical flaws. On the other hand, he was self-confident (and physicist) enough not to be disturbed by it. It was never an option for him to work in experimental physics. So, in some sense the Manhattan Project leadership, next to being a physics professor, was the job in which he could do his best, as a theory-driven all-rounder: falling short of becoming a Nobel laureate, but doing unique groundbreaking work that overtrumped the missing award. As an aside, the study of Sanskrit and (Eastern) religion was more like an innocent hobby; it never rivaled his main field: physics.

Could just be flat-out bad luck, much like fund managers who excel for a few years, or basketball players with a “hot hand,” who turn out to be statistically totally consistent with a run of random chance events.

One thing I’ve noticed about science fiction written in the mid-20th century is there’s a lot of obsession over “true intuition” and the notion that it might be a developable skill, inheritable trait, etc. To me that seems incoherent. Some people just get lucky about those things and are celebrated for it afterward.

William,

Actually, yes the little o v. big O is what matters here, since big O follows just from the prime number theorem, and it had previously required even deep work just to improve the constant.

In any event, reading the MR for the article: you seem to have a convincing argument that at least some other people in the field thought this way, since it ends with the line “In any case, the results here give hope that bounded gaps can be achieved in the near future.” And obviously, a lot of people would have read that, so a fair number of other people had to have thought along that line. Thanks for pointing this out. You are correct, and it looks like my initial claim was not accurate.

My point about o vs O quibbling is that of course I couldn’t have meant O(log), since as you state, that’s very well-known. (And yes, the default in this blog is O.) I was hoping you were referring to GPY I vs GPY II.

I remember when Ribet proved Serre’s conjecture epsilon regarding Frey curves, and the expectations I heard from my number theory friends was that we’d see Fermat’s last theorem proved in our lifetime, maybe 10-20 years even. This wasn’t based on feelings that Taniyama-Weil itself was vulnerable, but simply that with the linkage established, something along the way would suffice and people were going to look real hard. Still, Wiles surprised everyone.

I heard pretty much the same thing regarding GPY and bounded gaps. As for twin primes itself, that still seems beyond reach.

FYSMI (funny you should mention it), but I was thinking about Dirac holes just the other day, in connection with an inquiry into Fourier Transforms of Boolean Functions.

I have there a notion of *singular propositions*, which are Boolean functions that pick out single cells in their given universe of discourse, and I needed a handy name for the complements of these. I suppose I could have been gutsy and called them “Awbrey holes,” but it turns out that a long-ago near-namesake of mine is already credited with the discovery of something else entirely under very nearly that name. So I finally settled on *crenular propositions*.

For Computer Science I was thinking of perhaps Ted Nelson?

Paul.

nice article, but clearly it is self-referential.

so better if you tell us what were your big missed opportunities

and why you are award-less despite great initial promise and

reputation 3 decades ago.

was it lack of guts or something else? my guess is the latter.

do tell us, it may help others avoid your pitfalls. thanks.

Not sure how to answer this and related questions. Oh well. Perhaps I will try in the near future to answer.

Actually, I do have a question which is maybe natural to ask here. From what I know, visual, diagrammatic representations of lambda calculus are used mostly for fun or for teaching purposes. Trying to understand some geometric phenomena, I constructed such a visual representation, called graphic lambda calculus, which is a formalism working with trivalent locally planar graphs (think about lambda calculus terms) and moves between those (like beta reduction), which are local. Moreover, the formalism has no variable names.

It recently occurred to me that the moves look very much like simple chemical reactions (maybe some catalysts excepted), involving some unknown molecules (the trivalent nodes, or gates, forming the graphs). Looking a bit in the direction of DNA computing, I have not been able to locate any work trying to implement anything like reduction of lambda terms as a chemical computation. This might be due to my ignorance, but I think the question of doing lambda calculus with molecules could be interesting, seeing that a graphical formalism for lambda calculus which is free from any 1D constraint (like using words and manipulating 1D strings), and without variable names, might be better suited for such a task than the standard one. What do you think about this?
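For comparison, the standard way to get a name-free lambda calculus while staying in one dimension is de Bruijn indices. Here is a minimal sketch of beta reduction in that style (ordinary term rewriting, not the graph formalism or any chemistry; the representation and helper names are just illustrative choices):

```python
# Minimal name-free lambda calculus with de Bruijn indices.
# Terms: ("var", n) | ("lam", body) | ("app", f, a)

def shift(t, d, cutoff=0):
    """Shift the free variable indices in t by d."""
    tag = t[0]
    if tag == "var":
        n = t[1]
        return ("var", n + d) if n >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute term s for variable index j inside t."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == "app" and t[1][0] == "lam":
        # (lam body) a  ->  body[0 := a]
        return shift(subst(t[1][1], 0, shift(t[2], 1)), -1)
    if t[0] == "app":
        f = beta(t[1])
        if f is not None:
            return ("app", f, t[2])
        a = beta(t[2])
        if a is not None:
            return ("app", t[1], a)
    if t[0] == "lam":
        b = beta(t[1])
        if b is not None:
            return ("lam", b)
    return None

# (lam x. x)(lam y. y)  ->  lam y. y
ident = ("lam", ("var", 0))
assert beta(("app", ident, ident)) == ident
```

Note that, like the graph formalism described above, each step is a purely local rewrite with no variable names; the difference is that the index bookkeeping (`shift`) is exactly the 1D-string overhead that a trivalent-graph encoding avoids.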

“It Takes Guts To Do Research”

and the relentless passion …

Recent exchanges between Gil Kalai and Scott Aaronson, regarding non-linear versus linear quantum dynamics, provide an example where *some* folks’ arguments are on the wrong side of history … but it’s far from clear which side is which, because the history hasn’t finished happening yet!

In this regard, we are fortunate to have resources like Andrei Moroianu’s textbook *Lectures on Kaehler Geometry*. In particular, the draft version that Andrei posted on the arxiv server (as the still-available arXiv:math/0402223v1, version 1) presents as Lemma 4.7 some basic results that — read carefully — remind us that real-valued Hamiltonians in general do not induce real-holomorphic dynamical flows.

**Summary** In Scott’s universe Hamiltonian flows are *J*-morphisms; in Gil’s universe, not (where *J* is a complex structure in e.g. Moroianu’s notation). **Question** What about Nature’s universe? **Answer** At present, we don’t know much!

For more than a century, the notion that the physical metric structure of the universe might be dynamical has induced sensations of mathematical, physical, and philosophical vertigo (in students especially), and researchers have variously experienced these sensations as exciting, confusing, and distressing.

The modern-day notion of dynamical complex structure is no different. It’s an open question whether the complex structure of Nature evolves dynamically, but certainly our best computational simulations of Nature do concretely exhibit dynamical complex structures.

The associated law-of-physics that “*i* is not necessarily constant” is well-nigh inconceivable to physics students … yet this notion is entirely natural to geometric dynamicists! (Mikhail Gromov is the paradigmatic example.) That’s why it’s good to have books like Andrei Moroianu’s that assist our understanding, and illuminate the rich mathematical content of the vigorous ongoing discussion between Gil and Scott! 🙂

**Conclusion** From a geometric point of view, general relativity can be viewed as “The Quest to Measure *g*” (where *g* is a metric structure in e.g. Weinberg’s notation). And similarly, 21st century quantum computing can be viewed as “The Quest to Measure *J*.” If *J* turns out to be dynamic, then Gil’s school of thought is correct (in much the same dynamical sense that Einstein’s views were correct). Otherwise Scott’s school of thought is correct (in much the same dynamical sense that Newton’s views were correct).

LOL! Fuzzy? Furry!

In regard to tough-minded commitment to research, please let me commend also Ken Wilson’s immensely readable personal account “The Origins of Lattice Gauge Theory” (arXiv:hep-lat/0412043). Wilson says: “Like many second-rate graduate students, I pursued ideas from my thesis topic for over fifteen years before disengaging from it” … “I was studying the fixed source theory” … “[I made] only a little progress in extracting useful insights” … “I became largely isolated from the mainstream, [working] instead largely in a world of my own” … “[in this article] I have discussed shortcomings in my own work, and the extent to which subsequent research has overcome these shortcomings.”

If Ken Wilson is this modest, then how much *more* modest should be … pretty much every other scientist in the world? And how much more tough-minded?

Thanks for the “Fuzzy” fix—I didn’t check that. I pondered adding a section with a story of intensely studying combinatorial designs in 1982–83 at Oxford, and then in 1986 serving as the official host for Marshall Hall, Jr.—who wrote the book on them—when he came up from London to Merton College, Oxford. Once when the guestrooms were booked, he stayed in my rooms, but since my bed was too soft he camped out on my sofa, still in his trenchcoat. He told me combinatorial designs had a lot more “legs” to them, noting I did complexity theory—so perhaps I missed an opportunity to “anticipate” Nisan–Wigderson, as we say in chess problem composition. Since Avi was Dick’s student this would have made a funny connection, but since I never got anywhere down that road I decided it was ultimately off-topic and could be seen as vainglorious. Instead ‘I pursued ideas from my thesis topic’ of logic in complexity ‘for over five years before disengaging from it’ and re-tooled back to my old combinatorial ways :-). So content-wise this stayed as Dick’s, not Pip’s, post.

The recent quantum-centric debate between Gil Kalai and Aram Harrow — with valuable contributions from John Preskill and Scott Aaronson and many others — regarding the feasibility (or not) of quantum computing, and the linearity (or not) of quantum dynamics, provides an example where *some* folks’ arguments are on the wrong side of history … but it’s far from clear which side is which, because the history hasn’t finished happening yet!

In this regard, we are fortunate to have mathematical resources like Andrei Moroianu’s textbook *Lectures on Kaehler Geometry*. In particular, the draft version that Andrei posted on the arxiv server (as the still-available arXiv:math/0402223v1, version 1) presents as Lemma 4.7 some basic results that — read carefully — remind us that real-valued Hamiltonians in general do not induce real-holomorphic dynamical flows. This framework provides us with a natural geometric language for summarizing a key point of the Kalai/Harrow debate:

**A Key Quantum Debating Point** In the Harrow (QUIST) universe Hamiltonian flows are *J*-morphisms; in the Kalai (skeptic) universe, not (where *J* is a complex structure in e.g. Moroianu’s notation). **Question** What about Nature’s universe? **Answer** We don’t know (yet!).

For more than a century, the notion that the physical metric structure of the universe might be dynamical has induced sensations of mathematical, physical, and philosophical vertigo (in students especially), and researchers have variously experienced these sensations as exciting, confusing, and distressing.

The modern-day notion of dynamical complex structure is no different. It’s an open question whether the complex structure of Nature evolves dynamically, but certainly our best computational simulations of Nature do exhibit dynamical complex structures. The associated physics principle that “*i* is not necessarily constant” thus is entirely natural to geometers, and is accessible to computational dynamicists, yet is well-nigh inconceivable to quantum physics students.

That’s why it’s good to have books like Moroianu’s that amplify our understanding, and illuminate the rich mathematical content of the great STEM debates of our century. Illuminating too are Joseph Landsberg’s many arxiv preprints, which are admirably summarized in Landsberg’s terrific new book *Tensors: Geometry and Applications*.

Grappling with these tough new 21st century textbooks is a concrete training regimen by which students can work to develop in themselves the courage, nerve, and toughness that Dick and Ken’s essay celebrates in 20th century physicists like Dirac, Wilson, Oppenheimer, Urey, Anderson, Rabi, and Wendell “Fuzzy”.

As Feynman wrote way back in 1965:

As the vigor of the Kalai/Harrow debate makes clear, the process of discovering what Feynman calls “the fundamental laws of nature” is far from over! And as for Feynman’s envisioned “connections to phenomena in biology and so on”, we have scarcely begun to appreciate these connections!

**Conclusion** From a geometric point of view, general relativity can be viewed as “The Quest to Measure *g*” (where *g* is a metric structure in e.g. Weinberg’s notation). And similarly, 21st century quantum computing can be viewed as “The Quest to Measure *J*.” If *J* turns out to be dynamic, then Gil’s quantum views are correct (in the same sense that Einstein’s views were correct). Otherwise Aram’s quantum views are correct (in the same sense that Newton’s views were correct).

The BIG question is who is the Oppenheimer of CS? I would vote for you. No one else showed so much promise and did so little with it. I feel sorry for you.

I held on to this and the comment by “alkasnd” above until Dick’s return from a long bit of the offline kind of travel late Wednesday (10th). To add to what Dick replied above, neither of us thought of this post as self-referential—as I noted in comments elsewhere here, I decided not to add a story involving myself and what was a good tip from Marshall Hall, Jr.

From the Wikipedia entry about the role Brigadier General Leslie R. Groves, Jr. played…

“[He] was impressed by Oppenheimer’s singular grasp of the practical aspects of designing and constructing an atomic bomb, and by the breadth of his knowledge. As a military engineer, Groves knew that this would be vital in an interdisciplinary project that would involve not just physics, but chemistry, metallurgy, ordnance and engineering. Groves also detected in Oppenheimer something that many others did not, an “overweening ambition” that Groves reckoned would supply the drive necessary to push the project to a successful conclusion. Isidor Rabi considered the appointment “a real stroke of genius on the part of General Groves, who was not generally considered to be a genius”.

Although Oppie may not have “enjoyed” administration…

In early 1943, John Manley, the experimental physicist from the University of Illinois on assignment to the Metallurgical Laboratory of the University of Chicago, visited University of California theoretical physicist J. Robert Oppenheimer, whom he had been assisting with plans for the new laboratory at Los Alamos. He had “bugged Oppie for I don’t know how many months about an organization chart: who was going to be responsible for this and who was going to be responsible for that. But one day in January, I climbed to the top floor of LeConte Hall where Robert had his office and pushed open the door. Ed Condon (the Westinghouse physicist whom Oppenheimer had chosen as his deputy director) happened to be in there with him at the moment, but Oppie practically threw a piece of paper at me as I came in the door and said, ‘Here’s your damn organization chart,’ ” Manley recounted.

from the “Manhattan Project History”

Doing research requires a strong belief in one’s own capacities. If you care too much about what your peers might think, you can’t do any original work – at any rate, you can’t conceive a new theory. On the other hand, it doesn’t take much guts to improve on already-known methods. It obviously takes (at least) both kinds of researchers to make a scientific community.

Among the most celebrated examples of “gutsy” research is Nobelist Subrahmanyan Chandrasekhar’s derivation (in 1930, at the age of 19!) of the Chandrasekhar Limit. This result established the inevitability of black-hole formation, yet Chandra’s work faced vehement, prolonged, and not-obviously-rational opposition from the senior British astrophysicist Arthur Stanley Eddington. Chandra’s seminal textbook *An Introduction to the Study of Stellar Structure* (1939; thus written before Chandra was 30 … amazing!) describes this controversy in the gentlest terms (Ch. X, footnote 23).

Not the least virtue of Chandra’s book — or of any of the several textbooks written by Chandra — is the impeccable clarity of his expository prose. No matter what scientific point you are attempting to explain, if you can find a passage in any of Chandra’s works that explains a similar point, then you will not need to seek further for a model of clear exposition. Chandra’s works are highly recommended, for systematic textual deconstruction, by students seeking to learn techniques of effective scientific writing!

In regard to the Chandra-Eddington controversy, there is no better example (known to me) of Ed Wilson’s meditation in his autobiography *Naturalist*. It seems (to me) that Arthur Eddington served Subrahmanyan Chandrasekhar well, as the enemy that Chandra needed, to stimulate Chandra to become one of the greatest “gutsy” scientists — and one of the greatest scientific writers too — of his (or any) scientific generation.

I think the corollary to computing is apt. Most computing research is not driven to make new discoveries; academic computing is producing very little that is novel. Partly this is because the field at large produces so much that is new that simply learning the new languages, platforms, etc., requires too much effort.

Secondly, there is no standard way to speak about advances in computer science. I read today about a nice implementation of a data-flow model, called Storm, that used none of the vocabulary of data-flow or of related models such as actor models or process algebras.
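To make the point concrete, here is a minimal sketch (my own illustration, not Storm’s actual API) of what that shared data-flow/actor vocabulary describes: independent processing nodes connected by message queues, each consuming messages as they arrive and emitting results downstream.

```python
import queue
import threading

class Node:
    """One processing stage: reads from an inbox, writes to an outbox."""
    def __init__(self, fn, outbox=None):
        self.fn = fn
        self.inbox = queue.Queue()
        self.outbox = outbox

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:  # sentinel: shut down and propagate downstream
                if self.outbox is not None:
                    self.outbox.put(None)
                return
            result = self.fn(msg)
            if self.outbox is not None:
                self.outbox.put(result)

# Wire a one-stage pipeline: double each number, collect the results.
results = queue.Queue()
doubler = Node(lambda x: 2 * x, outbox=results)
worker = threading.Thread(target=doubler.run)
worker.start()

for x in [1, 2, 3]:
    doubler.inbox.put(x)
doubler.inbox.put(None)  # signal end of stream
worker.join()

out = []
while True:
    v = results.get()
    if v is None:
        break
    out.append(v)
print(out)  # [2, 4, 6]
```

The names (`Node`, `inbox`, `outbox`) are hypothetical; the point is only that “spouts and bolts,” “actors,” and “data-flow nodes” all name this same message-passing shape, yet papers and systems rarely acknowledge the shared model.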

Thirdly, the academic field is filled with code that cannot be run. Old papers with broken code amount to storytelling.

Lastly, while computations are mathematics, computer science is not. The adoration of mathematics in computer science hinders the field’s ability to ask novel computational questions and to explore new avenues of research. Hewitt, Sussman, Minsky, and others in the old guard seem to be the only ones who regularly point out the inadequacies of the von Neumann and Turing models, and hardly anyone listens.

Thankfully it isn’t necessary to be part of the academic community to do novel research! Computers are cheap, and new models can be developed and tested as reliably as at the biggest institutions. The problems in computer science today are a lack of novel ideas and of demonstrable implementations. Unfortunately, the above problems limit the ability of novel research to move through the community at large and advance the field generally.

I have always found it strange that the most novel thinkers in computer science are often the oldest participants. This certainly wasn’t true 30 years ago, which I believe points to a problem of pedagogy.