# Who Gets The Credit—Not Facebook Credits

*How partial results have helped solve some famous open problems*

Richard Hamilton is not a computer theorist, but a senior mathematician at Columbia University. He is a leading expert in geometric analysis, and the creator of the program that helped solve the Poincaré Conjecture. He is a winner of numerous awards, including the Veblen Prize in Geometry, a Clay Research Award, and most recently the AMS Steele Prize for Seminal Contribution to Research.

Today I want to talk about a question that has been raised here and elsewhere: should researchers share their ideas? Especially ideas that are not “complete.”

Well, every time you talk to a friend about a result, give a talk, post a paper, or publish a paper, you are sharing your ideas. Most would argue that this is the key to advancing science in all areas. Yet there is the issue of *when* to share the ideas. Do you wait until they are completely worked out—and what does that mean anyway? No matter what your result is, it is likely not to be the last word on your area. At least you should hope not. The last paper is a bit like: would the last person to leave please turn out the lights.

On the other end of “when” is this advice from Harvard’s great computing pioneer:

“Don’t worry about people stealing an idea. If it’s original, you will have to ram it down their throats.” Howard Aiken

**Fermat’s Last Theorem**

The proof of this great theorem, after being unsolved for almost 350 years, is due to Andrew Wiles—with some key help from Richard Taylor. But Wiles would have never been able to even start working on the proof without an insight of Gerhard Frey.

Frey gave a public lecture—are there private lectures?—where he pointed out some strange consequences of a non-trivial solution to

$$a^p + b^p = c^p$$

for an odd prime $p$. Frey stated that such a solution would “likely” violate known conjectures. The “likely” was made into a precise statement by Jean-Pierre Serre, which was then proved by Ken Ribet. The missing piece had been called the “epsilon conjecture,” but not because it was a trivial or small step.

The key to me is the community needed to solve the problem. Yes, Wiles worked alone for almost a decade on his solution. But without Frey and then Serre and Ribet, he probably would not even have started. Frey’s ideas were a perfect example of the type of **partial results** that can drive us all toward the solution of great problems.

I cannot even attempt here, or anywhere, an outline of Wiles’s proof. I think I can give an idea of how Frey’s idea can be used in a more modest way. The essential idea will start out like his idea: if there is a solution, then we will create a strange mathematical object. This object in our case will be a simple quadratic equation—the same kind you probably studied in high school. But the equation is “strange” in a way that will eventually lead to a contradiction.

Suppose that there is a non-trivial solution to the cubic case of FLT. Let

$$a^3 + b^3 = c^3, \qquad abc \neq 0.$$

Then there are rational $x$ and $y$ so that

$$x^3 + y^3 = 1$$

and $xy$ is non-zero: just take $x = a/c$ and $y = b/c$. The simple idea is to construct a mathematical object that is too strange, in some sense, and then prove that it cannot exist. The object will be just a quadratic equation: consider the polynomial

$$(z - x^3)(z - y^3).$$

Define $t = xy$. Then it is easy to check that the quadratic equation

$$z^2 - z + t^3 = 0$$

has solutions $z = x^3, y^3$. Therefore, by the quadratic formula, $\sqrt{1 - 4t^3}$ must be a rational number too. In other words, $w^2 = 1 - 4t^3$ has a rational point. The substitution $u = -4t$, $v = 4w$ gives that $v^2 = u^3 + 16$, which must also have a rational point.

Okay, so what does this do for us? The point is that the equation

$$v^2 = u^3 + 16$$

is an **elliptic curve**: a class of curves that are very well studied. There is a theory, well known to the experts, that allows one to prove that the only rational points are $(0, 4)$ and $(0, -4)$, which translates to $t = xy = 0$. Thus this proves FLT in the cubic case.
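The quadratic trick and the change of variables above can be checked with exact rational arithmetic. Here is a minimal sketch (my own illustration; the helper names are invented for this post):

```python
from fractions import Fraction

def quadratic_has_roots(x, y):
    """With t = x*y and s = x^3 + y^3, the roots of z^2 - s*z + t^3 are x^3 and y^3."""
    t, s = x * y, x**3 + y**3
    return all(r * r - s * r + t**3 == 0 for r in (x**3, y**3))

def curve_substitution(t, w):
    """If w^2 = 1 - 4*t^3, then (u, v) = (-4t, 4w) lies on v^2 = u^3 + 16."""
    u, v = -4 * t, 4 * w
    return v * v == u**3 + 16

# The factoring identity holds for any rational pair; when x^3 + y^3 = 1
# the middle coefficient s becomes 1, giving z^2 - z + t^3.
assert quadratic_has_roots(Fraction(1, 2), Fraction(2, 3))

# The trivial solution x = 1, y = 0 gives t = 0, w = 1, landing on the
# rational point (0, 4) of v^2 = u^3 + 16 -- essentially the only one,
# which is why non-trivial cubic solutions cannot exist.
assert curve_substitution(Fraction(0), Fraction(1))
```

The checks here are purely algebraic; the hard part, bounding the rational points of $v^2 = u^3 + 16$, is exactly where the elliptic curve theory comes in.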

**Poincaré Conjecture**

The proof of the Poincaré Conjecture by Grigori Perelman was one of the great results of mathematics. He carried forward to a conclusion a program that had been started by Richard Hamilton in 1982. Hamilton had proved a number of beautiful partial results, but the full proof needed Perelman.

Hamilton’s brilliant idea was to replace a static problem by a dynamic one. The standard statement of the conjecture is: Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. The “obvious” approach is to take such a manifold and try to prove that it is a sphere. All earlier attempts failed.

What Hamilton did was introduce a flow, the so-called Ricci flow, that changes a manifold over time. The idea of his method was then to prove that this flow made the manifold move toward a sphere over time. He could not prove this in general—he knew the behavior was more complex. But Hamilton did prove some important special cases. See the following figure for an example of the flow in action:

Note how the manifold changes over time and becomes closer to a sphere.
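Ricci flow is a PDE on metrics, far beyond a short demo, but the simplest exact solution gives the flavor: a round $n$-sphere stays round under the flow $\partial g/\partial t = -2\,\mathrm{Ric}$, and its radius satisfies $d(r^2)/dt = -2(n-1)$, so the sphere shrinks to a point in finite time. A toy numeric sketch of my own (not Hamilton's analysis), comparing Euler integration of the radius ODE with the closed form:

```python
import math

def ricci_flow_radius(r0, n, t, steps=100_000):
    """Euler-integrate dr/dt = -(n-1)/r, the round n-sphere solution of Ricci flow."""
    r, dt = r0, t / steps
    for _ in range(steps):
        r += dt * (-(n - 1) / r)
    return r

n, r0, t = 3, 2.0, 0.5
exact = math.sqrt(r0**2 - 2 * (n - 1) * t)   # closed form: r(t)^2 = r0^2 - 2(n-1)t
approx = ricci_flow_radius(r0, n, t)
assert abs(approx - exact) < 1e-3
```

For a general 3-manifold the flow is nothing like this tidy: curvature can blow up in localized "singularities," and handling those surgically was the heart of Perelman's contribution.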

**Unique Games Conjecture**

The recent paper of Sanjeev Arora, Boaz Barak, and David Steurer (ABS) shows that the famous Unique Games Conjecture (UGC) of Subhash Khot may not be correct. I have discussed the exact conjecture in some detail already. The rough notion is that a UG instance is a type of graph edge-coloring problem that Subhash conjectured cannot even be approximately solved in polynomial time. The usual statement is: given a UG instance that has a solution which satisfies a $1-\epsilon$ fraction of the edges, even finding an edge coloring that satisfies a $\delta$ fraction of the edges is difficult. The conjecture has been used to prove that many other interesting problems also have no good approximation.
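To make the object concrete, here is a toy unique-games instance of my own invention: each edge $(u, v)$ carries a permutation $\pi$, and an assignment $c$ satisfies the edge iff $c[v] = \pi[c[u]]$. Brute force finds the best satisfiable fraction; the conjecture is about how hard this is to approximate on large instances.

```python
from itertools import product

def best_fraction(num_vertices, k, edges):
    """edges: list of (u, v, pi) with pi a tuple permutation of range(k).
    Exhaustively search all k^n colorings for the best satisfied fraction."""
    best = 0
    for c in product(range(k), repeat=num_vertices):
        sat = sum(1 for (u, v, pi) in edges if c[v] == pi[c[u]])
        best = max(best, sat)
    return best / len(edges)

# Triangle with 2 colors: two identity constraints and one "swap" constraint.
# Not fully satisfiable, but 2/3 of the edges always can be satisfied.
ident, swap = (0, 1), (1, 0)
edges = [(0, 1, ident), (1, 2, ident), (2, 0, swap)]
assert best_fraction(3, 2, edges) == 2 / 3
```

The brute-force search takes $k^n$ time; the UGC asserts that nothing fundamentally better is possible for approximating almost-satisfiable instances, which is precisely what the ABS sub-exponential algorithm puts pressure on.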

The paper of ABS proves that there is a sub-exponential time algorithm for every UG instance. This does not rule out Subhash’s conjecture, but does raise some issues. One possibility is that the conjecture is wrong; another is that the conjecture is true, but the reductions that would establish it are more complex than usual. We will see in time what happens.

Just like FLT and Poincaré there were papers and ideas that helped guide ABS to their terrific result. Could Arora be the first to win a “hat-trick” in Gödel Prizes?

Since this area is still in progress I cannot say anything as definitive as I could for the other problems. However, it is clear that ABS built their paper and proof on previous work of a number of researchers. In their paper they thank, among others, Alexandra Kolla for an early copy of her manuscript. She proves some results that seem to have pointed the way for ABS. Kolla’s main theorem is roughly the following: for every satisfiable instance of Unique Games that satisfies certain technical assumptions, one can find an assignment that satisfies more than 90 percent of the constraints in time that depends on the spectral profile of the game. She also shows how spectral methods can solve in quasi-polynomial time game instances that were previously thought to be hard, and how to solve Unique Games on expander graphs.
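Kolla's spectral methods are beyond a short sketch, but the "uniqueness" property itself is easy to illustrate: on a *fully* satisfiable connected instance, fixing one vertex's label determines every other label by propagation, so perfectly satisfiable instances are easy, and the conjecture's hard regime is the almost-satisfiable one. A propagation sketch with my own data layout (not tied to any paper's notation):

```python
def propagate(num_vertices, k, edges, start_label):
    """Fix vertex 0's label and push labels along constraints c[v] = pi[c[u]].
    On a connected, fully satisfiable instance this recovers a solution."""
    adj = {u: [] for u in range(num_vertices)}
    for (u, v, pi) in edges:
        inv = tuple(pi.index(i) for i in range(k))  # pi^{-1}
        adj[u].append((v, pi))    # c[v] = pi[c[u]]
        adj[v].append((u, inv))   # c[u] = pi^{-1}[c[v]]
    labels, frontier = {0: start_label}, [0]
    while frontier:
        u = frontier.pop()
        for (v, pi) in adj[u]:
            if v not in labels:
                labels[v] = pi[labels[u]]
                frontier.append(v)
    return [labels[u] for u in range(num_vertices)]

# A satisfiable 3-cycle with 3 colors: shift-by-one constraints all the way round.
shift = (1, 2, 0)
edges = [(0, 1, shift), (1, 2, shift), (2, 0, shift)]
c = propagate(3, 3, edges, 0)
assert all(c[v] == pi[c[u]] for (u, v, pi) in edges)
```

With even a small fraction of corrupted constraints this naive propagation falls apart, which is one intuition for why the almost-satisfiable case is where all the difficulty lives.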

**Open Problems**

Should researchers be encouraged to share ideas earlier than we do now? What mechanisms are needed to be sure that proper credit is given out? Would you publish a partial result?

I must say that one thing I have tried here is to state approaches, conjectures, and ideas about many problems. I hope that this is helpful for the field, but would like to know what you think.

I am in favor of publishing partial results, as long as they are correct. The credit comes automatically if the result is correct and is going to be used by others. But if you only care about getting credit for major results, then you will not publish your partial results.

This is an amateur opinion from someone who has never published any new result.

As usual, thank you for a great post!

I remember Tim Gowers’s excellent talk in Oxford last year on his experience with collaborative problem-solving (polymaths). From my understanding of what he said (please feel free to correct me if you were there as well and disagree), he intentionally hadn’t collaborated much and hadn’t published partial results before getting a permanent position. Perhaps maths is different from TCS. From my (little) experience this is quite subjective and depending on the individual, but I would be very interested to know what more senior academics think.

William Thurston (who was also upstream of the Poincaré proof) has a nice essay on striking the balance between getting “proof credits” and killing off your own research area by doing too much: Thurston, W. P. On Proof and Progress in Mathematics.

Bulletin of the American Mathematical Society, 30(2), 161–177, April 1994.

The way the system works now, it is usually not good advice to share early results. Depending on the dynamic of the research area, it is usually good to keep an idea under wraps, sharing it only privately, for about a year to see if you can make further progress yourself. Even if you are properly scientifically credited for all of your partial results, you need the final results to find a job. After you get final results, you should save them until they are needed for job applications, or until you hear of someone else approaching the results.

Américo: Scientific credit may be automatic, but that is not the credit that matters. Proper scientific credit is not comforting when you can’t find a job.

I see your objection. If I were searching for a job as a professional mathematician I would follow your advice. So my comment is most likely a utopian one.

This makes me think of Prisoner’s dilemma or another one of those game theory models: The socially optimal thing is to freely share ideas and partial results with all and sundry thus leading to flourishing collaborations and rapid progress. The “Nash equilibrium” might be to hoard your ideas until the last possible minute – thus slowing down progress and creating a climate of distrust.
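The dilemma sketched above can be written down as a tiny two-player game. The payoff numbers below are my own invention, chosen only so that hoarding dominates while mutual sharing maximizes total welfare:

```python
# Payoffs (row player, column player) for the idea-sharing dilemma.
# A lone hoarder free-rides on the sharer's ideas; mutual sharing
# speeds everyone up; mutual hoarding slows the whole field down.
payoff = {
    ("share", "share"): (3, 3),
    ("share", "hoard"): (0, 4),
    ("hoard", "share"): (4, 0),
    ("hoard", "hoard"): (1, 1),
}

def best_reply(opponent):
    """The row player's best response to a fixed opponent action."""
    return max(["share", "hoard"], key=lambda a: payoff[(a, opponent)][0])

# Hoarding is a dominant strategy, so (hoard, hoard) is the Nash
# equilibrium -- even though (share, share) is socially optimal.
assert best_reply("share") == "hoard"
assert best_reply("hoard") == "hoard"
total = {k: sum(v) for k, v in payoff.items()}
assert max(total, key=total.get) == ("share", "share")
```

This is exactly the Prisoner's Dilemma structure: individually rational hoarding, collectively suboptimal outcome.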

I recall reading that around the time of Fermat (17th century) it was not customary to publish proofs of results. Everyone jealously guarded their methods, and as a result progress was slow.

After “Then it is easy to check that the quadratic equation” don’t you mean

“z^2 - z(x^3+y^3) + t^3”? This is a polynomial equal to your (z - x^3)(z - y^3), and hence x^3 and y^3 are clearly roots of this polynomial. Without the (x^3+y^3) I am totally lost.

The problem being that, if I am correct, there are many other formulas in your post to change.

Arthur,

But x^3 + y^3 = 1. Right?

Never share an idea at MIT. Period.

Bad idea at so many different levels.

RE: the ABS paper on unique games, I think this recent pre-print (http://eccc.hpi-web.de/report/2010/112/) by Khot and Moshkovitz is particularly interesting and deserves to be part of the discussion. Dana gave a presentation of their work at Barriers II, and the “plan of attack” for proving the UGC (in the post-ABS world) works something like this (I’m simplifying the concepts involved just a touch; the actual theorems/analysis are much more complex):

1) Approximately solving 3-LINEAR EQUATIONS over the REALS is hard. (Unconditionally proven in the linked paper)

2) Approximately solving 2-LINEAR EQUATIONS over the reals is hard. (No one knows if this is true! Maybe this follows somehow from hardness for 3-LIN(R)? That’s the hope.)

3) Derive the Unique Games Conjecture via parallel repetition. (If you get a clean enough result for Step #2, this one’s practically in the bag.)

In particular, the reduction used in the paper to prove hardness of 3-LIN(R) maps SAT instances of size n to 3-LIN(R) instances of size n^poly(1/delta), where delta is the standard type of completeness parameter. This exactly matches the Very Strange reduction properties required by ABS.
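To pin down what "approximately solving" linear equations over the reals means in this plan, here is a toy 2-LIN(R) instance of my own: equations of the form x_i - x_j = c over the reals, where the goal is an assignment satisfying as large a fraction of the equations as possible (a few equations are corrupted, so not all can hold at once):

```python
def satisfied_fraction(x, eqs, tol=1e-9):
    """Fraction of difference equations x[i] - x[j] = c satisfied by assignment x."""
    return sum(1 for (i, j, c) in eqs if abs(x[i] - x[j] - c) <= tol) / len(eqs)

# Three equations consistent with the planted assignment (0, 1, 2),
# plus one corrupted "noise" equation that no assignment can also satisfy.
planted = [0.0, 1.0, 2.0]
eqs = [(1, 0, 1.0), (2, 1, 1.0), (2, 0, 2.0),   # consistent
       (1, 0, 5.0)]                              # corrupted
assert satisfied_fraction(planted, eqs) == 0.75
```

The hardness statements in the plan say, roughly, that distinguishing instances where some assignment satisfies almost all equations from instances where none satisfies many is computationally difficult; for 3 variables per equation this is proven in the linked paper, for 2 it is open.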

In some sense, I suppose you could read the ABS paper as a partial result (the UGC is still unresolved, after all!) that’s inspired a new direction of work toward proving the conjecture (whether this was actually the case or not :)).

The pressure to publish has a number of consequences that I did not understand until recently. The first is that there are more papers of lesser quality.

The second is the unwillingness to share or discuss results before they are more or less complete. Most of my academic colleagues are less than forthcoming about their work in progress. But they tend also not to want to discuss ideas generally. It is exceedingly rare for mathematicians or computer scientists to discuss proofs on the spot, or discuss techniques or methods–at least at my institution. The level of discourse among people who do not collaborate is not especially high or enlightening.

The third consequence is that attempts to be forthcoming can be taken advantage of by more experienced researchers and even by students. I once had an extended email correspondence with a researcher who was working on an area related to my own field. The researcher subsequently published results that were superior to mine. There was nothing in the correspondence that hinted at these developments. I had made a mistake pursuing that field, but it might have saved me some time had I known that others–others with whom I was corresponding–were pursuing better methods. And perhaps I should have foreseen them, had I been intellectually better equipped, and had my command of the literature been greater.

A few years later, having forgotten the lesson of my correspondence, I prematurely mentioned a construction to a student. The student wanted to share this with others, saying that “information wants to be free.” I asked the student not to, and he agreed–reluctantly, but I had made a foolish mistake. The Internet has compounded errors of admission.

And so I have become more like my colleagues, who never let on about their work in progress until they can publish final results. Perhaps my new unwillingness to discuss my own work in progress conveys to my colleagues the same lack of trust that, until recently, I have found difficult to reciprocate. A null message on a time-bounded communication channel conveys some information, after all.

I had forgotten to mention that a paper of mine and some unpublished results were cited by my correspondents, but there was little point in publishing them by then. My correspondence started with my pointing out that an axiomatization equivalent to theirs had been developed and published in the top journal in a related field. There was no mention of this earlier work in any of their publications.

Was their more elegant axiomatization sufficient justification for not citing obviously related prior work, even if they were working independently? Not in my view, but if you possess a certain aggressive professionalism and would prefer not to acknowledge that others had similar ideas around the same time and managed to publish first, then it can pay to neglect to cite prior work–especially if this isn’t published in quite the same area with the same readership. While this can be rationalized in various ways (it was: “we were working on this years previously”–as if their group had acquired a momentum and focus that another citation would have dissipated), it distorts history.

I took the liberty of setting the record straight by mentioning both axiomatizations in one of my own papers.

In the language of game theory used by Robert, this seems an example of agents acting rationally to maximize self-interest. It is perhaps not surprising that with a market mechanism with certain rules, some types of outcomes are going to be more likely than others.

Chunking reputation transfer into paper-sized parcels seems contrary to efficient scientific progress. Papers are also increasing in size… But perhaps efficiency in science is not something we should strive to maximize, reduced friction has consequences too. Finally, it is debatable whether we should even be thinking about science in terms of the terminology of markets, as politicians and funding bodies now seem to be doing.

It seems to me that there are two attitudes to doing science. In response to your open questions, I will hypothesise that there are two poles of attraction regarding one’s motivations.

Mindset A seems to regard the scientific enterprise as a zero-sum game. This means the focus is on publishable results, which positions one can secure, being concerned about credit, striving to secure intellectual property rights, and being careful about sharing results. Further, only token attention is paid to where information was gathered from, support activities like preparing experiments or compiling archives of experimental results are not valued highly. Finally, there is a lot of attention on fashion, as areas which are currently prominent are more likely to be favoured by funding agencies and job search committees. The scarcity of academic jobs relative to the number of people in postgraduate programmes means that this may be the default (and rational) stance of most young researchers. The anonymous comments on many blogs seem to take a tone that is consistent with this mindset, and the peer review process perhaps encourages this kind of thinking also, with its focus on scarcity.

Another way to regard scientific progress is as mainly about contributing to human progress. From this point of view, anything that benefits the overall enterprise is good, and any breakthrough helps everyone as they now have new territory to explore. Partial results should be shared as early as possible, so that the most proficient problem solver can settle the problem, and because this helps to avoid several people working on the same thing without being aware of each other. Credit is less important than dissemination; property rights may be seen as largely antithetical to scientific progress; and reproducibility and provenance are important. Older researchers who have established their careers often seem to act in this manner. Again this seems rational, since many researchers seem originally motivated by some kind of altruism, otherwise they would have left to earn better salaries elsewhere. I think this blog exemplifies this Mindset B, as do the Polymath projects, and Q&A sites like MathOverflow and CSTheory.

If there is indeed such a rough division into A and B mindsets (this is definitely debatable), then I wonder how someone who has done well with mindset A is supposed to suddenly switch gears halfway through their career, revert to their original motivations, and work within mindset B. Further, if mindset A were to prevail in the long run, would science again become the preserve of private laboratories, “amateur” part-time scientists, and those funded by patrons, as was the case in the 19th century? Or if mindset B became prevalent, would science suffer due to scientists being less sharply motivated, and papers becoming bland committee camels?

(These ideas are not especially original. Some precedents are Richard Hamming’s famous 1986 talk, and several of Paul Graham’s essays, including “Hackers and Painters”, “Wealth”, and “After Credentials”.)

I would prefer to work in mindset B, even accepting the negative consequences to some extent.

I would like to know how open were senior people before their tenure? Standa Zivny’s comment is one case, it would be nice to know this about more people.

I have a few comments. First, mindset A is not playing a zero-sum game. Many people can benefit from that person’s work, it’s just that the overall benefit takes a back seat to the individual’s benefit. I hope this doesn’t come across as libertarian or whatever, but I don’t see that as a bad thing. It seems to me that more people will be willing to play the game if they themselves stand to benefit from it. If more people are playing, more will benefit in the long run.

Second, I disagree that many senior researchers have mindset B. That is definitely not my experience at all. If anything, the most successful senior researchers I know are also the most guarded; if they appear more altruistic it’s because they’re good at getting things out quick and making sure they get credit for their contributions.

Just to reiterate, even if everybody has extreme mindset A, the existing incentive structure still encourages them to disseminate, collaborate, and communicate. Maybe not as fast as we’d all like, but science is a long-term thing anyway.

To extend the “Type A” versus “Type B” classification …

Denis Serre’s recent Bull. AMS article “Von Neumann’s comments about existence and uniqueness for the initial-boundary value problem in gas dynamics” can be read as an in-depth case study of “Type B” mathematical behavior by John von Neumann. To appreciate this, with the help of hindsight we see that by 1953 the ideas that were freely shared by von Neumann in 1949 provided the mathematical foundation for the Strategic Missile Evaluation Committee (SMEC), also variously called the Teapot Committee and the von Neumann Committee; this was the committee that—as much as any single institution—launched the world into the Space Age. Two in-depth studies of this period are Neil Sheehan’s *A Fiery Peace in a Cold War* and Stephen B. Johnson’s *The Secret of Apollo: Systems Management in American and European Space Programs*. Here two cardinal points are: (1) strong mathematical foundations were characteristic of all great enterprises of the latter half of the 20th century, and plausibly the same will be true of the 21st century, and (2) historically “Type B” mathematicians have tailored their researches precisely to provide the basis for such enterprises.

Two natural (and fun) questions then arise: Who are the early 21st century’s most active “Type B” mathematicians? What enterprises are they catalyzing?

I admire Perelman for refusing the Millennium prize on the grounds that Hamilton should have gotten at least half of the credit. The Millennium prizes probably aren’t helping the field.

Re “mindset A” becoming “mindset B”: the two phases might be “before” and “after” getting tenure. :)

I have seen several university departments closing, with tenured staff losing their jobs. Tenure is apparently also being phased out in some countries, or being made more difficult to attain. Rich DeMillo is even more pessimistic. Suppose you are right that caricature A/B accurately describes a pre/post tenure mindset dichotomy. Then this seems to lead inexorably to Mindset A becoming dominant, unless tenure can stop being treated as some kind of phase transition?

Paul Erdős is a good example.

I could say a lot on this topic, but let me confine myself to saying that one of the aims of the Polymath experiment was to provide a context in which it would no longer be in people’s interest to hold on to their ideas (about the specific problem under discussion). And the experience showed that this can work, and can lead to a big speedup in how quickly a problem is solved.

Also, because the discussion is public and is preserved for posterity, one can in principle give people exactly the credit their contribution deserves. In practice, however, things are a lot less simple: hardly anyone is going to wade through a Polymath discussion to work out exactly how much one particular person contributed to it, and our present system of credit, CVs, publication lists, etc., does not have room for, “I made significant contributions to the following Polymath projects.” Maybe if they happen more then this will change, but for now it is a problem for less established mathematicians who are trying to get jobs.

One thing the mathematical community could do to encourage earlier sharing of ideas is to make more of an effort to recognise the earlier contributions that made the headline results possible. I think the Wiles case is interesting: it is clearly right that Wiles gets most of the attention, but it isn’t obvious that the proportions are exactly correct. I like the following thought experiment: who would have got how much attention if Wiles had solved the Shimura-Taniyama-Weil conjecture without realizing its significance to FLT, and only later had Frey, Serre and Ribet made their contributions? I think Ribet would have been hailed as the solver of FLT. In an ideal world, it would not make all that much difference, and Wiles and Ribet would be credited in proportion to the difficulty of their contributions.

Tim,

I think your thought experiment is very interesting. I do think that the credit for many results is given out in a less than fair way. Certainly in both FLT and Poincaré there could have been a different assignment of credit.

This is the keystone effect, which refers to the tradition that whoever placed the keystone in an arch was deemed to be the builder of the entire edifice.

Wiles and Perelman placed the last stone in their respective efforts and thus they get the majority of the credit. Had Ken Ribet come after Wiles, the keystone step would have been his, and Ken would have gotten all the credit.

Regarding the issue of sharing scientific ideas or keeping them secret, there were two interesting posts on “secret blogging seminar” the second was http://sbseminar.wordpress.com/2008/07/19/followup-working-in-secret/ . Overall, while it is fun to share ideas there are also good reasons both for individual researchers and for the community not to share ideas.

Sharing ideas in a private conversation is different from sharing them on public sites like blogs. If someone uses an idea you posted publicly, it is possible to prove that the idea was yours; this might even lead those of us who care more about credit to express our ideas sooner, to make sure we receive the credit for them.

It seems to me that those who spend years working secretly on big problems take the risk that all of a sudden another researcher or group will make rapid public progress, rendering their private work obsolete. So if it works, great. If not, then they wasted a lot of time.

Ultimately the individual/group that completes an important result ought to deserve the majority of the credit, since otherwise those who produced partial results would have completed the work.

Still, perhaps in the future a system can be developed that weighs all partial results leading to a given proof or discovery!

The examples seem to suggest that there is nothing wrong with sharing. Pretty much anyone who has heard of Perelman has heard of Hamilton. And certainly no one would deny tenure to the precursors of Wiles’s proof (although they are less famous).

The question, I guess, is if they deserve more credit. I would say no. The predecessors have no scientific basis for believing that their theories were any more likely to produce the answer than the billion other attempts that didn’t work. That it turned out so seems more like their luck to me than anything else.

In theoretical CS there are quite a few examples of great results that were achieved by remarkable sequences of partial results. The zero-knowledge/interactive proofs/hardness of approximation/PCP line is certainly such an example. (So perhaps we will see another such sequence toward understanding Khot’s unique games conjecture.)

Measuring the precise influence of a partial result or of an individual in the great overall achievement is a complex matter. (And quite an interesting matter: http://gilkalai.wordpress.com/2008/05/26/natis-influence/ ) Overall, while there are a very few cases of clearly improper credit, the notions of “proper credit,” or “credit in an ideal world,” or the statement that “one can in principle give people exactly the credit their contribution deserves,” seem rather naive.

Since what I wrote could perhaps be construed as being precisely this naive suggestion, let me be clear that the qualification “in an ideal world” was important: I don’t for a moment believe that this ideal world could be conjured into existence. A more realistic possibility, however, is that the current balance could be changed.

Even then, I think that most of the time the current system probably doesn’t do too bad a job. For instance, in the case of the PCP theorem, I think pretty well all accounts of it tend to give a long list of contributors to the earlier partial progress.

So how could things improve? Well, suppose for example that I suddenly had a brilliant idea about how, in principle, to defeat the natural-proofs barrier to P versus NP. And suppose that that reduced the P versus NP problem to a problem that seemed difficult, but the kind of problem that could be solved after a couple of years of sustained effort by the right person. It might be that the problem I had reduced it to was something that people in TCS basically knew how to do, or at least would find much easier than I would find it. So in a mindset-B world the obvious thing to do would be for me to make public my idea and wait for some expert to solve the for-me-hard but for-that-expert-moderately-hard problem. In a mindset-A world (by which I mean the real world) I would either try to learn the right part of TCS and try to solve it myself, probably taking a lot longer than the TCS people would take, or I would try to find a suitable collaborator in TCS with whom I would do a kind of bargain: I would share my idea, they would then complete the proof, and we would publish the whole lot jointly. But if the real breakthrough was the understanding in principle of how to get round the natural-proofs barrier and the completion of the proof was, by comparison, more routine (but still hard), then it wouldn’t be a great bargain.

The point here is that the way the current system of credit operates would be encouraging a mode of behaviour that is clearly suboptimal from the point of view of the progress of mathematics as a whole.

Having said that, one can also argue that this suboptimality is necessary for encouraging people to go into mathematics (in the hope of fame, glory, reputation, good job, etc.) in the first place. So it could be that if one looks at the picture more widely then it is closer to optimal than it might at first appear.

I’d also make a wider point in the other direction. As well as (perhaps sometimes) not giving enough credit to partial results, I think we definitely don’t give enough credit to efforts by mathematicians to clarify the proofs of other mathematicians. If we were to regard what we are doing as spreading understanding, then proving theorems for the first time is hugely important of course, but it is by no means everything.

An idea that could work again in an ideal world. For every jointly authored paper, journals require percentage contribution of authors and those are printed in the paper. And for every important result say PCP, a panel of experts in the field decide the percentage contribution of each paper (partial results) that culminated to it.

Dear Tim, as the popular saying goes, In love and in science being naive is a great merit. Overall, I do not think there is too little credit for partial results (perhaps the contrary is true) and I think the “ideal world” thoughts are naive even in principle. (E.g. in the “ideal world” should credit be really proportional to the difficulty of the proofs? or perhaps to the difficulty of the easiest possible proof of the theorem?)

“But if the real breakthrough was the understanding in principle of how to get round the natural-proofs barrier and the completion of the proof was, by comparison, more routine (but still hard), then it wouldn’t be a great bargain.”

This will be a super great bargain! Don’t even hesitate to take it.

And should people get negative credit for partial results/ideas that led nowhere?

Here’s another possibility. Write up the idea and share it, but make sure to do a good PR job for it. If you were to solve PvNP, then you’d get an awful lot of free attention, without having to expend any effort. If you’re willing to put more investment into marketing, you could still get a lot of attention and credit for a partial result.

You could think of this kind of mindset being held by an idealist at heart, who wants to share the idea, but who is a realist in practice and wants to get fair credit. You could also rationalize a PR approach without a desire for personal gain. In order for your idea to get proper attention and be further developed, it’s going to have to compete with other ideas for researchers’ time and attention. If you really believe in the idea, it’s critical that you convince other people to believe in it as well.

gowers wrote:

“I think we definitely don’t give enough credit to efforts by mathematicians to clarify the proofs of other mathematicians. If we were to regard what we are doing as spreading understanding, then proving theorems for the first time is hugely important of course, but it is by no means everything.”

I think this is an important point in a wider context. One essential thing which mathematicians do without necessarily publishing papers is that they keep mathematical knowledge alive and propagate it. Imagine if the branch of algebraic number theory in which the Taniyama-Shimura conjecture sat had gone out of fashion, or had just had a handful of people working in it who understood the conjecture. Then probably Frey, Serre, Ribet, Wiles, and Taylor would never have thought to make their contributions, and FLT would still be an open problem. Maybe these people should take the lion’s share of the credit, but they could only really have operated so effectively because there was a community of (in this case) number theorists who were keeping the ideas alive and communicating them.

@Jonathan Kirby: “One essential thing which mathematicians do without necessarily publishing papers is that they keep mathematical knowledge alive and propagate it.”

Isn’t that “proof” that this *essential thing* is not at all a mathematical construct (i.e. not a mathematical object, as formalization is supposed to build)? Otherwise formal definitions and proofs kept in print would be enough. The Thurston paper (cited above by Dave Lewis and previously by John Sidles and Terence Tao) is pretty clear about that.

I am not saying here that mathematics is a *social construct*, because, obviously, there cannot be “fake mathematics.” But given that mathematicians are unable to *explain* what is “interesting,” “beautiful,” or “elegant,” or what mathematical understanding is (how it *works*, not just how it *feels*), is it not the case that mathematics IS those “hidden things” in the minds of mathematicians, and that those things ARE NOT mathematical structures?

If they were mathematical structures they would be amenable to mathematics, or at least to *metamathematics*; but whenever mathematicians try to investigate these “mind contents” they end up (are stopped, unable to go further) in psychology and intuition, not in any kind of formalizable content. This is the most basic stumbling block for Artificial Intelligence: even something as unambiguous and “straightforward” as mathematics actually defies formalization, in that formalization of mathematical structures is a *byproduct* of mathematics, *not* mathematics proper. I bet all mathematicians will agree.


Regarding polymath projects, I think that indeed the credit issue is interesting. Usually in collaboration, the fact that who did precisely what remains unknown serves the joint effort. It is not so desirable to have too much competition between people while they collaborate. In open projects like polymath, on the one hand, almost everything is open (except possibly attempts by participants that did not even become a comment), so indeed, as Tim mentioned, it is possible to get a pretty good picture of the chronological development. On the other hand, if there are hundreds of participants and only half a dozen or so become pivotal in the critical stages, we indeed want to differentiate, at least roughly, between the contributions of different participants.

Anyway, I tend to think it is perhaps correct to regard large collaborative efforts such as polymath, and raising ideas on blogs, asking and answering questions on polymath, etc., as largely altruistic activities. (But I am not sure about it.)

In general, sharing ideas on a possible solution of a notorious conjecture, particularly in graph theory, is quite risky for both parties: e.g., a short proof of FLT, a short proof of the 4CT, etc. What would have happened if Vinay Deolalikar had shared his views on his solution of P≠NP with his colleagues beforehand?

The amount of recognition given to Grigori Perelman is well connected to the lack of recognition given to John Forbes Nash in his time. Both Perelman and Nash were quite aware of the uncomfortable situation they were put in, and both reacted in odd ways.

As far more of a hobbyist than even an amateur, credit isn’t a strong incentive for me. At most it’s an odd line in the Interests section of my resume. It’s the thrill of the chase that matters. I’m interested precisely because I want to figure it out, to be able to understand it. In that sense, partial results feel more like admitting defeat: that I’ve given up and am willing to dump the work onto someone else (if there is anyone else who is even interested).

The other problem from an amateur’s perspective is that by definition they have an incomplete view of the problem. There is a huge amount of work out there, and much of it is highly complex, far beyond their reach. It would take a full-time concentrated effort to understand it all, and that is precisely the resource they lack. If they publish something boring or wrong, there seems to be a strong negative blow-back: they risk getting permanently labeled as a crank, an idiot, or a charlatan. So the blow-back becomes a strong incentive for them to check their work thoroughly and only publish when they are completely sure it is right. But because their view is incomplete, they can’t ever be sure they haven’t missed something important.

I originally wrote something much more verbose than this, but I lost the form when I hit submit, due to an error. Oh well.

I am applying to graduate schools very soon. It’s my understanding that the maths and sciences are something an individual _sacrifices_ his time and effort for: that the pursuit and advance of the field is wholly more precious than individual want/fame/back-pat-ability/ego (almost by definition), and that in some sense this is the scientist’s/mathematician’s “purpose.” However, I’ve been privy to a number of conversations (like this one) where people I respect discuss how “broken” mathematics and the sciences are with respect to this ideal — that personal gain creeps in. For example, Fortnow and Gasarch linked to an article about scientific review (and how scientists do often try to block work that might endanger their own progress).

I’m interested in going into academia not only for the love of the subject and the love of teaching, but also so that I can care for my character and the excellence of my being. But if I were to be a bleeding-heart romantic and those around me exploited this fact for their own personal whoop-taa, well that would twist my stomach in knots.

Can a complexity/cryptography giant on here give it to me straight before I sign these applications? What, exactly, is the academic environment? What are the characters of academic men? (Of course, anonymous answers are welcome).

If you are in this line of thought you might be interested in the musings of Cal Newport, but maybe you know about him already.

I had not heard of Cal Newport. Thanks for the link. This does seem very relevant.

LOL … the musings of Cal Newport hold up William Herschel as an academic role model … and yet … my BibTeX database contains a lively and even hilarious example of what kind of academic role model the Herschel siblings (William and Caroline) *really* were.

The following excerpt is from M. Hoskin’s *The Herschel Partnership, as Viewed by Caroline*, where we read (p. 82) a letter that William Herschel wrote to his patron King George III:

———————

“In a letter which Sir J. Banks laid before his Majesty, I have mentioned that it would require 12 or 15 hundred pounds to construct a 40-ft telescope, and that moreover the annual expenses attending the same instrument would amount to 150 or 200 pounds.”

“As it was impossible to say exactly what sum might be sufficient to finish so grand a work, I now find that many of the parts take up so much more time and labour of workmen, and more materials than I apprehended they would have taken, and that consequently my first estimate of the total expence will fall short of the real amount.”

———————

As Michael Hoskin wryly comments: “Not for the last time in the history of astronomy, an astronomer seeking support had been modest in his initial demands, knowing that the funding body, confronted later with a choice between writing off all the money spent so far or coughing up more, would cough up.”

To sum up, as long as we’re considering the just allocation of academic credit … we can ask (with the benefit of hindsight) how much credit do the Herschel siblings (William and Caroline) deserve for their fund-raising and enterprise-building prowess, relative to their mathematical and observational prowess?

Perhaps the Herschels are remembered today because they were outstandingly skilled at *both* activities.

The dreams of an academic career could prove to be heart-bleeding romanticism for some, and for some it can fetch rewards (not money :) ) beyond imagination. When you confront the beauty of truth, first through the eyes of others and then through your own efforts, there is no match to it.

Every choice will have pros and cons. Don’t get yourself into that analysis right now.

Credits, partial credits, absolutely no credit for proving P/NP: all those scrutinies should not obscure your path, especially if you have a passion for math and theory. Once you get into a good grad school, take your time, and find a mentor who matches your wavelength. There are great faculty out there, with remarkable accomplishments of their own. But it is better if there’s a true frequency match; otherwise you’ll see damping, a very common reason for that 95% attrition rate.

Now I start to understand your estimate of the “attrition rate,” because you have explained it in terms I know pretty well: “wavelength” and “frequency.”

Academics are generally nice people. They all went into it for the right reasons. However, any field with a 95% attrition rate is going to be tough, and is going to favor the tough.

I am not able to quantify the “attrition rate” in the field I worked in (engineering, Portugal), but my perception is that, though relatively high, it was apparently lower.

I’d like to be part of some shadowy Bourbaki-like math group that solves the remaining Millennium problems one by one and publishes the results anonymously. I wonder how the actual original Bourbaki collaborators dealt with the credit issue?

I think an empirical proof about releasing incomplete ideas is available in the “Open Source”/“Free Software” world: the Unix-like Linux kernel versus GNU Hurd.

Hurd has been in development (is it still?) by a small, closed group since 1990, within a GNU project dating to 1983. By 1991 it was still not ready (not “complete”) and had not been released for general use.

Linux was developed in 1991 and released that same year, incomplete.

Linux took off.

The Linux development philosophies “release early, release often” and “many eyes” are what drive its innovation and improvement, and computer science (a dubious application of the word “science,” if you take note of Dijkstra) has advanced greatly because of releasing ideas early, even when incomplete and later shown to be incorrect.

However, the drivers for releasing in this fashion are not necessarily supported in scientific circles. Should a professor be paid to publish incomplete work? How is the final correct answer attributed? In “Open Source” the first question is side-stepped, and the second is taken care of by source-control systems and a CONTRIBUTORS text file. These don’t translate well to academia. There is also the thorny issue of plagiarism to weed out.
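As a concrete illustration of the source-control attribution mentioned above, git can summarize per-author contribution counts directly from a repository’s history. A minimal sketch (the repository name and authors here are hypothetical, and empty commits stand in for real changes):

```shell
# Create a throwaway repository and record commits from two authors.
git init demo-credit && cd demo-credit
git -c user.name="Alice" -c user.email="alice@example.com" \
    commit --allow-empty -m "partial result: first lemma"
git -c user.name="Bob" -c user.email="bob@example.com" \
    commit --allow-empty -m "complete the proof"

# Per-author commit counts: the raw material for a CONTRIBUTORS file.
git shortlog -sn HEAD
```

Commit counts are of course a crude proxy for credit (one reason the translation to academia is hard), but the point is that the attribution record is produced automatically as a byproduct of the work.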

So to my mind: the benefits of releasing early are clear for Science, as a whole. The benefits to individuals are (ironically?) an open question.