
Innovating an Innovative Conference on Innovation

June 8, 2011


Innovation in the theory of computing

Adolph Ochs was an American newspaper publisher, and is credited with creating the famous motto of the New York Times: “All the news that’s fit to print.”

Today I have been asked by Shafi Goldwasser to announce a new conference—well, really the third version, with a new name, of a conference on innovation in theory.

As a general rule we do not make announcements here, since there are better places to do that—places we all go to see the latest news, such as here. But this is the link to the conference, which is now called ITCS (Innovations in Theoretical Computer Science). The reason for the name change from ICS to ITCS is, according to Shafi:

… it’s now an ACM sponsored conference and they didn’t want us to use that name.

As I said, Ken and I do not usually do announcements, but I feel that there are some interesting issues that this conference raises. Reversing Ochs’ famous motto, I want to suggest that we consider a new motto:

All the news that fits we print.

This is a much-used old joke—but relevant to today’s discussion. We will suggest that there be some venues in theory that implement a different take on “survival of the fittest”: let all the ideas have free range on the safari first, then see which survive.

The Past

It seems to me that the talks at the last two I[T]CS conferences were by terrific people, on neat topics, and I wish I could have been there. But it also seems that they were not all that different from talks at STOC or FOCS or your favorite theory conference. This is not a complaint, yet it does seem like a lost opportunity. If the goal of the conference is to promote new ideas, new directions, and new research programs, then I think it did as well as our usual conferences. But not vastly better. Indeed I have heard comments from attendees that echo this opinion—a great collection of talks, but …

The Future

I have a suggestion on how to organize the conference. The neat thing about having no formal connection to the conference is that I can make suggestions without any constraints, and without any worry that they might actually be followed.

The conference should be semi-parallel: some talks are non-parallel and most are parallel.

• Non-Parallel Talks: These should be invited special talks by leaders in the community. I am always interested to hear what X, Y, and Z have to say. Hopefully they will lean toward innovation—but that is up to them.

• Parallel Talks: These talks will be all the submitted papers. All. Not most, nor half. All. No papers will be rejected. None. The only control I would suggest is that a submitted paper must also be public: it must be posted to http://arxiv.org/. This forces the authors to stand behind their work, at least to the extent that they are willing to have it archived forever.

What is the advantage of accepting all? Clearly not all papers will be great, nor will all be correct, nor will all be on-topic. But researchers with anything to say will have a chance to say it. Crazy ideas will be there, crazy ideas that are just silly. Along with crazy ideas that years from now will be viewed as seminal. Ideas that may change the theory landscape. If a talk is out there, then it may attract few listeners. That is the risk the author of the paper takes.

The Association for Symbolic Logic has used exactly this format for their conferences: some invited talks by leaders, and other talks that are completely unrefereed. They survive having no barrier. I suspect that many of the talks are not landmarks, but that is to be expected. Few talks are. I once attended a talk at one of their meetings on:

An Inconsistent Number.

The author had previously claimed that set theory was inconsistent, then that arithmetic was inconsistent, and finally that even primitive recursive functions were inconsistent. It was wrong, but so what? Funnily enough, it is the only talk I recall from the meeting. Oh well.

Open Problems

What do you think? How should a conference on innovation be organized?

35 Comments
  1. Abstract Type permalink
    June 8, 2011 2:59 pm

    Well, you could start by recognizing that “theoretical computer science” encompasses far more than just complexity theory and algorithms. There is a vast and growing body of work on logic, semantics, verification, mechanized reasoning, and programming languages that has been shunned in the U.S. theory community for decades. It’s a pity for a number of reasons, but for the present discussion the main reason is that this body of theory presents a compelling vision that inspires a lot of innovation.

    One distinguishing feature of “the other theory” is that it has direct bearing on running code, whereas much work in conventional theory is innocent of actual applications, and seems often to be done by people who have no experience in writing programs (there are exceptions, of course). Another, more negative, observation is that the emphasis on (relative) lower bounds has amounted to little in practice. For a FOCS-STOC person, SAT is a hard problem from which one reduces. But in the world of practical program verification, SAT is an easy problem to which one reduces! SAT solvers work great, and are by no means intractably difficult or impractical to use. Another example: polymorphic type inference as used in ML or Haskell, for example, is DEXPTIME-complete. Yet it is the single most effective formal method ever devised, and is used every single day all over the world without the slightest difficulty—it runs fast, every time, and the lower bound is never a problem.

    There is a disconnect between theory, as practiced in the U.S., and that which it is supposed to be a theory of, namely programming. Perhaps a conference on innovation in theory could start by recognizing and helping to bridge this gap?
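    To make “reducing to SAT” concrete, here is a minimal sketch in Standard ML: it encodes graph 3-coloring as CNF in DIMACS form, ready for any off-the-shelf solver. The encoding and all names are illustrative only, not taken from any particular tool.

    (* variable v(i,c) means "vertex i gets color c", for c in {0,1,2} *)
    fun var (i, c) = 3 * i + c + 1

    fun clauses (n, edges) =
      let
        (* every vertex gets at least one color *)
        val atLeast =
          List.tabulate (n, fn i => [var (i, 0), var (i, 1), var (i, 2)])
        (* adjacent vertices never share a color *)
        val conflict =
          List.concat
            (map (fn (u, v) =>
                    List.tabulate (3, fn c => [~(var (u, c)), ~(var (v, c))]))
                 edges)
      in atLeast @ conflict end

    (* DIMACS writes negation as "-", where SML prints "~" *)
    fun lit v = if v < 0 then "-" ^ Int.toString (~v) else Int.toString v

    fun dimacs (n, edges) =
      let
        val cs = clauses (n, edges)
        val header = "p cnf " ^ Int.toString (3 * n) ^ " " ^
                     Int.toString (length cs) ^ "\n"
        fun line c = String.concatWith " " (map lit c) ^ " 0\n"
      in header ^ String.concat (map line cs) end

    (* e.g., a triangle: print (dimacs (3, [(0,1), (1,2), (0,2)])) *)

    Any satisfying assignment yields a proper coloring: pick any true color per vertex; the conflict clauses guarantee adjacent vertices never share one.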

    • Elements permalink
      June 8, 2011 10:38 pm

      You made excellent points. However, I don’t think that “Theory B” as you describe it is “non-standard theory”. It *is* the mainstream theory in most places around the world outside the US, especially in Europe (and the US is only one part of an entire community).

    • June 8, 2011 11:13 pm

      Nice point about SAT. I think the EXP-completeness of ML type inference uses unbounded “let”-depth or things like that, whereas practical ML or Haskell programs nest at most 2 or 3 deep.

      • neelk permalink
        June 9, 2011 7:32 am

        This is how it is commonly described, but it’s not actually the case.

        The real reason is that type variables can occur more than once in a type scheme. Suppose that you have a type scheme T(‘a) with k occurrences of ‘a, and a type scheme S(‘b) with n occurrences of ‘b. Then instantiating ‘a in T(‘a) with S(‘b) will yield a type scheme with n*k occurrences of ‘b. Furthermore, polymorphic generalization lets you reuse a term at multiple types, so you can iterate this process to square the size of the type of a term with a constant increase in the size of the program. This yields a type whose tree-size grows doubly-exponentially with the size of the program. Representing the tree as a DAG, you get a singly-exponential growth in the size of the output.

        For example, here’s an SML program which exhibits the fast growth I described. Note that the nesting depth of this program is 1 — the key feature is that each f_i is used at two different types.

        fun delta x = (x, x)        (* this has type 'a -> ('a * 'a), so its return value
                                       has a type double the size of its argument *)
        
        fun f1 x = delta (delta x) (* Now we use functions to iterate this process *)
        fun f2 x = f1 (f1 x)          (* 16x blowup *)
        fun f3 x = f2 (f2 x)          (* 256x blowup *)
        fun f4 x = f3 (f3 x)          (* 65536x blowup *)
        

        The reason for the difference between theory and practice is that programmers do not seem to write programs whose types are much larger than the size of the whole program itself. I think this is likely worth studying from a complexity point of view, since it is a natural restriction on the kinds of proofs we allow.
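        As a follow-up illustration, evaluating at a concrete argument makes the blowup visible in the inferred types (the leaf counts, not the values, are the point):

        val x0  = ()       (* : unit *)
        val x2  = delta x0 (* : unit * unit                   -- 2 leaves *)
        val x4  = f1 x0    (* : (unit * unit) * (unit * unit) -- 4 leaves *)
        val x16 = f2 x0    (* : a tree with 16 unit leaves *)
        val big = f4 x0    (* : 65536 leaves as a tree, yet small as a DAG *)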

    • pswoo permalink
      June 9, 2011 3:34 am

      Register allocation is NP-complete. The best algorithms use 0/1 integer linear programming or graph k-coloring, and are, needless to say, exponential in the worst case. Nevertheless, in practice they’re very good algorithms, and the problem is often solved at runtime.
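      To make the contrast concrete, here is a minimal SML sketch of the greedy graph-coloring heuristic that practical allocators build on: vertices are temporaries, an edge means “live at the same time,” and colors are registers. The representation and names are illustrative only.

      fun neighbors edges v =
        List.mapPartial (fn (a, b) => if a = v then SOME b
                                      else if b = v then SOME a
                                      else NONE)
                        edges

      (* give each vertex the smallest register not taken by an
         already-colored neighbor; returns (vertex, register) pairs *)
      fun color (vertices, edges) =
        List.foldl (fn (v, acc) =>
            let
              val taken =
                List.mapPartial
                  (fn n => Option.map (fn (_, r) => r)
                                      (List.find (fn (w, _) => w = n) acc))
                  (neighbors edges v)
              fun first r = if List.exists (fn u => u = r) taken
                            then first (r + 1) else r
            in (v, first 0) :: acc end)
          [] vertices

      (* e.g., color ([1, 2, 3], [(1, 2), (2, 3)]) reuses register 0 for
         vertices 1 and 3, which never interfere *)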

    • June 9, 2011 4:55 am

      I admired the assertion

      “In the world of practical program verification, SAT is an easy problem to which one reduces!”

      Surely this is a Great Truth in Niels Bohr’s sense (that is, a truth whose opposite also is a Great Truth).

      And it is associated with a meta-Great Truth: that for a very broad class of algorithms, imposing restrictions relating to verification interchanges the roles of “hard” versus “easy”, “included” versus “excluded”, “infeasible” versus “feasible”, and even “undecidable” versus “decidable.”

      As a result, complexity theory has evolved what looks to outsiders like a “Two Cultures” schism with respect to broad classes of practical problems relating to randomness, sampling, quantum simulation, deciding class membership, etc.

      • June 9, 2011 7:43 am

        LOL … as a followup to the above, perhaps a “Two Cultures” complexity-theory conference would be fun.

        This conference would impose a strict requirement that articles be submitted in pairs “A” with “B”, with each article dealing with the same subject; the difference between them being that article “A” is allowed to use oracles and nonconstructive existence arguments, and article “B” is restricted to explicit constructions having feasible verifications.

        Article “A” could be (for example) Shannon’s A mathematical theory of communication (1948) and article “B” could be Berrou, Glavieux, and Thitimajshima’s Near Shannon limit error-correcting coding and decoding: turbo codes (1993).

        Article “A” could be Kolmogorov’s Three approaches to the quantitative definition of information (1965) and article “B” could be Blum, Blum, and Shub’s A Simple Unpredictable Pseudo-Random Number Generator (1986).

        Article “A” could be Feynman’s Simulating physics with computers (1982) and article “B” could be Kohn and Sham’s Self-consistent equations including exchange and correlation effects (1965).

        The above “A” and “B” articles are justly renowned (for example, the Kohn-Sham article is the most-cited of all math-and-science articles) … and yet it is striking that decades often pass between “A” and “B” insights, and moreover, that “B” sometimes achieves (in practice) effects that “A” proves to be (formally) impossible.

        That is why a conference intended to accelerate innovation might do well to pair them … and thus encourage the “A” and “B” communities to exchange ideas and foster a more rapid synthesis of the “A” and “B” points-of-view.

    • June 10, 2011 4:41 pm

      I have met a reasonably large number of people who work on both European- and American-style theory, and I would not say that either kind sucks any less than the other at programming.

  2. June 8, 2011 3:16 pm

    To me, your proposal sounds like you would like (T)CS conferences to look a bit more like math conferences (or at least you’d like this conference to look so). This also means that conferences (or this conference) would not have a selection role, but only an expository role. In a word, act as all major fields do, or, to paraphrase Lance, as all grown-up fields do.

    Since I totally agree with Lance that it is time for CS to grow up, I totally agree with what you say. But I am even more radical: I would propose that this apply to every (T)CS conference. Then we could begin publishing only in journals, as everybody else does.

    Some time, maybe…

  3. Anonymous permalink
    June 8, 2011 3:29 pm

    And the papers submitted to the conference would not be considered publications, so the authors would need to resubmit them either to journals or to other conferences. Not a bad idea. I think it could be a good experiment: trying out the next-generation TCS conference without a radical change to the established ones.

  4. Lance Fortnow permalink
    June 8, 2011 4:07 pm

    The ACM did not require the conference to change the name, just the acronym because ACM already has an ICS, the International Conference on Supercomputing.

  5. DKB @ NYU permalink
    June 8, 2011 5:34 pm

    One thing that’s worked well in my field (economics) is to let youngish people run it — max of 10 years since PhD. Not the norm for us either, but it works really well with the SED, one of our summer conferences.

  6. Anonymous Coward permalink
    June 9, 2011 2:16 am

    Dick,

    I think that’s a great idea, not least because it would shake things up. Anyone who thinks that it wouldn’t shake things up isn’t really thinking clearly about how things currently work.

    I’d like to imagine that whoever originally considered putting the “I” at the front of the conference name really intended to try to break up the status quo. Either that, or they have a great ear for marketing and no such useful idea in their head.

    I think that objections to the journal system run along the lines of: “It’s too slow. I want to know tomorrow what happened yesterday”. I’m not sure that’s a real problem. The major questions are still unanswered. Waiting a few months is not going to kill anyone. Maybe it’ll encourage people to make their proofs easier to read. One nice thing about old fields in mathematics is that people generally are super careful to write in an easy-to-understand way. One really bad thing is that the smaller the field, the more likely that there is a lot of unpublished knowledge that is super secret to the uninitiated. That’s not cool.

    To “Abstract Type”: I hate to break it to you, but ML and Haskell are hardly in use in practice. So as much as you’d like for me to believe that you’re operating in “the real world”, what matters is C, and how fast a program in C can run. That’s the reality. Yes, I realize that you were simply giving an example, but your example is flawed. Asymptotics actually matter. Deal with it. The argument that “real people actually run SAT solvers and can solve real problems in practice” is simply unrelated to the asymptotic question of whether or not SAT can be solved in polynomial time. It’s a mathematical question, and your heuristics are neither relevant nor part of the formalism of the question.

    • June 9, 2011 10:10 am

      Recognizing that it is neither necessary, nor feasible, nor even desirable that everyone think alike (because, for one reason, innovation would cease), both Leslie Valiant and Misha Gromov have eloquently argued that messy real-world “heuristics” provide us with vital clues to beautiful & innovative mathematical ideas. As Gromov puts it:

      “The brain is hardly able to generate mathematical concepts by itself; it needs an input of “raw structures”, and Nature has much to offer.”

      See the recent Fortnow/GASARCH weblog topic A Valiant Weekend for references.

    • Semanticist permalink
      June 10, 2011 6:21 am

      Dear “Anonymous Coward”, whether Haskell or the ML-family of languages “are not used in practice” is irrelevant to Abstract Type’s (Robert Harper’s) point. C-like languages could easily have a Hindley-Milner polymorphic type-inference mechanism, and proposed C/C++ successors such as J.S. Shapiro’s BitC or G. Hoare’s Rust do feature polymorphic type inference. Arguably, C with suitably adapted polymorphic type inference would be a better C, and would not suffer run-time performance penalties from better handling of types. There are many reasons for the continuing dominance of C/C++, but the compile-time cost of static type inference is not one of them.
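      To make this concrete, here is a minimal SML sketch of the core of Hindley-Milner inference, first-order unification of type terms; full inference adds generalization and instantiation on top. The datatype and names are illustrative only.

      datatype ty = TVar of int               (* a type variable 'a, 'b, ... *)
                  | TCon of string * ty list  (* e.g. TCon ("->", [t1, t2])  *)

      type subst = (int * ty) list

      fun applyS s (TVar a) =
            (case List.find (fn (b, _) => a = b) s of
               SOME (_, t) => applyS s t
             | NONE => TVar a)
        | applyS s (TCon (c, ts)) = TCon (c, map (applyS s) ts)

      fun occurs a (TVar b) = a = b
        | occurs a (TCon (_, ts)) = List.exists (occurs a) ts

      fun unify (s, t1, t2) =
        case (applyS s t1, applyS s t2) of
          (TVar a, TVar b) =>
            if a = b then SOME s else SOME ((a, TVar b) :: s)
        | (TVar a, t) => if occurs a t then NONE else SOME ((a, t) :: s)
        | (t, TVar a) => if occurs a t then NONE else SOME ((a, t) :: s)
        | (TCon (c, ts), TCon (d, us)) =>
            if c <> d orelse length ts <> length us then NONE
            else ListPair.foldl
                   (fn (t, u, SOME s') => unify (s', t, u)
                     | (_, _, NONE) => NONE)
                   (SOME s) (ts, us)

      (* e.g. unify ([], TCon ("->", [TVar 0, TVar 0]),
                         TCon ("->", [TCon ("int", []), TVar 1]))
         binds both 'a and 'b to int *)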

  7. Anonymous permalink
    June 9, 2011 9:47 am

    Most mathematicians do indeed enjoy doing mathematics “for the sake of mathematics,” and most subfields of pure maths are not actively motivated by applications. However, a lot of us are sympathetic to applied work, and we very much hope that what we do eventually leads to useful applications in the real world. The SAT example is more than valid, and it disgusts me that someone came up with such a waffly argument claiming it’s bloody “unrelated”.

    On the other hand, a lot of mathematicians are reluctant to get into theoretical computer science, mainly because of its combinatorial nature: messy proofs are often unavoidable. For the sake of pure maths, things like P vs. NP aren’t really the most elegant problems in the sense of G. H. Hardy; the reason the problem is so important, apart from its sheer difficulty, is its practical consequence.

    That said, I don’t see how much SAT actually matters to the US TCS community, seeing that every few months STOC/FOCS piles up a huge swathe of lower-bound results on all sorts of toy models, yet very few people actually work on showing that SAT is hard.

    • June 11, 2011 10:39 am

      It’s bizarre that there’s so much pent-up hostility. Initially, the question was about what would make a good conference. Somehow that got twisted, in the immediate first comment, into “you should give us theory B people more respect”.

      I remember going to a conference and refusing to answer a questioner as to whether I was a “theory A person” or a “theory B person” (the questioner considered himself to be in theory B). I honestly was so new to theory that I didn’t know what he was asking or what implications he assumed were involved. He even suggested that my refusal to answer indicated very strongly that I was in theory A. It was such a ridiculous question, but it was somehow very important to him. Why are you guys so uptight?

      Is this _honestly_ the strongest argument for how to fix a conference? “Make sure that theory B gets in”.

      I think that the suggestion (in the column) to let everyone in pretty much addressed this via set inclusion.

      s.

      • rjlipton permalink*
        June 11, 2011 1:03 pm

        Steve,

        I agree. I just wanted to raise the question of how to structure a meeting that is geared toward innovations. It should be different from STOC/FOCS, since we already have those meetings and other ones like them. I had other ideas that I chose to leave out: papers could have conjectures rather than theorems; papers could report on failed ideas; papers could raise directions that are half-baked. All the kinds of things that have zero chance of being accepted at a “normal” theory conference—of type A or B.

  8. June 9, 2011 2:09 pm

    I’ve heard the idea of ‘let’s have a conference where every paper is accepted’ before, and I’m not enthusiastic about it for TCS, because I’m also part of other communities where this is done — namely, INFORMS — and I don’t much like the outcome. There are 40 parallel tracks, almost all of which are sparsely attended (10 people in the audience). It’s pretty hard to tell in advance which talks are high quality, and very hard to switch rooms in the middle of a session (to get to one of the other 40 rooms). So one hears a few talks that are worthwhile and a lot of dreck. Whereas I find most talks at a FOCS/STOC conference interesting in one way or another. For all the difficulties and randomness of selecting papers via a program committee, the PC really does perform a valuable editorial service.

  9. Panos permalink
    June 9, 2011 2:37 pm

    What are the big results of “theory B” during the last 10 years?

    • June 10, 2011 7:36 pm

      Agrawal–Kayal–Saxena PRIMES is in P?

      Cohn-Kleinberg-Szegedy-Umans Group-theoretic algorithms for matrix multiplication?

      I would agree that “big results” aren’t common … in either “A” or “B” … (by definition?) … also it commonly happens that appreciation of “B” results increases only slowly … the Kohn-Sham “B” result is one example … see S. Redner Citation Statistics From More Than a Century of Physical Review for details.

      • Elements permalink
        June 11, 2011 9:22 am

        John, these are “Theory A” results.

      • June 11, 2011 5:39 pm

        Hmmm … acknowledging that “A” and “B” (whatever they are) delimit a range of mathematical styles … and that the AKS, CKSU, and KS articles all would qualify for an imaginary conference that barred oracles, nonconstructive existence proofs, undecidable and/or unverifiable class membership criteria, and Complexity Zoo inclusion relations that are assumed but not proved … aren’t all three of these results toward the “B”-ish end of the result spectrum? That was the intent, at least, in choosing these particular examples.

  10. June 10, 2011 2:30 pm

    For parallel talks, the idea of admitting all looks infeasible, since meeting space and time are both scarce resources, and the economic cost may go up too much.
    But the idea of not using a selection committee is worth pursuing further.
    A public, online, and non-anonymous voting mechanism may be good here: the selection power can be given to the public. Given the enforced online archiving with real identities in advance, it may promote innovation to allow the public to vote, with comments and fixed bullet choices, either on the archiving site or some other site. The control committee may design the fixed choices, the weights and weight-calculation formulas, or other guidelines. But all of this needs to be done in advance.
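    To make this concrete, here is a minimal SML sketch of such a scoring rule; the bullet choices and weights below are entirely made up, standing in for whatever the committee fixes in advance.

    datatype choice = Innovative | Solid | OffTopic

    (* pre-announced weights for each fixed bullet choice *)
    fun weight Innovative = 3.0
      | weight Solid      = 1.0
      | weight OffTopic   = ~1.0

    (* a ballot is (voter, paperId, choice); voters are public, not anonymous *)
    fun score votes paper =
      List.foldl (fn ((_, p, c), acc) =>
                    if p = paper then acc + weight c else acc)
                 0.0 votes

    (* e.g. score [("alice", 1, Innovative), ("bob", 1, OffTopic)] 1 = 2.0 *)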

  11. Abstract Type permalink
    June 11, 2011 3:09 pm

    Someone asked for a summary of “theory B” results in the last ten years. The restriction to a decade seems arbitrary, but let me mention a few innovations that have had a lot of impact. Most have built on developments over decades rather than being a “point event” to which a date can be attached.

    The highest-impact developments have been in verification, language design, and the mechanization of mathematics. To take one example, model checking has been applied to hardware verification, software verification, and the analysis of biological and physical systems. This body of work builds on many, many results, including logical formalisms, algorithms for checking, semantic methods such as abstract interpretation, the building of decision procedures, and the application of these tools to practical problems. To take another, machine-assisted proof for a variety of applications, including program verification and the mechanization of mathematics, has taken off in the last decade with the development of full-scale tools such as the Isabelle prover, ACL2, NuPRL, Coq, HOL, and PVS, to name just a few. Here again these systems depend on decades of work on fundamental logic.

    One line of development, constructive type theory, which is the foundation of Coq and NuPRL, has had enormous influence not only on the mechanization of mathematics but also on programming language design, starting with ML and continuing with Haskell, F#, and Scala, to name a few. The semantics of polymorphism lies at the heart of modular program development, and has inspired and informed the design of these languages. There is a very beautiful theory of how and why this works, and there is much ongoing effort to extend the main ideas to wider classes of languages. Very recently there has been a surge of interest in the close connections between constructive type theory, which was conceived as a foundational theory of computation, and homotopy theory, which lies at the intersection of algebra and geometry. In yet another direction, the development of separation logic, which is based on substructural logic, has revolutionized the verification of concurrent and sequential imperative programs, and the ideas are being extended to other domains, such as computer security (which is not reducible to just cryptography).
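    As one concrete illustration of the first example, here is a minimal SML sketch of the simplest model-checking question: is a “bad” state reachable in a finite transition system? Real checkers add symbolic representations, abstraction, temporal logics, and much more; this is explicit-state search only, with illustrative names.

    fun reachable (next : int -> int list) (bad : int -> bool) (init : int) =
      let
        fun search ([], _) = false
          | search (s :: rest, seen) =
              if bad s then true
              else if List.exists (fn t => t = s) seen then search (rest, seen)
              else search (rest @ next s, s :: seen)
      in search ([init], []) end

    (* e.g., state 2 is bad but unreachable from 0:
       reachable (fn 0 => [1] | 1 => [0] | _ => []) (fn s => s = 2) 0 = false *)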

    This is just a very brief, high-level view of innovative developments in theory that are not at all represented within the narrow confines of “theory A”. The advances I’ve mentioned have changed the way that software is developed in industry, and may even change the way we do mathematics, even in areas that, seemingly, have little connection to computing. Since the post asked for ideas of how to promote and advertise innovation in theory, I thought it worthwhile to point these out.

  12. June 11, 2011 11:49 pm

    Maybe all the presentations should first be given in cyberspace, as extended blog posts or YouTube videos or podcasts (your choice). Then attendees sign up to the fan-list of each of their favorite presenters. Then the physical conference schedules a time and room where each presenter gets to meet and answer questions from his or her fans.

    • June 12, 2011 12:01 am

      (This would address the limited-resources objection. The presenters with the longest fan-lists get priority.)

      • June 12, 2011 12:09 am

        Maybe priority can be based on more than just the length of the fan-list, to avoid the problem of the tyranny of the majority.
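        As a small SML sketch of such a blended priority (the 0.7/0.3 weights and the “breadth” signal are made up for illustration):

        (* a presenter is (name, (fanCount, breadth)) *)
        fun priority (fans : int, breadth : real) =
          real fans * 0.7 + breadth * 0.3

        (* insertion sort, highest priority first *)
        fun insert (p : string * (int * real)) [] = [p]
          | insert p (q :: qs) =
              if priority (#2 p) >= priority (#2 q) then p :: q :: qs
              else q :: insert p qs

        fun rank presenters =
          List.foldl (fn (p, acc) => insert p acc) [] presenters

        (* e.g. rank [("alice", (120, 0.2)), ("bob", (80, 0.9))] *)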

  13. June 12, 2011 12:20 am

    Presenters that “lose” can still come to the conference and meet their fans at a local pub.

  14. June 12, 2011 12:10 pm

    Regarding A versus B, the appended announcement came in the mail today; this conference’s focus on innovation provides a productive context for discussions of “A” and “B”. And how appropriate that a conference on so rare a topic takes place in June …

    Call for Papers

    Submissions to the annual conference “A≅B 2012: What is so Rare as a Day in June?” are solicited by the sponsoring agency, the Erewhon Mathematical Education Trust (EMET).

    What is so rare as a day in June?
    Then, if ever, come perfect days;
    There is no price set on the lavish summer,
    And June may be had by the poorest comer.
        (James Russell Lowell)

    Program for A≅B Day 1

    Conciliation 1: (A) Random numbers cannot be generated … and (B) how to generate random numbers

    Conciliation 2: (A) Turing machines cannot be verified or validated … and (B) how to verify and validate Turing machines

    Program for A≅B Day 2

    Conciliation 3: (A) Learning from data is infeasible … and (B) how to learn from data

    Conciliation 4: (A) Undecidable attributes of the complexity class P … and (B) deciding attributes of P with resources in P

    Program for A≅B Day 3

    Conciliation 5: (A) Distributions that cannot be classically sampled … and (B) how to classically sample distributions

    Conciliation 6: (A) Simulating dynamical trajectories is infeasible … and (B) how to simulate dynamical trajectories

    Program for A≅B Day 4: The “State of Sin” Prize Competition

    Enterprise I: (A) To “see every atom” is infeasible … and (B) how to “see every atom”

    Enterprise II: (A) Understanding biological dynamics is infeasible … and (B) how to understand biological dynamics

    Enterprise III: (A) Regenerative healing is infeasible for humans … and (B) how to heal humans regeneratively

    SUBMISSION REQUIREMENTS

    I. Manuscripts must be submitted in pairs: “A” with “B” (unpaired submissions will not be reviewed).

    II. The EMET acceptance committee will weigh equally: (a) the joint rigor and generality of submissions “A” and “B,” (b) the clarity, consistency, and unity of the nomenclature adopted by “A” and “B,” (c) the combined contribution of “A” and “B” to mathematical understanding, innovation, and enterprise.

    III. On the final day of the conference, the John von Neumann A-and-B Prize (known also as the von Neumann “State of Sin” Prize) will be awarded to the A-and-B manuscript pair that (in the judgment of the EMET committee) has contributed most to advancing mathematical understanding, learning, innovation, and enterprise.

    The EMET conference thanks the Google, IBM, Microsoft, Intel, GE, and Bruker corporations for their contributions to this year’s A-and-B Student Travel fund, and to the A-and-B von Neumann Prize fund.

    • rjlipton permalink*
      June 12, 2011 1:40 pm

      Great comment. Wonderful.

    • June 13, 2011 2:34 pm

      Thanks, Dick! Since not a single idea in that post is original to me, I’ll mention three quotations that greatly influenced it (this time hopefully avoiding bad linebreaks). In the introduction to Michael Spivak’s A Comprehensive Introduction to Differential Geometry (1st edition, 1970) we read:

      “No one denies that modern definitions are clear, elegant, and precise; it’s just that it’s impossible to comprehend how anyone ever thought of them. What is intended [in this volume] is a presentation of the subject along the lines that its development might have followed … When modern terminology finally is introduced, it should be as an outgrowth of this (mythical) historical development.”

      This motivates the first two goals of the A≅B 2012 Conference: (1) synthesize a mythical-but-functional history for complexity theory, and (2) naturalize and conciliate the nomenclature of the A and B communities. In Edsger Dijkstra’s essay On the Cruelty of Really Teaching Computing Science we read:

      A program is no more than half a conjecture. The other half of the conjecture is the functional specification the program is supposed to satisfy. The programmer’s task is to present such complete conjectures as proven theorems.

      Dijkstra’s principle provides the romantic attraction between A and B: the rigorous verification provided by A’s theorems attracts B; the vigor of B’s practical applications attracts A. In Scott Aaronson’s recent Buhl Lecture Quantum Computing and the Limits of the Computable he asserts:

      There’s a question here, what exactly does it mean to solve an NP-complete problem? … If you can’t actually get the answer out, by some actual physical measurement that one could make, then it doesn’t count!

      The A≅B 2012 Conference further strengthens this criterion by replacing “actual” with “verifiable and/or reproducible.” Here the point is that B is attracted to A precisely to the extent that A’s methods provide such a high level of proof that B need take nothing on faith. As with all great romances, the A≅B 2012 Conference would (hopefully) witness stormy and passionate courtships … overcome impossible obstacles … and (in the long run) be fertile of great enterprises! :)

  15. GASARCH permalink
    June 15, 2011 5:45 pm

    Many math conferences use this model or something like it (AMS meetings, MAA meetings). I’ll mention one PRO that has not been mentioned: some schools only give you money if you are PRESENTING at the conference. If everyone is presenting, everyone gets funded to go.

    An odd CON: Math does not value conferences, so they do not have the problem of too many submissions (or maybe it’s that math is… HARD!). Since Comp Sci has the notion that conferences are very important to get into, you may have the problem of too many submissions.

    • June 15, 2011 10:13 pm

      Re “too many submissions”: my idea (see above) addresses this objection. I think the main problem is that aristocracies don’t want to try new things. Why would they?

