
Triple Century Post

July 8, 2011


Does continuous mathematics impact discrete complexity?

Godfrey Harold Hardy was an avid fan of cricket, besides being one of the 20th Century’s most influential mathematicians. During cricket season at Oxford and Cambridge, he would work on mathematics in the morning, and then spend a long afternoon watching a match at the university cricket ground.

Today is this blog’s 300th post. In cricket terms this means we have scored a triple century and are still not out. Until recently a score in the 300’s was the highest by any individual batsman in a Test Match. Brian Lara, however, scored 400 not out for the West Indies against England in 2004. This gives us our next objective.

Cricket was invented in England but has gone global. For instance, cricket is the true national sport of India’s 1.2 billion inhabitants, and of the surrounding 355 million in Pakistan, Bangladesh, and Sri Lanka besides. This puts baseball, the subject of our last post, in some perspective.

Test Matches run for five days, with three two-hour sessions per day. Whereas baseball games have nine discrete innings of three outs apiece, and last only a few hours, Test Matches have two innings of ten outs apiece, and thus feel continuous. Drawing on some of Hardy’s work, we discuss whether continuous mathematics may have any vital impact on {\mathsf{P} = \mathsf{NP}}, even though that question belongs to discrete mathematics.

I (Ken) have more material on baseball (handout used for a new-graduates welcoming outing), cricket, and how they compare (humorous) on my personal webpages.

Hardy and Cricket

Hardy himself was a good cricket batsman in his youth, and played various sports actively until a heart problem in 1939 at age 62. He also needed to be outdoors for sunshine, especially in winter. In his late forties he ranked the accomplishments he most desired to achieve in this order:

  1. Prove the Riemann Hypothesis.
  2. Make a match-winning score of 211 not-out in the last innings at The (Kennington) Oval, which traditionally hosts the last Test of a major series in England.
  3. Find an argument for the non-existence of God that would convince the general public.
  4. Be the first to climb Mount Everest.
  5. Lead a communist unified Britain and Germany.
  6. Assassinate Benito Mussolini.

Note that he did not ask for a triple century. At the time no one had scored one. The record then was 287 at the Sydney Cricket Ground by England’s Tip Foster in 1904, while the previous record had been 211 by Australia’s Billy Murdoch at The Oval in 1884. The number 211 is the first prime after 200, while being not-out (written 211*) would still make it better than Murdoch’s score.


Later the man billed as the “Babe Ruth of cricket,” Sir Donald Bradman, would hold the individual Test batting record by scoring 334 for Australia against England at Leeds in 1930. Hardy’s own reaction to Bradman’s feats, compared to England’s “Master” Sir John Hobbs, was:

Bradman is a whole class above any batsman who has ever lived: if Archimedes, Newton and Gauss remain in the Hobbs class, I have to admit the possibility of a class above them, which I find difficult to imagine. They had better be moved from now on into the Bradman class.

Bradman remained active in cricket administration and promotion and commentating well into the 1990’s. I (Ken) remember this version of a story that has become subject to Internet “phone tag”: In 1992 or 1993 as a guest on a Test telecast he was asked how much he thought he would score on a wicket that looked benign but had batsmen getting themselves out cheaply. He replied, “about 50.” Commentator: “Only 50?” Bradman: “Well, I am eighty-four years old!”

Hardy-Littlewood and {O,\Omega} Notation

The computer science Donald in the Bradman class is of course Donald Knuth. Knuth credited Hardy and his longtime collaborator John E. Littlewood with introducing the “Big-{\Omega}” asymptotic notation, though with the different meaning {f = \Omega(g) \iff f \neq o(g)}. The difference from the currently accepted meaning

\displaystyle  f = \Omega(g) \iff (\exists C > 0)(\exists x_0 > 0)(\forall x \geq x_0) Cf(x) \geq g(x)

is that {f} could oscillate. For instance, if {g} is positive (say {g(x) = x}) and {f(x) = g(x)\sin(x)}, then {f} meets Hardy-Littlewood’s definition of being {\Omega(g)} but not Knuth’s now-standard one.
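
To see the difference concretely, here is a minimal numerical sketch with {g(x) = x}; the sampling points are an arbitrary choice of ours, not anything from Hardy or Knuth:

```python
import math

# With g(x) = x and f(x) = x*sin(x): the ratio f/g = sin(x) keeps returning
# to 1, so f is not o(g) and meets Hardy-Littlewood's Omega(g).  But f is
# negative infinitely often, so no C > 0 gives C*f(x) >= g(x) for all large x,
# and f fails Knuth's Omega(g).
g = lambda x: x
f = lambda x: x * math.sin(x)

peaks = [math.pi / 2 + 2 * math.pi * k for k in (10, 100, 1000)]
troughs = [3 * math.pi / 2 + 2 * math.pi * k for k in (10, 100, 1000)]
print([f(x) / g(x) for x in peaks])    # ratios near +1: f is not o(g)
print([f(x) / g(x) for x in troughs])  # ratios near -1: no eventual C*f >= g
```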

One can similarly use trigonometric functions to define {f(x)} and {g(x)} for which none of {f = o(g)}, {g = o(f)}, and {f = \Theta(g)} hold, indeed neither {f = O(g)} nor {f = \Omega(g)}. This is summarized by saying that “trichotomy fails” for asymptotic comparisons. However, Hardy and Littlewood proved that if one limits attention to real functions defined by arithmetic, exponentiation, and logarithms, then trichotomy does hold. Here is a “modern computer science-y” way of stating their result:

Theorem 1
Let {f} and {g} be functions of {x} generated from { \mathsf{Fn} } in the grammar:

\displaystyle  \mathsf{Fn} ::= const | x | \mathsf{Fn}+\mathsf{Fn} | \mathsf{Fn} - \mathsf{Fn} | \mathsf{Fn}*\mathsf{Fn} | \mathsf{Fn}/\mathsf{Fn} | \mathsf{Fn}^{\mathsf{Fn}} | \log(\mathsf{Fn}).

Then either {f = o(g)}, {f = \Theta(g)}, or {g = o(f)}.

This says that almost all of the functions we use to define complexity classes fall into a nice total linear ordering. Thus for these so-called “log-exp functions” or “Hardy-Littlewood” functions, there is no difference between their {\Omega} definition and Knuth’s.

We would like to find a nice, accessible proof of this theorem. As an aside, Dick once used this theorem with his co-author Rich DeMillo in a paper on independence issues surrounding the {\mathsf{P}=\mathsf{NP}} question.
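
Even without a short proof in hand, the trichotomy can be checked mechanically for any two particular log-exp functions: computer algebra systems decide such limits by the Gruntz algorithm, which is designed for exactly this kind of exp-log expression. A minimal sketch using SymPy; the helper name and the sample functions are our own illustration:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def compare(f, g):
    """Classify the asymptotic relation of two log-exp functions via lim f/g."""
    L = sp.limit(f / g, x, sp.oo)
    if L == 0:
        return "f = o(g)"
    if L == sp.oo:
        return "g = o(f)"
    return "f = Theta(g)"

print(compare(x * sp.log(x), x ** sp.Rational(3, 2)))  # f = o(g)
print(compare(2 ** x + x ** 10, 2 ** x))               # f = Theta(g)
print(compare(x ** x, sp.exp(x ** 2)))                 # f = o(g)
```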

From Analysis to Primes

The Riemann Hypothesis itself is a statement about a continuous (indeed analytic) function, the Riemann zeta function, but it has major implications for prime numbers, which are discrete. The Prime Number Theorem (PNT) was first proved via continuous analysis, before an “elementary” proof was found. The PNT states that the number {\pi(x)} of primes at most the real number {x} satisfies {\lim_{x\rightarrow\infty} (\pi(x)\ln x)/x = 1}.
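
Here is a quick numerical look at how slowly that limit is approached, a minimal sketch using SymPy’s primepi for {\pi(x)}; the sample points are arbitrary:

```python
import math
from sympy import primepi

# The PNT says pi(x) * ln(x) / x -> 1, but the convergence is quite slow.
for x in (10**3, 10**4, 10**5, 10**6):
    pi_x = int(primepi(x))
    print(x, pi_x, pi_x * math.log(x) / x)
```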

Hardy and Littlewood themselves made two notable conjectures about the distribution of prime numbers. For the first, call a set {H} of {k} distinct natural numbers “nice” if for every prime number {p}, the number {v_H(p)} of distinct values of {h} mod {p} for {h \in H} is at most {p-1}. Define

\displaystyle  \pi_H(x) = |\{n \leq x: (\forall h \in H)[n + h \textrm{ is prime}]\}|.

Then when {H = \{0\}}, {\pi_H(x)} is just {\pi(x)}. Thus the following conjecture generalizes the PNT:

Conjecture 1
For every nice {H} of size {k},

\displaystyle  \lim_{x \rightarrow\infty}\frac{\pi_H(x)(\ln x)^k}{x S_H(x)} = 1,

where

\displaystyle  S_H(x) = \prod_{\textrm{primes }p}\left(1 - \frac{v_H(p)}{p}\right)\left(\frac{p}{p-1}\right)^k.
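
For {H = \{0,2\}} this is a quantitative form of the twin-prime conjecture, and the product {S_H} works out to about {1.32}, twice the classical twin-prime constant. Here is a rough brute-force check of the conjectured main term; the truncation cutoff and the helper names below are our own choices:

```python
import math
from sympy import isprime, primerange

def pi_H(x, H):
    """Count n <= x such that n + h is prime for every h in H."""
    return sum(1 for n in range(1, x + 1) if all(isprime(n + h) for h in H))

def singular_series(H, cutoff=10**5):
    """Truncated product over primes p of (1 - v_H(p)/p) * (p/(p-1))**k."""
    k, S = len(H), 1.0
    for p in primerange(2, cutoff):
        v = len({h % p for h in H})
        S *= (1 - v / p) * (p / (p - 1)) ** k
    return S

H, x = (0, 2), 10**5   # H = {0, 2}: pairs of twin primes
predicted = singular_series(H) * x / math.log(x) ** len(H)
print(pi_H(x, H), round(predicted))   # brute-force count vs. conjectured main term
```

At this range the brute-force count still exceeds the conjectured main term by roughly twenty percent; as with the PNT above, the convergence is slow.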

The second conjecture is simpler: it is just that for any {x} and {y}, there are never more primes in the interval from {x+1} to {x+y} than in the interval from {1} to {y}. In symbols, this says

Conjecture 2
For all natural numbers {x} and {y}, {\pi(x+y) - \pi(x) \leq \pi(y)}.

This seems obvious, since the sequence of prime numbers thins out. However, it contradicts certain cases of the first conjecture. It seems odd that Hardy-Littlewood would have advanced two mutually contradictory conjectures, but perhaps one came from Hardy while the other came from Littlewood.

From Primes to Complexity

Further versions of the Riemann Hypothesis have even more direct implications in computational complexity. The generalized Riemann hypothesis (GRH) states that for any group character {\chi} on {\mathbb{Z}_k} the corresponding complex L-function

\displaystyle  L(\chi,s) = \sum_{n = 1}^{\infty} \frac{\chi(n)}{n^s}

has all of its nontrivial zeroes on the line {\mathsf{Re}(s) = 1/2}. Here {\chi(a)\chi(b) = \chi(ab)}, {\chi(n) = \chi(n \text{ mod } k)}, and {\mathsf{gcd}(n,k) > 1 \implies \chi(n) = 0}. (We have talked about group characters on this blog before. GRH is different from the extended Riemann hypothesis (ERH) which involves zeta functions over number fields.)
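
To make the series concrete, here is a minimal sketch for the nonprincipal character mod 4, whose L-value at {s = 1} is the Leibniz series for {\pi/4}. This only illustrates the definition at a real point, not the analytic continuation where the nontrivial zeroes live; the truncation length is an arbitrary choice:

```python
import math

def chi4(n):
    """Nonprincipal Dirichlet character mod 4: values 1, 0, -1, 0 repeating."""
    return {0: 0, 1: 1, 2: 0, 3: -1}[n % 4]

def L(chi, s, terms=10**6):
    """Truncated Dirichlet L-series: sum over n of chi(n) / n^s."""
    return sum(chi(n) / n ** s for n in range(1, terms + 1))

print(L(chi4, 1), math.pi / 4)   # the two agree to roughly six decimal places
```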

Pascal Koiran proved (note erratum) that GRH places the problem of whether a set of polynomial equations has a solution over the complex numbers into the second level of the polynomial hierarchy, indeed into the Arthur-Merlin class {\mathsf{AM}}.

Koiran’s main GRH-dependent technique intuitively transfers solvability over {\mathbb{C}} into statements about solvability modulo many primes {p}. Let {L} bound the binary length of coefficients in the finite set {S} of {s}-many polynomial equations in {n}-many variables, and let {d} bound their degree. Set

\displaystyle  A(c) = d^{cn}s(\lceil\log_2 s\rceil + L),\qquad X(b,e) = L^b 2^{(n\log_2(nd))^e}.

Theorem 2
For any finite set {S} of polynomial equations there exist {b,c,e > 0} such that:

  • if the equations are not solvable over {\mathbb{C}}, then the number of primes {p} such that they are solvable over {\mathbb{Z}_p} is at most {A(c)};
  • if they are solvable over {\mathbb{C}}, then the number of primes {p \leq X(b,e)} such that they are solvable over {\mathbb{Z}_p} is at least {8A(c)(3 + \log_2 A(c))}.
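
As a toy illustration of this density gap (nothing like Koiran’s actual bounds, and the sample systems are our own): {x^2+1=0} is solvable over {\mathbb{C}} and turns out to be solvable modulo roughly half of all primes, whereas adding the equation {x-1=0} makes the system unsolvable over {\mathbb{C}} and solvable modulo only the prime {2}.

```python
from itertools import product
from sympy import primerange

def solvable_mod_p(polys, p, n_vars):
    """Brute force: do the given polynomials have a common root over Z_p?"""
    return any(all(f(*pt) % p == 0 for f in polys)
               for pt in product(range(p), repeat=n_vars))

solvable_over_C = [lambda x: x**2 + 1]                     # root x = i over C
unsolvable_over_C = [lambda x: x**2 + 1, lambda x: x - 1]  # no common root over C

primes = list(primerange(2, 500))
for polys in (solvable_over_C, unsolvable_over_C):
    good = sum(solvable_mod_p(polys, p, 1) for p in primes)
    print(good, "of", len(primes), "primes")
```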

From Continuous to Discrete

We view Koiran’s theorem as a way of translating properties of continuous mathematics into discrete domains. The theory of poles and winding numbers that is basic to complex analysis is one of the ways to connect the discrete and the continuous, while Knuth himself co-authored the book Concrete Mathematics which marries them. Three other avenues by which people are applying tools from continuous mathematics to do research on computational complexity are:

  • The study of computing real functions in the usual discrete bit model of computation, as surveyed by Mark Braverman and Stephen Cook, a line of work pioneered by Turing himself.
  • The algebraic model of computation over the real or complex numbers, jump-started by the 1989 Bulletin of the AMS paper of Lenore Blum, Michael Shub, and Steve Smale (BSS).
  • The information-based complexity framework of Joseph Traub and Henryk Woźniakowski, in which information about inputs is partial, contaminated, and costly.

We would be interested to know of other such avenues and opinions about them. There is also some relation to the subject of “Algebrization”.

Open Problems

Is it cricket to try to get discrete-complexity consequences out of continuous mathematics?

Are there other striking applications of Koiran’s density-gap phenomenon?

What would Hardy have accomplished in computational complexity?

[fixed typo in L-function, clarified Braverman-Cook reference, and added BSS paper]

25 Comments
  1. July 8, 2011 4:33 pm

    To pursue your analogy, is there any way, in principle, that you could lose your wicket? The only thing I can think of is that someone sends in a comment with a quick proof that P does not equal NP, and that after an initial flurry of interest you decide that the blog has lost its raison d’etre. But something tells me you’ll be getting that quadruple century …

  2. Anonymous permalink
    July 8, 2011 6:44 pm

    Much of what one does in continuous mathematics is manipulations of discrete objects. For example, for many analytic estimates that involve discrete objects (such as a sum over the elements of a set, weighted by integers), whenever one writes down an inequality (Quantity A) ≤ (Quantity B), one can often read it as saying “there is an injection f from a set S into a set T, where |S| = (Quantity A) and |T| = (Quantity B),” where f can be explicitly written down by tracing through the proof. Furthermore, many “analytic formulas”, having big-oh error terms, can be converted into algorithms: often, a formula of the type (Quantity A) = (Quantity B) + O(Quantity C) can be interpreted as the algorithm, “To compute Quantity A, first compute Quantity B, then the O(Quantity C) “error”, and lastly add them together”; in many cases the “error” and Quantity B are much easier to compute than the “naive algorithm” for computing Quantity A (by “naive algorithm”, I mean the defining algorithm — e.g. the “naive algorithm” to compute pi(x) is to enumerate, one by one, the primes up to x).

    I do not think that the P vs NP problem will be solved by any of the methods written on this blog. It will be solved by someone unexpected… trying ideas orthogonal to all of these. No amount of reading up on this or that nifty trick or on the opinions of “big thinkers”, or writing “Will such and so crack P vs NP?”, will get you there. Only luck, profound insight, and a lot of hard work will.

    • Elandalussi permalink
      July 9, 2011 9:47 am

      “I do not think that the P vs NP problem will be solved by any of the methods written on this blog. It will be solved by someone unexpected… trying ideas orthogonal to all of these. No amount of reading up on this or that nifty trick or on the opinions of “big thinkers”, or writing “Will such and so crack P vs NP?”, will get you there. Only luck, profound insight, and a lot of hard work will.”

      Yes. Indeed! It will not. The “lot of hard work” is mainly the formalisation of new & simple concepts about Computation.
      IMHO, Jean-Yves Girard with his Geometry of Interaction is much closer than anyone else. Unfortunately his formalisation is too technical & lacks new concepts.

      Every one of us trying to get an answer about P versus NP is like a fly going fast at the glass & knocking it hard, and then trying again to fly through the window, forgetting what happened earlier, again & again … The Fly Syndrome. Why not calm down and say: “yes, there is an obstacle we don’t see, because today’s Theory of Computation makes it INVISIBLE!”

      It lies down in the very beginning of the formalisation of what we call “computation”. Through P vs NP Nature is answering the same response it gave some decades ago …

      • July 9, 2011 5:15 pm

        Elandalussi, what you are talking about has been described in Humberto Maturana and Francisco Varela’s Autopoiesis and Cognition in a beautiful and much-quoted phrase:

        … the always gaping trap of not saying anything new because the language does not permit it.

        My database has multiple quotations upon this theme, of which among the darkest is Ahab’s soliloquy that begins “All visible objects, man, are but as pasteboard masks …” and among the most plainly stated and practically useful is Spivak’s

        There are good reasons why theorems should all be easy and the definitions hard … Definitions serve a twofold purpose: they are rigorous replacements for vague notions, and machinery for elegant proofs … Stokes’ Theorem shares three important attributes with many fully evolved major theorems: (1) It is trivial. (2) It is trivial because the terms appearing in it have been properly defined. (3) It has significant consequences.

      • Anonymous permalink
        July 9, 2011 7:05 pm

        Elandalussi: I agree that we need new and simple concepts and structures about computation; and it is hard to see where to look for them. But good definitions/structures alone will probably not be enough to see how to crack the P vs NP problem. The resolutions of many great problems do often require new definitions and structures; but they also often require a lot of really hard, technical grinding through identities and calculations (e.g. the resolution of the Poincaré Conjecture and the proof of Fermat’s Last Theorem [by Wiles]). Even in algebra, a subject full of many deep definitions and concepts, most papers require lots of formal manipulations to prove their theorems, and these often aren’t just simple exercises in “diagram chasing”.

        I think that these new definitions/structures will not come wholly from any existing sub-field of mathematics, but rather will need to be invented “from scratch” (as perhaps Girard has done). There is this tendency that some people have to assume that some particular deep theory they know about must somehow have a bearing on whatever deep problem they come in contact with, no matter how unrelated the theory and problem seem to be. It is not unlike the quantum consciousness fallacy: consciousness is a mystery (to some), and quantum physics is deep and what it “means” seems mysterious; and therefore these two mysteries must have something to do with one another.

      • Serge Ganachaud permalink
        July 11, 2011 6:10 pm

        If we knew what a computation really is, Church’s thesis would be a theorem.

      • Serge Ganachaud permalink
        July 23, 2011 12:14 pm

        “Every one of us trying to get an answer about P versus NP is like a fly going fast at the glass & knocking it hard, and then trying again to fly through the window, forgetting what happened earlier, again & again … The Fly Syndrome. Why not calm down and say: “yes, there is an obstacle we don’t see, because today’s Theory of Computation makes it INVISIBLE!”

        I completely agree. Not only does the P!=NP conjecture imply the existence of trapdoor functions, but it also behaves like a trapdoor to the brains who try to prove it!

      • July 23, 2011 12:32 pm

        Serge, on a later Gödel’s Lost Letter and P=NP topic titled “Make Your Own World”, Tim Gowers posted a short three-paragraph essay on P vs NP that I admire greatly and recommend highly … very seldom in my experience has any comparably brief essay so enjoyably inspired new reflections on these classic problems.

  3. Pascal Koiran permalink
    July 8, 2011 8:27 pm

    Thanks for highlighting my work, but may I point out that the survey by Braverman and Cook is not about the Blum-Shub-Smale model. Their survey is about an approach to real analysis based on the usual (discrete) bit model of computation. This line of work was pioneered by none other than Turing himself.

    The BSS model is an algebraic (as opposed to discrete) model of computation: the basic entities are field elements and field operations, and the field may well be infinite (think of the real or complex numbers).
    In my opinion this model is especially well suited to the study of the complexity of problems from computational algebraic geometry, such as the feasibility of systems of polynomial equations in several variables.

    • July 8, 2011 11:27 pm

      Thanks Pascal—didn’t yet get time to look into your later papers for technical improvements (owing to events in your country:), so feel welcome to mention them… I added a link to the 1989 BSS paper in Bull. AMS, at “jump-started”, which I was careful to write rather than “originated”.

      • Pascal Koiran permalink
        July 9, 2011 9:02 am

        Thanks Kenneth for the update. What’s going on in my country?
        The view that these two models (the bit model and the Blum-Shub-Smale model) are competing seems to be commonplace, but my own opinion is different. The reason is that these models are best-suited to address very different types of questions.
        Let me give a couple of examples to make this point.

        Bit model questions:

        a) for a real number x, how many bit operations do we need to compute an approximation of x to some given accuracy?

        b) for a continuous function f from the reals to the reals, how many bit operations do we need to compute f(x) to some given accuracy, given (oracle) access to approximations of x?

        BSS model questions:

        1) is it possible to decide with a polynomial number of arithmetic operations and comparisons whether a degree 4 real polynomial (in many variables) has a real root?

        2) given a system of polynomial equations over the complex numbers, is it possible to compute the dimension of the solution set with a polynomial number of arithmetic operations and equality tests?

        You can see that questions a-b and 1-2 are of a very different nature.
        So I view these models as complementary rather than competing.

  4. Anonymous permalink
    July 8, 2011 8:55 pm

    Some of what I wrote didn’t parse. The “(Quantity A) T” was actually “(Quantity A) is less than or equal to (Quantity B)”.

  5. Luonnos permalink
    July 8, 2011 9:31 pm

    I think the n^2 in your L-function was supposed to be n^s.

  6. Paul permalink
    July 9, 2011 9:17 am

    Until now I had only heard the Ty Cobb version:

    “You’ve got to remember – I’m seventy-three.” On why he would only hit .300 against today’s pitchers.

    http://www.baseball-almanac.com/quotes/quocobb.shtml

  7. Cristopher Moore permalink
    July 10, 2011 9:36 am

    This isn’t really about computational complexity, but some of our favorite discrete algorithms have continuous roots. For instance, one way to view Karmarkar’s interior-point method is as Newton’s method for minimizing a kind of potential energy in the polytope, or even as discrete-time snapshots of a continuous flow. See

    Gill et al., Mathematical Programming 36 (1986) 183-209
    Anstreicher, Mathematical Programming 41 (1988) 367-373

    Another nice use of continuous methods is the technique of differential equations for analyzing the behavior of greedy algorithms on random graphs, random Boolean formulas, etc. To my knowledge, this started with Karp and Sipser’s algorithm for Max Matching and Max Independent Set in random graphs.

  8. Pascal Koiran permalink
    July 10, 2011 4:57 pm

    > Are there other striking applications of Koiran’s density-gap phenomenon?

    Peter Burgisser has shown that a collapse of complexity classes in Valiant’s model (namely, VP=VNP) implies a similar collapse for boolean complexity classes.
    This is covered in his book: Completeness and reduction in algebraic complexity theory.

  9. July 11, 2011 10:39 am

    Before this wonderful post is superseded, please let me join the chorus of appreciation and thanks for Gödel’s Lost Letter and {P=NP}‘s “triple century.” :)

    Regarding Gödel’s Lost Letter’s coming century, the following Juris Hartmanis phrase provides a “burning arrow” template for future columns:

    Results about the complexity of computations change quite radically if we consider [insert burning arrow here].

    Hartmanis’ burning arrow is (of course) “properties of computations which can be proven formally.”   Braverman and Cook’s burning arrow is “computational models of functions over the [bit-model] reals.”   Blum, Shub, and Smale’s burning arrow is to “present a model of computation over the [continuum] reals” and thereby “bring the theory of computation into the domain of analysis, geometry and topology.”   Traub and Wozniakowski’s burning arrow is that “information is partial, contaminated, and it costs.”

    A major attraction of Gödel’s Lost Letter for systems engineers is that we dearly cherish all four of these burning arrows, as follows: (1) computations are useful only if their correctness can be validated (Hartmanis’ arrow); (2) numerical simulations are essential to the engineering of large systems (Braverman & Cook’s arrow); (3) we engineers understand our simulations mainly via the toolset of analysis, geometry and topology (Blum, Shub & Smale’s arrow); and (4) in practice we systems engineers more commonly sample noisy trajectories of thermalized/turbulent/chaotic systems than we compute deterministic functions (Traub & Wozniakowski’s arrow). It’s not uncommon nowadays for research groups in synthetic biology and/or materials science to generate terabytes of simulated classical and/or quantum trajectories on a daily basis. That is why we engineers enthusiastically lash all four complexity-theoretic arrows together and fire them at practical problems … this computational strategy proves to be both fun and exceedingly useful.

    As engineers devote more-and-more computational power to sampling trajectories, our attention is drawn increasingly to the practical verification and validation of sampling computations. In this respect we stand in need of … more Gödel’s Lost Letter arrows! Consider for example the following dialog associated to Braverman & Cook’s discussion of the extended Church-Turing thesis (ECT):

    Alice:    Here is my experimental sample of truly random binary digits computed by my (one-photon) linear optical network.

    Bob:    Here is my simulated sample of pseudo-random digits computed by a classical Turing machine.

    Alice:    Sorry Bob … your sample is algorithmically compressible, and mine isn’t. Therefore my experimental data demonstrate that the ECT is false!

    In the absence of any formal association of verification to sampling, and under the assumption that the state-space of quantum mechanics is a linear Hilbert space of exponentially great dimension, Alice’s reasoning is impeccable. So should complexity theorists regard the ECT as having already been formally disproved … decades ago? Formally “yes,” and yet we have the practical intuition that Alice’s reasoning is dubious.

    Thus, a future Gödel’s Lost Letter on the general topic of validation in the context of computational sampling would be very welcome. For engineers such topics are both enjoyable and have great practical importance. The more burning arrows the better, and so thanks are given for the wonderful burning arrows that Gödel’s Lost Letter columns have already shot! :)

  10. July 11, 2011 12:36 pm

    A challenge for Gowers (he was asking for one in some previous post)! As a crank, I promised not to follow rules, except for one – a counter-example is a reason for me to shut up. As Dick was asking for something small, here are BASIC (as programming language) arguments (as everybody knows I cannot code in “math assembly”, and I think there are enough translators out there, if needed). As I said, I need neither recognition nor prizes (they are for assembly coders); as a user, I see a lack of attention to a particular interface, which I want to draw to the attention of serious math coders. I would put it into polymath if I had enough knowledge to communicate – the math here should be elementary, according to modern standards.

    The claim is: \sum \limits_k (x_k^2-1)^2 +\alpha x^T Q x + t is sum of squares (sos) of polynomials when non-negative (psd) for small enough \alpha.

    Proposition 1. The claim is true for \alpha=0 – obvious.
    Proposition 2. The claim is true for q_{1, 2}= 1, q_{i,j \neq (1,2)}=0 – the minimum for x^*_{k \neq 1,2} = \pm 1, and is independent from x_1, x_2. Due to Hilbert (x_1^2-1)^2 +(x_2^2-1)^2 +2 \alpha x_1 x_2 + t is sos when psd.

    Weighting decompositions of Proposition 2 with w_{i,k} : \sum_{i,k} w_{i,k}= 1 gives a SOS representation of polynomial in the claim for small enough \alpha.

    PS. polynomial in the claim can be used to encode partition problem. Use SDP to solve for the minimum (lower bound, this will be good enough, throwing away terms with smaller then decision precision w_{i,k}), and check that you need polynomial precision for the decision (since the problem is polynomial, there is no inversions, and Q is integral, there is no place for exponential precision to appear).

    Sincerely yours,
    Crackpot Trisector.

  11. July 11, 2011 5:13 pm

    Regarding the transfer of results or tools from the continuous to the discrete, one little favourite of mine (from when I was doing lower bounds) is the “topological method” used to obtain lower bounds on algebraic computation trees: Ben-Or did it for computation over reals; then Yao refined the method so that it applies to integer data (still using the topology in the real space).

  12. Srivatsan Narayanan permalink
    July 20, 2011 3:49 pm

    I recently came across Michael Freedman’s paper “Complexity classes as mathematical axioms” ( http://arxiv.org/pdf/0810.0033v4 ) through StackExchange. It assumes the separation “axiom” NP \neq P^{\# P} and derives a knot-theoretic theorem that–I am quoting the Introduction here–sounds both “very plausible” and “impossible to prove”. I am surprised this paper is not as well-known as I expected it to be. But I must confess I unfortunately do not understand even elementary knot theory and hence I didn’t get much of the technical aspects.

    While your post is about applying classical tools to understand complexity better, this paper takes the seemingly less popular converse approach. Indeed, the possibility that TCS can “give back” to classical, continuous math is as exciting to me as resolving the difficult open questions in complexity. Do you know of other such “TCS to classical math” theorems that may be more accessible for computer science audience?

Trackbacks

  1. Weekly Picks « Mathblogging.org — the Blog
  2. It’s Still the Slime Mold Story « Gödel’s Lost Letter and P=NP
  3. Greg Kuperberg: It is in NP to Tell if a Knot is Knotted! (under GRH!) | Combinatorics and more
