
The Primes Strike Again

April 1, 2016


And lead to new kinds of cheating and ideas for our field


Faadosly Polir is the older brother of Lofa Polir. He is now working as a full-time investigative reporter for GLL.

Today Ken and I are publishing several of his recent investigations that you may find interesting.

You may wonder how GLL is able to afford a reporter on staff. We can’t. All our work we bring you at the same price we always have. Reporters, however, are burgeoning. Maybe it’s the political cycle. Maybe “Spotlight” winning the Oscar helped. Or maybe leprechauns have created a surplus—that would explain the political shenanigans we’re seeing.

Faadosly didn’t take long to earn his keep. He spotted an announcement in a deep-web online journal:

Second Hardy-Littlewood Conjecture (HL2) Proven.

We just covered the recent new and overwhelming evidence for the First Hardy-Littlewood Conjecture (HL1) about the distribution of the primes. Together, these conjectures have wonderful consequences, which unfortunately are already being turned for ill by some of our young peers.

Productivity

That’s right, we are saddened to report here at GLL that the hugely productive output of a few young theorists derives from a kind of cheating. To protect their identities we will refer to them collectively as X.

It is now known that X have been able to write so many beautiful papers over recent years owing to the discovery that ZF is actually inconsistent. Since ZF, basic set theory, is used to prove everything in computer science theory, it follows that anything can be proved. At my old university they are running supercomputers to do massive derivations in Vladimir Voevodsky’s formulation of Homotopy Type Theory (HoTT). Although HoTT embeds the constructive version of PA, this doesn’t prove PA’s consistency, let alone that of ZF; yet it can model derivations using statements like HL2 as axioms. Faadosly’s investigative reporting revealed what happened: the ZF proof of HL2 enabled HoTT to prove that a bug in Edward Nelson’s argument for inconsistency in PA goes away when lifted to ZF via known analytical facts about HL1.

This isn’t how X were caught. Rather, they were caught completely separately by running software called MORP, for “Measure of Research Productivity.” MORP is not simply a plagiarism detector like MOSS; rather, it applies equations determined by regression over many thousands of papers in the literature. The equations can determine when someone’s productivity is way too high. Our own cheating expert Ken reports:

There is over {5\sigma} confidence that the results of X could not be obtained by a human mathematician without deep computer assistance.
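Faadosly could not pry MORP’s actual equations loose, so here is only a toy sketch of the kind of outlier test such a tool might run. Everything in it, the function morp_zscore, the Poisson model of the corpus, and all the numbers, is our own invention for illustration:

    import numpy as np

    def morp_zscore(corpus_counts, candidate_count):
        """Toy stand-in for MORP: how many standard deviations a
        candidate's annual paper count sits above a reference corpus."""
        mu = np.mean(corpus_counts)
        sd = np.std(corpus_counts)
        return (candidate_count - mu) / sd

    # Hypothetical corpus: annual paper counts for 10,000 theorists,
    # modeled purely for illustration as Poisson with mean 3.
    rng = np.random.default_rng(2016)
    corpus = rng.poisson(lam=3.0, size=10_000)

    z = morp_zscore(corpus, candidate_count=40)
    print(f"z = {z:.1f} sigma; flagged: {z > 5.0}")

On this invented model a forty-paper year lands far beyond the {5\sigma} line, which is the flavor of Ken’s report above.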

MORP interfaces with a project to determine the likelihood of conjectures by analyzing historical proof attempts on them. By deep learning over attempts on past conjectures before they were proved, and by Monte-Carlo self-play of thousands of proof attempts with reinforcement learning once results are proven, the project team has achieved a success rate comparable to that of Google DeepMind’s AlphaGo, which defeated a top human champion at Go.

When run on the contents of Gerhard Woeginger’s {\mathsf{P}} versus {\mathsf{NP}} page, whose 107 proofs are evenly split between {\mathsf{P = NP}} and {\mathsf{P \neq NP}}, the output gives confidence comparable to IBM Watson’s in its “Jeopardy” answers that both are theorems.

Thus our field already had strong evidence for the inconsistency that we are now reporting. This is the first major inconsistency result in mathematics since 1901 and 1935, an unusually long gap of 81 years. Reached for comment, Voevodsky told Faadosly that he was not surprised: “After all, if ZF had been consistent then ZF + {\neg}Con(ZF) would also be consistent, and the situation we have now is practically speaking not much different from that.”

Reverse Prizes

This news has heightened discussions already long underway among prize committees at top organizations in math and theory about new policies for awarding their large prizes. We have already covered the feeling among many that paying large sums for past achievements (often long past) is an inefficient way to stimulate research and community interest, while at the same time the committees are being upstaged by startup upstarts giving way bigger prizes.

Doing away with prize money altogether was rejected as lame—the large sums are important for the public eye. The committees concluded it is vital to maintain the absolute values of the prizes. So the prizes will still be awarded as before by a blue-ribbon panel and carry large dollar amounts. The only thing different will be the sign. The winner will be required to pay the prize money to the organization. Thus a Turing Award winner will owe one million dollars to the ACM.

The motivation is two-fold. First, the prize money paid to the ACM will be used to hire much-needed extra administrative personnel. Second, it can be used for greater outreach. A generous payment plan is also being considered: payment of the prize money over periods as long as twenty years.

With help from Faadosly we have been running a private poll of past winners of the Turing and lesser prizes. Over 80% have said something equivalent to:

I would have been happy to pay the prize amount. It’s the Turing Award after all.

“I could always have sold my house or perhaps a kidney,” said one recent winner with a smile. There is talk that this new kind of prize may be especially appropriate for the new result on ZF, to cover some of the costs it will cause. Quantum may institute its own i-Prize, in which the real amount is indeterminate.

For now the committees have decided only to proceed on a provisional basis with the lesser prizes. Some are being given reversed names—for instance, nominations are now being accepted for the 2017 Thunk prize. The University of Michigan has proposed a reverse prize to benefit Ann Arbor’s innovative but consolidating Le Dog restaurants.


Zero and Negative Acceptance Rates

I was just recently at the Simons Institute in Berkeley, and of course several people from Stanford joined in. They discussed the recent New York Times article on Stanford’s new policy of a 0% admission rate, shooting its undergraduate program to the zenith of exclusivity. I mentioned that at Tech our enormously popular remote Master’s in CS program is allowing us to emulate Stanford as regards on-site admission. This in turn will free up resources currently spent on student housing and let us allocate more teaching staff to the online courses.

We began talking about similar ideas to increase the prestige of major conferences. Several conferences already have acceptance rates hovering below 10%, so going to 0% is not so big an affine step. Indeed, several thought even that too small a step and said we should go all out for negative acceptance rates (NAR).

Conferences using NAR work on a simple system. Anyone wishing to submit creates an item on EasyChair or a similar logging service the way we do today. The Program Committee then sends that person one of their current original papers to referee. The best {n\%} of the referee reports are then selected for oral presentation and double-blind publication in the proceedings, thereby achieving a {-n\%} acceptance rate. People making the very top submissions, who previously would win Best Paper awards, become co-authors of the papers they referee.
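To make the arithmetic of a {-n\%} rate concrete, here is a minimal sketch of the selection rule just described; the function nar_select, the referee names, and the scores are all hypothetical:

    def nar_select(reports, n_percent):
        """Pick the top n% of referee reports for presentation; by the
        NAR convention this counts as a -n% acceptance rate."""
        k = max(1, round(len(reports) * n_percent / 100))
        ranked = sorted(reports, key=lambda r: r["score"], reverse=True)
        return ranked[:k], -n_percent

    # Five hypothetical submitters, each refereeing one PC paper.
    reports = [
        {"referee": "Alice", "score": 9.1},
        {"referee": "Bob", "score": 7.4},
        {"referee": "Carol", "score": 8.8},
        {"referee": "Dave", "score": 5.0},
        {"referee": "Erin", "score": 6.3},
    ]
    accepted, rate = nar_select(reports, n_percent=40)
    print(f"acceptance rate: {rate}%")        # acceptance rate: -40%
    print([r["referee"] for r in accepted])   # ['Alice', 'Carol']

Under the proposal, the very top of this ranking would further be promoted to co-authorship of the papers they refereed.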

This is considered an excellent way to promote research by talented people—now being selected for a PC needn’t be a sacrifice of research time. It also galvanizes the conference reviewing process. Faadosly has learned that the upcoming STOC Business Meeting will include a proposal for its use by FOCS and STOC on a rotating basis in alternate years.

Open Problems

What do you think of these developments? Are they all plausible? Do you believe the conjectures of Godfrey Hardy and John Littlewood? After all, these two giants conjectured both of them.

Have a happy April Fool’s Day.


11 Comments
  1. April 1, 2016 11:22 am

    Dear Professor,

    We consider two new complexity classes, called equivalent-P and equivalent-NP, which have a close relation to the P versus NP problem. The class equivalent-P contains those languages that are ordered pairs of instances of two specific problems in P, such that the elements of each ordered pair have the same solution, that is, the same certificate. The class equivalent-NP has almost the same definition, but in each case we define the pair of languages explicitly in NP. We demonstrate that equivalent-P = equivalent-NP, but this has as a consequence that P and NP have the same cardinality. Since we already know that NP contains every language in P, we can conclude that P = NP.

    You can see the details of this proof in

    https://hal.archives-ouvertes.fr/hal-01296435/document

    Best Regards,
    Frank.

    • AdC (Alexandre de Castro) permalink
      April 1, 2016 4:56 pm

      Frank,

      P and NP will have the same cardinality, i.e., |P| = |NP|, provided |P| is less than or equal to |NP| and |P| is greater than or equal to |NP|.

      OK?

  2. April 1, 2016 12:38 pm

    Frank, using your technique, I have concluded that tomorrow is also April Fool’s day. The map f(n)=2n is a bijection from the natural numbers to the even natural numbers. Since the even naturals are a subset of the naturals, it must be that {even naturals} = {naturals}. So every number is even! Since n+1 and n are both even, their difference is also even, hence 1=2. We conclude that April 2nd is ALSO April Fool’s day. Remarkable.

    • April 1, 2016 5:08 pm

      Even naturals and naturals have different cardinalities, although there is a bijection between them (Cantor).

    • AdC (Alexandre de Castro) permalink
      April 1, 2016 5:25 pm

      This occurs because the inclusion map, f(n) = 2n, is injective, but behaves as if it were a bijection between sets with different cardinalities.

      • AdC permalink
        April 2, 2016 1:58 am

        interesting…
        since even numbers grow faster than natural numbers, there must be fewer even numbers than natural numbers, because the even numbers are a subset of the naturals.

        Is this in ZFC?

    • April 2, 2016 10:36 am

      My argument is incorrect. Thanks.

      • AdC permalink
        April 2, 2016 3:47 pm

        curiosity…
        If P is not equal to NP, bijections are noninvertible. Paradox?
        If P is equal to NP, bijections are invertible!!!

        Is this intuitive in set theory?

      • AdC permalink
        April 2, 2016 3:56 pm

        What is the notion of “equal to”?

        …a reducible problem

  3. kamouna permalink
    April 2, 2016 5:41 am

    Yes, the cardinality of P is equal to that of NP. But this does not imply P = NP, just as the cardinality of the natural numbers equals that of the even numbers, and yet clearly the set of naturals does not equal the set of evens.

    Best,

    Rafee Kamouna.

  4. April 5, 2016 4:41 am

    If somebody is interested… evidence that there is a permutation that is asymptotically hard to invert:

    https://doi.org/10.13140/RG.2.1.1155.8809
