All ideas on sale—most 50% off

Asa Candler invented the coupon in 1895. A pharmacist by trade, Candler later purchased the Coca-Cola Company from its inventor, Dr. John Pemberton, also a pharmacist, and he devised coupons to help promote his new drink. He later became mayor of Atlanta.

Today Ken and I want to talk about a special July Fourth Sale on mathematical ideas. All ideas are 50% off, this holiday through the weekend.

We made up a coupon that is good for 50% off on all ideas this Fourth of July.

Since we usually charge zero for what we tell you, it is also 100% off. We hope that you will still value the ideas even though they are on sale. As is usually the case we make no warranty on how useful these ideas are—all sales are final. No refunds of any kind. We do however allow you to send in your own ideas as comments, so think of that as a “return-policy.”

July Fourth in the USA is our celebration of independence. It is an official holiday commemorating the adoption of the Declaration of Independence on July 4, 1776, which separated us from the UK.

There are many important traditions on this day: parades, fireworks, cook-outs, and more. One that I like since I grew up in the New York City area is the annual Nathan’s Hot Dog Eating Contest in Coney Island, which started in 1916. Their hot dogs are great, but I could only eat a couple.

Of course most nations have yearly celebrations like the 4th of July is for the US. In France it is the 14th of July, in India the 15th of August, and in Mexico it is the 16th of September—but the Mexicans celebrate Cinco de Mayo on May 5th. Ken’s family once celebrated Quebec’s National Day (Fête St.-Jean) in Quebec City on June 24, with fireworks on the Plains of Abraham battle site.

I found the following question and answer on the web:

• When did July 4th start?

• The 4th of July started on July 4th, 1776.

But it is wrong. It seems a joke to say that July 4 started with Julius Caesar, the man who gave us the calendar, but it is actually true: he even got July named after himself.

Okay not too funny, so let’s move on to our ideas that are on sale today.

## Ideas On Sale

${\bullet }$ Factoring: This is still one of my favorite problems. At the Princeton Turing meeting I asked Peter Sarnak directly what he believed, and he immediately answered: “it is obviously in polynomial time.” This will be hard to prove, of course, but there is an intermediate problem that he has raised. Consider

$\displaystyle \frac{1}{x}\sum_{k \le x} \mu(k)f(k),$

where ${f(k)}$ is an “easy-to-compute” integer function and ${\mu}$ is the Möbius function. Can we show that this tends to zero as ${x}$ goes to infinity? For functions ${f(k)}$ in ${\mathsf{AC}^{0}}$ this was resolved beautifully by Ben Green (see here). There are two ideas on sale:

• Try to extend his result to a larger class of functions. I believe that even ${\mathsf{AC}^{0}[2]}$ is still open.

• The other approach is to try to show that the sum does not go to zero for functions that are sufficiently powerful. Even for polynomial-time computable functions, we cannot show that it fails to tend to zero. Note that if ${f(k)}$ is the indicator function for the primes, then the expression goes to zero only as

$\displaystyle \frac{1}{\ln x}.$

But can we do better? This seems to be an approachable problem.
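As a quick empirical sanity check (my own sketch, not from the post), one can sieve the Möbius function and watch the average ${\frac{1}{x}\sum_{k \le x} \mu(k)f(k)}$ shrink for a very simple choice of ${f}$, here the low-order bit of ${k}$, which is trivially in ${\mathsf{AC}^{0}}$:

```python
def mobius_sieve(n):
    """Return a list mu with mu[k] = Mobius function of k, for 1 <= k <= n,
    via a linear sieve that tracks smallest prime factors."""
    mu = [0] * (n + 1)
    mu[1] = 1
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1  # i is prime: a single prime factor
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0   # p^2 divides i*p: not squarefree
                break
            mu[i * p] = -mu[i]  # one more distinct prime factor
    return mu

def mobius_average(x, f):
    """Compute (1/x) * sum_{k <= x} mu(k) * f(k)."""
    mu = mobius_sieve(x)
    return sum(mu[k] * f(k) for k in range(1, x + 1)) / x

# f(k) = low-order bit of k, a projection and hence as easy as it gets.
for x in (100, 1000, 10000):
    print(x, mobius_average(x, lambda k: k & 1))
```

The averages visibly decay, consistent with Möbius randomness for such easy functions, though of course a numerical table proves nothing about the limit.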

${\bullet }$ Lonely Runner: I have discussed this problem before here and previously. Can we extend the ideas there by using triples instead of pairs of runners? I believe that this should be worked out. Actually I would love someone to step up, grab the coupon for this problem, and help write a full paper.

${\bullet }$ Jacobi Circuits: These may just be a knock-off of something already on the market, but here goes. Fix a composite number ${m}$, and consider this variation on circuits with unbounded fan-in Boolean and modulo-${m}$ gates. The input wires to a gate ${g}$ may carry values -1 as well as 0 or 1, so that their sum ${a}$ can be negative. For Boolean gates -1 is the same as 1, but the mod-${m}$ gates compute the Jacobi symbol

$\displaystyle \left(\frac{a}{m}\right) = \left(\frac{a}{p_1}\right)^{\alpha_1}\cdots\left(\frac{a}{p_k}\right)^{\alpha_k}$

where ${m = p_1^{\alpha_1}\cdots p_k^{\alpha_k}}$ is the unique prime factorization of ${m}$, and for prime ${p}$, ${\left(\frac{a}{p}\right)}$ is the Legendre symbol giving 0 if ${a}$ is a multiple of the prime ${p}$, otherwise +1 or -1 according as ${a}$ is a quadratic residue mod ${p}$ or not.

For polynomial size and constant depth, are these just the same as ${\mathsf{ACC}^0[m]}$ circuits? They could be more general, but could also be more restricted. We haven’t looked at the warranty very closely—besides, it’s in French.
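To make the gate concrete, here is the textbook binary-reciprocity computation of the Jacobi symbol (a standard algorithm, not anything from the post); note that the symbol is only defined for odd positive ${m}$, a restriction the circuit model would inherit:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n >= 1, by the standard
    quadratic-reciprocity algorithm; a may be negative."""
    assert n >= 1 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:  # pull out factors of 2 using the value of (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a        # flip by quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # shared factor with n gives 0

# A mod-m gate whose input wires sum to a would output jacobi(a, m);
# e.g. with m = 15 and inputs summing to a = 7:
print(jacobi(7, 15))  # -1, since 7 is a square mod 3 but not mod 5
```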

Our last sale item needs its own section.

## Manic Monomials

${\bullet }$ Minimal Monomials: If you are given polynomials ${p_1,\dots,p_s}$ in variables ${x_1,\dots,x_n}$, can you combine them to make a monomial? By combine we mean finding polynomials ${\alpha_1,\dots,\alpha_s}$ to use as multipliers and forming the expression

$\displaystyle r = \alpha_1 p_1 + \alpha_2 p_2 + \cdots + \alpha_s p_s.$

When can you make ${r}$ a monomial, and how many different monomials ${r}$ can you make this way? Note that if ${r'}$ is a monomial multiple of ${r}$, then you can make ${r'}$ too by defining ${\alpha'_i = (r'/r)\alpha_i}$ for each ${i}$, so the real question is how many monomials ${r}$ you can make that are minimal with this property, meaning no proper divisor of ${r}$ can be made.

It follows from theorems of David Hilbert that the number ${\nu = \nu(p_1,\dots,p_s)}$ of minimal monomials is always finite. Actually there is a sense, namely measure in the real or complex space of coefficients of terms of the ${p_i}$, in which the number is almost always zero. Here are two important cases when ${s = n = m^2}$ and ${A}$ is the ${m \times m}$ matrix of variables:

• The ${p_i}$ are the ${(m-1)\times(m-1)}$ sub-determinants of ${A}$. Then ${\nu = 0}$. The same goes for the set of ${k \times k}$ sub-determinants, any ${k < m}$.

• The ${p_i}$ are the ${(m-1)\times(m-1)}$ sub-permanents of ${A}$. Then ${\nu}$ is ${\dots}$ not ${0}$. In fact it is gigantic as a function of ${m}$.

How gigantic? No one knows. For ${m = 4}$ Ken did a computation that found at least 2,196 minimal monomials. For ${m = 5}$ Ken estimated that the analogous computation with the ${4 \times 4}$ sub-permanents would take about 100 years on the hardware he had. So he ran with the ${3 \times 3}$ sub-permanents, and after 37-1/4 days he found about 128,000 minimal monomials. Fireworks indeed.

For a simple example with the ${3 \times 3}$ permanent, consider

$\displaystyle \begin{array}{|ccc|} a & b & c\\ d & e & f\\ g & h & i \end{array}\;,\quad p_1 = ae+bd,\quad p_2 = af+cd,\quad p_3 = bf+ce.$

Then ${\frac{1}{2} (f p_1 - e p_2 + d p_3)}$ yields the monomial ${bdf}$. Whereas for the sub-determinants ${q_1 = ae-bd}$, ${q_2 = af-cd}$, and ${q_3 = bf-ce}$, the analogous combination ${f q_1 - e q_2 + d q_3}$ cancels everything away, while flipping the sign of the middle multiplier merely returns ${2e\,q_2}$. Indeed it is completely impossible to get any monomials from sub-determinants, and the simple proof is that the sub-determinants form a Gröbner basis, and a reduced Gröbner basis always contains at least one monomial whenever the ideal contains any.
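The example can be checked mechanically. The sketch below (my own, using a bare-bones dict representation of multilinear polynomials rather than a computer-algebra system) confirms that the sub-permanent combination leaves exactly the monomial ${bdf}$, while the same multipliers annihilate the sub-determinant version:

```python
from collections import defaultdict

def mul(var, poly, coeff=1):
    """Multiply a multilinear polynomial {monomial-string: coeff}
    by coeff times the single variable `var`."""
    out = {}
    for m, c in poly.items():
        key = ''.join(sorted(m + var))  # letters of a monomial, kept sorted
        out[key] = out.get(key, 0) + coeff * c
    return out

def add(*polys):
    """Sum polynomials, dropping monomials whose coefficients cancel."""
    out = defaultdict(int)
    for p in polys:
        for m, c in p.items():
            out[m] += c
    return {m: c for m, c in out.items() if c != 0}

# 2x2 sub-permanents of the top two rows of [[a,b,c],[d,e,f],[g,h,i]]
p1, p2, p3 = {'ae': 1, 'bd': 1}, {'af': 1, 'cd': 1}, {'bf': 1, 'ce': 1}
print(add(mul('f', p1), mul('e', p2, -1), mul('d', p3)))  # {'bdf': 2}

# the corresponding 2x2 sub-determinants: everything cancels
q1, q2, q3 = {'ae': 1, 'bd': -1}, {'af': 1, 'cd': -1}, {'bf': 1, 'ce': -1}
print(add(mul('f', q1), mul('e', q2, -1), mul('d', q3)))  # {}
```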

Clearly this is a wild structural difference between the permanent and the determinant. But what can be proved with it? Ken thought ${\log \nu}$ could be a lower bound on arithmetical complexity, analogous to the log of solution-set sizes in Volker Strassen’s lower-bound theorem, which we covered here. But this was falsified by a family ${F_n}$ of constant-degree polynomials with linear-size circuits whose ${n}$ first partial derivatives combine to make ${2^{2^n}}$ minimal monomials. Hence this idea is part of our sale.

## Open Problems

Our sale items also have an optional service agreement with us. Does that influence you to buy them?

Will this July 4th also be remembered for Higgs fireworks?

Have a safe and fun Fourth if celebrating it, and if not have a safe day anyway.

## Comments
1. July 4, 2012 4:42 am

Nice post as usual.

Question: Do people consider computation with polynomial ideals over $\mathbb{C}$? This paper http://www.math.ucdavis.edu/~deloera/RECENT_WORK/jsc09_issac08.pdf suggests an algorithm to find an infeasibility certificate for a polynomial system. On the other hand, the Lasserre relaxation shows how to check whether a system is feasible. The combination of the two can guarantee an answer: feasible or not feasible. One can create an ideal that accepts only 0 and 1, e.g. via $x_i(x_i-1)$, plus additional equations that restrict the possible 0-1 solutions, encoding the problem in the ideal of polynomials. The advantage is that such a system tests the whole solution space at once, similar to quantum computers. Moreover, the intermediate results do not have to be unique and can be represented by the null space of possible monomial values. For example, is it possible to represent the quantum Fourier transform in this way, and use the resulting null space for “interference” through an additional set of polynomials? More specifically, is it possible to write a system of polynomials over $\mathbb{C}$ such that its ideal is the Fourier transform of the integer $n$, and then add polynomials that restrict the ideal to encode the factors of $n$?

2. July 6, 2012 6:46 am

Proving Möbius randomness for the Rudin–Shapiro sequence would already be a major breakthrough!
http://mathoverflow.net/questions/97261/mobius-randomness-of-the-rudin-shapiro-sequence

In more detail: let $a_n = \sum_i \epsilon_i\epsilon_{i+1}$, where $\epsilon_1,\epsilon_2,\dots$ are the digits in the binary expansion of $n$. Then $WS(n)$, the $n$th term of the Rudin–Shapiro sequence, is defined by
$WS(n)=(-1)^{a_n}.$

The question is:

Prove that $\sum_{i=0}^n WS(i) \mu (i) = o(n).$
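For concreteness, here is a small sketch (mine, following the definition in the comment above): $a_n$ counts the overlapping adjacent pairs of 1s in the binary digits of $n$.

```python
def rudin_shapiro(n):
    """WS(n) = (-1)^{a_n}, where a_n counts adjacent pairs of 1s
    (counted with overlaps) in the binary expansion of n."""
    b = bin(n)[2:]
    a_n = sum(1 for i in range(len(b) - 1) if b[i] == '1' and b[i + 1] == '1')
    return (-1) ** a_n

print([rudin_shapiro(n) for n in range(8)])  # [1, 1, 1, -1, 1, 1, -1, 1]
```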

3. Thomas Spencer permalink
July 22, 2012 7:30 pm

Re: the lonely runner. There are known extensions of the probabilistic method that might apply here. I believe that these methods are not well known, since I have not seen any other potential application, but they are described in “Generalized Bonferroni inequalities,” Journal of Applied Probability, 1994, pp. 409–417.

I no longer have easy access to this paper, so I do not remember exactly what it said. However, it dealt with the following situation. Let $Z=\sum_i X_i$, where the $X_i$ are 0-1 random variables that are not necessarily independent. Then let $S_1 = \sum_i \Pr(X_i=1)$, $S_2=\sum_{i < j} \Pr(X_i=1 \text{ and } X_j=1)$, and so on. We want to bound the probability that $Z=0$ in terms of the $S_k$. The probabilistic method says that if $S_1 < 1$ then $\Pr(Z=0) > 0$. It is also known that if $S_1-S_2+S_3 < 1$ then $\Pr(Z=0) > 0$. There are potentially stronger results in the cited paper.

4. P Devlin permalink
October 12, 2012 9:40 am

I believe that there is a very simple solution to the lonely runner conjecture which proves it to be true for any $n$, without making any assumptions concerning the runners’ speeds. I only recently came across the problem, but it is related to something I considered a number of years back. I am an engineer, not a mathematician, so while the explanation of the solution is as easily stated as the problem itself, I am not in a position to explain it in strictly mathematical formalism. I would be interested to share this idea; email details are provided.