Avoiding Monsters and Non-Monsters
A new insight into the structure of some exotic mathematical objects
Karl Weierstrass is often credited with the creation of modern analysis. In his quest for rigor and precision, he also created a shock when he presented his “monster” to the Berlin Academy in 1872.
Today I want to talk about the existence of strange and wonderful math objects—other monsters.
Weierstrass’s monster is defined as

$$f(x) = \sum_{n=0}^{\infty} a^{n} \cos(b^{n} \pi x),$$

where $0 < a < 1$, $b$ is a positive odd integer, and $ab > 1 + \frac{3}{2}\pi$. This function is continuous everywhere, but is differentiable nowhere.
The shock was that most mathematicians at the time thought that a continuous function would have to be differentiable at a significant number of points. Some even had tried to prove this. While Weierstrass was the first to publish this, it was apparently known to others as early as 1830 that such functions existed.
A picture of the function reveals its recursive structure, which is the key to seeing both continuity and non-differentiability.
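Such a picture is easy to reproduce by plotting a partial sum of the series. Here is a minimal Python sketch, assuming numpy and matplotlib are available; the choices $a = 1/2$ and $b = 13$ satisfy Weierstrass's conditions, since $ab = 6.5 > 1 + \frac{3}{2}\pi \approx 5.71$:

```python
import numpy as np
import matplotlib.pyplot as plt

def weierstrass(x, a=0.5, b=13, terms=50):
    """Partial sum of sum_n a^n * cos(b^n * pi * x).

    The tail beyond 50 terms is bounded by a^50 / (1 - a), which is
    far below plotting resolution, so the picture is faithful.
    """
    total = np.zeros_like(x)
    for n in range(terms):
        total += a**n * np.cos(b**n * np.pi * x)
    return total

x = np.linspace(-2.0, 2.0, 20000)
plt.plot(x, weierstrass(x), linewidth=0.4)
plt.title("Weierstrass function (partial sum, a=1/2, b=13)")
plt.show()
```

Zooming in on any tiny interval reproduces the same jagged profile; this self-similarity is what blocks differentiability, while the geometric decay of the terms keeps the sum continuous.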
There is a detailed and fun discussion of the reaction to this monster—see here. A passage, itself full of quotations, captures some of the initial thoughts about such functions:
Charles Hermite wrote, “I turn with terror and horror from this lamentable scourge of functions with no derivatives.” Henri Poincaré—who was the first to call such functions monsters—denounced Weierstrass’s work as “an outrage against common sense.” He claimed the functions were an arrogant distraction, and of little use to the subject.
Monster or not, after 1872 there came a swarm of similar monsters. Stefan Banach proved in 1931 that, in the sense of Baire category, most continuous functions have this pathology, although if most have some property then perhaps it cannot be called a pathology.
Monsters Are Everywhere
Not only are there many examples of such functions, not only are most continuous functions nowhere differentiable, but things are even worse. In analysis there are many strange and wonderful functions, for which it seems surprising that they even exist: space-filling curves, discontinuous additive functions, through to differentiable functions that are nowhere monotone. The last were discovered relatively recently—is 1958 recent?—and a short proof of their existence is here.
What we have learned in analysis—and the same lesson applies to many if not all parts of mathematics—is that monsters are often very common. The pathological case is often what happens for most objects of the type being considered. Thus while we often initially find one monster, many times someone follows up by proving that most objects—in some sense of “most”—actually have the undesired property.
There is a terrific survey article on monsters in the Bulletin of the AMS this past January. The title is: Linear subsets of nonlinear sets in topological vector spaces, by Luis Bernal-González, Daniel Pellegrino and Juan B. Seoane-Sepúlveda. It is here.
The key point of this paper is that not only are the monsters common, they often have algebraic structure. Say a subset $M$ of a vector space is lineable provided $M \cup \{0\}$ contains an infinite-dimensional linear subspace. The name is due to Vladimir Gurariy—it does not roll off the tongue as well as one might like. But it is an amazing insight. He proved in 1966, for example, that the set of continuous nowhere-differentiable functions on $[0,1]$ is lineable. I wonder what Weierstrass and also his critics would have thought of the fact that there is an infinite-dimensional linear subspace all of whose nonzero members are nowhere differentiable. Very cool—no?
By the way, it seems that the term “lineable” is not yet in Wikipedia. I guess it is still a fairly new concept. A Google search yields: Did you mean wiki pineapple?
I have thought some about what our versions of the above would be. What is our definition of “lineable?” We all know, thanks to Claude Shannon, that almost all Boolean functions are hard. But the following seems to be in the same spirit as the quoted theorems.
Consider the set of all Boolean functions on $n$ variables. There is a subset of these functions that is closed under addition, in which all the members, except the zero function, have Boolean complexity that grows super-polynomially in $n$.
Thus there is a linear subspace of high-complexity functions, excluding the zero function. The addition here is modulo 2, that is, bitwise XOR ($\oplus$). A more precise version is:
Let $d(n)$ be a monotone function so that $n/(d(n)\log n)$ tends to infinity. Then there is a linear space of Boolean functions on $n$ variables of dimension $d(n)$, so that every non-constant function in the space requires super-polynomial size circuits.
To prove this, let $m = \lfloor n/d(n) \rfloor$. Divide the $n$ Boolean variables into $d(n)$ blocks of $m$ variables each. For each block $i$ let $f_i$ be a Boolean function on that block that is hard, i.e., requires circuits of size on the order of $2^{m}/m$. Note this growth is super-polynomial in $n$. Extend the functions to all of the Boolean cube by defining them to be $0$ whenever a variable outside their block is $1$.
Now consider the linear space formed by all combinations of these Boolean functions. We claim that every non-trivial member of the space is hard. Let $g$ be such a member; we may write $g = f_1 \oplus f_{i_2} \oplus \cdots \oplus f_{i_k}$, renumbering the blocks if needed so that $f_1$ appears. Suppose that $g$ has a circuit of size $s$. Fix all variables outside the first block to $0$: the circuit must then compute $f_1$ on the first block, since on that block the other functions contribute zero. (That is, they contribute nothing except possibly at the all-zero assignment on that block, but this is just one table entry, which changes circuit size by at most a constant.) But then $s$ must be super-polynomial, and we are done.
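The proof is concrete enough to check by machine on toy sizes. Below is a hedged Python sketch: random truth tables stand in for the hard $f_i$, since a small example cannot, of course, exhibit super-polynomial hardness. The point is just to verify the blocking and XOR mechanics, including the one-table-entry caveat:

```python
import itertools
import random

n, d = 8, 2   # toy sizes: n variables split into d blocks
m = n // d    # variables per block

random.seed(1)
# Random truth tables stand in for the hard functions f_i; in the
# proof each f_i requires circuits of size on the order of 2^m / m.
block_tables = [
    {bits: random.randint(0, 1)
     for bits in itertools.product((0, 1), repeat=m)}
    for _ in range(d)
]

def f(i, x):
    """f_i extended to the whole cube: 0 whenever a variable outside block i is 1."""
    if any(x[:i * m]) or any(x[(i + 1) * m:]):
        return 0
    return block_tables[i][x[i * m:(i + 1) * m]]

def g(x):
    """A non-trivial member of the space: the XOR of all the f_i."""
    value = 0
    for i in range(d):
        value ^= f(i, x)
    return value

# Fix all variables outside block 0 to zero: g should agree with f_0
# except possibly at the all-zero assignment (one table entry).
mismatches = sum(
    g(bits + (0,) * (n - m)) != block_tables[0][bits]
    for bits in itertools.product((0, 1), repeat=m)
)
print("entries where g differs from f_0:", mismatches)  # prints 0 or 1
```

Any circuit for $g$ thus yields, after this restriction and an $O(1)$ patch for the one possibly corrupted entry, a circuit of essentially the same size for $f_0$, which is the contradiction the proof runs on.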
Are Non-Monsters Lurking?
A possible important difference going back to the Weierstrass monsters is that for continuous functions on the real line, non-monsters are always lurking nearby. Weierstrass himself proved that every continuous function on an interval can be approximated arbitrarily closely (in the sup-norm) by a polynomial, which is as smooth as can be. This was later generalized by Marshall Stone to arbitrary compact Hausdorff spaces and other sub-algebras besides polynomials. This is one reason why Fourier methods are so effective in functional analysis—non-monsters are also everywhere.
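The approximation is even effective: Bernstein's constructive proof of Weierstrass's theorem uses the polynomials $B_n(f)(x) = \sum_{k=0}^{n} f(k/n) \binom{n}{k} x^k (1-x)^{n-k}$, which converge uniformly to any continuous $f$ on $[0,1]$. A minimal Python sketch follows; the test function and evaluation grid are just illustrative:

```python
import math

def bernstein(f, n):
    """Degree-n Bernstein polynomial of f on [0, 1]."""
    coeffs = [f(k / n) * math.comb(n, k) for k in range(n + 1)]
    return lambda x: sum(
        c * x**k * (1 - x)**(n - k) for k, c in enumerate(coeffs)
    )

# A continuous but non-smooth function: the kink at x = 1/2 is the
# worst spot, yet the polynomials close in on it uniformly.
f = lambda x: abs(x - 0.5)
for n in (8, 64, 512):
    B = bernstein(f, n)
    err = max(abs(B(t / 1000) - f(t / 1000)) for t in range(1001))
    print(f"degree {n}: sup-norm error on a grid = {err:.4f}")
```

The slow convergence near the kink is typical for merely continuous functions; smoothness of $f$ buys faster rates.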
With Boolean complexity, however, there seems not to be a similar property of “nice” functions being close by in a relevant sense. At least we do not know of one. It is possible, however, that a kind of omnipresence of “ghosts” of non-monsters is responsible for much of the difficulty of proving lower bounds. There might be some way that hard-to-prove non-approximation masquerades as a kind of approximation, with a similar blanketing effect to Weierstrass’s polynomials.
On the first day of this blog, I posted about “Natural Proofs” and a bait-and-switch argument involving $\mathsf{SAT}$, and continued a month later. Note that one condition for a hardness predicate to avoid the natural-proofs barrier is that it hold only for a sparse set of Boolean functions. The bait-and-switch idea may make it difficult to avoid the illusion of approximation. But this also makes me wonder whether the idea of “lineable” may give enough structure to avoid this $\mathsf{SAT}$-based illusion.
I believe that the theorem above can in fact be improved to give a sub-ring of Boolean functions, meaning closed under multiplication (that is, bitwise AND, $\wedge$) as well as addition. Also the bound on the dimension in this theorem is probably far from optimal. Ultimately better would be to bump the hardness up to truly exponential, but the super-polynomial bound in the theorem is enough to be happy with for now.
How can we improve the theorem? For instance, what is the largest dimension of a subspace all of whose non-trivial functions are hard? And how useful would such structure be?