# Avoiding Monsters and Non-Monsters

*A new insight into the structure of some exotic mathematical objects*

Karl Weierstrass is often credited with the creation of modern analysis. In his quest for rigor and precision, he also created a shock when he presented his “monster” to the Berlin Academy in 1872.

Today I want to talk about the existence of strange and wonderful math objects—other monsters.

Weierstrass’s monster is defined as

$f(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x),$

where $0 < a < 1$, $b$ is any odd integer, and $ab > 1 + \frac{3}{2}\pi$. This function is continuous everywhere, but is differentiable nowhere.
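Although the full series has no derivative anywhere, its partial sums are easy to compute, and plotting them shows the self-similar wiggles appearing at ever finer scales. A minimal sketch (the parameter values $a = 0.5$, $b = 13$ are my choice; they satisfy $ab > 1 + \frac{3}{2}\pi$):

```python
import math

def weierstrass(x, a=0.5, b=13, terms=40):
    """Partial sum of Weierstrass's series sum a^n cos(b^n pi x).

    The tail is bounded by a^terms / (1 - a), so 40 terms give
    roughly 12 decimal digits of the true value for a = 0.5.
    """
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# At x = 0 every cosine is 1, so the sum is the geometric series 1/(1-a) = 2.
print(weierstrass(0.0))
```

Evaluating the partial sum on a fine grid and plotting it is enough to see the recursive structure mentioned below.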

The shock was that most mathematicians at the time thought that a continuous function would have to be differentiable at a significant number of points. Some even had tried to prove this. While Weierstrass was the first to publish this, it was apparently known to others as early as 1830 that such functions existed.

This is a picture of the function—note its recursive structure, which is the key to seeing both continuity and non-differentiability:

There is a detailed and fun discussion of the reaction to this monster—see here. A quote with quotes shows some of the initial thoughts about such functions:

Charles Hermite wrote, “I turn with terror and horror from this lamentable scourge of functions with no derivatives.” Henri Poincaré—who was the first to call such functions monsters—denounced Weierstrass’s work as “an outrage against common sense.” He claimed the functions were an arrogant distraction, and of little use to the subject.

Monster or not, after 1872 there came a swarm of similar monsters. Stefan Banach proved in 1931 a sense—that of Baire category—in which *most* continuous functions have this pathology, although if *most* functions have some property then perhaps it cannot be called a pathology.

## Monsters Are Everywhere

Not only are there many examples of such functions, not only are most continuous functions nowhere differentiable, but things are even worse. In analysis there are many strange and wonderful functions, for which it seems surprising that they even exist: space-filling curves, discontinuous additive functions, through to differentiable functions that are nowhere monotone. The last were discovered relatively recently—is 1958 recent?—and a short proof of their existence is here.

What we have learned in analysis—and the same lesson applies to many if not all parts of mathematics—is that monsters are often very common. The pathological case is often what happens for *most* objects of the type being considered. Thus while we often initially find one monster, many times someone follows up by proving that most—in some sense—objects actually have the undesired properties.

There is a terrific survey article on monsters in the Bulletin of the AMS this past January. The title is: *Linear subsets of nonlinear sets in topological vector spaces*, by Luis Bernal-González, Daniel Pellegrino and Juan B. Seoane-Sepúlveda. It is here.

The key point of this paper is that not only are the monsters common, they often have algebraic structure. Say a subset of a vector space is *lineable* provided that it, together with the zero vector, contains an infinite-dimensional subspace. The name is due to Vladimir Gurariy—it does not roll off the tongue as well as one might like. But it is an amazing insight. He proved in 1966, for example, that the set of continuous nowhere-differentiable functions on $[0,1]$ is lineable. I wonder what Weierstrass and also his critics would have thought of the fact that there is an infinite-dimensional subspace all of whose non-zero members are nowhere differentiable. Very cool—no?

By the way it seems that the term “lineable” is not yet in Wikipedia. I guess it is still a fairly new concept. A Google search yields: *do you mean wiki pineapple?*

## A Theorem

I have thought some about what our versions of the above would be. What is our definition of “lineable?” We all know, thanks to Claude Shannon, that almost all Boolean functions are hard. But the following seems to be in the same spirit as the quoted theorems.
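As a numeric aside, Shannon's observation is just an inequality between two counts: there are $2^{2^n}$ Boolean functions on $n$ variables, but far fewer small circuits, so circuits of size well below $2^n/n$ must miss almost every function. A rough sketch of the comparison, where the bound of $(16(s+n)^2)^s$ on the number of size-$s$ circuits is a crude illustrative estimate of my own, not a tight count:

```python
import math

def log2_num_functions(n):
    # log2 of 2^(2^n), the number of Boolean functions on n variables
    return 2 ** n

def log2_circuit_bound(n, s):
    # crude upper bound: each of s gates picks one of 16 binary ops
    # and two inputs from among the n variables and s gates
    return s * math.log2(16 * (s + n) ** 2)

n = 20
s = 2 ** n // (10 * n)   # a size budget somewhat below 2^n / n
print(log2_circuit_bound(n, s) < log2_num_functions(n))
```

Since the count of small circuits falls short of the count of functions by an enormous margin, a uniformly random function requires circuits of size near $2^n/n$ with overwhelming probability.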

**Theorem 1**

*Consider the set of all Boolean functions on $n$ variables. There is a subset of these functions that is closed under addition, in which all the members, except the zero function, have Boolean complexity that grows super-polynomially in $n$.*

Thus there is a linear subspace all of whose non-zero members are high-complexity functions. The addition here is modulo 2, that is, bitwise-XOR ($\oplus$). A more precise version is:

**Theorem 2**

*Let $\omega(n)$ be a monotone function such that $\omega(n)/\log n$ tends to infinity. Then there is a linear space of Boolean functions on $n$ variables of dimension $n/\omega(n)$, so that every non-constant function in the space requires super-polynomial size circuits.*

To prove this, let $m = \omega(n)$. Divide the $n$ Boolean variables into $n/m$ blocks of $m$ variables each. For each block $i$, let $f_i$ be a Boolean function on that block that is hard, i.e., requires circuits of size on the order of $2^m/m$. Note this is super-polynomial in $n$, since $m/\log n$ tends to infinity. Extend the functions to all of the Boolean cube by defining them to be $0$ whenever a variable outside their block is $1$.

Now consider the linear space formed by all XOR-combinations of these Boolean functions. We claim that every non-trivial member $g$ of the space is hard. Let the first function appearing in $g$ be $f_1$, which we can arrange by renumbering the blocks if needed. Suppose that $g$ has a circuit of size $s$. The circuit must compute $f_1$ on the first block, since on that block—with all other variables set to $0$—the other functions contribute zero. (That is, they contribute nothing except for the all-zero assignment on that block, but this is just one table entry.) But then $s$ must be super-polynomial, and we are done.
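The block trick can be checked concretely at toy sizes. In the sketch below the names `extend`, `g1`, `g2`, `h`, and the two sample block functions are my own, and "hard" is replaced by arbitrary functions, since real hardness cannot be exhibited at this scale; the point is only to verify the key step, that any XOR of the extended functions agrees with the first block function on inputs whose other variables are all zero, except possibly at the all-zero block assignment:

```python
from itertools import product

n, m = 6, 3                            # toy sizes: two blocks of m = 3 variables
f1 = lambda x: x[0] ^ (x[1] & x[2])    # stand-ins for "hard" block functions
f2 = lambda x: (x[0] & x[1]) ^ x[2]

def extend(f, block):
    """Extend a block function to all n variables:
    output 0 whenever any variable outside the block is 1."""
    def g(x):
        outside = x[:block * m] + x[(block + 1) * m:]
        return 0 if any(outside) else f(x[block * m:(block + 1) * m])
    return g

g1, g2 = extend(f1, 0), extend(f2, 1)
h = lambda x: g1(x) ^ g2(x)            # a non-trivial member of the XOR-span

# On inputs with block 2 zeroed out, h agrees with f1 everywhere except
# possibly at the single all-zero assignment of block 1.
mismatches = [b for b in product((0, 1), repeat=m)
              if any(b) and h(b + (0, 0, 0)) != f1(b)]
print(mismatches)  # → []
```

So a small circuit for `h` would yield a circuit for `f1` on its block, up to one table entry, which is the contradiction in the proof.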

## Are Non-Monsters Lurking?

A possible important difference going back to the Weierstrass monsters is that for continuous functions on the real line, *non-*monsters are always lurking nearby. Weierstrass himself proved that every continuous function on an interval can be approximated arbitrarily closely (in the sup-norm) by a *polynomial*, which is as smooth as can be. This was later generalized by Marshall Stone to arbitrary compact Hausdorff spaces and other sub-algebras besides polynomials. This is one reason why Fourier methods are so effective in functional analysis—non-monsters are also everywhere.
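One concrete instance of Weierstrass's approximation theorem is Bernstein's constructive proof, which builds explicit approximating polynomials. The sketch below is only illustrative (the target $|x - \tfrac12|$, the grid, and the function names are my choices), but it shows the sup-norm error shrinking as the degree grows:

```python
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1]:
    B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

target = lambda x: abs(x - 0.5)   # continuous, but not differentiable at 1/2

def sup_error(n, grid=200):
    xs = [i / grid for i in range(grid + 1)]
    return max(abs(bernstein(target, n, x) - target(x)) for x in xs)

# The sup-norm error shrinks (roughly like 1/sqrt(n) for this target).
print(sup_error(4), sup_error(64))
```

The same construction works for any continuous function on $[0,1]$, including nowhere-differentiable ones, which is exactly the "non-monsters lurking nearby" phenomenon.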

With Boolean complexity, however, there seems not to be a similar property of “nice” functions being close by in a relevant sense. At least we do not know of one. It is possible, however, that a kind of omnipresence of “ghosts” of non-monsters is responsible for much of the difficulty of proving lower bounds. There might be some way that hard-to-prove non-approximation masquerades as a kind of approximation, with a similar blanketing effect to Weierstrass’s polynomials.

On the first day of this blog, I posted about “Natural Proofs” and a bait-and-switch argument, and continued the discussion a month later. Note that one condition for a hardness predicate to avoid the natural-proofs barrier is that it hold only for a sparse set of Boolean functions. The bait-and-switch idea may make it difficult to avoid the illusion of approximation. But this also makes me wonder whether the idea of “lineable” may give enough structure to avoid this illusion.

I believe that the theorem above can in fact be improved to a sub-ring of Boolean functions, meaning closed under multiplication (that is, bitwise-AND) as well as addition. Also, the bound on the dimension in this theorem is probably far from optimal. Ultimately it would be better to bump the hardness up to truly exponential, but the super-polynomial bound in the theorem is enough to be happy with for now.

## Open Problems

How can we improve the theorem? For instance, what is the largest subspace in which all non-trivial functions are hard? And how useful is it?

The set of nowhere differentiable functions on [0, 1] cannot form a vector (sub)space. The zero function is not part of this set. It also seems that you can “flip” the first part of such a function and still get a nowhere differentiable one. Then add it to the original and you get a function which is differentiable on the first part.

I’m probably missing something here…

I think that you can do a lot better in the following way. Pick an $f_1$ that requires a large circuit. Then pick $f_2$ such that it requires a large circuit even if its inputs are $x_1,\dots,x_n,f_1$. The same counting argument shows that most functions are like that. Then in a general step, pick $f_{i+1}$ such that it requires a large circuit even if its inputs are $x_1,\dots,x_n,f_1,\dots,f_i$.

Now suppose that you had a small circuit for some $\sum_{i\in I} f_i$. Then using this circuit, we could also compute $f_{\max I}$ from $x_1,\dots,x_n$ and the other $f_i$ with a small circuit, a contradiction.

I believe this shows that the dimension can be as large as $\Omega(2^n/n)$ such that every member requires exponential size circuits.

ps. I think your Theorem 2 only gives a family of SIZE $2^{n/\omega(n)}$, but the dimension is only $n/\omega(n)$. Maybe I misunderstood something…

I get “lienable” …

Poincaré believed that nowhere smooth functions were useless.

But today, they’re acknowledged as the first historical examples of fractals… and fractal geometry is indispensable for describing the strange attractors of chaotic trajectories… and Chaos Theory was initiated precisely by Poincaré! Had he accepted fractals, he might already have come up with the whole of modern Chaos Theory.

Poincaré is also credited with having paved the road to Relativity for Einstein. So, had he accepted in the first place to revisit the physical concepts, he might also have invented Relativity by himself. This is why mathematics on its own isn’t sufficient to explain the history of science. There’s a human factor too…

By the way, the transformation equations which Einstein used to describe space and time were discovered by Hendrik Lorentz… whereas the fractal attractors in Chaos Theory were discovered by Edward Lorenz. Some names seem to be predestined… :)

It’s interesting that there are not really strict definitions of “fractals”; they seem not to be studied much in CS, whereas they should be studied quite closely. I have always thought that Turing machine computational tableaus are the only sensible way to define fractals. E.g., “small world graphs,” found to have major significance, seem to have fractal properties.

Also, it occurs to me that there is probably some natural connection between fractal dimensions and complexity measurements. I might have seen that explored somewhat in a paper somewhere….

Reblogged this on nebusresearch and commented:

R J Lipton has an engaging post which starts from something that rather horrified the mathematics community when it was discovered: it’s a function which is continuous everywhere, but it’s not differentiable anywhere, no matter where you look. Continuity and differentiability are important concepts in mathematics, and have very precise definitions — motivated, in part, by things like the difficult function Lipton discusses here — but they can be put into ordinary language tolerably well.

If you think of a continuous function as being one whose graph you could draw without having to lift the pencil from the paper you’re not doing badly. Similarly a function is differentiable at a point if, from that point, you know what way the curve is going. This function, found by Karl Weierstrass, is one example of the breed.

Lipton points out the somewhat unsettling point that it’s much more common for functions to be like this than to be neat and well-behaved, in much the same way a real number is much more likely to be irrational than rational, and Lipton goes on to give an example, in an area of mathematics I’m not familiar with, of the “pathological” case being the vastly more common one. Fortunately, it turns out, we can usually approximate the “pathological” or “monster” function with something easier to work with — a very fortunate thing, or we could get done very few computations that reflected anything actually interesting — and that’s another thing we can credit Weierstrass with discovering.

Your first theorem as stated is trivial: just take one function $f$ of high Boolean complexity; then the set $S = \{0, f\}$ is closed under addition, and all functions except 0 are of high complexity.

In the same way, Weierstrass would not be surprised that there exists a subspace where all members (except 0) are continuous and nowhere differentiable: just take one such function $f$ and consider Span $f$.

What’s interesting in both cases is that these subspaces have high dimension.

App-arrantly, there’s an app for that …

Questions about a suitable analogue of differential calculus for boolean spaces keep popping (or pipping) up. Having spent a fair amount of time exploring the most likely analogies between real spaces like $\mathbb{R}^n$ and the corresponding boolean spaces $\mathbb{B}^n$, where $\mathbb{B} = \{0, 1\}$, I can’t say I’ve gotten all that far, but some of my first few, halting steps are recorded here:

☞ Differential Logic and Dynamic Systems

The objections to monsters by Poincaré and others were likely due to the notion that they would never occur in actual applications. In those days math and the real world had a tighter connection. However, a monster comes up in a real application in the paper *How to Gamble if You Must* by Kyle Siegrist. The monster is a function with domain [0,1] which is discontinuous at the binary rationals, but continuous at the binary irrationals. It’s on page 15. You can Google it.

Other topic- I once tried to Google `world’s greatest logician’ and it asked if I wanted `world’s greatest magician’

I am interested in this subject and have long thought there may be a concept of “pathological languages” in CS as the corresponding concept to “monster” functions in mathematics; I think the topic is understudied in CS. IIRC, Goldreich remarks on languages related to Blum’s speedup theorem as “pathological.”

Dear all,

just a little comment about the Weierstrass function. The condition $b/a > 1+ 3\pi/2$ for nowhere differentiability has been relaxed to $b\geq a>1$ by Hardy. The new condition is optimal. A very quick proof of this result was proposed recently by Jon Johnsen in J. Fourier Anal. Appl. 16(1) (2010), 17–33.

Weierstrass’s monster is a delight, but I prefer the Wiener process when I think of a function being nowhere differentiable, due to the richer, more exotic structure that the Wiener process possesses.

The Dirichlet monster is also interesting – it is extremely intuitive (unlike Weierstrass’s monster or the Wiener process). Define $f(x)$ to be 1 if $x$ is rational and 0 if $x$ is irrational. You cannot really plot $f$. It has area zero, but is not Riemann-integrable. You treat this monster with measure theory.
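A quick sketch of why the Dirichlet monster fails to be Riemann-integrable: tagging every subinterval with a rational point drives the Riemann sum to 1, while irrational tags drive it to 0, so the sums never converge to a single value. (The tag choices below are mine; `Fraction` objects mark the points known to be rational by construction.)

```python
import math
from fractions import Fraction

def dirichlet(x):
    # 1 on rationals, 0 on irrationals; here rationality is known by
    # construction: Fraction tags are rational, k/n + sqrt(2)/(2n) is not.
    return 1 if isinstance(x, Fraction) else 0

def riemann_sum(tags):
    n = len(tags)                  # n equal subintervals of width 1/n
    return sum(dirichlet(t) for t in tags) / n

n = 1000
rational_tags   = [Fraction(k, n) for k in range(n)]             # one rational per subinterval
irrational_tags = [k / n + math.sqrt(2) / (2 * n) for k in range(n)]  # one irrational per subinterval

print(riemann_sum(rational_tags), riemann_sum(irrational_tags))  # → 1.0 0.0
```

The Lebesgue integral, by contrast, sees a set of measure zero and assigns the function integral 0, which is the "area zero" remark above.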

In a field of inquiry which has been active since the invention of writable symbols, yes, 1958 is recent.