# A Simple Fact

*Can we find a simplest proof?*


Joseph Wedderburn and Leonard Dickson proved Wedderburn’s “Little” Theorem: that every finite ring with the zero-product property is a field. Which of them proved it first is hard to tell.

Today we discuss the issue of finding simple proofs for simple facts—not just Wedderburn’s theorem but the zero-product property on which it leans.

Dickson was a professor at the University of Chicago when Wedderburn visited on a Carnegie Fellowship in 1904–05. To judge from several sources, what happened is that Wedderburn first claimed a proof. Dickson did not believe the *result* and tried to build a counterexample. Instead he found a lemma that convinced him it was true and used it to give a simpler proof. They gave back-to-back presentations on 20 January 1905. Wedderburn wrote up his paper with his proof and two more based on the lemma, which he ascribed to an earlier paper by George Birkhoff and Harry Vandiver. Dickson wrote a paper with his own approach, saying in a footnote:

> First proved by Wedderburn … Following my simpler proof above, and using the same lemma, Wedderburn constructed two further simpler proofs.

However, Wedderburn’s first proof was found to have a gap. As detailed by Michael Adam and Birte Specht, the gap was a statement that was not false *per se* but whose vague argument used only properties from a class of weaker structures in which it can fail. So:

> Who first proved the theorem?

## Other Proofs, Other Properties

Our sources linked above differ on whether the gap was noticed at the time or anytime before Emil Artin remarked on it in 1927. Artin didn’t discuss the gap but gave his own proof instead. There seems to be consensus that the “proof from the Book” was given by Ernst Witt four years later in 1931. But two other proofs, by Israel Herstein and by Hans Zassenhaus, receive prominent mention.

As our sources attest, interest in finding other proofs continues. The surprise in the theorem, which is that finiteness and the zero-product property force the multiplication to be commutative, informs what happens in other mathematical structures. Proofs that draw on results about these structures create connections among many areas. The shortest proof drawing on deeper results was a two-page paper in the summer 1964 *Monthly* by Theodore Kaczynski (of ill note). We won’t try to compare all these proofs, but will try to flip the question by focusing on the other property involved—the zero-product property:

If $ab = 0$ then $a = 0$ or $b = 0$.

Given a ring $R$ with this property, how easy is it to prove? If $R$ has inverses then it is immediate: multiplying both sides of $ab = 0$ (with $a \neq 0$ and $b \neq 0$) by $a^{-1}$ in front and $b^{-1}$ in back would yield the contradiction $1 = 0$. And we know if $R$ is finite then this property makes it a field. So our quest for a different angle leads us to thinking about infinite rings.
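A quick numerical illustration: among the finite rings $\mathbb{Z}_n$, the zero-product property holds exactly when $n$ is prime. A minimal Python sketch (the helper name `zero_divisors` is ours, for illustration):

```python
# Hypothetical sketch: search for zero divisors in the ring Z_n.
# Z_p for prime p is a field (no zero divisors); composite n always has them.

def zero_divisors(n):
    """Return all pairs (a, b) with 0 < a <= b < n and a*b = 0 (mod n)."""
    return [(a, b) for a in range(1, n) for b in range(a, n) if (a * b) % n == 0]

print(zero_divisors(7))  # a field: prints []
print(zero_divisors(6))  # not a domain: 2 * 3 = 0 (mod 6), among others
```

The brute-force search is quadratic in $n$, which is fine for a sanity check like this.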

Incidentally, a ring with the zero-product property is called a *domain*. If the multiplication is commutative then it is an *integral domain*, though Serge Lang preferred the term *entire ring*. Our example goes beyond the integers though.

## No Zero-Divisors I

A natural example that arises is the ring of integer polynomials in multiple variables. Thus for two variables the elements of this ring are

$$f(x,y) = \sum_{i,j} a_{ij}\, x^i y^j,$$

where the coefficients $a_{ij}$ are integers and the sum is finite. It is easy to see that they form a ring under the usual high-school rules for adding and multiplying polynomials. The ring is an integral domain in general.

To show this, we claim that it has no zero-divisors. Suppose that it does: let $fg = 0$ with $f \neq 0$ and $g \neq 0$. Then let

$$f = \sum_{i=0}^{m} f_i(y)\, x^i \quad\text{and}\quad g = \sum_{j=0}^{n} g_j(y)\, x^j,$$

where $m$ is the maximum degree in $x$ for $f$ and $n$ is the maximum degree in $x$ for $g$. Then $f_m(y)$ and $g_n(y)$ are both non-zero polynomials in the one variable $y$. By induction on the number of variables, the product $f_m(y) g_n(y)$ is not zero, so the coefficient of $x^{m+n}$ in $fg$ is non-zero and $fg \neq 0$. This proves the claim.
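The no-zero-divisor claim can be checked concretely. Here is a minimal sketch (the dictionary representation and the name `poly_mul` are ours) that multiplies two non-zero polynomials in $\mathbb{Z}[x,y]$ and confirms the product is non-zero:

```python
# Polynomials in Z[x, y] represented as dicts mapping exponent pairs (i, j)
# to integer coefficients. (Representation and names are ours, for illustration.)

def poly_mul(f, g):
    """Multiply two polynomials given as {(i, j): coeff} dicts."""
    prod = {}
    for (i1, j1), c1 in f.items():
        for (i2, j2), c2 in g.items():
            key = (i1 + i2, j1 + j2)
            prod[key] = prod.get(key, 0) + c1 * c2
    return {k: c for k, c in prod.items() if c != 0}  # drop cancelled terms

f = {(1, 1): 1, (0, 0): 1}   # f = x*y + 1
g = {(1, 0): 1, (0, 1): -1}  # g = x - y
print(poly_mul(f, g))  # the leading term x^2*y survives: the product is non-zero
```

Note how the leading coefficients multiply: the monomial of top degree in the product cannot cancel against anything, which is exactly the argument above.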

## No Zero-Divisors II

The trouble with the above property of polynomials is that it is only about formal objects. In many applications to computer science we want to view polynomials as objects that can be evaluated. So a natural issue is a slightly different property: Suppose that $f$ and $g$ are two integral polynomials in the variables $x_1,\dots,x_n$. Suppose further that

$$f(a_1,\dots,a_n) \cdot g(a_1,\dots,a_n) = 0$$

for all integers $a_1,\dots,a_n$. Then it clearly must be the case that either $f = 0$ identically or $g = 0$ identically. Right?

The trouble is that this seems to be obvious, but how do we prove that it is true? Note that any proof must make some use of the fact that $f$ and $g$ are polynomials: the statement is not true for more general functions. For example, take $f(x) = \frac{1+(-1)^x}{2}$ and $g(x) = \frac{1-(-1)^x}{2}$. Then

$$f(x) \cdot g(x) = 0$$

for all integers $x$. But neither the function $f$ nor the function $g$ is identically zero on the integers.
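One concrete non-polynomial counterexample, sketched in Python (the functions and names are ours): take $f$ to be the indicator of the even integers and $g$ the indicator of the odd ones. Their product vanishes at every integer, yet neither factor is identically zero.

```python
# Hypothetical non-polynomial counterexample: f is 1 on even integers and 0 on
# odd ones, g the reverse, so f(x)*g(x) = 0 at every integer while neither
# function is identically zero. (Equivalently, f(x) = (1 + (-1)^x)/2.)

def f(x):
    return 1 - x % 2   # 1 on even integers, 0 on odd

def g(x):
    return x % 2       # 0 on even integers, 1 on odd

assert all(f(x) * g(x) == 0 for x in range(-100, 101))
assert any(f(x) != 0 for x in range(-100, 101))
assert any(g(x) != 0 for x in range(-100, 101))
print("product vanishes at every tested integer; neither factor is zero")
```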

## A Proof

Here is a relatively simple proof of the fact. It uses the famous Schwartz-Zippel (SZ) Theorem. See here for our discussion of the theorem.

**Theorem 1** *Let $P \in F[x_1,\dots,x_n]$ be a non-zero polynomial of total degree $d$ over a field $F$. Let $S$ be a finite subset of $F$ and let $r_1,\dots,r_n$ be selected at random, independently and uniformly, from $S$. Then*

$$\Pr[P(r_1,\dots,r_n) = 0] \le \frac{d}{|S|}.$$

The SZ lemma of course has manifest applications throughout complexity theory. Often it is used in the design of randomized algorithms. What we found interesting is that here we use it for a different purpose: to prove a structural property of polynomials.
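The bound in Theorem 1 is easy to probe empirically. A small Monte Carlo sketch (the example polynomial $P(x,y) = xy - 1$ and all names are ours): $P$ has total degree $2$ and $S = \{0,\dots,9\}$, so the theorem bounds the probability of hitting a zero by $d/|S| = 0.2$.

```python
import random

# Empirical check of the SZ bound for P(x, y) = x*y - 1 over S = {0, ..., 9}.
# The only zero of P in S^2 is (1, 1), so the true probability is 1/100,
# comfortably below the theorem's bound of d/|S| = 2/10.

def P(x, y):
    return x * y - 1

S = range(10)
trials = 10_000
random.seed(1)  # fixed seed so the run is reproducible
hits = sum(P(random.choice(S), random.choice(S)) == 0 for _ in range(trials))
print(hits / trials)  # roughly 0.01, well under the bound 0.2
```

Of course the theorem is a worst-case bound over all degree-$d$ polynomials; particular polynomials, like this one, can sit far below it.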

Back to our fact. Assume that

$$f(a_1,\dots,a_n) \cdot g(a_1,\dots,a_n) = 0$$

for all $a_1,\dots,a_n$ in some finite set $S$ of integers. For each such point either $f$ or $g$ vanishes, so at least one of them, say $f$, vanishes on at least one half of the points of $S^n$. If $S$ is large enough, namely $|S| > 2\deg(f)$, then $1/2$ exceeds the SZ bound of $\deg(f)/|S|$, which is impossible unless $f$ is the zero polynomial. Therefore, the fact is proved.
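In one variable the argument is elementary and easy to check by hand: a non-zero polynomial of degree $d$ has at most $d$ roots, so if $fg$ vanished on a set $S$ with $|S| > \deg f + \deg g$, one factor would vanish at more points than its degree and hence be zero. A minimal sketch (coefficient-list representation and names are ours):

```python
# One-variable sketch: count how often f*g vanishes on a large test set.
# Since deg f + deg g = 3 here, the product can vanish at no more than 3
# of the 21 test points unless one factor is identically zero.

def evaluate(coeffs, x):
    """Evaluate the polynomial c0 + c1*x + c2*x^2 + ... at the point x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

f = [0, 1]      # f(x) = x,       degree 1
g = [-1, 0, 1]  # g(x) = x^2 - 1, degree 2

S = range(-10, 11)  # 21 points, far more than deg f + deg g = 3
zeros_fg = sum(evaluate(f, x) * evaluate(g, x) == 0 for x in S)
print(zeros_fg)  # prints 3: the product vanishes only at x = -1, 0, 1
```

The multivariate case is where the SZ theorem earns its keep, since "number of roots" no longer makes sense there.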

## Open Problems

Is there a simpler proof of this key fact? Is it possible to find an easier reference in the literature for this basic fact about polynomials? We have yet to discover one; any help would be appreciated.

What does the locus of $x^y \le y^x$ look like?

It is clear that it contains all the points on the line $y = x$.

What else?