# The Derivative Of A Number

*Are you kidding?*

Edward Barbeau is now a professor emeritus of mathematics at the University of Toronto. Over the years he has been working to increase the interest in mathematics in general, and enhancing education in particular. He has published several books that are targeted to help both students and teachers see the joys of mathematics: one is called *Power Play*; another *Fallacies, Flaws and Flimflam*; and another *After Math*.

Today I want to discuss his definition of the derivative of a number, yes, a number.

We all know the concept of the derivative of a function. It is one of the foundational concepts of calculus, and is usually defined by using limits. On the space of polynomials it can be viewed as a linear operator, so that $D(x^n) = nx^{n-1}$, extended to all polynomials by linearity.

The derivative operator in general satisfies many properties; one is the product law:

$$D(fg) = fD(g) + D(f)g.$$

This rule is usually credited to Gottfried Leibniz. Somehow the great Isaac Newton did not know this rule; at least that is what is claimed by some.

## Definition

Barbeau defined the derivative of a natural number in 1961. Define $D(n)$ for a natural number $n$ by the following rules:

- $D(p) = 1$ for all primes $p$.
- $D(1) = 0$.
- $D(ab) = aD(b) + D(a)b$ for all natural numbers $a$ and $b$.

Here is a picture from his paper:

This proves that he really did it a long time ago. Note the typewriter typeface: no LaTeX back then. He proved the basic result that $D$ is well defined. This is not hard, and is necessary to make the definition meaningful, but we will leave it unproved here. See his paper for details.

A simple consequence of the rules is that $D(p^k) = kp^{k-1}$ for $p$ a prime. This follows by induction on $k$. For $k=1$ it is rule (1). Suppose that $k > 1$:

$$D(p^k) = D(p \cdot p^{k-1}) = pD(p^{k-1}) + D(p)p^{k-1} = p(k-1)p^{k-2} + p^{k-1} = kp^{k-1}.$$

Unfortunately $D(x^k) = kx^{k-1}$ does not hold in general. Also $D$ is not a linear operator: $D(2+3) = D(5) = 1$, while $D(2) + D(3) = 2$. This double failure, that the derivative of a power is not simple and that the derivative is not linear in general, makes $D$ difficult to use. One of the beauties of the usual derivative, even just for polynomials, is that it is a linear operator.
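For concreteness, here is a minimal Python sketch of $D$ (my code, not from Barbeau's paper). It uses the fact, immediate from the rules, that each occurrence of a prime $p$ in the factorization of $n$ contributes $n/p$:

```python
def D(n):
    """Arithmetic derivative: D(0) = D(1) = 0, D(p) = 1 for primes p,
    and D(ab) = a*D(b) + D(a)*b.  Each prime factor p of n, counted
    with multiplicity, contributes n // p to the total."""
    total, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            total += n // p
            m //= p
        p += 1
    if m > 1:           # leftover prime factor
        total += n // m
    return total if n > 1 else 0

# D(8) = D(2^3) = 3 * 2^2 = 12, matching D(p^k) = k * p^(k-1).
# D(2 + 3) = D(5) = 1, while D(2) + D(3) = 2: D is not additive.
```

A quick loop confirms that the product rule $D(ab) = aD(b) + D(a)b$ holds for this implementation.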

## Results

The derivative notion of Barbeau is interesting, yet it does not seem to have been intensively studied. I am not sure why; perhaps it is because it is such a strange function.

There is hope. Recently there have been a number of papers on his notion. Perhaps researchers are finally starting to realize there may be gold hidden in the derivative of a number. We will see.

Most of the papers on $D$ have been more about intrinsic properties of $D$ than about applications. A small point: most of the papers replace $D(n)$ by $n'$, so if you look at papers be prepared for this notation shift. I decided to follow the original paper’s notation.

The papers have results of three major kinds. One kind is the study of what are essentially differential equations. For example, what can we say about the solutions of

$$D(n) = a,$$

where $a$ is a constant? The others are growth or inequality results: how fast and how slow does $D(n)$ grow? For example, for $n$ not a prime,

$$D(n) \ge 2\sqrt{n}.$$

A third natural class of questions is: can we extend $D$ to more than just the natural numbers? It is easy to extend $D$ to the integers, a bit harder to the rationals, and not so easy beyond that.
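The first two extensions can be sketched as follows (my code; the rules $D(-n) = -D(n)$ and the quotient-style formula $D(a/b) = (D(a)b - aD(b))/b^2$ are the standard ones from the literature):

```python
from fractions import Fraction

def D(n):
    """Arithmetic derivative of a non-negative integer, via trial division."""
    total, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            total += n // p
            m //= p
        p += 1
    if m > 1:
        total += n // m
    return total if n > 1 else 0

def D_rat(q):
    """Extension to the rationals: D(-x) = -D(x), and on lowest terms a/b,
    D(a/b) = (D(a)*b - a*D(b)) / b^2."""
    q = Fraction(q)
    if q < 0:
        return -D_rat(-q)
    a, b = q.numerator, q.denominator   # Fraction keeps lowest terms
    return Fraction(D(a) * b - a * D(b), b * b)

# D_rat(Fraction(1, 2)) == Fraction(-1, 4), consistent with D(2^-1) = -2^-2,
# and the Leibniz rule D(xy) = x*D(y) + D(x)*y still holds on the rationals.
```

The well-definedness on lowest terms is exactly the "bit harder" step mentioned above.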

Here are two interesting papers to look at:

- *Investigations on the properties of the arithmetic derivative*, a paper by Niklas Dahl, Jonas Olsson, and Alexander Loiko.
- *How to Differentiate a Number*, a paper by Victor Ufnarovski and Bo Åhlander.

## An Application

I tried to use $D$ to prove something interesting. I think if we could use $D$ to prove something not about $D$, but about something that does not mention $D$ at all, that would be exciting. Tools must be developed in mathematics, but the key test of their power is their ability to solve problems from other areas. One example: the power of complex analysis was manifest when it was used to prove deep theorems from number theory. Another example: the power of the theory of Turing machines was clear when it was used to yield an alternate proof of the Incompleteness Theorem.

The best I could do is use $D$ to prove an ancient result: that $\sqrt{2}$ is not rational. Well, I may be able to prove a bit more.

We note that from the product rule: $D(y^2) = 2yD(y)$, for any $y$. Recall that if $y$ were prime this would be $2y$.

Now assume by way of contradiction that $\sqrt{2}$ is a rational number. Then for some positive integers $x$ and $y$ we have

$$2y^2 = x^2.$$

As usual we can assume that $x$ and $y$ are co-prime.

So let’s take derivatives of both sides of the equation; we have to use $D$ sometime, might as well start with it.

Note that it is valid to apply $D$ to both sides of an equation, so long as one is careful to obey the rules. For example, $x = a + b$ allows $D(x) = D(a+b)$, but there is no additive rule to make the right-hand side become $D(a) + D(b)$, which would make the equation false.

The result of taking the derivative of both sides is:

$$4yD(y) + y^2 = 2xD(x).$$

Now square both sides and substitute $2y^2$ for $x^2$ to get:

$$16y^2D(y)^2 + 8y^3D(y) + y^4 = 8y^2D(x)^2.$$

Dividing through by $y^2$ leaves $16D(y)^2 + 8yD(y) + y^2 = 8D(x)^2$, which implies that $2$ divides $y^2$, and hence that $2$ divides $y$. This leads to a contradiction, since it implies that $x$ and $y$ are not co-prime. Whether we also get that $2$ divides $x$ is possibly circular, but anyway this is enough. The point is that owing to $D(2) = 1$, the derivative removed the problematic factor of $2$.

Note that from the original equation, we only get that $2$ divides $x^2$, which is too weak to immediately get a contradiction. Admittedly ours is not the greatest proof, not better than the usual one, especially owing to the squaring step, but it does use the derivative of a number.

One idea: I believe that this idea can be used to prove more than the usual fact that $2y^2 = x^2$ has no nonzero solutions over the integers. I believe we can extend it to prove the same result in any ring where $D$ can be defined, possibly modulo issues about lack of unique factorization. This handles the Gaussian integers, for example.

## Open Problems

Can we use this strange function to shed light on some open problem in number theory? Can we use it in complexity theory? A simple question is: what is the complexity of computing $D(N)$? If $N = pq$ where $p$ and $q$ are primes, then $D(N) = p + q$ by the rules. But we know that $N = pq$, and thus we have two equations in two unknowns, and we can solve for $p$ and $q$. So in this case computing $D(N)$ is equivalent to factoring $N$. What happens in the general case? An obvious conjecture is that computing $D(N)$ is equivalent to factoring.
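The two-equations argument is just the quadratic formula: given $N = pq$ and $D(N) = p + q$, the primes are the roots of $t^2 - D(N)\,t + N = 0$. A short sketch (my code; the helper name is mine):

```python
from math import isqrt

def factor_semiprime(N, dN):
    """Recover p and q from N = p*q and dN = D(N) = p + q by solving
    t^2 - dN*t + N = 0 in exact integer arithmetic."""
    disc = dN * dN - 4 * N        # (p + q)^2 - 4pq = (p - q)^2
    r = isqrt(disc)
    assert r * r == disc, "inputs inconsistent with N = p*q, dN = p + q"
    return (dN - r) // 2, (dN + r) // 2

# factor_semiprime(15, 8) recovers (3, 5); factor_semiprime(77, 18) gives (7, 11).
```

So an oracle for $D$ on semiprimes factors them in polynomial time.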

[fixed error in proof that owed to a typo, as noted by user “Sniffnoy” and others, and changed some following text accordingly; further fix to that proof; fixed typo p^k to p^{k-1}]

Isn’t 2*D(y^2) wrongly derived as 4y^2*D(y) instead of 4y*D(y), as per the rules mentioned above? Or am I wrong in thinking so?

Yes—this is now fixed. Thanks.

In his early attempt to forge a link between logic and mathematics, Leibniz represented simple predicates or primitive propositions as prime numbers and logical conjunction as multiplication.

See, for example:

http://permalink.gmane.org/gmane.comp.inquiry/408

In that spirit and in the present context we may well ask, What is the derivative of a proposition?

Here is the context of that quotation:

☞ Leibniz • “Elements of a Calculus” • April 1679

Should it be considered *strange* that D is not linear? That has always struck me as one of the key properties of the derivative in other settings. (Are there natural “derivatives” in settings I’m not familiar with that aren’t linear?)

There seems to be an error in your proof; D(y^2) is 2yD(y), not 2y^2 D(y^2).

As for why it may not have been studied so much — I suspect it has a lot to do with it not being additive. This is the most obvious way to define an arithmetic derivative, but because it’s not additive, it leaves open the possibility that there are better ways; it’s not clear that this is *the* arithmetic derivative. And indeed I’m pretty sure other people have defined other arithmetic derivatives, making this one seem a bit less natural and a bit more arbitrary.

Yes, I also think that there has been a mistake.

Here is another definition of arithmetic derivative: http://mathoverflow.net/questions/142808/strange-or-stupid-arithmetic-derivation

That one seems distinctly worse than Barbeau’s, though. It doesn’t satisfy the Leibniz rule, and, well, really I’m failing to see any advantage over it. The resemblance to a derivative appears to be superficial.

Thanks! Fixed, we think. A strange bug (maybe owing to the line with y^4 on the RHS having a different overhang), also affecting the Preview feature, needed hitting Update multiple times.

Typo in the equations under ‘Definition’, after ‘Suppose that k > 1’ — the last line should be = k p^{k-1}.

Thanks—fixed this too.

There is a small typo in the D(p^k) formula. There is a missing “-1” in last exponent.

Where it says: D(p^{k}) = […] = kp^{k}

It should say: D(p^{k}) = […] = kp^{k-1}

Perhaps a more “natural” notion is the logarithmic derivative of a number, defined as $d(n) = D(n)/n$. This is “log linear”, in the sense that $d(xy) = d(x) + d(y)$ for any two positive integers $x$ and $y$. Starting from $d(1) = 0$ and $d(p) = 1/p$ for any prime $p$, this also shows that it is an immediate consequence of unique factorization that $d$ is well defined.
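The commenter's suggestion is easy to check with exact fractions (my code; `log_D` is a made-up name): each prime factor $p$ of $n$, with multiplicity, contributes $1/p$, and the result is additive over multiplication.

```python
from fractions import Fraction

def log_D(n):
    """Logarithmic derivative d(n) = D(n)/n = sum of k_i/p_i over the
    prime factorization n = prod p_i^{k_i}."""
    total, p = Fraction(0), 2
    while p * p <= n:
        while n % p == 0:
            total += Fraction(1, p)
            n //= p
        p += 1
    if n > 1:
        total += Fraction(1, n)
    return total

# d(12) = 2/2 + 1/3 = 4/3, and "log linearity" gives d(6 * 10) == d(6) + d(10).
```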

Reblogged this on Anurag's Math Blog and commented:

An interesting notion. It might be a good idea to prove some other results from elementary number theory using this so called derivative.

It seems like D(p) for prime p can be defined freely and still keep the function well-defined (i.e. obeying the product rule). For example, D(p)=p gives D(n) = n * (number of prime factors of n, with multiplicity).
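This choice is easy to verify (my code; the names are mine): with $D(p) = p$ the Leibniz rule forces $D(n) = n \cdot \Omega(n)$, where $\Omega(n)$ counts prime factors with multiplicity.

```python
def big_omega(n):
    """Omega(n): number of prime factors of n, counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            count += 1
            n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def D_var(n):
    """Variant arithmetic derivative with D(p) = p for every prime p."""
    return n * big_omega(n)

# Leibniz rule: D_var(a*b) = a*b*(Omega(a) + Omega(b)) = a*D_var(b) + D_var(a)*b.
```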

Yes, this also follows immediately if you consider the function in my comment above (which is essentially “linear” over the lattice of prime factorizations).

Aaron, combining your proposal with PS’s indicates that the logarithmic derivative of a number counts its prime factors with multiplicity. Is it too far-fetched to see an analogy with the “argument principle” in complex analysis? (1/(2πi) times the contour integral of the log derivative counts zeros with multiplicity; poles are zeros with negative multiplicity.)

I think in some sense all that’s really going on is: if a = a1.a2…an, then log derivative of a = sum of log derivatives of aj. So if you arrange that the log derivative of a foo is 1, then a product of n foos has logarithmic derivative n. Substitute foo = “prime number” for Aaron’s comment; substitute foo = “thing of form (z-z0)” for the argument principle (modulo some important technicalities).

But maybe we can use the idea more directly. Take a family of complex functions, indexed by the prime numbers, where fp has a pole at zp of residue ap. Define fn = product fp where n = product of primes p (with repetition as required). Then the Aaron-generalized “derivative” of n, where D(p) = ap, is 1/2 pi i times the contour integral of fn around a region containing all the zp. Alternatively, fix the function — a single f with poles of residue ap at points zp — and vary the contour, so that Cn winds about zp the number of times that p divides n; then D(n) = contour integral around Cn of f. Could there be something useful there? (I have to confess I rather doubt it.)

Thank you for this very interesting article. Isn’t there a little mistake in the proof that sqrt(2) is irrational, where 8y^3*D(y) = 4x^2*y appears instead of 4x^2*y*D(y)?

Thanks. Putting in the missing D(y) did not change the essence this time.

Serious paper about the P versus NP Problem

Abstract: UNIQUE SAT is the problem of deciding whether a given Boolean formula has exactly one satisfying truth assignment. UNIQUE SAT is coNP-hard. We prove UNIQUE SAT is in NP, and therefore NP = coNP. Furthermore, we prove that if NP = coNP, then some problem in coNPC is in P, and thus P = NP. In this way, the P versus NP problem is solved with a positive answer.

Paper: http://hal.archives-ouvertes.fr/hal-00984866

@Paul Rio

I went through most of it. The writeup doesn’t appear to be valid to me. It fails a couple sanity checks and seems like it is too easy to generalize towards contradictory results (Replace polynomial time computable with decidability). Putting these aside, I found a basic conceptual error. It’s sort of distributed throughout the approach. I feel like whatever one thing I could pull out could be rebutted using an error of similar flavor elsewhere in the paper. I’m not interested in getting into that. Others might be willing to highlight something specific, though.

I Googled the author’s name afterwards. He’s submitted at least two P vs NP papers (showing that P != NP). You can find links to them here (http://www.win.tue.nl/~gwoegi/P-versus-NP.htm). Make of that what you like.

TL;DR: Looks incorrect due to some conceptual errors.

I found the same conceptual error, but I didn’t give any importance to this, because the whole result is pretty important, and even though it might not be entirely correct, it could bring very interesting new advances on this problem.

The conceptual error that I found was that the author uses the term compute many times when he should use decide. Besides, he could have another mistake in the rejection of N_(SAT): I would not put the state of rejection in the Turing machine N_(SAT), because it is a proved result that every Turing machine which accepts in polynomial time decides in polynomial time too. However, these errors could be fixed in a simple way.

I would say that in his case the Turing machine N_(SAT) rejects only when N_(SAT) exceeds a considerable number of steps on the input. But these conceptual errors do not change the whole result of this article. That’s why I consider it a serious result on the P versus NP problem.

@Paul Rio

The conceptual errors kill the entire proof. I wasn’t looking at the difference between compute/decide, though. These errors basically end up assuming what he is trying to prove.

He isn’t using reductions correctly (at all, tbh). Instead of just doing deterministic preprocessing of the input (to transform it into something else), the author is modifying the entire machine. His new machines aren’t simply non-deterministic (in the NP sense), but are more powerful machines with alternations on the outside. For example, he existentially searches for 1 satisfying assignment then uses that information to do additional computation. This is actually using an NP machine as an oracle. If you equivocate something higher up in the PH with NP, then obviously it’s going to collapse to the 1st level.

With the second part of the proof, he reasons that the set of NP-complete problems is closed under union. This is definitely NOT true.

A big tip off for papers like these is what information we get out of it. While reading this paper, I learned absolutely nothing interesting about the nature of polynomial time. There wasn’t even serious discussion of it. The mechanics just happened to be oriented towards the P vs. NP for little specific reason. I could go in and replace the word polynomial with exponential or computable; the logic would probably check out (assuming it was good in the first place).

Interesting article! Question: I intuitively understand the concept of a ‘classic’ derivative as the slope of a function.

The function D(n) introduced here seems to be similar to the ‘classic’ derivative in that rule (3) resembles the product rule. The rest of the function seems quite random to me.

Am I missing something?

Derivatives and differentials have to do with local, linear approximations to functions, beginning with functions of type $f : \mathbb{R} \to \mathbb{R}$ and moving on to ever fancier generalizations.

Algebraic operators that obey reasonable facsimiles of the Leibniz formula are usually called *derivations*.

There are many strange functions that aren’t very well understood in mathematics. Continuations of hyperpowers come to mind.

If playing with this function, it might be interesting to define the antiderivative, then the differintegral, and extend it to fractional calculus via the gamma function (or other continuations of the factorial).

If hyperpowers are any guide, even something with a simple definition presents large challenges to understanding.

I think computing $D$ is equivalent to factoring in general also. It should not be difficult to see, by induction on the number of prime factors of $N$ of the form $N=\prod_{i=1}^{n}p_{i}^{k_{i}}$, that $D(N)=N\cdot\left(\sum_{i=1}^{n}\frac{k_{i}}{p_{i}}\right)$. So it is easy to compute $D(N)$ if we are given the complete factorization of $N$.

Now for the other direction, we are given $N$ and $D(N)$, and we want to compute the $p_{i}$’s. Let us use $S$ to denote the sum $\sum_{i=1}^{n}\frac{k_{i}}{p_{i}}$, so that $D(N)=N\cdot S$. If some $k_{i}>1$, then $p_{i}$ divides every term of $D(N)$, so the gcd of $N$ and $D(N)$ gives us a non-trivial factor of $N$. If all the $k_{i}$ are $1$, then it is not hard to see that $D(N)=\sum_{i=1}^{n}N/p_{i}$. In this case the number of prime factors $n$ is at most the number of bits in $N$. Hence we can test if $N$ is divisible by small primes, giving us prime factors of $N$. Seems correct, or are there mistakes?

I think this fails only when all $k_{i}$ are 1.
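The gcd observation in this comment is easy to check experimentally (my code): when $N$ has a repeated prime factor $p$, $p$ divides every term of $D(N) = \sum_i k_i N/p_i$, so $\gcd(N, D(N))$ is a non-trivial factor; for squarefree $N$ the gcd can be $1$.

```python
from math import gcd

def D(n):
    """Arithmetic derivative via trial division: each prime factor p of n,
    with multiplicity, contributes n // p."""
    total, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            total += n // p
            m //= p
        p += 1
    if m > 1:
        total += n // m
    return total if n > 1 else 0

# N = 12 = 2^2 * 3 is not squarefree: gcd(12, D(12)) = gcd(12, 16) = 4 > 1.
# N = 15 = 3 * 5 is squarefree:       gcd(15, D(15)) = gcd(15, 8)  = 1.
```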

As regards open problems, there is a famous link to Giuga’s Conjecture which is explored very nicely here: https://cs.uwaterloo.ca/journals/JIS/VOL15/Oller/oller5.pdf