
The Derivative Of A Number

August 19, 2014

Are you kidding?


Edward Barbeau is now a professor emeritus of mathematics at the University of Toronto. Over the years he has worked to increase interest in mathematics in general and to enhance education in particular. He has published several books aimed at helping both students and teachers see the joys of mathematics: one is called Power Play; another Fallacies, Flaws and Flimflam; and another After Math.

Today I want to discuss his definition of the derivative of a number. Yes, a number.

We all know the concept of the derivative of a function. It is one of the foundational concepts of calculus and is usually defined using limits. On the space of polynomials it can be viewed as a linear operator {D} with

\displaystyle D(x^{n}) = nx^{n-1}.

The derivative operator in general satisfies many properties; one is the product rule:

\displaystyle D(fg)=fD(g) + gD(f).

This rule is usually credited to Gottfried Leibniz. Somehow the great Isaac Newton did not know this rule, or at least that is what some claim.

Definition

Barbeau defined the derivative of a natural number in 1961. Define {D(n)} for a natural number {n} by the following rules:

  1. {D(p)=1} for all primes {p}.
  2. {D(1)=0}.
  3. {D(kl)=kD(l)+lD(k)} for all natural numbers {k,l}.

Here is a picture from his paper:

[Image: the definition as it appears in Barbeau's 1961 paper.]

This shows that he really did define it a long time ago. Note the typewriter typeface: no LaTeX back then. He proved the basic result that {D(n)} is well defined. This is not hard, but it is necessary to make the definition meaningful; we leave it unproved here. See his paper for details.
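To make the definition concrete, here is a small Python sketch of my own, not from his paper, that computes {D(n)}. It rests on an easy consequence of the rules: each prime factor {p} of {n}, counted with multiplicity, contributes {n/p} to {D(n)}.

    def D(n: int) -> int:
        """Arithmetic derivative of a natural number, via Barbeau's rules."""
        if n <= 1:
            return 0                  # rule (2): D(1) = 0
        total, m, p = 0, n, 2
        while p * p <= m:             # plain trial division
            while m % p == 0:
                total += n // p       # each prime factor p contributes n/p
                m //= p
            p += 1
        if m > 1:                     # whatever remains of m is prime
            total += n // m
        return total

    assert D(5) == 1                          # rule (1): 5 is prime
    assert D(6) == 2 * D(3) + 3 * D(2) == 5   # rule (3)
    assert D(2 + 3) != D(2) + D(3)            # there is no sum rule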

A simple consequence of the rules is that {D(p^{k}) = kp^{k-1}} for {p} a prime. This follows by induction on {k}. For {k=1} it is rule (1). Suppose that {k>1}:

\displaystyle \begin{array}{rcl} D(p^{k}) &=& pD(p^{k-1}) + D(p)p^{k-1} \textit{ by the product rule;} \\ &=& p(k-1)p^{k-2} + p^{k-1} \textit{ by rule (1) and by induction;} \\ &=& kp^{k-1}. \end{array}

Unfortunately the power rule does not hold for general {n}. Also {D} is not a linear operator: {D(3+2)=D(5)=1}, while {D(3)+D(2)=2}. This double failure, that the derivative of a power is not simple and that the derivative is not linear in general, makes {D} difficult to use. One of the beauties of the usual derivative, even just for polynomials, is that it is a linear operator.

Results

The derivative notion of Barbeau is interesting, yet it does not seem to have been studied intensively. I am not sure why; perhaps because it is such a strange function.

There is hope. Recently there have been a number of papers on his notion. Perhaps researchers are finally starting to realize there may be gold hidden in the derivative of a number. We will see.

Most of the papers on {D} are more about intrinsic properties of {D} than about applications. A small point: most of the papers write {n'} in place of {D(n)}, so if you look at the papers be prepared for this notation shift. I decided to follow the original paper's notation.

The papers have results of three major kinds. One kind is the study of what are essentially differential equations. For example, what can we say about the solutions of

\displaystyle D(D(n)) = a,

where {a} is a constant? A second kind is growth or inequality results: how fast and how slowly can {D(n)} grow? For example, for {n} not a prime,

\displaystyle \sqrt{n} \le D(n) \le n\log(n)/2.

A third natural class of questions is: can we extend {D} beyond the natural numbers? It is easy to extend to the integers, a bit harder to the rationals, and not so easy beyond that.
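As a quick sanity check on the growth bounds and the equation {D(D(n))=a}, here is a small experiment with the {D} sketched earlier. I read the logarithm in the upper bound as base 2, an assumption on my part; it makes powers of two the equality case.

    import math

    # Check sqrt(n) <= D(n) <= n*log2(n)/2 for composite n; note that for
    # n >= 2 we have D(n) = 1 exactly when n is prime.
    for n in range(4, 10_000):
        dn = D(n)
        if dn == 1:
            continue                  # n is prime; the bounds concern composites
        assert math.sqrt(n) <= dn <= n * math.log2(n) / 2, n

    # The case a = 1 of D(D(n)) = a asks for n whose derivative is prime.
    print([n for n in range(2, 50) if D(D(n)) == 1])   # the smallest are 6 and 10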

Here are two interesting papers to look at:

  • Investigations on the properties of the arithmetic derivative, which is a paper by Niklas Dahl, Jonas Olsson, and Alexander Loiko.
  • How to Differentiate a Number, which is a paper by Victor Ufnarovski and Bo Åhlander.

An Application

I tried to use {D} to prove something interesting. I think that if we could use {D} to prove a statement that does not mention {D} at all, that would be exciting. Tools must be developed in mathematics, but the key test of their power is their ability to solve problems from other areas. One example: the power of complex analysis became manifest when it was used to prove deep theorems of number theory. Another example: the power of the theory of Turing machines was clear when it was used to give an alternate proof of the Incompleteness Theorem.

The best I could do is use {D} to prove an ancient result: that {\sqrt{2}} is not rational. Well, I may be able to prove a bit more.

We note from the product rule that {D(w^{2}) = wD(w) + wD(w) = 2wD(w)} for any {w}. Recall that if {w} were prime this would be {D(w^{2}) = 2w}.
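A quick numerical check of this identity with the sketch from above, as reassurance rather than proof:

    # The square rule D(w^2) = 2*w*D(w) holds for every w, prime or not.
    assert all(D(w * w) == 2 * w * D(w) for w in range(1, 500))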

Now assume by way of contradiction that {\sqrt{2}} is a rational number. Then for some positive integers {x,y} we have

\displaystyle x^{2} = 2y^{2}.

As usual we can assume that {x,y} are co-prime.

So let's take derivatives of both sides of the equation; we have to use {D} sometime, so we might as well start now.

Note that it is valid to apply {D} to both sides of an equation, so long as one is careful to obey the rules. For example, {5=2+3} allows {D(5) = D(2+3)}, but there is no additive rule to make the right-hand side become {D(2)+D(3)}, which would make the equation false.

Taking the derivative of both sides gives:

\displaystyle \begin{array}{rcl} D(x^{2}) &=& D(2y^{2}) \\ 2xD(x) &=& D(2)y^{2} + 2D(y^{2}) \\ 2xD(x) &=& y^{2} + 2D(y^{2}) \\ 2xD(x) &=& y^{2} + 4yD(y). \end{array}

Now square both sides and substitute {x^{2}} for {2y^{2}} to get:

\displaystyle \begin{array}{rcl} 4 x^{2} D(x)^{2} &=& y^{4} + 8 y^{3} D(y) + 16 y^{2} D(y)^{2} \\ 4 x^{2} D(x)^{2} &=& y^{4} + 4x^{2} y D(y) + 8 x^{2} D(y)^{2} \\ x^{2} (4D(x)^{2} - 4yD(y) - 8D(y)^{2}) &=& y^{4}. \end{array}

This implies that {x^2} divides {y^{4}}. If {x>1}, then any prime dividing {x} also divides {y}, contradicting that {x,y} are co-prime; and {x=1} is impossible since {x^{2}=2y^{2}\geq 2}. Whether we also get that {x} divides {y^{2}} is possibly circular, but anyway this is enough. The point is that owing to {D(2) = 1}, the derivative removed the problematic factor of {2}.

Note that from the original equation {x^{2}=2y^{2}} we only get that {x} divides {2y^{2}}, which is too weak to immediately get a contradiction. Admittedly ours is not the greatest proof, and it is not better than the usual one, especially owing to the squaring step, but it does use the derivative of a number.

One idea: I believe this approach can be used to prove more than the usual fact that {x^{2}=2y^{2}} has no nonzero solutions over the integers. I believe we can extend it to prove the same result in any ring where {D} can be defined, possibly modulo issues about lack of unique factorization. This handles the Gaussian integers, for example.

Open Problems

Can we use this strange function {D(n)} to shed light on some open problem in number theory? Can we use it in complexity theory? A simple question is: what is the complexity of computing {D(n)}? If {n=pq} where {p} and {q} are primes, then {D(n)=D(pq)=p+q} by the rules. But we also know {n=pq}, so we have two equations in two unknowns and can solve for {p} and {q}. So in this case computing {D(n)} is equivalent to factoring {n}. What happens in the general case? An obvious conjecture is that computing {D} is equivalent to factoring.
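In code, the two-equations-in-two-unknowns observation looks as follows: given {n=pq} for distinct primes {p,q} together with {D(n)=p+q}, the primes are the roots of {t^{2}-D(n)t+n=0}. A minimal sketch, assuming the {D} from earlier; the helper name is my own.

    import math

    def factor_semiprime(n: int, dn: int):
        """Recover p and q from n = p*q and dn = D(n) = p + q."""
        disc = dn * dn - 4 * n        # (p + q)^2 - 4pq = (p - q)^2
        r = math.isqrt(disc)          # r = |p - q|
        return (dn - r) // 2, (dn + r) // 2

    p, q = 101, 103
    assert factor_semiprime(p * q, D(p * q)) == (p, q)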

[fixed error in proof that owed to typo {D(y^2) = y^2 D(y)}, as noted by user "Sniffnoy" and others, and changed some following text accordingly; further fix to that proof; fixed typo p^k to p^{k-1}]

31 Comments
  1. August 19, 2014 11:26 am

    Isn’t 2*D(y^2) wrongly derived as 4y^2 D(y) instead of 4y D(y), as per the rules mentioned above? Or am I wrong in thinking so?

  2. August 19, 2014 12:04 pm

    In his early attempt to forge a link between logic and mathematics, Leibniz represented simple predicates or primitive propositions as prime numbers and logical conjunction as multiplication.

    See, for example :

    http://permalink.gmane.org/gmane.comp.inquiry/408

    In that spirit and in the present context we may well ask, What is the derivative of a proposition?

  3. August 19, 2014 12:19 pm

    Should it be considered *strange* that D is not linear? That has always struck me as one of the key properties of the derivative in other settings. (Are there natural “derivatives” in settings I’m not familiar with that aren’t linear?)

  4. August 19, 2014 12:42 pm

    There seems to be an error in your proof; D(y^2) is 2yD(y), not 2y^2 D(y^2).

    As for why it may not have been studied so much: I suspect it has a lot to do with it not being additive. This is the most obvious way to define an arithmetic derivative, but because it’s not additive, it leaves open the possibility that there are better ways; it’s not clear that this is the arithmetic derivative. And indeed I’m pretty sure other people have defined other arithmetic derivatives, making this one seem a bit less natural and a bit more arbitrary.

    • August 19, 2014 2:42 pm

      Yes, I also think that there has been a mistake.

      Here is another definition of arithmetic derivative: http://mathoverflow.net/questions/142808/strange-or-stupid-arithmetic-derivation

      • August 19, 2014 2:54 pm

        That one seems distinctly worse than Barbeau’s, though. It doesn’t satisfy the Leibniz rule, and, well, really I’m failing to see any advantage over it. The resemblance to a derivative appears to be superficial.

    • August 19, 2014 5:15 pm

      Thanks! Fixed, we think. A strange bug (maybe owing to the line with y^4 on the RHS having a different overhang), which also affected the Preview feature, required hitting Update multiple times.

  5. Colin Percival permalink
    August 19, 2014 4:06 pm

    Typo in the equations under ‘Definition’, after ‘Suppose that k > 1’ — the last line should be = k p^{k-1}.

  6. Gustavo Massaccesi permalink
    August 19, 2014 4:07 pm

    There is a small typo in the D(p^k) formula: a missing “-1” in the last exponent.

    Where it says: D(p^{k}) = [...] = kp^{k}

    It should say: D(p^{k}) = [...] = kp^{k-1}

  7. August 19, 2014 4:45 pm

    Perhaps a more “natural” notion is the logarithmic derivative of a number, defined as d(n) := D(n)/n. This is “log linear”, in the sense that d(xy) = d(x) + d(y) for any two positive integers x and y. Starting from d(1) = 0 and d(p) = 1/p for any prime p, this also shows that it is an immediate consequence of unique factorization that D is well defined.

  8. August 19, 2014 6:37 pm

    Reblogged this on Anurag's Math Blog and commented:
    An interesting notion. It might be a good idea to prove some other results from elementary number theory using this so-called derivative.

  9. August 19, 2014 8:01 pm

    It seems like D(p) for prime p can be defined freely and still keep the function well-defined (i.e. obeying the product rule). For example, D(p)=p gives D(n) = n * (number of prime factors of n, with multiplicity).

    • August 20, 2014 3:52 am

      Yes, this also follows immediately if you consider the function d(n) = D(n)/n in my comment above (which is essentially “linear” over the lattice of prime factorizations).

  10. August 20, 2014 1:52 am

    Aaron, combining your proposal with PS’s indicates that the logarithmic derivative of a number counts its prime factors with multiplicity. Is it too far-fetched to see an analogy with the “argument principle” in complex analysis? (1/(2πi) times the contour integral of the log derivative counts zeros with multiplicity; poles are zeros with negative multiplicity.)

    I think in some sense all that’s really going on is: if a = a1.a2…an, then log derivative of a = sum of log derivatives of aj. So if you arrange that the log derivative of a foo is 1, then a product of n foos has logarithmic derivative n. Substitute foo = “prime number” for Aaron’s comment; substitute foo = “thing of form (z-z0)” for the argument principle (modulo some important technicalities).

    But maybe we can use the idea more directly. Take a family of complex functions, indexed by the prime numbers, where fp has a pole at zp of residue ap. Define fn = product fp where n = product of primes p (with repetition as required). Then the Aaron-generalized “derivative” of n, where D(p) = ap, is 1/(2πi) times the contour integral of fn around a region containing all the zp. Alternatively, fix the function — a single f with poles of residue ap at points zp — and vary the contour, so that Cn winds about zp the number of times that p divides n; then D(n) = the contour integral of f around Cn. Could there be something useful there? (I have to confess I rather doubt it.)

  11. cyru permalink
    August 20, 2014 2:42 am

    Thank you for this very interesting article. Isn’t there a little mistake in the proof that sqrt(2) is irrational, where 8y^3*D(y) became 4x^2 * y instead of 4x^2 * y * D(y)?

    • August 20, 2014 2:09 pm

      Thanks. Putting in the missing D(y) did not change the essence this time.

  12. Paul Rio permalink
    August 20, 2014 7:33 am

    Serious paper about the P versus NP Problem

    Abstract: UNIQUE SAT is the problem of deciding whether a given Boolean formula has exactly one satisfying truth assignment. UNIQUE SAT is coNP-hard. We prove UNIQUE SAT is in NP, and therefore NP = coNP. Furthermore, we prove that if NP = coNP, then some problem in coNPC is in P, and thus P = NP. In this way, the P versus NP problem is solved with a positive answer.

    Paper: http://hal.archives-ouvertes.fr/hal-00984866

    • August 20, 2014 3:15 pm

      @Paul Rio

      I went through most of it. The writeup doesn’t appear to be valid to me. It fails a couple sanity checks and seems like it is too easy to generalize towards contradictory results (Replace polynomial time computable with decidability). Putting these aside, I found a basic conceptual error. It’s sort of distributed throughout the approach. I feel like whatever one thing I could pull out could be rebutted using an error of similar flavor elsewhere in the paper. I’m not interested in getting into that. Others might be willing to highlight something specific, though.

      I Googled the author’s name afterwards. He’s submitted at least two P vs NP papers (showing that P != NP). You can find links to them here (http://www.win.tue.nl/~gwoegi/P-versus-NP.htm). Make of that what you like.

      TL;DR: Looks incorrect due to some conceptual errors.

      • Paul Rio permalink
        August 21, 2014 10:06 am

        I found the same conceptual error, but I didn’t give any importance to this, because the whole result is pretty important, and even though it might not be entirely correct, it could bring very interesting new advances on this problem.

      • Paul Rio permalink
        August 21, 2014 10:54 am

        The conceptual error that I found was that the author uses the term compute many times when he should use decide. Besides, he could have another mistake in the rejection of N_(SAT) (I would not put the state of rejection in the Turing machine N_(SAT), because it is a proved result that every Turing machine which accepts in polynomial time also decides in polynomial time). However, these errors could be fixed in a simple way.

      • Paul Rio permalink
        August 21, 2014 11:13 am

        In his case I would have the Turing machine N_(SAT) reject only when N_(SAT) exceeds a considerable number of steps on the input. I would not put the state of rejection in the Turing machine N_(SAT), because it is a proved result that every Turing machine which accepts in polynomial time also decides in polynomial time. But these conceptual errors do not change the whole result of this article. That’s why I consider it a serious result on the P versus NP problem.

      • August 25, 2014 5:31 pm

        @Paul Rio

        The conceptual errors kill the entire proof. I wasn’t looking at the difference between compute/decide, though. These errors basically end up assuming what he is trying to prove.

        He isn’t using reductions correctly (at all, tbh). Instead of just doing deterministic preprocessing of the input (to transform it into something else), the author is modifying the entire machine. His new machines aren’t simply non-deterministic (in the NP sense), but are more powerful machines with alternations on the outside. For example, he existentially searches for 1 satisfying assignment and then uses that information to do additional computation. This is actually using an NP machine as an oracle. If you equate something higher up in the PH with NP, then obviously it’s going to collapse to the 1st level.

        With the second part of the proof, he reasons that the set of NP-complete problems is closed under union. This is definitely NOT true.

        A big tip-off for papers like these is what information we get out of them. While reading this paper, I learned absolutely nothing interesting about the nature of polynomial time. There wasn’t even serious discussion of it. The mechanics just happened to be oriented towards P vs. NP for little specific reason. I could go in and replace the word polynomial with exponential or computable; the logic would probably check out (assuming it was good in the first place).

  13. Martijn permalink
    August 20, 2014 10:47 am

    Interesting article! Question: I intuitively understand the concept of a ‘classic’ derivative as the slope of a function.

    The function D(n) introduced here resembles the ‘classic’ derivative in that rule (3) matches the product rule. The rest of the function seems quite random to me.

    Am I missing something?

    • August 23, 2014 8:18 pm

      Derivatives and differentials have to do with local, linear approximations to functions, beginning with functions of type \mathbb{R}^n \to \mathbb{R} and moving on to ever fancier generalizations.

      Algebraic operators that obey reasonable facsimiles of the Leibniz formula are usually called derivations.

  14. Dez Akin permalink
    August 20, 2014 4:55 pm

    There are many strange functions that aren’t very well understood in mathematics. Continuations of hyperpowers come to mind.

    If playing with this function, it might be interesting to define the antiderivative, then the differintegral, and extend it to fractional calculus via the gamma function (or other continuations of the factorial).

    If hyperpowers are any guide, even something with a simple definition presents large challenges to understanding.

  15. August 21, 2014 2:51 pm

    I think computing D(N) is equivalent to factoring in general also. It should not be difficult to see, by induction on n, that for numbers of the form N = \prod_{i=1}^{n} p_{i}^{k_{i}} we have D(N) = N\cdot\left(\sum_{i=1}^{n} \frac{k_{i}}{p_{i}}\right). So it is easy to compute D(N) if we are given the complete factorization of N. Now for the other direction, we use P to denote \prod_{i=1}^{n} p_{i} and P_{i} to denote \frac{P}{p_{i}}. Hence we are given D(N) = N\cdot\left(\frac{\sum_{i=1}^{n} P_{i}\cdot k_{i}}{P}\right) and N, and we want to compute the p_{i}’s. Let us use S to denote the sum \sum_{i=1}^{n} P_{i}\cdot k_{i}. If P\nmid S then the gcd of N and D(N) gives us a nontrivial factor of N. If P\mid S, then it is not hard to see that \forall i\in[n]: p_{i}\mid k_{i}. So let us say k_{i} = p_{i}\cdot e_{i}. In this case the number of bits in N is \sum_{i=1}^{n} e_{i}\cdot p_{i}\cdot\log p_{i}. Hence we can test whether N is divisible by the primes up to \log N, giving us all prime factors of N. Does this seem correct, or are there mistakes?

  16. September 6, 2014 12:50 pm

    As regards open problems, there is a famous link to Giuga’s Conjecture which is explored very nicely here: https://cs.uwaterloo.ca/journals/JIS/VOL15/Oller/oller5.pdf
