# A Post on Post

*Emil Post's role in shaping modern computational theory*

Emil Post was one of the founders of the modern theory of computation. He is famous for his own definition of computing based on string rewriting rules, his Post Correspondence Problem, and many other wonderful discoveries, including the notion of many-one reduction.

Today I want to talk about Post, and his great influence on modern complexity theory.

Until the other day I believed Post was a high school teacher for most of his career. For me, it was unimaginable how one could turn out brilliant work, as Post did, while having to teach all day, every day.

The correct story, according to the archives at the American Philosophical Society, is Post had an attack of mental illness—manic depression—and lost a university position in the early 1920’s. He then survived by teaching high school in New York City until 1932, when he moved to a position at the City College of New York (CCNY). He stayed there the rest of his career, even though he suffered many further attacks of mental illness. At CCNY his teaching load was high—16 contact hours per week. Yet he continued to do important research and was one of CCNY’s most popular teachers.

The question of high teaching loads reminds me how lucky we are, and I wonder what other great results a brilliant mind like Post’s could have achieved in a different environment. Perhaps he could have solved many more problems—unfortunately we will never know.

In the 1980’s Ravi Kannan was being recruited to come to Georgia Tech: this is when Rich DeMillo was first at Tech as a faculty member, before he returned to be the Dean of the College of Computing. Then, Tech was on the quarter system for teaching. During Ravi’s job interview he met various faculty members for the usual 30-minute slots. One slot was a meeting with a faculty member I will call X. During their meeting, Ravi quickly realized he had little in common with X, who was working in systems programming. So rather than talk about research, he asked X what the teaching load at Tech was. Faculty X answered that it was four. Since Ravi was coming from Berkeley, where the load was 2-1-1 and they were also on the quarter system, Ravi answered that four courses a year sounded fine. Faculty X said it was four courses *per quarter*. Ravi said, “You mean the load here is 12?” Finally, faculty X said,

“Look, can’t you do arithmetic? The load is 16 courses a year: there are four quarters and four courses a quarter. Four times four is sixteen.”

Ravi left their meeting a bit shocked: 16 courses a year—what a load. Luckily he immediately ran into DeMillo who explained that faculty X was on a different schedule. For research active faculty the load was indeed 2-1-1. Ravi did not come to Georgia Tech: I do not think this was the reason, but who knows.

Post lost an arm as a child and suffered from mental illness as an adult. Yet, he was able to overcome all this and become one of the founders of the theory of computing. There is a wonderful tribute to Post by Martin Davis; here is a comment on Post’s teaching style:

> Post’s classes were tautly organised affairs. Each period would begin with student recitations covering problems and proofs of theorems from the day’s assignment. These were handed out apparently at random and had to be put on the blackboard without the aid of textbooks or notes. Woe betide the hapless student who was unprepared. He (or rarely she) would have to face Post’s “more in sorrow than anger look”. In turn, the students would recite on their work. Afterwards, Post would get out his 3 by 5 cards and explain various fine points. The class would be a success if he completed his last card just as the bell rang. Questions from the class were discouraged: there was no time. Surprisingly, these inelastic pedagogic methods were extremely successful, and Post was a very popular teacher.

It would have been an honor to have met Post.

## Post’s Theorem

Post proved many beautiful theorems, but perhaps the most important is the classic theorem:

**Theorem:** A set {S} is recursive if and only if both {S} and {\bar{S}} are recursively enumerable.

Recall a set {S} is recursive if there is a decision procedure to decide whether a given {x} is in {S} or not; the set is recursively enumerable (r.e.) if there is a generator that enumerates all the elements of the set. The generator can generate the elements with repeats and in any order at all. The modern name is computably enumerable (c.e.): my use of the older term is due to my learning the basics of this work many years ago.

You all probably know the proof of this theorem, but here is a sketch just in case you do not recall it. The interesting direction is the following: suppose {S} and {\bar{S}} are both r.e., and you wish to test whether or not {x} is in {S}. There is a generator {G} for {S} and a generator {\bar{G}} for {\bar{S}}. Simulate both generators at the same time: run {G} for one time step, then {\bar{G}}, and so on. Eventually {x} must be generated by {G} or by {\bar{G}}. If by {G}, then {x \in S}; if by {\bar{G}}, then {x \notin S}.
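The simulation can be sketched in a few lines of code. This is just an illustration: the two generator functions below stand in for arbitrary r.e. enumerations, with a toy case where the set is the even numbers and its complement the odd numbers.

```python
from itertools import count

def decide(x, gen_S, gen_comp):
    """Decide membership of x in S by dovetailing two enumerators.

    gen_S and gen_comp are generator functions enumerating S and its
    complement. Since the two sets partition the universe, x must
    eventually be produced by exactly one of them, so the loop halts.
    """
    g1, g2 = gen_S(), gen_comp()
    while True:
        if next(g1) == x:
            return True   # x was enumerated into S
        if next(g2) == x:
            return False  # x was enumerated into the complement

# Toy example: S = even numbers.
def evens():
    for n in count():
        yield 2 * n

def odds():
    for n in count():
        yield 2 * n + 1

print(decide(10, evens, odds))  # True
print(decide(7, evens, odds))   # False
```

Note that without the guarantee that both enumerations together cover everything, the loop could run forever; that is exactly why r.e. alone does not give decidability.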

What a beautiful proof. This simple result is one of the cornerstones of modern computation theory. It also established a difference between “recursive” and “primitive recursive,” at a time when the latter was argued by some to be the proper notion of computation.

I have often wished we had a similar theorem in complexity theory. The best we might hope for is what one could call a “Post++ Theorem”:

**“Theorem”:** A set {S} is in P if and only if both {S} and {\bar{S}} are in NP.

Many do not believe this, but it could be true. If it were true it would be quite powerful:

- It would give a proof that primality is in P.
- It would also show that linear programming is in P.
- It would break most crypto-systems.

Even though the first two are known now, Post++ would have supplied a proof long ago: for example, it has long been known that primality is in both NP and co-NP. Unfortunately, there does not seem to be a simple proof or even a complex proof of this theorem.

## Post’s Problem

Post gave a landmark address at the American Mathematical Society in 1944, where he helped set the agenda of computability theory for decades. One of the key questions he raised is:

Is there an r.e. set that is neither recursive nor equivalent to the halting problem?

All “natural” problems seemed to be complete, and Post wanted to know if there were sets between recursive and the halting problem with respect to Turing equivalence. This question became known as Post’s Problem.

In an attempt to solve this problem Post, and others, defined a series of properties of r.e. sets. The idea was to find a property so there were r.e. sets with the property, yet such sets could not be equivalent to the halting problem. This was Post’s motivation for defining the notion of *simple* sets, for example.

All of these attempts failed. Note, it is not hard to define explicitly a set that is harder than the halting problem: consider the set of all programs {e} so that the Turing machine {M_e} defines a **total** function. Thus, {e} is in the set if an infinite number of halting questions are all true. This set can be shown to be harder than the halting problem; it is at level {\Pi_2^0} of the arithmetical hierarchy.
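Written out with its quantifiers, using {M_e} for the machine with index {e}, this set of total programs is:

```latex
\mathrm{TOT} \;=\; \{\, e \;:\; \forall x \,\exists s \;\; M_e(x) \text{ halts within } s \text{ steps} \,\}
```

The {\forall \exists} prefix is what places {\mathrm{TOT}} at level {\Pi_2^0}; the halting problem itself needs only a single {\exists}.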

## Solutions to Existence Problems in General

Post’s problem was to construct a set with given properties. Problems of this type abound in mathematics: construct an object {X} with certain properties. For example, suppose that we want to show there is a function defined on the interval {[0,1]} that is continuous but nowhere differentiable. There are at least three approaches:

**Explicit Construction:** Try to define an explicit function that is continuous and yet at every point fails to have a derivative. Using standard functions like {x^n}, {\sin x}, {e^x}, and so on does not work: any function built from finitely many of them is differentiable away from isolated points, and so will never solve the problem.

**Limit Construction:** Try to use a limiting process. This is the way that such functions were first constructed. Karl Weierstrass, the famous analyst, defined {f(x)} as

{f(x) = \sum_{n=0}^{\infty} a^{n} \cos(b^{n} \pi x)}

where {0 < a < 1}, {b} is a positive odd integer, and

{ab > 1 + \frac{3}{2}\pi.}

He proved this function was continuous and yet had no derivative at any point.
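One can get a numerical feel for this behavior. The sketch below evaluates partial sums of the Weierstrass series with {a = 1/2} and {b = 13} (parameters I chose so that {ab > 1 + \frac{3}{2}\pi} holds) and prints difference quotients at a point as the step shrinks; unlike for a differentiable function, they do not settle toward a limit.

```python
import math

def weierstrass(x, a=0.5, b=13, terms=60):
    """Partial sum of the Weierstrass series  sum_n a^n cos(b^n pi x)."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

# The series is dominated by sum a^n, so partial sums converge fast and the
# limit is continuous.  At x = 0 every cosine is 1, so f(0) = 1/(1-a) = 2.
print(weierstrass(0.0))

# Difference quotients at a point as h shrinks: for these parameters they
# typically grow in magnitude instead of converging to a derivative.
x0 = 0.3
for k in range(1, 7):
    h = 10.0 ** (-k)
    q = (weierstrass(x0 + h) - weierstrass(x0)) / h
    print(f"h = 1e-{k}: difference quotient = {q:.1f}")
```

This is only suggestive, of course; the actual proof of nowhere-differentiability is analytic, not numerical.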

**Random Construction:** Try to use a random process. Simply selecting a random continuous function works. The problem with this approach is defining rigorously what it means for a continuous function to be randomly selected. Once this was solved by Norbert Wiener, this approach showed that “most” continuous functions have no derivatives.
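A rough picture of the random approach: a scaled random walk approximates a Brownian path, Wiener’s model of a randomly selected continuous function. Its increments over a window of width {h} have typical size {\sqrt{h}}, so its difference quotients behave like {1/\sqrt{h}} and blow up as {h \rightarrow 0}. A small sketch (the parameters are mine, chosen only for illustration):

```python
import random

def brownian_path(n, seed=1):
    """Scaled random walk: a discrete approximation to a Brownian path on [0,1]."""
    rng = random.Random(seed)
    dt = 1.0 / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, dt ** 0.5))
    return w

n = 1 << 16
w = brownian_path(n)

# Root-mean-square difference quotient over windows of width h = step/n.
# It grows roughly like 1/sqrt(h) as the window shrinks.
for step in (4096, 256, 16, 1):
    h = step / n
    rms = (sum((w[i + step] - w[i]) ** 2
               for i in range(0, n, step)) / (n // step)) ** 0.5
    print(f"h = {h:.6f}: typical |f(x+h) - f(x)| / h = {rms / h:.1f}")
```

The same heuristic is the content of the rigorous theorem: almost every Brownian path is continuous but nowhere differentiable.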

## Solutions to Post’s Problem

Let’s see how the three approaches worked on Post’s Problem:

**Explicit Construction:** Post tried to define an explicit property to solve his problem. He failed. Many years later, in 1991, Leo Harrington and Robert Soare showed such a property existed.

**Limit Construction:** The first solutions to Post’s Problem were found—almost at the same time—by Richard Friedberg (in 1957) and Albert Muchnik (in 1956). Curiously, their methods were almost the same, and they used a limiting process. This process is one of the fundamental tools in computability theory—it is called the priority method.

Actually the method is the finite injury priority method. I will explain the method, at a high level, in a future discussion. The critical insight for us is that the method constructs the required set as the limit of a series of finite sets, very similar in spirit to how Weierstrass constructed his function. Take a look at this article by Robert Soare for a detailed description of the problem, and the various theories stemming from it.

**Random Construction:** I do not know if there are solutions to Post’s Problem based on this approach.

## Open Problems

Prove the Post++ “Theorem,” or disprove it. A very interesting question in this area is whether or not there is a nontrivial automorphism of the r.e. sets under the Turing equivalence relationship: do the Turing degrees have a nontrivial automorphism?

Finally, why do open problems stay open for years and then get solved independently at about the same time? The Friedberg-Muchnik resolution of Post’s Problem is just one of the many examples of this behavior. I wonder why this happens. Any ideas?


Is it really the case that Friedberg proved his result in his undergraduate thesis? (So claimed Sacks in his undergraduate recursion theory course.)

I have read the same story.

Just a quick nitpick. For the Weierstrass function, I think that you want \sum_{n=0}^{\infty}… instead of \sum_{a=0}^{\infty} …

Thanks for typo…



> Finally, why do open problems stay open for years and then get solved independently at about the same time?

Not surprisingly, sociologists have thought about this problem. A few years back, Malcolm Gladwell wrote a New Yorker article that touches on this subject, and cites some work on the question:

http://www.gladwell.com/2008/2008_05_12_a_air.html

(Sections 5. and 7. talk about this the most.)

While calculus and evolution are well-known examples, apparently oxygen, color photography, logarithms, sunspots, thermometers, telescopes, typewriters, the steamboat, and the telephone can all be attributed to more than one person.

It then seems less surprising when it happens to problems that only a few tens (or maybe hundreds) of people would work on. Once the tools are there, if two experts think about the problem around the same time, an independent solution is not that unlikely.

I guess as the time lag from coming-up-with-solution to publicizing-solution decreases, the definition of “around the same time” above changes, and maybe the likelihood of independent solutions decreases a little bit. On the other hand, with increased and faster communication, both the commonality of tools and the sharing of open problems increase, and that would increase the likelihood of independent solutions. Which effect is larger? What are the other factors? More questions for sociologists to study!

Thanks for this comment. I will read the link. Thanks again.

This is perhaps a comment for another post … but here it is anyways.

As with any field, the corpus of knowledge grows with time, and any “newcomer” has both the benefit of large corpus and the burden of learning it.

Given that theorems are proved from one or more axioms and/or previously proved theorems, is there any effort documenting the theorem graph? For example, start with the set of axioms and the first theorem that was proved using a subset of those axioms; draw edges from those axioms to that theorem, and so on. I am looking for a “wiki-theoremia.”

Automatic theorem provers perhaps can do this, but they may spend time exploring that part of the space which is either not interesting to us or we have not yet come to appreciate fully.
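The theorem-graph idea above can be sketched as a tiny dependency graph. The theorem names are made up for illustration; a real “wiki-theoremia” would of course need actual proof records.

```python
# Toy sketch of a theorem graph: nodes are axioms/theorems, and edges point
# from each premise to the results proved from it.
from collections import defaultdict

edges = defaultdict(list)  # premise -> results proved directly from it

def record_proof(result, premises):
    for p in premises:
        edges[p].append(result)

record_proof("T1", ["A1", "A2"])   # first theorem, proved from two axioms
record_proof("T2", ["A2", "T1"])   # a later theorem that builds on T1

def descendants(node, graph):
    """Everything ultimately depending on node: a simple graph search."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(descendants("A2", edges)))  # ['T1', 'T2']
```

Reversing the edges would answer the dual question: which axioms and lemmas a given theorem ultimately rests on.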

> Finally, why do open problems stay open for years and then get solved independently at about the same time?

Aren’t the vast majority of problems solved by just one person? Assuming that is the case, the answer to this question might merely be that the probability that n people independently solve a problem is a decreasing function of n. So when two people independently solve a problem, it’s just the law of large numbers at work.

I wrote a follow-up post.

There is sort of a random construction that solves Post’s problem, though it deviates a little bit from your definition of “random construction”.

If you merely want a Delta^0_2 set that is not computable and not equivalent to the halting problem, then the random construction is relatively easy. (Delta^0_2 is equivalent to “Turing-reducible to the halting problem”; its complexity analog is Delta^p_2 = P^{NP}.) The construction is as follows: take any Martin-Lof random set A. Let A_0 consist of the even numbers in A and A_1 consist of the odd numbers in A. Then A_0 and A_1 are Turing-incomparable. In particular, neither can be computable nor equivalent to the halting problem.
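Spelled out, the two halves of the construction are simply

```latex
A_0 = \{\, n \in A : n \text{ even} \,\}, \qquad A_1 = \{\, n \in A : n \text{ odd} \,\}.
```

The incomparability comes from van Lambalgen’s theorem: each half of a Martin-Löf random set is random relative to the other, and no set can compute a set that is random relative to it.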

To actually solve Post’s problem — that is, to get a *c.e.* set of intermediate degree — you can build a promptly simple set below a Delta^0_2 ML-random set.

For more details see Cor 3.4.8 and p. 150 of Nies’s book Computability and Randomness.

Sorry about posting to such an old thread, but I wanted to ask this in a relevant place. Is it known whether Post actually originated use of the term “string” for sequences of characters? I haven’t done very much research, but he’s my top candidate so far.

Recently, I got curious about the use of “string” in computer science. I found a lot of discussion at websites such as stackoverflow showing the term used as far back as the 1950s, but it was limited to the context of early programming languages. I was expecting an earlier use, and it did not take me too long to find a 1944 article by Emil Post that uses the term “string” in the current sense. See p. 286 of http://www.ams.org/journals/bull/1944-50-05/S0002-9904-1944-08111-1/S0002-9904-1944-08111-1.pdf ‘For working purposes, we introduce the letter b, and consider “strings” of 1’s and b’s …’

I also double-checked that Turing did not use “string” in “On computable numbers” but referred to “sequences.” Post’s use of quotes also suggests that it was not established terminology at the time.

Finally, there may have been a pre-existing informal use of “string.” A Google books search of the exact phrase “string of letters” turns up numerous examples in 19th century writing, typically used in contrast with meaningful words.

I think it would be nice to nail down the history of a term that is ubiquitous in computer science theory and practice. I was disappointed to see the lack of historical perspective in much of the discussion.

Thank you very much indeed. I’ve just acknowledged a similarly important year-later comment in our chess-endgames thread, and we noted (and linked) two-years-later comments on the Four-Color Theorem in one of the coloring posts this past month. Dick and I have been intending to do one or more “reader-followup” posts, including one as far back as New Year’s 2010, but various personal and medical events have kept it from the front burner.

Now somewhere—perhaps in one of Brian Greene’s books—I recall a passage where someone is quoted as objecting to string theory on grounds that “Nowhere else in Nature are strings fundamental.” The author chided that someone by saying he should have thought of the example of DNA, but to me that harks back to pure binary strings, which are arguably as fundamental as natural numbers.

The usage strikes me as natural enough coming from expressions like “string of beads” that I would expect multiple-origination of people using it, much as multiple instances of the same evolution step are being documented in paleobiology.

The emergent usage of “string” (in its modern computation/storage sense) apparently is linked to the emergent use of the word “file” in its computation/storage sense. Consulting the OED shows that “file” and “string”, in their record-keeping senses, are centuries-old usages.

Thanks for the replies. I’m less interested in the term “file”, which isn’t such a useful concept until you’re talking about physical computers, whereas “string” is a fundamental mathematical abstraction (not that files cannot be abstractions, but the applications in theory are more limited).

For “file”, I always found the analogy pretty obvious to paper filing systems. But a sequence of characters doesn’t even seem much like a string if you think of it as a bare string, except that both can be long and skinny. I agree that the most likely reference is to a string of beads (this was also pointed out to me offline). I also noted above that “string of letters” is found long before the 20th century.

None of this is sufficient to answer the question of how “string” became standard nomenclature. It’s clear to me that the answer is not some original Algol or Lisp manual (as erroneously suggested in a few online answers pages). The 1944 Post article is close to using it as nomenclature, aside from the antiseptic quotes. Anyone know of something earlier, i.e., not just a use in passing like “string of letters” but the repeated reference to a kind of abstraction in a mathematical context?

Oh … I dunno … tracing back through the OED’s etymology, it appears that 500 years ago “fyles” were the abstraction of the educated classes and “strenghes” were the ordinary physical reality of the peasants and soldiers … then around 1950 these roles were interchanged … possibly for no other reason than the switch made the new-fangled math and technology associated to computing machines sound more “cool” … which is a deep reason (if you think about it)! 🙂

I think we’re trying to answer different questions. Existing uses of “string” can explain why a mathematician such as Post would have felt it was a good choice of terminology, but I’m more curious about the first instance of its use as terminology, which is no later than 1944, but possibly earlier. By terminology, I mean that the author first explains what is denoted by the word “string” and then goes on to use the term in a consistent way.

Just to set some rough boundaries, here is an 1840 reference that suggests the modern meaning but does not, in my opinion, rise to the level of terminology http://books.google.com/books?id=h5NJAAAAcAAJ&pg=PA253&dq=%22string+of+letters%22&hl=en&ei=y8AETrfVIsfniALhr4jBDQ&sa=X&oi=book_result&ct=result&resnum=3&ved=0CDgQ6AEwAg#v=onepage&q=%22string%20of%20letters%22&f=false “The circumstance of writing having been in early times a continuous string of letters without a break, occasioned a slight difference when this string came to be apportioned into words.”

Granted, this is a little subjective. While “string” is a good choice, it’s not obvious why the term “word” was not simply extended to refer to a series of characters (maybe because strings often have embedded blanks, making them less word-like). There may not be a single origin, but there is probably a gradual accumulation of uses such as Post’s that would have led to its adoption and standardization. It would be interesting not merely to acknowledge some precedents but actually tabulate where it is used and where it is not used but could be (e.g. in Turing’s paper) over time. I don’t think that the question has been resolved at this level.

We are beginning to realize why inventions and evolution occurred simultaneously around the globe.

This is what I posted on January 14, 2011 at 7:46pm on Facebook… I may have written it a few months before posting. I had no previous exposure, that I am aware of, to the theory of “Multiples” or anything of that nature; I thought these ideas were my own.

> “Actually, I believe the geomagnetic field of the earth contains a holographic record of everything that has ever happened within the scope of the field. This could help explain why human beings evolved into being in multiple locations around the globe simultaneously. This is why monkeys have been known to learn how to use a tool on different islands simultaneously without any help. This is why the same or similar myths arose in different cultures of the world who had no contact with each other. This is also, I believe, where the idea of reincarnation comes from. When people are consciously aware of past lives, they are tapping into the holographic source. This can even explain the origins of ghosts.”

Information has since stumbled upon me that confirms what I suspected and channeled back in 2010… I realize now that I wasn’t the only one thinking about this and it was passed on to me remotely.

**Here is some of what found me:**

**[1]** Author Malcolm Gladwell in The New Yorker writes that the phenomenon of simultaneous discovery, called “multiples” by science historians, is very common (added quote characters for clarity): One of the first comprehensive lists of multiples was put together by William Ogburn and Dorothy Thomas, in 1922, and they found a hundred and forty-eight major scientific discoveries that fit the multiple pattern. Newton and Leibniz both discovered calculus. Charles Darwin and Alfred Russel Wallace both discovered evolution. Three mathematicians “invented” decimal fractions. Oxygen was discovered by Joseph Priestley, in Wiltshire, in 1774, and by Carl Wilhelm Scheele, in Uppsala, a year earlier. Color photography was invented at the same time by Charles Cros and by Louis Ducos du Hauron, in France. Logarithms were invented by John Napier and Henry Briggs in Britain, and by Joost Bürgi in Switzerland. **[1]**

**[2] David Wilcock: The Source Field Investigations — Full Video!**

“In science there is a documented phenomenon called “The Multiples Effect.” Now the phenomenon of multiples was cataloged as having occurred 148 times by 1922. Now what is multiples? Multiples refers to different scientists who are isolated from each-other who are not reading about each-others work independently coming up with exactly the same discoveries at almost exactly the same time.

I’m just going to give you a few examples. Calculus was discovered independently by more than one scientist at a time. The theory of evolution, it wasn’t just Darwin, there were other scientists working on it at the same time. Decimal fractions. The discovery of the oxygen molecule. The discovery of color photography. The discovery of logarithms. The discovery of sunspots in the 1600’s. The discovery of the law of conservation of energy. Six different people simultaneously invented the thermometer, each of them thinking that they discovered it themselves and that they deserve the credit for it. Nine different people independently discovered the telescope at the same time and all wanted to be credited with the discovery. Multiple individuals invented the typewriter at the same time and five different people invented the steamboat at the same time. So what I am telling you is that as we are working at these problems, as our mind is trying to solve something and we start to have new insights, it’s as if those insights are actually creating an informational field… a code that goes into this global mind that we’re all sharing, and other people can access that information directly.” David Wilcock: “The Source Field Investigations” [2]

**[3]** Naturalistic theory: The view that progress and change in scientific history are attributable to the Zeitgeist, which makes a culture receptive to some ideas but not to others.

We can see, then, that the notion that the person makes the times is not entirely correct. Perhaps, as the naturalistic theory of history proposes, the times make the person, or at least make possible the recognition and acceptance of what that person has to say. Unless the Zeitgeist and other contextual forces are receptive to the new work, its proponent may not be heard, or they may be shunned or put to death. Society’s response, too, depends on the Zeitgeist.

Consider the example of Charles Darwin. The naturalistic theory suggests that if Darwin had died young, someone else would have developed a theory of evolution in the mid-nineteenth century because the intellectual climate was ready to accept such a way of explaining the origin of the human species. (Indeed, someone else did develop the same theory at the same time, as we see in Chapter 6.)

The inhibiting or delaying effect of the Zeitgeist operates not only at the broad cultural level but also within science itself, where its effects may be more pronounced. The concept of the conditioned response was suggested by the Scottish scientist Robert Whytt in 1763, but no one was interested then. Well over a century later, when researchers were adopting more objective research methods, the Russian physiologist Ivan Pavlov elaborated on Whytt’s observations and expanded them into the basis of a new system of psychology. Thus, a discovery often must await its time. One psychologist wisely noted, “There is not much new in this world. What passes for discovery these days tends to be an individual scientist’s rediscovery of some well-established phenomenon” (Gazzaniga, 1988, p. 231).

Instances of simultaneous discovery also support the naturalistic conception of scientific history. Similar discoveries have been made by individuals working far apart geographically, often in ignorance of one another’s work. In 1900, three investigators unknown to one another coincidentally rediscovered the work of Austrian botanist Gregor Mendel, whose writings on genetics had been largely ignored for 35 years. Other examples of simultaneous discovery in science and technology include calculus, oxygen, logarithms, sunspots, and the conservation of energy, as well as the invention of color photography and the typewriter, all discovered or promoted at approximately the same time by at least two researchers (Gladwell, 2008; Ogburn & Thomas, 1922).

Nevertheless, the dominant theoretical position in a scientific field may obstruct or prohibit consideration of new viewpoints. A theory may be believed so strongly by the majority of scientists that any investigation of new issues or methods is stifled. An established theory can determine the ways in which data are organized and analyzed as well as the research results permitted to be published in mainstream scientific journals. Findings that contradict or oppose current thinking may be rejected by a journal’s editors, who function as gatekeepers or censors, enforcing conformity of thought by dismissing or trivializing revolutionary ideas or unusual interpretations.

An analysis of articles that appeared in two psychology journals (one published in the United States and the other in Germany) over a 30-year period from the 1890s to 1920 examined the question of how important each article was considered to be at the time of publication and at a later date. Level of importance was measured by the number of citations to the articles in subsequent publications. The results showed clearly that by this measure, the level of scientific importance of the articles depended on whether the “research topics [were] in the focus of scientific attention at the time” (Lange, 2005, p. 209). Issues not in keeping with currently accepted ideas were judged to be less important. [3]

**The Myth Of Unique Ideas**

http://www.podcomplex.com/blog/the-myth-of-unique-ideas/

**Simultaneous Inventions and Patents**

http://www.terminally-incoherent.com/blog/2008/05/28/simultaneous-inventions-and-patents/

**[1] Sometimes Ideas Are in the Air**

http://www.gototheboard.com/articles/Sometimes_Ideas_Are_in_the_Air

**[2] David Wilcock: The Source Field Investigations — Full Video!**

**[3]** P. 15, *A History of Modern Psychology*, by Duane P. Schultz and Sydney Ellen Schultz

http://books.google.ca/books?id=-oAzViZTvp8C&pg=PA15&lpg=PA15&dq=calculus+evolution+oxygen+%22color+photography%22+logarithms+sunspots&source=bl&ots=jsXqbqe78G&sig=SvAbiJOckD41ps0niDUs65Pib8w&hl=en&sa=X&ei=aICOT5XQIsixiQLxjKGRAw&redir_esc=y#v=onepage&q=calculus%20evolution%20oxygen%20%22color%20photography%22%20logarithms%20sunspots&f=false

You can read more about these facts here:

Scientific Proof of Telepathy & The Geomagnetic Field: Dr. Michael Persinger (No More Secrets!) http://www.godlikeproductions.com/forum1/message1816310/pg1

We are beginning to realize why inventions and evolution occurred simultaneously around the globe. everything animate and inanimate, all thoughts, everything.

This is what I posted on January 14, 2011 at 7:46pm on facebook… I may have wrote it a few months before posting. I had no previous exposure that I am aware of the theory of “Multiples,” or anything of that nature, I thought these ideas were my own.

“Actually, I believe the geomagnetic field of the earth contains a holographic record of everything that has ever happened within the scope of the field. This could help explain why human beings evolved into being in multiple locations around the globe simultaneously. This is why monkeys have been known to learn how to use a tool on different islands simultaneously without any help. This is why the same or similar myths arose in different cultures of the world who had no contact with each other. This is also, I believe, where the idea of reincarnation comes from. When people are consciously aware of past lives, they are tapping into the holographic source. This can even explain the origins of ghosts.”

Information has since stumbled upon me that confirms what ‘I’ suspected back in 2010… realizing now that I wasn’t the only one thinking about this and it was passed on to me remotely.

Here is some of what found me:

[1] Author Malcolm Gladwell in The New Yorker writes that the phenomenon of simultaneous discovery, called “multiples” by science historians, is very common (added quote characters for clarity): One of the first comprehensive lists of multiples was put together by William Ogburn and Dorothy Thomas, in 1922, and they found a hundred and forty-eight major scientific discoveries that fit the multiple pattern. Newton and Leibniz both discovered calculus. Charles Darwin and Alfred Russel Wallace both discovered evolution. Three mathematicians “invented” decimal fractions. Oxygen was discovered by Joseph Priestley, in Wiltshire, in 1774, and by Carl Wilhelm Scheele, in Uppsala, a year earlier. Color photography was invented at the same time by Charles Cros and by Louis Ducos du Hauron, in France. Logarithms were invented by John Napier and Henry Briggs in Britain, and by Joost Bürgi in Switzerland. [1]

[2] David Wilcock; The Source Field Investigations — Full Video!

“In science there is a documented phenomenon called “The Multiples Effect.” Now the phenomenon of multiples was cataloged as having occurred 148 times by 1922. Now what is multiples? Multiples refers to different scientists who are isolated from each-other who are not reading about each-others work independently coming up with exactly the same discoveries at almost exactly the same time.

I’m just going to give you a few examples. Calculus was discovered independently by more than one scientist at a time. The theory of evolution: it wasn’t just Darwin; there were other scientists working on it at the same time. Decimal fractions. The discovery of the oxygen molecule. The discovery of color photography. The discovery of logarithms. The discovery of sunspots in the 1600’s. The discovery of the law of conservation of energy. Six different people simultaneously invented the thermometer, each of them thinking that they discovered it themselves and deserved the credit for it. Nine different people independently discovered the telescope at the same time and all wanted to be credited with the discovery. Multiple individuals invented the typewriter at the same time, and five different people invented the steamboat at the same time. So what I am telling you is that as we are working at these problems, as our mind is trying to solve something and we start to have new insights, it’s as if those insights are actually creating an informational field… a code that goes into this global mind that we’re all sharing, and other people can access that information directly.” David Wilcock, “The Source Field Investigations” [2]

[3] A History of Modern Psychology, p. 15. Naturalistic theory: the view that progress and change in scientific history are attributable to the Zeitgeist, which makes a culture receptive to some ideas but not to others.

We can see, then, that the notion that the person makes the times is not entirely correct. Perhaps, as the naturalistic theory of history proposes, the times make the person, or at least make possible the recognition and acceptance of what that person has to say. Unless the Zeitgeist and other contextual forces are receptive to the new work, its proponent may not be heard, or they may be shunned or put to death. Society’s response, too, depends on the Zeitgeist.

Consider the example of Charles Darwin. The naturalistic theory suggests that if Darwin had died young, someone else would have developed a theory of evolution in the mid-nineteenth century because the intellectual climate was ready to accept such a way of explaining the origin of the human species. (Indeed, someone else did develop the same theory at the same time, as we see in Chapter 6.)

The inhibiting or delaying effect of the Zeitgeist operates not only at the broad cultural level but also within science itself, where its effects may be more pronounced. The concept of the conditioned response was suggested by the Scottish scientist Robert Whytt in 1763, but no one was interested then. Well over a century later, when researchers were adopting more objective research methods, the Russian physiologist Ivan Pavlov elaborated on Whytt’s observations and expanded them into the basis of a new system of psychology. Thus, a discovery often must await its time. One psychologist wisely noted, “There is not much new in this world. What passes for discovery these days tends to be an individual scientist’s rediscovery of some well-established phenomenon” (Gazzaniga, 1988, p. 231).

Instances of simultaneous discovery also support the naturalistic conception of scientific history. Similar discoveries have been made by individuals working far apart geographically, often in ignorance of one another’s work. In 1900, three investigators unknown to one another coincidentally rediscovered the work of Austrian botanist Gregor Mendel, whose writings on genetics had been largely ignored for 35 years. Other examples of simultaneous discovery in science and technology include calculus, oxygen, logarithms, sunspots, and the conservation of energy, as well as the invention of color photography and the typewriter, all discovered or promoted at approximately the same time by at least two researchers (Gladwell, 2008; Ogburn & Thomas, 1922).

Nevertheless, the dominant theoretical position in a scientific field may obstruct or prohibit consideration of new viewpoints. A theory may be believed so strongly by the majority of scientists that any investigation of new issues or methods is stifled. An established theory can determine the ways in which data are organized and analyzed, as well as the research results permitted to be published in mainstream scientific journals. Findings that contradict or oppose current thinking may be rejected by a journal’s editors, who function as gatekeepers or censors, enforcing conformity of thought by dismissing or trivializing revolutionary ideas or unusual interpretations.

An analysis of articles that appeared in two psychology journals (one published in the United States and the other in Germany) over a 30-year period from the 1890s to 1920 examined the question of how important each article was considered to be at the time of publication and at a later date. Level of importance was measured by the number of citations to the articles in subsequent publications. The results showed clearly that by this measure, the level of scientific importance of the articles depended on whether the “research topics [were] in the focus of scientific attention at the time” (Lange, 2005, p. 209). Issues not in keeping with currently accepted ideas were judged to be less important.