
Is There a Test for Consciousness?

December 3, 2009

Blum discussed a theory of consciousness recently, and this is a follow-up.

Manny Blum is one of the great leaders of the theory community. He is a Turing Award winner; was the Ph.D. advisor to some of the top leaders in the field; is the creator of whole parts of theory—from Blum complexity, to algorithms of all kinds, to the theory of pseudo-randomness, to much more; and is one of the true visionaries of the field. There is only one Manny Blum.

Today I would like to talk about his recent presentation before FOCS 2009 on “what is consciousness?” I have no answer, but I have thought about his provocative talk and have a question that I would like to share.

When I was at Berkeley in the late 1970’s as a faculty member, Manny was the chair of the Computer Science Department. In those days we used those little pink square sheets of paper to keep track of calls. You may remember them; each had a space for the obvious information: who called, when, and any message. This was before email was common, and we still used the phone as the primary way to reach each other.

Manny had a spike in his office. On the spike were all the pink sheets, in order, of all the calls that he had ever gotten as the chair. I once asked him why he kept them all, and he just smiled. I never knew if he ever went back and looked at old ones, or if he just liked the physical image of all the messages he had handled as chair. I wish we had something as cool looking today with our email. I always like the image of the messages piled up on a spike.

Let’s turn to discuss the issue of consciousness.

My Question

Manny discussed the What Is Consciousness Problem (WICP). He is quite excited about the theory called Global Workspace Theory, which was created by Bernie Baars. As I said before, Manny believes that this theory has the potential, for the first time ever, to begin to explain the WICP.

Since his talk, I have spent some time reflecting on what he said, and I have a simple thought. Manny talked about trying to figure out what consciousness is—trying to solve the WICP. He spoke at length about a variety of ideas that could lead to some computation-like theory of how our brain creates consciousness.

What about a Turing test for consciousness? Is there a way to interact with something and see if it is conscious or not? I have no idea how such a test would go, but the idea would be that it would consist of some kind of interactive scheme. Suppose that X is some entity—it could be my friend Fred, or my dog, or a rock. Is there a series of interactions with X that have the following properties:

  1. If X passes the interactions, then it is reasonable to conclude that X is conscious.
  2. If X fails the interactions, then it is reasonable to conclude that X is not conscious.

The test does not have to be perfect, but having such a test would seem to be “easier” perhaps than understanding the WICP. Let’s call such a test a Blum test. Does it exist?
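Purely as an illustration of the shape such a test would take, here is a minimal Python sketch of the two properties above as a one-sided interactive protocol. Every name in it, from the challenge list to the judging criterion, is an invented placeholder, not a proposal for an actual test:

```python
from typing import Callable, List

# A "subject" is anything that maps a challenge string to a response string.
Interaction = Callable[[str], str]

def blum_test(subject: Interaction, challenges: List[str],
              judge: Callable[[str, str], bool]) -> bool:
    """The subject passes only if it passes every interaction."""
    return all(judge(c, subject(c)) for c in challenges)

# Toy subjects: a rock that never responds vs. an entity that at least answers.
rock = lambda challenge: ""
parrot = lambda challenge: challenge.upper()

challenges = ["describe yourself", "what do you want?"]
judge = lambda challenge, response: len(response) > 0  # placeholder criterion

print(blum_test(rock, challenges, judge))    # False: the rock fails
print(blum_test(parrot, challenges, judge))  # True, under this (far too weak) judge
```

Any real Blum test would live entirely in the choice of `challenges` and `judge`, which is exactly the part no one knows how to write.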

I checked and there has been some work on using the famous Turing test to argue about the WICP. I quote:

The Turing test has generated a great deal of research and philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious.

A test for consciousness would not, in my opinion, be the same as the Turing test. I think that a consciousness test could be quite different. For starters, my dog fails the Turing test by definition, since she cannot talk. But that seems a bit narrow—maybe my dog is conscious?

There is also the so-called mirror test, invented by Gordon Gallup. The test essentially checks whether or not, in our terms, X can recognize X in a mirror. The test does not seem to be what I am looking for. If nothing else, it says that any X that cannot “see” cannot be conscious.

Finally, Atish Das Sarma points out that there could be some connection to the recent work of Brendan Juba and Madhu Sudan on Universal semantic communication. I do not know.

Open Problems

Is there a Blum test for consciousness? What would such a test look like? What do you think?

[minor edits]

45 Comments
  1. L. Zoel permalink
    December 3, 2009 1:20 pm

    Wouldn’t philosophical zombies pass ANY test? By definition, a zombie is externally exactly the same as a human. I think requiring the Turing test to distinguish between humans and zombies is too high a standard, since the only way to tell them apart is to measure their qualia, which is by definition impossible. Of course Hofstadter and Dennett don’t believe in qualia or zombies, which is probably why they think the Turing test makes a good test for consciousness.

  2. Dan Fitch permalink
    December 3, 2009 1:20 pm

    I think it’s problematic to think of this consciousness test as a boolean yes/no dichotomy. Consciousness is probably a sliding scale from 0 to ? (cf. Hofstadter’s arguments) and we should probably be thinking about tests that measure on that scale.

    Interesting post! Curious to see what comments people have.

    • rjlipton permalink*
      December 3, 2009 1:23 pm

      Yes, a sliding scale is reasonable. I am curious too what others think.

  3. mquander permalink
    December 3, 2009 1:27 pm

    It seems to me like a little bit of a non sequitur to say that your dog fails the Turing test because it cannot talk, or to say that a blind person fails the mirror test because he or she cannot see in the mirror. Surely the Turing test is still the Turing test if you did it through text, telepathy, or sign language instead of spoken language. (Regarding the mirror test, I’d say that the “point” of the test is to recognize oneself and distinguish one’s own actions from other events; you could probably cook up the “same test” with a different kind of feedback, although it wouldn’t be as simple.)

    As long as we’re evaluating the results of consciousness instead of the cause of consciousness, we always have to rely on the subject being able to indicate something to the tester. No matter what you pick, you could always come up with a counterexample where there’s some physical reason that the subject can’t communicate in that way.

    • December 8, 2009 4:39 pm

      This thread may be going a little stale but I can’t resist piping in with regard to brain scans. There’s an old Language Log post regarding being distracted by the brain when thinking about the mind: Distracted by Brain. The main quote is “Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation.” They found that even students in their neuroscience class could be misled, and that really only other neuroscience faculty weren’t fooled by brain scans. I would interpret this to mean that when we’re thinking about consciousness, a pattern of firing neurons will tell us remarkably little, at least with our present knowledge of the brain/mind.

      • rjlipton permalink*
        December 8, 2009 7:40 pm

        I should do a follow-up sometime soon. A fun topic.

  4. Markk permalink
    December 3, 2009 1:38 pm

    Can there even be a test for consciousness? The only kind of test I can imagine is some kind of brain scan where there is a distinguishable pattern such that, when that pattern is present, then with probability greater than say 1/2 or 2/3 or something, the people who agree about this test agree that the target is conscious.

    So my question would be, without looking at internal state of the (for want of a better word) brain of an entity, is there any observable thing or pattern that would distinguish a conscious from an automatic response? If you took one of the zillions of papers on intelligence and just changed the word to consciousness, wouldn’t a lot of the same analysis apply?

  5. December 3, 2009 1:47 pm

    One first needs a definition of consciousness before one can test for its presence. Before we can even begin to define this computation (because consciousness is nothing but a computation), we need to understand what its goals are. What computational advantages does the conscious being have over the unconscious zombie?

  6. Ted permalink
    December 3, 2009 1:49 pm

    I’d ask a slightly more fundamental question: does a definition of consciousness exist?

    I’d argue that in the strictest sense, no. It’s going to be like defining computation. Sure, we have Turing machines (etc) but we can’t actually prove the Church-Turing thesis. Everyone just thinks it’s true.

    We might come up with some actual, working definition of consciousness, and we’ll be capable of measuring that, but there will always be that slight disconnect.

    And everybody with the slightest sacred cow will pile onto that disconnect. I think this already happens with the whole concept of “philosophical zombies,” which I find to be simply ridiculous.

  7. December 3, 2009 2:12 pm

    I find the Dennett-Hofstadter argument against zombies and qualia philosophically very compelling (and mind you, I’m an observant Jew who believes in an immaterial, immortal soul as a matter of faith!). I think that philosophically speaking, any entity that passes the Turing test would have to be granted both intelligence and consciousness.

    I don’t think the Turing test is a necessary condition for consciousness (your dog example being a case in point) — but I do believe it’s sufficient.

    Think about a hypothetical scenario where you are arguing with an intelligent entity about whether or not it is conscious. “I’m conscious, I tell you!” it says. “No you’re not,” you insist. Isn’t that absurd, surreal, and ultimately silly? The only reason you believe I am conscious (if you do, of course) is that you presume that my mental states resemble your mental states, and you assume the existence of a rough mapping between them. But that’s YOUR assumption—logically, I can do nothing to convince you that I’m conscious. The old solipsism argument, etc.

    I don’t see any alternative to defining consciousness through intelligence.

  8. December 3, 2009 2:53 pm

    Perhaps the issue should be turned on its head.

    We can recognize inanimate things; that is not difficult, since they do not react. We can recognize machines and formal systems, since their behavior is rigorously precise. We can recognize complex chaotic behavior such as weather patterns, because some elements of ‘formality’ draw them back to near-repeatable patterns. So the question is “what’s left?”

    If beings are not inanimate, nor machines, nor complex (repeatable) phenomena, then there must be something else driving them. Something so complex that it exceeds the bounds of any known formalization. These things we tend to call “conscious”.

    As a guess, I’d say my dog is definitely conscious, and if someone could translate dog-speak to human language, she should hopefully pass a Turing test (or at least a modified one, to account for the simpler concepts she can express). On the other hand, I would assume that most insects are actually just pure little biological machines, running very simple programs. They just exist in the same way plants exist.


  9. Anon permalink
    December 3, 2009 4:26 pm

    To be honest, I am not even sure that I am not a zombie. I can’t see any way I could be convinced that you aren’t one.

  10. Jeremy H permalink
    December 3, 2009 4:35 pm

    Rather than try to find a single test for consciousness, especially given the challenge of even defining it, it seems more useful to come up with successively better tests, ideally each of which has 1-sided error in some sense. So if X passes test A with high probability, then X is conscious. Similarly, if X fails test B with high probability, then X is not conscious. However, failing A or passing B says nothing about X.

    I would claim that before trying to define consciousness, we should try to define intelligence. 100 years ago, people would say that a machine was intelligent if it could play chess. Now that computers can play chess, we either say that chess isn’t a good metric for “real” intelligence or that the computer is cheating by essentially brute forcing the problem.
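    As a sketch of how such a pair of one-sided tests might be combined (the tests themselves are left as unimplemented placeholders; every name here is invented for illustration):

```python
from enum import Enum

class Verdict(Enum):
    CONSCIOUS = "conscious"
    NOT_CONSCIOUS = "not conscious"
    UNKNOWN = "unknown"

# Hypothetical one-sided tests:
#   test A: passing is strong evidence FOR consciousness; failing says nothing.
#   test B: failing is strong evidence AGAINST consciousness; passing says nothing.
def combined_verdict(passes_a: bool, passes_b: bool) -> Verdict:
    if passes_a:
        return Verdict.CONSCIOUS       # one-sided: pass A implies conscious
    if not passes_b:
        return Verdict.NOT_CONSCIOUS   # one-sided: fail B implies not conscious
    return Verdict.UNKNOWN             # failing A or passing B says nothing

print(combined_verdict(True, True))    # Verdict.CONSCIOUS
print(combined_verdict(False, False))  # Verdict.NOT_CONSCIOUS
print(combined_verdict(False, True))   # Verdict.UNKNOWN
```

    The point is that successively better pairs (A, B) could narrow the UNKNOWN region without ever requiring a full definition of consciousness.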

  11. December 3, 2009 4:39 pm

    Generally one has to have definitions to make progress. There’s a paucity of definitions in the consciousness research dept. Under some implied definitions, I conclude:

    All things are conscious to a degree commensurate with their complexity.

    Other definitions come down to things like, does your system design pipe output from higher-level processing back down to sensory processing areas, so that for instance you ‘hear’ yourself thinking? I would guess that, yes, your dog is designed this way — it experiences its own mental processes. If it imagines catching a frisbee, it experiences catching a frisbee to some degree. Just as when we think of a word, motor areas involved in speaking that word are activated.

  12. Andrew permalink
    December 4, 2009 12:31 am

    I’d argue that such a test does not exist. Basically, consider something conscious (like a brain…) vs. a Turing machine simulating it. The “output” would be indistinguishable. For instance, consider a pain nerve being stimulated, causing a person to say “Ouch”. The computer simulation would calculate the state of every neuron in the brain, and eventually show that an “output” nerve signal is generated that would cause the vocal cords to say “Ouch.”

    Not having studied biology, my description of the simulation is probably wrong, but as long as we assume the Church-Turing thesis, there IS a Turing machine that could make a sufficiently accurate computation to be indistinguishable from a brain.

    Similarly, Turing machines could even “discuss” the topic of consciousness the same way that human brains do!

    However, this might just mean that all implementations of certain sufficiently complicated Turing machines are conscious. I find this hard to believe, because the computation could even be done by a mechanical computer, such as the one Babbage designed. At that point we would be considering a collection of gears to be conscious.

    • December 4, 2009 12:43 pm

      I see no reason why one should expect that a computer cannot be conscious at some point. Your conscious experience is nothing but a computation. As you said, that computation can be done by a Turing machine.

  13. Tobias permalink
    December 4, 2009 3:39 am

    I am skeptical that such a consciousness test could exist…
    I have come up with some (rather weak) kind of “diagonalization” argument for Blum tests.

    Let’s suppose we have some procedure that determines whether something is conscious or not, and for simplicity the outcome of this test is true/false.

    Now if X is “conscious” and knows the procedure, it might behave in exactly the way that makes the outcome of the test false, so the test fails.

    One might argue that there are conscious X that are not able to know the procedure, e.g. dogs.
    But at least humans, whom we consider to be conscious, could behave this way and recognize the procedure being applied to them.

    I admit that’s not really an argument against the existence of such a procedure. It may still happen that the test works only if the thing X to be tested does not know the procedure. (Similar to many puzzles: if you know the solution once, you will be able to solve it much faster another time.)
    But somehow this renders the test useless. The procedure may not be repeated, and the requirement that scientific experiments be repeatable and checkable by others fails.
    Even if there are many procedures or even a “scheme” of tests, the thing X to be tested may recognize the scheme and make the test fail after finitely many applications.

    Finally I agree that no progress can be made without definitions.
    If there is no definition of consciousness, there is no possibility to argue that you found the Blum test, as no one knows what you want to test.

    And if one chooses definitions like “X knows about X and its boundaries” or “if X imagines Y, X experiences Y to some degree”, I see no possibility to ever come up with a convincing argument that one has found a Blum test. These definitions use inner “states” of X for the definition of X being conscious. And I don’t think that every inner state of X is necessarily communicated to the outside world. Especially if X does “not want to cooperate”.

    • rjlipton permalink*
      December 4, 2009 9:44 am

      I really like the idea you have. Let me restate: Is there a reasonable model where we could prove that there is no Blum test? I wonder?

      My immediate comments are two: first, one way around some of your issues would be to assume that the test is known but randomized. Another is that, just as with an IQ test, a person may be able to deliberately score lower than they really would. But is that a problem?

      Still I like the idea of trying to prove an impossibility result very much.

      • Personne permalink
        December 4, 2009 2:43 pm

        At first I really liked the idea of using a diagonalization argument. Unfortunately, I don’t think it will work.

        Let’s try to see why:
        - We draw an m*n table, m denoting the mth possible Turing Machine (TM) and n being the nth possible input to it.
        - Then, we hypothesize the existence of a TM-judge (TMJ) which can decide whether the mth TM running on the nth input is conscious (or intelligent, or beautiful, or… anything you might want to call a TM). If she exists, we can fill the table with her output.
        - Finally, we define a TM as the diagonal of this table plus 1 (TMD).

        As this TMD can be nowhere in the table, the TMJ does not exist as first hypothesized, QED.

        Unfortunately, this doesn’t work.

        First, the result is trivial due to the fact that some of the TMs will never halt, including maybe the TMJ: we have to set a limit on the running time, both for the TMs and for the TMJ.

        Once we set this limit, the TMD no longer has any reason to be part of the table, as its running time is necessarily longer than that of any of the TMs.

        So here’s the bug. Sorry Tobias! Maybe a better try will work?
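        The finite shape of the diagonal step itself is easy to demonstrate. A toy illustration only: the rows below are bit-vectors standing in for judge outputs, not real Turing machines, and the table entries are an arbitrary made-up pattern:

```python
n = 5

# Hypothetical judge table: table[m][k] = 1 if the judge declares
# machine m on input k "conscious", else 0. The (m*k) % 2 pattern
# is just arbitrary filler for the demonstration.
table = [[(m * k) % 2 for k in range(n)] for m in range(n)]

# The diagonal "machine" flips entry (i, i) of the table.
diagonal = [1 - table[i][i] for i in range(n)]

# It therefore differs from row m at position m, for every m,
# so it cannot appear as any row of the table.
for m in range(n):
    assert diagonal[m] != table[m][m]
print("diagonal differs from every row")
```

        As the comment above notes, the argument breaks down for real machines once running-time limits enter: the diagonal machine need not belong to the enumerated table at all.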

  14. Josh permalink
    December 4, 2009 2:12 pm

    To the previous commenters who said a definition of consciousness is necessary: in principle, I agree. But I imagine the question “Come up with a test for consciousness” is just a (slightly more constructive) rephrasing of the question “Define consciousness.” Obviously, if we had a test that worked all the time, then that very test could serve as a definition.

    (The lack of a definition does not seem to me like such a subtle hurdle that Manuel Blum, who is very smart and has spent years thinking about this, would have missed it…)

    • rjlipton permalink*
      December 4, 2009 2:49 pm

      I have a strange idea that I want to add to this fun discussion.

      Imagine for the moment that all normal humans have, somehow, a Blum test encoded into them. Thus, when they meet or interact with another entity, they can tell whether that entity is a conscious being or not. Think of it as being hardwired into us.

      We can imagine all kinds of reasons that evolution would want to have such a Blum test built into us. Knowing which entities are conscious beings seems to be important for navigating the world.

      Yet we may not even understand how we “run” the test. We just do.

      The whole point of this is that there are people who appear, in this view, not to have a proper Blum test built in. Or their test is only partially working. Could we explain people with certain diseases as having a missing or damaged Blum test?

      Does this make any sense?

      • Personne permalink
        December 4, 2009 3:10 pm

        Not only does it make sense, but autism has already been suggested as a disorder of the Blum test. Well, we don’t call it a Blum test. Search for “mirror neuron”. 😉

  15. Mankhool permalink
    December 4, 2009 8:08 pm

    In my experience animals, especially cats, can tell whether or not someone is conscious. I’ll elaborate. When I was staying at a friend’s home for several months, his cat would wake me up VERY early every morning by pawing my face. Once I awakened, she ran to her bowl and demanded to be fed immediately. I caught onto this quite quickly and experimented with keeping my eyes closed and playing possum while she smacked me about, but to no avail. She could sense immediately when I awoke.

  16. December 5, 2009 9:59 am

    It may appear that science will never have anything interesting to say about consciousness because consciousness is not observable. And so there would be no way to test a theory of consciousness.

    But actually, it is possible to test a theory of consciousness in a certain circumstance!

    Consciousness is observable as long as it is your consciousness.

    Suppose a theory of consciousness provides a way to transfer your consciousness to, say, a computer. Clearly if it works, then you will know that the theory has some merit, although others will not know this unless they go through this transfer successfully as well.

    If the transfer can go back from the computer to the human, then maybe this would be a reasonable way to convince a lot of people that the theory has merit.

    • Personne permalink
      December 5, 2009 11:14 am

      @Amir Michail

      If you are computable, then I can transfer you into a computer, let you have some fun in it, and then go back to your brain.

      Problem: if you are computable, I don’t need to transfer you at all. I just need to compute the changes in your memory that will make you believe you’ve been transferred. And you’ll never be able to say what really happened!

      This has a taste of undecidability: if I transfer your spirit into a computer, that’s a proof your spirit is software. If your spirit is software, there is no way to prove to yourself that you have ever been transferred into a computer.

      • December 5, 2009 11:21 am


        Perhaps the brain can be shut down while the consciousness is in the computer without causing permanent damage?

      • Personne permalink
        December 5, 2009 1:00 pm

        @Amir Michail

        For sure, but I don’t think it matters too much. If spirit is software, you can have as many copies of the same spirit running at the same time (although they will likely diverge once placed in different environments). If technology is powerful enough to put a computed spirit back into a biological brain, it can also erase the path followed by the brain while the ghost was in the shell.

        The true problem is: the consciousness you’d attribute from within the computer, did it truly exist before the memory of your (biological) brain was changed, or was the computer part just a philosophical zombie? Of course we could ask the computer, but now that’s no more (no less!) than another version of the Turing test.

      • December 5, 2009 5:01 pm


        It is still the case that the person saw the theory working out while his/her consciousness was in the computer.

    • Personne permalink
      December 5, 2009 11:31 pm

      @ Amir Michail

      How would you know, except at the very time you’re within the computer?

      Ok let’s try something else:

      Imagine you have just invented this technology, and then started your own company, and now you sell tickets at $20,000,000 each to be one of the few computer tourists in the world. Things are going very well!

      Too bad: one of the tourists now sues you. Despite the behavior of the computer being as usual with the tourist supposedly inside, the guy claims that in fact he was not conscious during the trip.

      Because of that failure, his golden friends have laughed at him. This has caused some terrible suffering, and a deep depression during which he could have won a lot of money as a trader. Of course that’s your fault. So the plaintiff asks for $10,000,000,000 and the death penalty.

      My friend, you’d better win your case. What will you say to the tribunal?

      Oh, one more thing: please note that the tribunal has called David Chalmers as an expert witness. He has agreed to testify.

      • December 5, 2009 11:40 pm

        The issue here is whom do you need to convince that the theory is (mostly) correct and for how long do they need to stay convinced of this?

        Is it sufficient if people are only convinced of the theory during the time in which their consciousness is in the computer?

        What if many people go through this process and spend much of their lives in a computer? Is that better?

    • Personne permalink
      December 6, 2009 2:54 pm

      Not exactly: the issue is to determine whether the tourist is cheating.

      Fortunately, Marvin Minsky has heard about your case, and about Chalmers mumbling something about the tourist possibly having been replaced by a philosophical zombie during his trip. He’s angry, and very willing to come to your rescue.

      So he proposes a very simple experiment: the tourist will be placed in a comfortable chair waiting for another trip into the computer. But now the simulated environment is… the same one experienced by the biological brain while waiting in a comfortable chair for a computer trip.

      A series of ascending integers will be presented, half in the real environment, half in the simulated one. To prove he is not cheating, the tourist will have to indicate which numbers he has seen. Of course he won’t know which ones appeared in the simulated environment only. The judge will.

      Congratulations, M. Amir, the tourist has just abandoned the case. 😉

  17. December 5, 2009 1:25 pm

    If there is a reliable test for consciousness, then since the test implies that consciousness is repeatable, consciousness itself must be a formal system of some type. But that implies that we are nothing more than biological machines, and that our behaviors must be strictly defined (and thus there is no free will).

    I know some people are predictable, but not entirely (with certainty), which to me implies that they are not rigid, and as such their behavior cannot be described within a formal system. The higher forms of creatures (on this planet) behave in exceedingly more complex manners as their intellectual capabilities approach our definition of “intelligent”. Our patterns of behavior get way more complex. Too complex (perhaps) to be formalized. Too complex to be modeled in a consistent and reliable manner.

    In that sense, to me it feels more like my internal definition of “random”. Random is what is left when you take away all of the other “patterns”. As such, much like our concept of infinity, there is no “random pattern”; it is just the infinitely large void of patterns left after you remove the infinitely large set of possible patterns (yes, I see two ugly infinities fighting each other).

    It might be that we are just extraordinarily complex and chaotic, in the same sense that weather is. Or we may supersede that with some even less deterministically decipherable, but larger, degree of complexity (or even something completely unbounded).

    Either way, just as we know that even the most complex ‘weather simulation’ would quickly diverge from the original, copies of human systems transplanted would also diverge from the original trajectory. Even if I can simulate a person (and even if they don’t know it), the behavior is still going to be unique to the copy, not the original, because we could never achieve anything close to the degree of precision required to keep the two in sync. Thus one way to know that something is NOT conscious is that you can simulate it.

    Just a guess 🙂


  18. Dan Fitch permalink
    December 6, 2009 12:50 pm

    Paul’s comment reminds me of Tononi’s “integrated information” theory of consciousness.

    In this idea, due to the complexity of the systems involved, determining the Φ of anything higher than a simplistic circuit will be incredibly difficult.

    So again, maybe the Blum test is less a test and more a heuristic?

    Panpsychism is another interesting thing to bring up: once you’re measuring consciousness on a sliding scale, where is the zero point? How do you confirm that something is absolutely, not at all “conscious”, for some definition?

    • phomer permalink
      December 6, 2009 8:12 pm

      I tend to think (maybe only because of hubris) that we as conscious beings operate in a broader system than just a mere formal one. In that sense, given my quick read of Tononi’s article, I suspect that our Φ would be equal to ∞ (infinity). Or maybe consciousness is defined like limits: not really a value, but more of a movement in a direction, i.e., we are more conscious as Φ approaches ∞ (but we never get there). As Φ falls, we eventually drop to lesser states: asleep, comatose, and finally inanimate at zero.

      What I find interesting is the idea of whether or not we are just massively complex formal systems, or if we are able to exceed those limits in some special way? Can we actually think outside of the box?

      • rjlipton permalink*
        December 7, 2009 10:38 am

        You probably know that Roger Penrose has argued that we are more than a classic formal system. He argues in his books that we need quantum abilities.

      • December 7, 2009 11:12 am

        Yes, I read “The Emperor’s New Mind” years ago, and it made a very strong impression. I didn’t fully buy into Penrose’s explanation, but intuitively I think we do exceed the expressibility of a formal system.

        I was thinking about ‘states’ this morning. If we take our state to mean a specific permutation of what we are doing, our current conscious thought, our emotional state, and a set of unconscious thoughts, then we can make some interesting observations. For one, we are rarely in the same ‘state’ very often (although we can come close). When we practice (at sports or intellect), what we are trying to do is get ourselves back to some specific ‘state’, over and over again. If we are just cruising through life, then by repetition we are returning to the same old familiar ‘habits’ (states) over and over again.

        This model matches with our perception of time, i.e., time is longer when we are creating new states. If we’re just revisiting them, we don’t perceive it in the same way (and thus time seems shorter, or goes by faster).

        If we consider formal systems to be rigid (and thus bounded in their overall number of states (although I don’t know how to explain this)), then it is our ability to create an infinite number of new and unique states that separates us from just any other complex system. And that ability to re-write our states is the essence of our consciousness. And any other “living” (animate) thing that isn’t just repetitive is conscious (and by being conscious it is intuitively self-aware (or it can be)).

        Just a thought 🙂

  19. December 12, 2009 8:03 pm

    The naive thoughts of “state” of consciousness as if it were a “state” of a physical system might bear some reconsideration in the light of very different interpretations of “state” in classical, quantum, and relativistic Physics.

    Note that Artificial Intelligence is done by Computer Science folks. John Baez: “Computer scientists have a different yet related notion of state that I’m finally beginning to understand. For example, it’s nontrivial for functional programming languages to incorporate ’state’, and Haskell does this using a monad….”

  20. Leopold permalink
    December 18, 2009 7:41 pm

    I think the question
    “Is it possible to understand consciousness?”
    is so tricky because consciousness appears twice in it.
    Once in the word “consciousness” and once in the word “understand”.

    I believe a consciousness definition would have to be a very “indirect” one, a bit like a proof by contradiction.

    To understand a “direct” description of consciousness, one would have to “simulate” the logical steps in his head. But how should it be possible to simulate another consciousness while you yourself are conscious?

  21. January 9, 2010 2:55 pm

    Well, we share 98% of our genetic makeup with chimps, and this makes it hard for me to believe that chimps do not experience consciousness. If they did not, it would mean that the 2% difference is responsible for our consciousness, which is also hard to believe. Recent studies of the animal kingdom have shown how biased we are to think that we are the only ones endowed with “consciousness, intelligence…”. Crows use tools just like us and continue to astonish the researchers who study them. It is hard to believe that nature has endowed elephants with a big brain and limited them to doing simple tasks that a fruit fly does just as well.
    Is consciousness the result of the complexity of the brain? It would be interesting to find the transition point where consciousness starts to manifest itself in the animal world.

  22. Tom permalink
    February 27, 2011 7:03 am

    “David Chalmers, argues that a philosophical zombie could pass the test, yet fail to be conscious”

    I think this sentence inspires an idea not about what consciousness is, but about what
    consciousness is for.

    A quantum computer solves integer factoring in polynomial time, whereas a classical computer (as far as we know) needs super-polynomial time. Maybe a conscious brain calculates a specific output in n^5 steps that a classical computer could only calculate in n^5000 steps.

    Maybe it’s an obvious idea, but I never thought of consciousness as a tool which helps human brains
    to calculate outputs faster than a classical computer, as opposed to a tool which helps humans to do things a computer couldn’t do at all.

    Wouldn’t it be interesting to consider consciousness as a physical effect which is part of a very clever and efficient way to build a powerful Turing machine?

  23. Cosmin permalink
    December 22, 2013 2:20 am

    Come on, guys. Have you never read the Chinese Room argument? Consciousness is not computational. Consciousness is semantics/understanding. Computation is syntax. You cannot get semantics out of syntax. Semantics is more fundamental than syntax. The whole of Existence is semantic in nature. Matter is just an illusion that emerges from semantics.

  24. Anon permalink
    March 22, 2014 5:27 pm

    There is a test for machine consciousness. It’s a cradle-to-grave kind of thing, though, not pure functionalism. You have to build a machine and not teach it about consciousness or any words that refer to consciousness. Perhaps you have it simulate a civilization of humans starting without language. If it comes up with the idea all on its own (no user inputs allowed here), then it is conscious.


