A Quantum Two-Finger Exercise

April 8, 2015


More mileage than expected from a little example

[Image: Sean Carroll at a blackboard; cropped from World Science Festival source]

Sean Carroll is a cosmologist in the Department of Physics at Caltech. He also maintains a blog, “Preposterous Universe,” and writes books promoting the public understanding of science. I have recently been enjoying his 2010 book From Eternity to Here: The Quest for the Ultimate Theory of Time.

Today—yes, Carroll would agree that there is a today—I would like to share an interpretation of a little quantum computing example that occurred to me while reading his book.

As befits a book about how much of physics is time-reversible, I’ve been reading it in reverse order of chapters. From my own scientific knowledge and reading of similar books I figured I knew the run-up material to his conclusions and speculations. Now I’m working backward to fill some gaps in my knowledge and highlight how he supported those speculations I query. I’ve been pleased to find freshness in his coverage even of well-traveled topics, and this extends to his chapter on quantum basics.

In a popular science book one tries to maximize mileage from simple examples but must heed the dictum ascribed to Albert Einstein to “make everything as simple as possible, but not simpler.” I didn’t think that quantum decoherence could be adequately illustrated with just two elements, but the reliable Carroll does so. Since this augments an example that Dick and I use in our quantum algorithms textbook, I thought we’d share it.

Briefly on Carroll’s Book

Since Carroll does not claim his answer based on papers with Jennifer Chen in 2004-05 does anything more than illustrate how an “ultimate theory” could work, I don’t think it’s “spoiling” to mention it here. His blog’s two regular graphics are the equation {S = k\log W} defining entropy on Ludwig Boltzmann’s tombstone in Vienna and this poster based on his book:


[Image: poster based on From Eternity to Here]


The similar diagram in his book identifies the waist as a point of lowest entropy for the entire cosmos. ‘Baby universes’ branch off in funnels in both (or one could say all) directions of time. We are in one of them, inflating out from our Big Bang, and while that was a state of incredibly low entropy within our branch, it is but a blip in a grand system that is able to increase its entropy without limit.

I would have liked to see more in the book on how this squares with the “BGV” conditions under which general relativity requires an expanding universe to have a boundary in the past, and how their model relates to the “all-or-nothing argument” covered in our post two years ago on Jim Holt’s book Why Does the World Exist? Why should the lowest entropy have some particular nonzero finite value—that is, why any particular value, if the waistline is to be labeled some kind of origin for the whole multiversal shebang? These elements are of course subservient to quantum theory in whichever theory of quantum gravity proves to reconcile best with relativity and observation.

Carroll has addressed much of this on his blog, in particular within this post a year ago on an origins/theology debate. That’s one thing blogs are good for, and he supplies many links including this long critique with much on BGV. But leaving everything aside, let’s think about systems with just two quantum elements that give binary classical values when observed. Carroll calls them “Mr. Dog” and “Miss Kitty,” whereas I’ll use other animals toward further conceptual associations.

Quantum Coordinates and Classical Indexing

In quantum computation, Boolean strings of {n} bits are coded via {n} qubits. The strings are represented by orthogonal basis vectors in the vector space {\mathbb{C}^N}, where {N = 2^n}, which houses the states of the {n} qubits. The standard quantum indexing scheme enumerates {\{0,1\}^n} in lexicographical order as

\displaystyle  0^n,0^{n-1}1,0^{n-2}10,\dots,1^{n-1}0,1^n.

Labeling these strings {x_1,\dots,x_N}, the rule is that {x_i} is encoded by the standard basis vector

\displaystyle  e_{x_i} = e_i = [0,0,0,\dots,0,1,0,0,\dots,0]^T

with the lone {1} in the {i}-th position. Any linear combination {\vec{a} = \sum_{i=1}^N a_i e_i} is allowed as a quantum pure state provided {\sum_i |a_i|^2 = 1}. Measuring {\vec{a}} entirely then yields the output {x_i} with probability {|a_i|^2}. In particular, measuring {e_i} yields {x_i} with certainty.
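
For concreteness, here is a minimal numpy sketch of this encoding; the helper names basis_vector and measure_probs are my own illustration, not standard:

    import numpy as np

    def basis_vector(x):
        # e_x: standard basis vector with its lone 1 at the
        # lexicographic rank of the binary string x within {0,1}^n
        e = np.zeros(2**len(x))
        e[int(x, 2)] = 1.0
        return e

    def measure_probs(a):
        # measuring the pure state a yields x_i with probability |a_i|^2
        return np.abs(a)**2

    print(basis_vector("10"))    # [0. 0. 1. 0.]
    a = (basis_vector("00") + basis_vector("01")) / np.sqrt(2)
    print(measure_probs(a))      # [0.5 0.5 0.  0. ]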

This scheme telescopes for {n = 0,1,2,3,\dots} in that for any binary strings {x} and {y}, the basis vector {e_{xy}} for their concatenation is given by the tensor product of {e_x} and {e_y}. The tensor product can be visualized first in the case of matrices {A} and {B} of arbitrary sizes. If

\displaystyle  A = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n}\\ \vdots & \ddots & \vdots\\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix},\quad\text{then}\quad A \otimes B = \begin{bmatrix} a_{1,1}B & \cdots & a_{1,n}B\\ \vdots & \ddots & \vdots\\ a_{m,1}B & \cdots & a_{m,n}B \end{bmatrix}.

If {B} is a {p \times q} matrix, then {A \otimes B} becomes an {mp \times nq} matrix, but it could also be regarded as a higher-dimensional {m \times n \times p \times q} object. Two vectors in column form are just the case {n = q = 1}. We can still visualize that the second vector is getting multiplied in block form by entries of the first vector. The indexing scheme kicks off with:

\displaystyle  \begin{array}{rcl}  e_{\lambda} &=& \begin{bmatrix} \,1\, \end{bmatrix}\\ e_0 &=& \begin{bmatrix} 1 \\ 0 \end{bmatrix}\\ e_1 &=& \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \end{array}

Then we obtain:

\displaystyle  \begin{array}{rcl}  e_{00} &=& e_0 \otimes e_0 = \begin{bmatrix}1\cdot\begin{bmatrix}1 \\ 0\end{bmatrix}\\ 0\cdot\begin{bmatrix}1 \\ 0\end{bmatrix} \end{bmatrix} = \begin{bmatrix}1 \\ 0 \\ 0 \\ 0\end{bmatrix}, \end{array}

and so on for {e_{01} = e_0 \otimes e_1 = [0,1,0,0]^T}, {e_{10} = e_1 \otimes e_0 = [0,0,1,0]^T}, and {e_{11} = e_1 \otimes e_1 = [0,0,0,1]^T}.
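
All of this can be checked with numpy, whose kron function implements the tensor product for matrices and vectors alike; a minimal sketch:

    import numpy as np

    # block structure for matrices: each entry of A multiplies a copy of B
    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 5], [6, 7]])
    print(np.kron(A, B))     # rows: [0 5 0 10], [6 7 12 14], [0 15 0 20], [18 21 24 28]

    # concatenation of strings = tensor product of basis vectors
    e0 = np.array([1.0, 0.0])
    e1 = np.array([0.0, 1.0])
    print(np.kron(e0, e1))   # e_{01} = [0. 1. 0. 0.]
    print(np.kron(e1, e1))   # e_{11} = [0. 0. 0. 1.]

    # telescoping to n = 3: e_{101} = e_1 tensor e_0 tensor e_1
    e101 = np.kron(np.kron(e1, e0), e1)
    print(np.argmax(e101))   # 5, the lexicographic rank of "101"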

Just as concatenation of strings is a basic operation apart from indexing notation, so too with the tensor product. When {|x| = |y| = n}, so that {e_x} and {e_y} have length {N = 2^n}, our representation of the tensor product involves {N^2}-many indices. It seems silly to posit a quadratic expansion on top of an exponential explosion just for juxtaposing two {n}-bit strings that don’t even interact, but the classical notation does this. Since we will limit ourselves to two qubits we don’t mind: {n = 2}, so {N = 4}. We think of {1\dots n} as quantum coordinates and {1\dots N} as classical indices for the same system.

Matrices and Mazes

Quantum operations on {n} qubits are represented by {N \times N} matrices {A} that are unitary, meaning that {A} multiplied by its conjugate transpose {A^*} gives the identity. The Hadamard matrix

\displaystyle  H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}

is not only unitary but also self-adjoint: {H^* = H}. It works on one qubit and maps {e_0} to {\frac{1}{\sqrt{2}}(e_0 + e_1)}, which we abbreviate as {e_+}, and maps {e_1} to {\frac{1}{\sqrt{2}}(e_0 - e_1)} or {e_-} for short.

We can interpret each entry {H[i,j]} as the transition amplitude of going from {e_j} to {e_i}. Since we are using column vectors the “from” is a column and the “to” is a row. Then {H} is like a maze where each column point of entry allows either row as an exit, but the path that enters at {2} and exits at {2} picks up a {\frac{-1}{\sqrt{2}}} amplitude along the way.

Think of {e_0} as coming in “like a lamb” and {e_1} as “like a lion.” Then {H[1,2] = 1} (divided by {\sqrt{2}}) represents the saying:

March comes in like a lion and goes out like a lamb.

By contrast, {H[2,2] = -1} represents this winter’s actuality of coming in and going out like a lion. Let us call going lion-to-lamb or lamb-to-lion a flip, lamb-to-lamb a flap, and lion-to-lion a flop.

Both inputs {e_0} and {e_1} encounter amplitude {\frac{1}{\sqrt{2}}} for a flip, which translates to 50% probability in a measurement. The state {H e_0 = e_+} gives equal amplitude to “lamb” or “lion” as outcomes, hence probability 50% each of observing them. The state {H e_1 = e_-} gives different signs (speaking more generally, different phases) to the outcomes but still the same 50%-50% probabilities.

We might think that following with another {H} operation would preserve the even chance of doing a flip, but instead it zeroes it. Multiplying {H\cdot H} gives {I} because each off-diagonal entry sums a {+1} and a {-1} (each divided by {2}). For {H^2[2,1]} the {+1} comes from the history of a flap then a flip, while the {-1} comes from a flip then a flop. The components flap-flip and flip-flop interfere, leaving no way the composed operations can accomplish a flip. If you come in as a lion then the actions flip-flap and flop-flip would individually make you a lamb, but they likewise interfere, leaving no option but to go out as a lion.

Put another way, applying {H} to {e_+} leaves a deterministic outcome of “lamb.” The coherence of the superposed state {e_+} makes this possible. Likewise, applying {H} to {e_-} entails “lion.”
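
The flip/flap/flop bookkeeping can be replayed in a few lines of numpy; a minimal sketch:

    import numpy as np

    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)
    e0 = np.array([1.0, 0.0])   # "lamb"
    e1 = np.array([0.0, 1.0])   # "lion"

    e_plus, e_minus = H @ e0, H @ e1
    print(np.abs(e_plus)**2)    # [0.5 0.5]: 50-50 lamb/lion
    print(np.abs(e_minus)**2)   # [0.5 0.5]: different signs, same probabilities

    # the paths interfere: H is its own inverse
    print(np.allclose(H @ H, np.eye(2)))   # True
    print(H @ e_plus)           # [1. 0.]: deterministic "lamb"
    print(H @ e_minus)          # [0. 1.]: deterministic "lion"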

Two Qubits

Now let’s introduce our second qubit and describe it as a butterfly, with {e_0} meaning its wings are open and {e_1} meaning closed. Here is an operation on two qubits:

\displaystyle  CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}.

This is a permutation matrix that swaps {x_3 = 10} and {x_4 = 11} but leaves {x_1 = 00} and {x_2 = 01} fixed. Viewed as a maze it routes column entrances to row exits with no choice, no meeting of corridors, and so executes a deterministic operation. In our system of lamb/lion and open/closed it says that March being a lamb leaves the butterfly undisturbed, but “lion” makes it shift its wings. So inversion of the second qubit is controlled by the {e_1} component of the first qubit, hence the name CNOT for “Controlled NOT.”
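
A quick numpy sketch verifying the maze-with-no-choices behavior on all four basis strings:

    import numpy as np

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    for i, x in enumerate(["00", "01", "10", "11"]):
        e = np.zeros(4)
        e[i] = 1.0
        print(x, "->", format(np.argmax(CNOT @ e), "02b"))
    # 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10:
    # the butterfly bit flips exactly when March is a lion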

Just by dint of juxtaposing the second qubit, we change the brute matrix notation for the Hadamard operation on our first qubit to:

\displaystyle  H_1 = H \otimes I = \frac{1}{\sqrt{2}}\left[\begin{array}{c|c} 1\cdot I & 1\cdot I\\ \hline 1\cdot I & -1\cdot I \end{array}\right] = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0\\ 0 & 1 & 0 & -1 \end{bmatrix}.
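
In numpy the padding is one call to kron; a minimal sketch, which also previews that {H_1} is still self-inverse:

    import numpy as np

    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)
    H1 = np.kron(H, np.eye(2))   # Hadamard on qubit 1, identity on qubit 2

    print(np.round(np.sqrt(2) * H1))         # the +/-1 block pattern above
    print(np.allclose(H1 @ H1, np.eye(4)))   # True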

Now {H_1\cdot H_1} equals the {4 \times 4} identity matrix, so we still have interference on the first qubit. But suppose we perform {H_1} and then CNOT instead. Viewed as a quantum circuit going left to right it looks like this:

[Circuit diagram: Hadamard gate on the first qubit, followed by a CNOT controlled by the first qubit]

But since we are composing right-to-left on column vectors, the matrix for the combined operation is

\displaystyle  CNOT\cdot H_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 0 & 1 & 0 & -1\\ 1 & 0 & -1 & 0 \end{bmatrix}.

On argument {e_{00}} it yields {\frac{1}{\sqrt{2}}(e_{00} + e_{11})}. This state—call it {f_+}—cannot be written as a tensor product of two one-qubit states. This means {f_+} is entangled. In our terms it means the butterfly’s wings are open if and only if March is like a lamb; in case of lion they are shut. But if we only care about lamb-versus-lion we still have the same amplitudes and 50-50 split that we had with the one-qubit state {e_+}. On argument {e_{01}} the circuit entangles {e_{01}} and {e_{10}} instead.
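
A sketch that builds {f_+} and tests entanglement via the Schmidt decomposition: reshaping a two-qubit state into a 2 x 2 matrix, a product state has exactly one nonzero singular value, while {f_+} has two equal ones.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H1 = np.kron(H, np.eye(2))
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    e00 = np.array([1.0, 0.0, 0.0, 0.0])
    f_plus = CNOT @ H1 @ e00
    print(f_plus)   # [0.707 0. 0. 0.707] = (e_00 + e_11)/sqrt(2)

    # Schmidt test for entanglement
    print(np.linalg.svd(f_plus.reshape(2, 2), compute_uv=False))
    # [0.707 0.707]: two nonzero Schmidt coefficients, so f_+ is entangled

    e_pp = np.kron(H @ np.array([1.0, 0.0]), H @ np.array([1.0, 0.0]))
    print(np.linalg.svd(e_pp.reshape(2, 2), compute_uv=False))
    # [1. 0.]: a product state such as e_{++} has Schmidt rank 1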

Decoherence

Suppose we forgot about the butterfly and tried the same trick of applying {H_1} once more to {f_+}. We might expect to get “lamb” again, but what we get is:

\displaystyle  H_1 f_+ = \frac{1}{2} \begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0\\ 0 & 1 & 0 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 \\ 1 \\ 1 \\ -1 \end{bmatrix}.

This state is also entangled but gives equal probability to all outcomes. To the holder of the first qubit it works the same as a classical coin flip between “lamb” and “lion.” Indeed, if we trace out the unseen butterfly, {f_+} yields a mixed state that presents a classical {(\frac{1}{2},\frac{1}{2})} distribution on {(e_0,e_1)}, and applying {H} to that creates no interference. In fact, there are no cancellations at all in the entire {4 \times 4} matrix computation:

\displaystyle  \frac{1}{2} \begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0\\ 0 & 1 & 0 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0\\ 0 & 1 & 0 & -1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & -1\\ 1 & 1 & -1 & 1\\ 1 & -1 & 1 & 1\\ -1 & 1 & 1 & 1 \end{bmatrix}.

This is diagrammed by the following two-qubit quantum circuit:

[Circuit diagram: Hadamard on the first qubit, then CNOT, then another Hadamard on the first qubit]
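
Both the full matrix computation and the traced-out view can be checked numerically; a minimal sketch, using the standard reshape-and-trace idiom for the partial trace:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H1 = np.kron(H, np.eye(2))
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    print(np.round(2 * (H1 @ CNOT @ H1)))   # the all +/-1 matrix above (times 1/2)

    # trace out the unseen butterfly from f_+
    f_plus = CNOT @ H1 @ np.array([1.0, 0.0, 0.0, 0.0])
    rho = np.outer(f_plus, f_plus).reshape(2, 2, 2, 2)
    rho1 = np.trace(rho, axis1=1, axis2=3)  # reduced state of the first qubit
    print(rho1)             # [[0.5 0.] [0. 0.5]]: a classical coin flip
    print(H @ rho1 @ H.T)   # unchanged by H: no interference remains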

This example shows two other fine points. First, the “copy-uncompute” trick (see Fig. 3 here) fails markedly when the value at the control of the CNOT gate is a superposition that is not close to a basis state. The entanglement involved in attempting to copy it via the CNOT destroys the interference needed to “uncompute” the first qubit back to {e_0}, despite {H_1} being its own inverse. Thus decoherence does not involve any “collapse” into classical behavior, but makes it emerge from unseen entanglements.

Second, if we run the circuit on the input state {e_{++} = e_+ \otimes e_+} then we get {H_1 e_{++} = e_0 \otimes e_+}, which is fixed by CNOT, so the final {H_1} gives {e_{++}} back again. At no time was there any entanglement. Hence the entangling action of a quantum circuit is invariant neither under the choice of (separable) input nor under the choice of basis, though perhaps some algebraic-geometric measure of its entangling “potential” can be.
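
A minimal sketch checking this second point: the same circuit returns {e_{++}} unchanged, with a product state at every stage:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H1 = np.kron(H, np.eye(2))
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    e_plus = H @ np.array([1.0, 0.0])
    e_pp = np.kron(e_plus, e_plus)   # e_{++}

    mid = CNOT @ H1 @ e_pp           # = e_0 tensor e_+, fixed by CNOT
    print(np.allclose(mid, np.kron([1.0, 0.0], e_plus)))   # True
    print(np.allclose(H1 @ mid, e_pp))   # True: e_{++} comes back unchanged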

Practical Butterflies?

I had previously pictured ‘decoherence’ in terms of many little entanglements with the environment. What happens if the entanglement with our butterfly is minor? Let {B = ae_0 + be_1} be a possible starting state for the butterfly, and consider the state

\displaystyle  \phi = \epsilon f_+ + z(e_+ \otimes B) = \frac{1}{\sqrt{2}}\begin{bmatrix} \epsilon + az \\ bz \\ az \\ \epsilon + bz \end{bmatrix}

with {z} chosen to normalize. Then

\displaystyle  H_1 \phi = \frac{1}{2}\begin{bmatrix} \epsilon + az + az \\ bz + \epsilon + bz \\ \epsilon + az - az \\ bz - (\epsilon + bz) \end{bmatrix} = \frac{1}{2}\begin{bmatrix} \epsilon + 2az \\ \epsilon + 2bz \\ \epsilon \\ -\epsilon \end{bmatrix}.
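
A numerical spot-check, as a minimal numpy sketch; the values of {\epsilon}, {a}, {b} below are arbitrary illustrations, and {z} is computed to normalize {\phi} as in the text:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H1 = np.kron(H, np.eye(2))

    eps = 1e-3
    a, b = 0.6, 0.8                  # butterfly state B = a e_0 + b e_1
    f_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    v = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([a, b]))  # e_+ tensor B

    # choose z so that phi = eps*f_plus + z*v is a unit vector
    c = f_plus @ v
    z = -eps * c + np.sqrt((eps * c)**2 + 1 - eps**2)
    phi = eps * f_plus + z * v
    print(np.linalg.norm(phi))       # 1.0

    out = H1 @ phi
    p_lion = out[2]**2 + out[3]**2   # probability of reading 1 on the first qubit
    print(p_lion, eps**2 / 2)        # both ~5e-07, independent of a and b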

The probability of observing {1} (“lion”) on the first qubit is just {\frac{1}{2} \epsilon^2}, regardless of {B}. This practically negligible deviation can be interpreted in two different ways:

  • Coherence to high-order precision can be achieved even in the presence of non-negligible entanglement—before even mentioning quantum error-correcting codes and their recent possible cosmological significance.

  • That our experiments record coherence to 15-place precision and more does not prevent there being some non-negligible {\epsilon} amount of “pervasive” entanglement.

To continue the “butterflies” metaphor, pervasive could entail entanglement with millions of them. The question is whether this would collectively drive up {\epsilon} beyond what is compatible with experimental results. Perhaps it wouldn’t; I’m not conversant enough with the models to guess. But I have a fresh angle that was brought again to mind by an association Carroll makes between information and the holographic principle (page 281):

[It] seems to be telling us that some things just can’t happen, that the information needed to encode the world is dramatically compressible.

If our environment is highly compressible, would this manifest as pervasive entanglement? Here are two naive thoughts. The first is that the maximally entangled state {f_+} subtracts one bit of information from a two-bit system, as does the related state {f'_+ = \frac{1}{\sqrt{2}}(e_{10} + e_{01})}. The naivety is that the local computation {H_1 f_+}, as we have seen, restores {4} equal-weight outcomes in the standard basis—so the idea is not locally invariant and prefers a basis. Yet it is possible that cross-cutting {\epsilon}-entanglements could embody a large information deficit in any (natural) basis. The second naive thought is that states like {f'_+} might contribute even amounts to certain statistical measures in large-block fashion, so as to skew expectations of (non-)uniformity that assume full independence.

Neither thought is new—related ones have been raised in connection with cosmological problems and theories discussed in Carroll’s book, though apparently without traction as yet. What my “Digital Butterflies” post brings is a digital arena in which the information content of the environment can be varied and its dynamical effects studied. The environment stems from about 50,000 bits that initialize a tabulation hashing scheme employed by all major chess programs. Many programs generate the 50,000 bits pseudorandomly, though one could just drop them in from a truly random source since they are fixed. Perhaps the hash-table dependent effects observed in the post can inform these issues, and also work toward the “grail” of developing a practical distinguisher of general pseudorandomness.
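
For readers unfamiliar with the scheme, here is a minimal sketch of tabulation (Zobrist) hashing as chess programs use it; the sizes are illustrative, with 64 squares times 12 piece types times 64-bit keys giving roughly the 50,000 bits mentioned above:

    import random

    random.seed(2015)   # programs fix these bits once, typically pseudorandomly

    # one 64-bit key per (square, piece-type) pair: 64 * 12 * 64 = 49,152 bits
    TABLE = [[random.getrandbits(64) for _ in range(12)] for _ in range(64)]

    def zobrist(position):
        # position: iterable of (square, piece_type) pairs; hash is XOR of keys
        h = 0
        for square, piece in position:
            h ^= TABLE[square][piece]
        return h

    # moving a piece updates the hash incrementally with two XORs
    h = zobrist([(12, 5), (28, 0)])
    h ^= TABLE[12][5] ^ TABLE[20][5]   # the type-5 piece moves from square 12 to 20
    print(h == zobrist([(20, 5), (28, 0)]))   # True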

Open Problems

How far can quantum computation help understand issues in cosmology?

Update 5/17/15: This 4/28 article in Quanta magazine by Jennifer Ouellette—Carroll’s wife—covers the “MERA” proposal and others attempting to derive space-time from entanglements. We have not had time to study this.

[added point to end of “Decoherence” section; two minor typo fixes]

16 Comments
  1. April 8, 2015 10:16 pm

    “It seems silly to posit a quadratic expansion on top of an exponential explosion just for juxtaposing two n-bit strings that don’t even interact.”

    But keep in mind that this is also true for classical probability distributions: the tensor product corresponds to the joint distribution you get for two independent variables.

    Regarding decoherence: the easiest way to think of it is in terms of losing phase information. If we have a state $(1/\sqrt{2}) (|0> + e^{i \theta} |1>)$, and theta becomes random because of interactions with the environment, then this becomes a classical mixture of |0> and |1>. It’s easiest to see this in terms of the density matrix $\rho$: randomizing theta zeroes out $\rho$’s off-diagonal entries, leaving a purely classical distribution (the completely mixed state).

    • April 8, 2015 10:29 pm

      Cris, thanks. On the first, that’s true, but the quantum wave still strikes me as more of a single entity than the classical distribution(s). Not sure on your second point from a non-physics vantage; anyway it plays into why I was surprised by Carroll’s pulling it off speaking mainly in terms of pure states of only two qubits.

      • April 9, 2015 6:30 pm

        I would urge you not to think about the density matrix as a “physics vantage”🙂 it’s by far the easiest way to represent combinations of classical uncertainty and quantum superposition. I don’t know of a cleaner formalism to express something which is, say, (|0>+|1>)/sqrt(2) with probability 1/2, and (|0>-|1>)/sqrt(2) with probability 1/2. It also lets us see that this is the same thing as a thing which is |0> with probability 1/2 and |1> with probability 1/2 (or with any other two orthogonal basis vectors in the mixture). And it lets us see a continuum of states ranging from this completely mixed state to pure states.

        Keep in mind also that the density matrix has been used very successfully in CS; for instance, in Ambainis’ lower bound on Grover’s algorithm, namely that sqrt(n) queries are necessary. He used the decay of the off-diagonal entries as a simple measure of entanglement: the computer can’t answer the question until it becomes sufficiently entangled with the oracle.

      • April 9, 2015 10:09 pm

        OK—maybe we’ll invite a guest post on density matrices🙂

  2. April 8, 2015 11:27 pm

    Incidentally, situations where ε is non-negligible while ε² is negligible were used by David Mumford and John Tate in their Alexander Grothendieck memorial to motivate Grothendieck’s embrace of nilpotent elements.

  3. April 9, 2015 11:41 pm

    The eps vs eps^2 thing comes up a lot in physics, specifically in wave mechanics. The energy density of a wave is proportional to the square of its amplitude, which can be used to explain things like Grover’s algorithm and
    https://en.wikipedia.org/wiki/Elitzur%E2%80%93Vaidman_bomb_tester

    What do you mean by pervasive entanglement?

    • April 10, 2015 8:49 pm

      Hi, Aram—thanks first for the examples. I picked up “pervasive” from Vlatko Vedral’s 2011 Scientific American article where the rock salt experiment by Aeppli et al. really struck me. The import to me is: a macroscopic system where entanglement is not mandated by its description or our knowledge of how it was produced, in which we detect different dynamics from what we’d find if it were absent. The term is also used in this blog poll by John Preskill to define the scale of decoherence by which the classical world emerges (in one response option). Whenever people speak of tangible effects of entanglements stemming from the Big Bang/inflation/whatever-before, that’s included.

      I should clarify that my angle doesn’t have to involve “entanglement”: it’s about detecting different “digital dynamics” between environments that can be made truly-random or compressible. But on reading Gil’s paper in Jan. 2007 right after discovering the chess phenomenon I perceived a connection. Another example of what I mean by “naiveness” in the post is that by some measures, a GHZ state {(|0^m> + |1^m>)/\sqrt{2}} has just 1 unit of entanglement for all m, whereas under my intended connection it subtracts m-1 bits of information.

      To answer Alex V below, I can quote Carroll again on page 280 (one sentence shortened): “Here’s why holography is important: It means that spacetime is not fundamental. [We] typically think [in terms of] locality. Holography says that we can’t really do that, in principle—there are subtle correlations between things that happen at different locations, which cut down on our freedom to specify a configuration of stuff through space.”

      • April 11, 2015 9:05 am

        In the next paragraph he gives an even simpler explanation: “The holographic principle says that the universe is like that, on a fundamental level: Everything you think is happening in three-dimensional space is secretly encoded in a two-dimensional surface’s worth of information.” But how does one reduce exponential quantum computing space into polynomial space?

      • April 12, 2015 6:50 pm

        Ken Regan quotes Sean Carroll  “Here’s why holography is important: It means that spacetime is not fundamental.”

        In his recent book Mathematics Without Apologies: Portrait of a Problematic Vocation, mathematician Michael Harris — whose thesis advisor was David Mumford — directs readers toward the natural next step: from the 3+1 dimensions of spacetime, to the 2+1 dimensions of a holographic boundary, to a countable set of points on algebraic spaces in which

        “Geometry is an illusion that serves to hold fixed points together.”

        “It remains only to fill in the details” (to borrow Pauli’s pithy phrase) … albeit with modern-day quantum simulation codes as our guiding avatars … these ideas are good news for today’s young researchers.

      • April 13, 2015 5:30 am

        Correction: Michael Harris’ Mathematics Without Apologies names David Mumford as “one of my former teachers”; however, Harris’ doctoral advisor was Barry Mazur. Both Mumford and Mazur are outstanding teachers, needless to say.

    • April 12, 2015 6:24 pm

      Ken Regan remarks “Situations where ε is non-negligible while ε² is negligible were used by David Mumford and John Tate in their Alexander Grothendieck memorial in Nature.”

      Aram Harrow remarks “The ε vs ε^2 thing comes up a lot in physics”

      In regard to Ken’s point, David Mumford has posted a longer (unredacted) version of his and Tate’s Nature Grothendieck memorial as the essay “Can one explain schemes to biologists?” This extended essay is accompanied by reader comments that are plenty interesting too.

      In regard to Aram’s point, it is a 21st century Great Truth that ε vs ε^2 arise naturally in the theory of decoherent dynamics, albeit with ε^2 not negligible, via the Ito/Stratonovich calculus associated to quantum Lindblad processes (see as a primer, Carlton Caves’ on-line notes “Completely positive maps, positive maps, and the Lindblad form”).

      Further references relating to categoric ε-topics would be welcomed by me and many …

      One should not get drunk on the idea that everything is general. Category theorists should get back to the original goal: applying general results to particularities and to making connections between different areas of mathematics.

      — F. William Lawvere

      … especially references that respect Lawvere’s friendly maxim!

  4. April 10, 2015 9:50 am

    I have often met references to http://en.wikipedia.org/wiki/Holographic_algorithm trying to “compress” some quantum computing models (i.e., effectively simulate them by classical computer), but I still cannot guess how such “holography” could be related to “compression”.

  5. April 10, 2015 3:24 pm

    Dear Professor, I have “known better” this paper can be the solution of P versus NP:

    https://hal.archives-ouvertes.fr/hal-01139624

    and pdf in

    https://hal.archives-ouvertes.fr/hal-01139624/document

    Indeed, in that work I proved that:

    $POLYLOGTIME \neq NLOGTIME$

    in a trivial way. Next, I proved that:

    If $P = NP$, then $POLYLOGTIME = NLOGTIME$.

    in a trivial way too. From that result we can see $P \neq NP$.

    For that reason I’m quite sure there are big chances that my proof is correct. Certainly, I’m truly convinced, even though I have failed several times with other intents.

    Regards..

  6. Alexandre de Castro permalink
    April 24, 2015 5:58 pm

    If we succeed in cascading two controlled-NOT gates to implement a permutation operation, |1 0> -> |1 1> provokes a spontaneous symmetry breaking. This is exactly the Higgs mechanism. This is exactly Levin’s one-way bijection… this is exactly the P versus NP problem.

    http://www.researchgate.net/publication/274083310_A_plain_proof_that_Levin%27s_combinatorial_function_is_one-way%28draft%29

  7. May 8, 2015 5:21 pm

    Here’s news of broad interest to readers of weblogs Gödel’s Lost Letter, and in particular a fine learning-opportunity for students:

    SYNOPSIS  The Intelligence Advanced Research Projects Activity (IARPA) will host a Proposers’ Day on 19 May 2015 at the University of Maryland Stamp Student Union to provide information to potential proposers on the objectives of an anticipated Broad Agency Announcement (BAA) for the Logical Qubits (LogiQ) program.

    PROGRAM OBJECTIVE AND DESCRIPTION  The LogiQ program in IARPA’s Safe and Secure Operations (SSO) Office is seeking creative technical solutions to the challenge of encoding imperfect physical qubits into a logical qubit that protects against decoherence, gate errors, and deleterious environmental influences.

    While quantum information processing has witnessed tremendous advances in high-fidelity qubit operations and an increase in the size and complexity of controlled quantum computing systems, it still suffers from physical-qubit gate and measurement fidelities that fall short of desired thresholds, multi-qubit systems whose overall performance is inferior to that of isolated qubits, and non-extensible architectures, all of which hinder the path toward fault-tolerance.

    Underpinning the program’s strategy to build a logical qubit is a push for higher fidelity in multi-qubit operations, the pursuit of dynamically controlled experiments in multi-qubit systems to remove entropy from the system during computation, and characterization and mitigation of environmental noise and correlated errors.

    There’s plenty more material on the LogiQ Program Proposer’s Day announcement page.

    Readers of Michael Harris’ book and/or weblog Mathematics Without Apologies (as they are both titled) will appreciate that IARPA is designating stable logical qubits as an avatar (in Harris’ phrase) for catalyzing advances in our general understanding of noise, decoherence, and entropy.

    Conclusion  The physical process of removing von Neumann entropy from systems of qubits/qudits can be appreciated as a mathematical avatar — in Michael Harris’ phrase — for the computational algorithms that (heuristically) are so marvelously effective in removing Boltzmann entropy from atoms/molecules in large-scale quantum simulation software … sufficient to compose (F)foundations for the $120M/5yr investment by the Swiss-based pharmaceutical corporation Sanofi in the Portland-based quantum simulation corporation Schrödinger.

    Hopefully at least some East Coast quantum cognoscenti will attempt/report on this fine workshop!
