
Grilling Quantum Circuits

July 8, 2012


Polynomial algebra turns up the heat

Amlan Chakrabarti is currently finishing a postdoc at Princeton University, while on leave from the University of Calcutta in India. He has been working in Niraj Jha’s group at the Engineering School. While we have been debating the theoretical feasibility of scalable quantum computers, Amlan and co-workers have been developing approaches to engineer them. This includes finding and comparing concrete realizations of elements for quantum circuits.

Today I describe ongoing work with Amlan on simulating quantum circuits by polynomial algebra—and what this may mean for the power of quantum.

The debate with Gil Kalai and Aram Harrow on this blog has focused on how hard it is to build quantum circuits in the real world, with real components, that work long enough to be useful. Here we are interested in a different but equally important question:

How hard is it to classically simulate a quantum circuit?

If there are efficient simulations, it may be moot whether quantum computers can be built or not. Many consider such efficient simulations unlikely because they would make factoring easy, among other reasons. But there is no proof today—let me repeat, no proof—that quantum circuits in {\mathsf{BQP}} are not easy to simulate classically.

My work with Amlan addresses this simulation issue by encoding quantum computations into multi-variable polynomials. Of course this encoding only replaces exponentially large matrix evaluations by other potentially difficult computations. But by extending known work already connected with the important reduction from {\mathsf{BQP}} to {\mathsf{\#P}}, and by using the full power of polynomial algebra, we can hope that it will lead to new insights. It could yield simulation methods better than any currently known, along lines provisionally implemented by Vladimir Gerdt and Vasily Severyanov. It may generate theoretical ideas capable of putting {\mathsf{BQP}} into the polynomial hierarchy, which is considered by some to be a hard “holy grail” type problem. What it definitely does is provide several new ways to express a quantum circuit by polynomials placed in a grill mesh at its gate junctures. Does the simple algebra capture all of the quantum power? That is well worth a look.

The Grille

A quantum circuit has some number {N} of qubit lines which go across like heating elements, and a finite number {s} of gates. Typical gates involve at most three qubit lines, and gates on disjoint sets of qubit lines can be arranged in parallel. But let us space all the gates so there are {s-1} column spaces between successive pairs, plus the left side which holds the input, and the right side which gives the output. Thus we can place a variable {z_{i,j}}, {1 \leq i \leq N} and {0 \leq j \leq s} at every juncture, in the manner of a grill.



We can identify {z_{i,0}} for {1 \leq i \leq n \leq N} with the inputs {x_i}, supposing the {N-n} other ancilla inputs have been set to zero, and {z_{i,s}} with outputs {y_i} for selected {i}. Under the standard-basis representation, the circuit {C} computes a {2^N \times 2^N} linear operator {U_C} applied to the (padded representation of the) input {x \in \{0,1\}^n 0^{N-n}}.
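
To make the indexing concrete, here is a minimal Python/sympy sketch of the grill of variables; it is only an illustration of the layout, not code from our draft paper.

    # One symbolic variable z_{i,j} at each juncture of qubit line i and column j,
    # for a circuit with N qubit lines and s gate columns.
    import sympy as sp

    N, s = 3, 4   # small example sizes
    z = {(i, j): sp.Symbol(f"z_{i}_{j}") for i in range(1, N + 1) for j in range(s + 1)}

    inputs  = [z[(i, 0)] for i in range(1, N + 1)]   # the x_i (ancillas later fixed to 0)
    outputs = [z[(i, s)] for i in range(1, N + 1)]   # the y_i on the selected lines
    print(inputs, outputs)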

Known results show how to compute the acceptance probability of {C} on input {x} from the scalar values {u = x U_C y} for certain target values {y}. The game is to express {u} using polynomial(s) in the {z}-variables.

In order to grill our quantum circuits evenly on all sides, we define:

Definition 1. A quantum gate is balanced if it is defined by a matrix (over the standard basis) in which every nonzero entry has the same absolute value {c}.

Hadamard gates are balanced with {c = 1/\sqrt{2}}, and CNOT and Toffoli and all other deterministic gates are balanced with {c = 1}. Hence {\mathsf{BQP}} can be defined via circuits of balanced gates. We call {1/c} the balance factor.
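
Here is a small numerical check of the balance condition, an illustrative helper of my own rather than anything from the paper:

    # Test whether a gate matrix is balanced: every nonzero entry has the same
    # absolute value c. Returns (is_balanced, c).
    import numpy as np

    def balance(U, tol=1e-9):
        mags = np.abs(np.asarray(U, dtype=complex))
        nonzero = mags[mags > tol]
        if nonzero.size == 0:
            return True, 0.0
        return bool(np.allclose(nonzero, nonzero[0], atol=tol)), float(nonzero[0])

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    T = np.diag([1, np.exp(1j * np.pi / 4)])
    print(balance(H))   # (True, 0.707...), so the balance factor 1/c is sqrt(2)
    print(balance(T))   # (True, 1.0)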

High Versus Low Degree Grilling

Given a polynomial {p} in {r}-many variables and a value {v}, denote by {N[p:v]} the number of arguments {z \in \{0,1\}^{r}} such that {p(z) = v}. For a quantum circuit {C} we can identify an integer {m > 0} such that {2\pi/m} is the greatest common divisor of the angular components of the complex-number matrix entries of gates in {C}. Call {2\pi/m} the min-phase of {C}, and let {\omega} stand for a primitive {m}th root of unity. Our first theorem turns {\omega} around like a rotisserie:

Theorem 2. Given a balanced quantum circuit {C} with min-phase {2\pi/m} and values {x,y}, we can construct in polynomial time a polynomial {p} with values in {\{0\} \cup \{\omega^k\}} and a real number {R \geq 1} such that

\displaystyle  y U_C x = \frac{1}{R} \sum_{k = 0}^{m-1} \omega^{k} N[p:\omega^k].

Here {p = \prod_g p_g} is a product of terms for each gate, and {R} is the product of the balance factors for each gate.

The product makes the degrees grow fast. For low-degree cooking we can work additively in {\mathbb{Z}_m} instead. My main innovation is using extra variables “{w}” (shown later) to solve technical problems noted in an above-linked paper when {m > 2}.

Theorem 3. Given a balanced quantum circuit {C} with min-phase {2\pi/m} and values {x,y}, we can construct in polynomial time a polynomial {q} with values in {\mathbb{Z}_m} and real number {R \geq 1} such that

\displaystyle  y U_C x = \frac{1}{R} \sum_{k = 0}^{m-1} \omega^{k} N[q: k].

Here {q = \sum_g q_g} is a sum of terms for each gate, which keeps the total degree down but involves extra variables, while {R} may be the same or greater depending on whether the domain of assignments is over {\mathbb{Z}_m} or {\{0,1\}}.

When {m=2}, which suffices for the universal set of Hadamard and Toffoli gates, {\omega = -1}, and both formulas give a difference of solution counts. Since the solution count is a {\mathsf{\#P}} function even for the higher-degree {p} polynomial, this recovers the result that {\mathsf{BQP}} reduces to the difference of two {\mathsf{\#P}} functions {f_0,f_1}.
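
As a tiny sanity check of the product representation when {m=2}, here is a brute-force computation, purely illustrative, for the one-qubit circuit {HH}. It uses the Hadamard term {1-2yz} derived in the cookbook below, with {\omega = -1} and {R = \sqrt{2}\cdot\sqrt{2} = 2}.

    # Toy check of Theorem 2 for C = HH on one qubit line: the only free grill
    # variable is the middle juncture t = z_{1,1}; input x and output y are fixed.
    from itertools import product

    def p_H(a, b):                 # Hadamard term, balance factor sqrt(2) pulled out
        return 1 - 2 * a * b

    def amplitude(x, y):
        R = 2                      # product of the two balance factors sqrt(2)*sqrt(2)
        counts = {1: 0, -1: 0}     # N[p:1] and N[p:-1]; omega = -1 since m = 2
        for t in (0, 1):
            counts[p_H(x, t) * p_H(t, y)] += 1
        return (counts[1] - counts[-1]) / R

    for x, y in product((0, 1), repeat=2):
        print(x, y, amplitude(x, y))   # recovers the identity matrix, since HH = I

The two counts are exactly the {\mathsf{\#P}} functions {f_0,f_1} just mentioned, and dividing their difference by {R} gives the amplitude.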

This raises the hope that Larry Stockmeyer’s {(1 + \epsilon)} factor approximation might put {\mathsf{BQP}} into the polynomial hierarchy, but there are two problems:

  1. Approximating {f_0,f_1} to within {(1 + \epsilon)} need not approximate their difference to any similar factor.

  2. The value {R} is small, roughly the square root of the magnitudes attained by {f_0} and {f_1}, so that the property guaranteed by the theorems that

    \displaystyle  \left|\frac{1}{R}(f_0 - f_1)\right| \leq 1

    is actually a substantial promise condition observed by the quantum circuit {C} itself.

The second factor makes the first issue “bite”—but also suggests recipes for getting simulations of {C} into the hierarchy, or even into classical polynomial time especially for cases of the low-degree polynomials {q}. The recipes involve substitutions, special cases of {\mathsf{\#P}}-complete problems that may be tractable, and possibly even tricks that are “illegal” in quantum mechanics but may be fine in algebra. This brings us right to the cookbook for representing gates {g} by terms for {p} and {q}.

Cookbook of Gates

We use {y} or {y_i} in place of the {z_{i,j}}-variable of a gate input, and {z} or {z_i} similarly for outputs. For a single-qubit gate {g}, the rows and columns are indexed like this:

\displaystyle  \begin{array}{rcc} & (1-z) & z\\ (1-y) & a_{11} & a_{12}\\ y & a_{21} & a_{22}, \end{array}

and the corresponding term {p_g} for the product polynomial {p} is given by

\displaystyle  p_g = a_{11}(1-y)(1-z) + a_{12}(1-y)z + a_{21}y(1-z) + a_{22}yz.
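
This recipe is easy to mechanize. A small sympy sketch, again only for illustration:

    # Expand p_g for a single-qubit gate matrix, reproducing the formula above.
    import sympy as sp

    y, z = sp.symbols("y z")
    a11, a12, a21, a22 = sp.symbols("a11 a12 a21 a22")

    def p_term(a):
        rows = [1 - y, y]          # input indicators
        cols = [1 - z, z]          # output indicators
        return sum(a[i][j] * rows[i] * cols[j] for i in (0, 1) for j in (0, 1))

    print(sp.expand(p_term([[a11, a12], [a21, a22]])))      # the generic formula
    print(sp.expand(p_term([[1, 0], [0, sp.I]])))           # Phase gate S: (1-y)(1-z) + i*y*z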

For the NOT gate, also called {X}, we have

\displaystyle  \left[\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right];\quad p_g = (1-y)z + y(1-z) = y + z - 2yz.

For 0-1 arguments, note this gives {0} when {y = z}, which kills the whole product. Thus the solution sets are unchanged if we substitute {z = 1 - y}. Then {p_g} cancels down to {1}, and we have saved a term in the product.

The identity gate is handled similarly: we can introduce the term {2yz - y - z + 1}, or just substitute {z = y} and do nothing. Thus in the absence of gates along a qubit line we can just carry the variable forward.
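
A quick sympy check of these substitutions, for illustration:

    # Substituting z = 1 - y into the NOT term, or z = y into the identity term,
    # collapses each to 1 on 0-1 arguments, so the gate is absorbed into the wiring.
    import sympy as sp

    y, z = sp.symbols("y z")
    p_not = y + z - 2*y*z
    p_id  = 2*y*z - y - z + 1

    print(sp.expand(p_not.subs(z, 1 - y)))   # 2*y**2 - 2*y + 1, equal to 1 for y in {0, 1}
    print(sp.expand(p_id.subs(z, y)))        # likewise 2*y**2 - 2*y + 1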

Additive Case

Instead of multiplying by {a_{i,j}}, we multiply by the log base {\omega} of that entry, except that when {a_{i,j} = 0}, we introduce new variable(s) {w}. Thus the NOT gate gives

\displaystyle  q_g = (1-y)(1-z)w + (1-y)z\cdot 0 + y(1-z)\cdot 0 + yzw = w(2yz - y - z + 1).

Now when {z = y = 0} or {z = y = 1} the {w} is left alone as an additive term. When {m} is a power {2^\ell} we can use {\ell}-many {w}-variables with multipliers to make an additive term that assumes all values in {0\dots m-1} with equal weight given assignments in {\{0,1\}^{\ell}}. The set of constraint-violating assignments in which this {w}-term survives is thus partitioned into {m} equal subsets giving the {m} different values. Since the sum of the {m}th roots of unity is zero, these give a net-zero contribution to the sum representation of {x U_C y}.
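
The counting fact behind the {w}-trick is easy to verify directly; here is a small illustrative sketch for {m = 2^\ell}:

    # With multipliers 1, 2, ..., 2^(l-1), the term w_1 + 2*w_2 + ... + 2^(l-1)*w_l
    # takes each value in 0..m-1 exactly once as (w_1,...,w_l) ranges over {0,1}^l,
    # so the corresponding m-th roots of unity sum to zero.
    from itertools import product

    def w_term_values(l):
        return sorted(sum(w[i] * 2**i for i in range(l)) for w in product((0, 1), repeat=l))

    print(w_term_values(3))   # [0, 1, 2, 3, 4, 5, 6, 7] for m = 8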

Again we can substitute {z = 1 - y}, and this dispenses with the {w}-variables leaving just zero. We can always do substitution for any deterministic gate, even one with imaginary entries such as the Phase Gate:

\displaystyle  S = \left[\begin{array}{cc} 1 & 0\\ 0 & i\end{array}\right]\;\quad p_g = (1-y)(1-z) + iyz;\quad q_g = w(y+z-2yz) + \frac{m}{4}yz.

Hadamard Case

For Hadamard gates we pull the balance factor {\sqrt{2}} outside, and note that {-1 = \omega^{m/2}}.

\displaystyle  \left[\begin{array}{cc} 1 & 1\\ 1 & -1\end{array}\right];\quad p_g = (1-y)(1-z+z) + y(1-z) - yz = 1-2yz.

And {q_g} is just {\frac{m}{2}yz}. There is no substitution, so we have added a variable, and no constraint. The Hadamard case is thus translated into non-determinism for the polynomials, which is all-important.

Multi-Qubit Cases

A 2-qubit gate with inputs {y_1,y_2} has a {4 \times 4} matrix with rows indexed {(1-y_1)(1-y_2)}, {(1-y_1)y_2}, {y_1(1-y_2)}, {y_1 y_2}, and columns similarly for the outputs {z_1,z_2}. For example:

\displaystyle  \text{CNOT} = \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\end{array}\right],\quad \text{CZ} = \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1\end{array}\right].

The {p}-polynomial for CNOT includes a term {y_1(1 - y_2)z_1 z_2} for the final ‘$1$’ in the third row, while the {q}-polynomial has twelve terms multiplied by {w} but nothing else. They become {1} and {0}, respectively, under the substitution {z_1 = y_1}, {z_2 = y_1 + y_2 - 2 y_1 y_2}, which represents the deterministic action. The Toffoli gate is similar for three inputs/outputs and an {8 \times 8} matrix.
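
The same mechanization works for multi-qubit gates; the following illustrative sketch builds the CNOT {p}-polynomial and checks its deterministic action:

    # Build p_g for a 2-qubit gate from its 4x4 matrix, rows/columns indexed by the
    # indicator products in the order (1-v1)(1-v2), (1-v1)v2, v1(1-v2), v1*v2.
    from itertools import product
    import sympy as sp

    y1, y2, z1, z2 = sp.symbols("y1 y2 z1 z2")

    def indicators(vars_):
        out = []
        for bits in product((0, 1), repeat=len(vars_)):
            term = sp.Integer(1)
            for v, b in zip(vars_, bits):
                term *= v if b else (1 - v)
            out.append(term)
        return out

    def p_term(a, ins, outs):
        rows, cols = indicators(ins), indicators(outs)
        return sp.expand(sum(a[i][j] * rows[i] * cols[j]
                             for i in range(len(rows)) for j in range(len(cols))))

    CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
    p = p_term(CNOT, [y1, y2], [z1, z2])

    # On 0-1 assignments p equals 1 exactly when (z1, z2) = (y1, y1 XOR y2), else 0.
    for a, b, c, d in product((0, 1), repeat=4):
        val = p.subs({y1: a, y2: b, z1: c, z2: d})
        assert (val == 1) == (c == a and d == (a ^ b))
    print(p)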

For the CZ gate the bottom-right {-1} entry contributes {-y_1 y_2 z_1 z_2} to {p} but {\frac{m}{2}y_1 y_2 z_1 z_2} to {q}. The substitution {z_1 = y_1}, {z_2 = y_2} is applicable, but does not leave {1} in the {p}-case, but rather {(1 - 2y_1 y_2)}. In the additive case it leaves {\frac{m}{2}y_1^2 y_2^2}, which is equivalent to {\frac{m}{2}y_1 y_2} for 0-1 assignments. The additive case, of course, has a similar {w}-multiplied term as for CNOT.

See our draft paper for polynomial translations of many other gates, even the variable-phase Shor gates. The net improvements over previous work are:

  1. Wider set of quantum gates handled directly.

  2. Multiplicative as well as additive representation.

  3. Single polynomial not set of polynomials.

  4. Extra “{w}” variables avoid mixed arithmetic in additive case.

  5. Freedom to substitute or not.

  6. Wide choice of target ring allows encoding {\mathsf{BQP}} into many algebraic problems and exploring tradeoffs.

Circuit Simulations and Invariants

To compute {u = x U_C y} exactly, we need to calculate {N[p:v]} or {N[q:v]} for some finite set of values {v}. Gerdt and Severyanov outline a procedure for solution counting using Gröbner bases and triangular forms. But to simulate {\mathsf{BQP}} we need not compute exactly—we need only distinguish accepting from rejecting cases, which basically means distinguishing {|u|} near zero from {|u|} near {1}. This may motivate new methods for approximate counting.
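
Short of Gröbner-basis machinery, the counts {N[p:v]} can at least be made concrete by brute force on tiny circuits; here is an illustrative sketch reusing the two-Hadamard example from above:

    # Brute-force N[p:v]: enumerate 0-1 assignments of the free grill variables.
    # Exponential, of course -- the point is only to make the quantities concrete.
    from collections import Counter
    from itertools import product
    import sympy as sp

    def solution_counts(p, free_vars):
        counts = Counter()
        for bits in product((0, 1), repeat=len(free_vars)):
            counts[p.subs(dict(zip(free_vars, bits)))] += 1
        return counts

    t = sp.Symbol("t")                # the middle juncture of the HH circuit
    p = (1 - 2*1*t) * (1 - 2*t*0)     # input x = 1 and output y = 0 substituted in
    print(solution_counts(sp.expand(p), [t]))   # Counter({1: 1, -1: 1}), so u = (1-1)/2 = 0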

Here is a circuit previously used as an example in the earlier-linked papers, treating {a_i} and {b_j} as variables to be filled from {x} and {y}, followed by the {p} and {q} polynomials.

The Hadamard and Toffoli gates are arranged so as to produce entangled states. On input {|000\rangle}, the state in the middle after the first Toffoli gate is

\displaystyle  \frac{1}{2}(|000\rangle + |010\rangle + |100\rangle + |111\rangle).

It is not meaningful to assign the separate values {\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)} to the first two qubits and {\frac{1}{2}(\sqrt{3}|0\rangle + |1\rangle)} to the third, because those could also come from the different, non-entangled, state that is the tensor product of those three values. However, the polynomial annotations at those three points do have separate values, and preserve the information—the non-entangled state would have different polynomials in any realization. This is the intuitive point of using the grill.

Hence the polynomials must be telling us something about entanglements. How can we better quantify this? The degree of the polynomial is not much of an indication, since it depends on substitutions and is always bounded for the {q}-polynomial. However, polynomial invariant theory gives other notions of degree.

One notion is the same used by Walter Baur and Volker Strassen to obtain the best known lower bounds on arithmetic complexity for explicit polynomials {p(x_1,\dots,x_n)}: allocate new variables {y_1,\dots,y_n} and form the polynomial ideal {J(p)} generated by the polynomials

\displaystyle  y_1 - \frac{\partial p}{\partial x_1},\dots, y_n - \frac{\partial p}{\partial x_n}.

The graph of this mapping in the {2n}-dimensional {(x,y)}-space is topologically irreducible and hence has an unambiguously defined geometric degree {\delta(p)}. The Baur-Strassen quantity {\gamma(p) = \log_2\delta(p)} is their famous lower bound on circuits for {p}, as we covered here.
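
For concreteness, here is a tiny sympy sketch of writing down the generators of {J(p)}, using a toy polynomial of my own choosing rather than a circuit polynomial; computing {\delta(p)} itself needs further elimination machinery:

    # Form the Baur-Strassen generators y_i - dp/dx_i of the ideal J(p).
    import sympy as sp

    xs = sp.symbols("x1 x2 x3")
    ys = sp.symbols("y1 y2 y3")
    p = xs[0]*xs[1] + xs[1]*xs[2]          # toy example polynomial

    gens = [yi - sp.diff(p, xi) for xi, yi in zip(xs, ys)]
    print(gens)    # [y1 - x2, y2 - x1 - x3, y3 - x2], up to term ordering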

What makes {\gamma(\cdot)} a plausible component of an entanglement measure is that it is additive for disjoint systems, i.e. for products of polynomials on disjoint sets of variables, and is non-increasing under projections hence amenable to substitutions. Multiway entanglement measures on {N} qubits are notoriously difficult to define, but we may hope the polynomial representations provide a salient measure that also bounds the true complexity of building and operating the circuit.

Last, polynomials may help us identify subsets of gates and restricted kinds of circuits that are classically simulable. This is known for so-called stabilizer circuits which can be characterized by the Hadamard gate, the {S} phase gate, and the CZ gate. Over {\mathbb{Z}_4} the {q}-polynomials for circuits built from these gates are incredibly simple.

Each term is either a variable squared, {y^2}, or a product {2yz}.

These terms are invariant under changing {y} from {1} to {3} and are zeroed out by {y=0} and {y=2}, so for {r} variables the domains {\{0,1\}^r} and {\mathbb{Z}_4^r} are essentially the same. Can we inductively compute the quantities {N[q:0],N[q:1],N[q:2],N[q:3]} as the circuit is built up? This would at least replicate classical theorems about these circuits. In any event we get a wealth of newly-motivated special cases of {\mathsf{\#P}}-complete counting problems to study.
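
Here is a brute-force illustration of that equivalence, on a small polynomial of the stated form chosen only as an example:

    # All terms below have the stabilizer form y^2 or 2yz over Z_4. The counts over
    # Z_4^r come out exactly 2^r times the counts over {0,1}^r, reflecting the
    # invariance under y -> y+2 described above.
    from collections import Counter
    from itertools import product

    def q(a, b, c):
        return (a*a + 2*a*b + c*c + 2*b*c) % 4

    for domain in ((0, 1), (0, 1, 2, 3)):
        print(domain, dict(Counter(q(*v) for v in product(domain, repeat=3))))
    # {0,1}^3 gives each of 0,1,2,3 twice; Z_4^3 gives each of them 16 times.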

Open Problems

Can polynomial algebra using our grilles out-perform other classical simulations of quantum circuits that have been programmed?

Is solution-counting for polynomials over {\mathbb{Z}_4} all of whose terms have the form {y^2} or {2yz} in {\mathsf{P}}? Or is this restricted problem still {\mathsf{\#P}}-complete, so that our simulation would need to find additional structure in the polynomials? Update: It is indeed in {\mathsf{P}}, for any quadratic polynomial, via this paper co-authored by Dick.

What properties of quantum circuits (besides entanglement) can we approach this way? Can this inform the debate over feasibility of quantum computers?

[fixed “3” to “\sqrt{3}” near end]

42 Comments
  1. July 8, 2012 3:10 pm

    You might be interested in some work that Wim van Dam did on this approach: http://www.cs.ucsb.edu/~vandam/LeastAction.pdf I don’t think it was ever published. The cool thing is that your balanced condition, i.e. that all the magnitudes of the amplitudes are the same, is sort of related to the idea of path integral in physics.

    In its most common form the path integral is a way to think about the amplitude for a particle to go from position x0 at time t0 to position x1 at time t1 as a sum over all possible paths connecting these two events, weighted by a phase related to the action of the path. The cool thing here is that the magnitudes of the amplitudes are all the same. One motivation for thinking about quantum computers this way is that there are some cool tricks physicists have dreamed up to sometimes efficiently calculate these path integrals, and it might be useful to bring these over into the CS regime.

    Oh, and, p.s. there is also no proof that classical circuits can’t solve NP-complete problems efficiently, but for some reason I don’t see that in all of your posts on classical computers 😉

    • July 8, 2012 4:06 pm

      Thanks! I may have been told about van Dam’s incipient work at Complexity’08, but did not go further. I did know about the path-integral connection—indeed, I’ve asked around about whether there are geometric/topological invariants associated to it. The image in the CS regime is all these restricted cases of #P-complete solution-counting problems, and I’ve asked both Jin-Yi Cai and Les Valiant whether they may fall into their “gap”-theorem schemes. I hope to get time to bring this out more fully…

      As for “no proof”, Dick provided some thoughts which I merged into my intro; I pondered upgrading that line to add “…, nor even a convincing hardness argument”—but thought that better left-alone in the post. 😎

      • Wim van Dam permalink
        July 8, 2012 7:53 pm

        Ken: indeed you and I talked about this at Computational Complexity in Maryland. The ‘action polynomial’ (which is related to the polynomial that is defined above) that Dave, Alex Russell and I defined does indeed have various interesting geometric properties.

        In the case of stabilizer circuits the action polynomial F is of degree 2, which allowed us to use quadratic forms to efficiently compute all possible properties of the quantum circuit. It turns out that the crucial quantity of the surface F=0 is the dimension of the collection of its singular points. The conjecture is that for general quantum circuits this dimension keeps on playing an important role.

      • John Sidles permalink
        July 8, 2012 8:17 pm

        Wim’s notion of an “action polynomial” is very attractive (to me). There is a considerable literature that is devoted to the geometric properties of these objects, viewed as algebraic varieties. Please post more, Wim!

      • July 8, 2012 8:22 pm

        Right—you have the same expression grouped a different way, and say more simply that |A_C|^2 (in my terms, |u|^2) is the acceptance probability. I see your algorithm for the stabilizer case has more powerful tools than what I was trying—albeit my y^2, 2yz form is more particular than “degree 2”—including using the Jacobian ideal without the mapping variables y_i. Does it achieve Richard Jozsa’s nearly-linear time? That’s what I hope to match, but I suspect this needs to use more information about how the gates are connected. I’ll look to see if your f-to-g transformation takes care of that (too), thanks! The gdeg of the Jacobian ideal (with or without the y_i mapping) may include the info you use about the singular points.

        John—just saw your comment; I’d indeed be interested in pointers about action polynomials [as] algebraic varieties!

      • John Sidles permalink
        July 8, 2012 8:37 pm

        Be cautious about wading in these waters, Ken … the ocean of algebraic varieties is infinitely deep … and yet, it has grillable Salmon in it (arXiv:1009.6181v2 [math.AG]). 🙂

      • July 8, 2012 8:45 pm

        That ocean also has volcanic heat vents with growth-in-deep, or rather Grothendieck. 😉

      • John Sidles permalink
        July 8, 2012 9:04 pm

        Ouch! 🙂 OK, give me a secant (second) to think of an answering pun! 🙂

        Seriously, these varieties can be viewed as quantum dynamical state-spaces, upon which pullback of FTQC dynamics yields conjectures that are midway between Gil Kalai’s skepticism and Aram Harrow’s faith.

        The above link is to a Coimbra lecture by Anthony Geramita, that provides a (relatively) accessible introduction to an ocean of literature that is Grothendieck-deep and Landsberg-wide.

    • John Sidles permalink
      July 8, 2012 7:40 pm

      Regarding the notions of “path integral” and “quantum computing”, what is striking about the preprints on the arxiv server is how popular these terms are individually, yet how relatively seldom they appear together. E.g. for the year 2011, with a full record arxiv search:

         •  “quantum computing”: 366 preprints

         •  “path integral”: 229 preprints

         •  “quantum computing” AND “path integral”: 0 preprints

      It is natural to enquire whether there is some mathematical obstruction to simultaneous application of these two toolsets. In this regard a tutorial by Zia, Redish, and McKay titled “Making sense of the Legendre transform” (American Journal of Physics, 2009) is commended.

      As I presently (partially) appreciate it, one obstruction may be that expressing propagators as functional integrals over a Lagrangian action, which is natural for free particles, is *not* natural for qubits. For example, a Lagrangian for classical particles arises naturally via a Legendre transform of the convex kinetic term in the Hamiltonian, but for classical (Blochian) qubits the Hamiltonian in general has no natural convex kinetic function.

      We wonder, what mathematical element(s) is/are natural for qubits, in the sense that the Lagrangian action is natural for particles? Perhaps Ken Regan’s essay is correct to suggest that a polynomial algebra may in some sense (that admittedly is not entirely clear to me) provide these natural elements?

      • John Sidles permalink
        July 9, 2012 12:27 pm

        As it turns out, there is a very readable, and very interesting (but unpublished?) manuscript by Dave Bacon, Wim van Dam, and Alexander Russell titled “Analyzing Algebraic Quantum Circuits Using Exponential Sums” (2008?) that for a broad class of quantum circuits constructs precisely the “action polynomials” that the above comment had in mind (a Google search finds the manuscript).

        Along with Dickens’ character Oliver Twist, I hope it’s OK to voice a sentiment that this manuscript inspires: “Please, sirs, we want some more!” 🙂

      • John Sidles permalink
        July 9, 2012 12:38 pm

        … and systematically checking through the links of this GLL+PvsNP thread, I find that the above-mentioned manuscript “Analyzing Algebraic Quantum Circuits Using Exponential Sums” is identical to the PDF file that Dave Bacon’s comment linked to above.

        A very nice, very thought-provoking manuscript!

      • July 10, 2012 2:16 pm

        Works about quantum computing and the path integral do exist. Even I myself had such a work (http://arxiv.org/abs/quant-ph/0308017, rejected as “not clear”, because the Lagrangian approach in the discrete case is indeed not trivial).

  2. July 9, 2012 6:10 am

    Random Guess (hopefully polynomially testable). Hypothesis: the classical measurement device provides a boundary condition for the state of the quantum device. Corollary: the output qubit state (the pair of variables representing qubit amplitudes) can only accept real vectors [0,1] or [1,0]. These qubit states are linked to the initial state via a unitary matrix (circuit Hamiltonian). The initial states have amplitude norm one and are entangled (that is what experimental preparation requires). The entanglement is easily implemented via \alpha_{|0>} + \alpha_{|1>}= 0. The only problem is amplitude norm one for the initial state of the qubit variable (as far as I understand this is not analytical, and thus could not be implemented polynomially; the possible solution is to work in \mathbb{Z}_n as this post suggests, although this may be too restrictive on the input space). This tells us that the possible values of the output states interfere with each other via the Hamiltonian and entanglement.

  3. John Sidles permalink
    July 9, 2012 4:08 pm

    As a further resource in regard to the ultra-rich relationship between geometry, algebra, quantum dynamics, path integrals, computation, and simulation, please let me commend Tony Zee’s discussion of Faddeev-Popov methods in his student-friendly Quantum Field Theory in a Nutshell. Slowly and inexorably, Zee’s book develops the aggregate notion that path integrals in gauge theories span a non-flat state-space, and that the quantum dynamical consequences of that curvature can be effectively simulated as arising from (non-physical) ghost particles acting on a flat state-space.

    It is indicative of the immense breadth and depth of this geometric-dynamical duality — yet regrettable too — that working through its consequences occupies Prof. Zee for pretty much the entirety of his book.

    This can be viewed as a field-theoretic instance of John Preskill’s principle, as stated in his “Quantum computing and the entanglement frontier” (arXiv:1203.5813v2), that

    “The only quantum states that are physically relevant are those that can be prepared with reasonable (quantum) resources, which occupy an exponentially small portion of the full Hilbert space. Only these can arise in Nature, and only these will ever be within the reach of the quantum engineers as technology advances.”

    As with Zee’s state-space restriction, it is natural to describe Preskill’s state-space restriction too, with all the mathematical tools at our disposal: informatic, differential, algebraic, stochastic, and geometric. Indeed, the lesson of field theory (per Tony Zee) is that a mature understanding will both enable us and compel us to freely translate our understanding among all of these mathematical tool-sets (and more).

    Surely (and fortunately!) we are a long way from achieving *that*.

    • John Sidles permalink
      July 11, 2012 10:58 am

      We can extend the preceding comments a bit, by envisioning two very hopeful, very progressive impacts that the ideas of Faddeev and Popov might exert on the future evolution of ideas in QM/QIT/QSE/FTQSE (etc.).

      By way of Tony Zee’s Quantum Field Theory in a Nutshell, the teaching of Faddeev and Popov’s path-integral formalism is that restrictions of quantum dynamics to lower-dimensional non-flat subspaces can be equivalently described as non-linear dynamics associated to interactions with “ghost” particles.

      This insight authorizes us to envision two hopeful futures for QM/QIT/QSE/FTQSE.

      Future A (Aram Harrow’s hopeful future):  F-P ghosts do *not* exist in QM/QIT/QSE/FTQSE, and yet for purposes of practical simulation it is convenient to pretend that they *do* exist (analogous to von Neumann’s method of introducing artificial viscosity to render the Navier-Stokes equations computationally tractable).

      Future B (Gil Kalai’s hopeful future):  F-P ghosts *do* exist in QM/QIT/QSE/FTQSE, in the sense that experiments to induce high-order quantum correlations will (eventually) observe effects that can be ascribed equivalently to “ghost” particles that decohere quantum states (per Roger Penrose’s notion of gravitational decoherence), or alternatively to Nature’s embrace of non-linear state-spaces (per Abhay Ashtekar and Troy Schilling’s geometric formulations of quantum dynamics).

      We can even envision a supremely optimistic oscillation of ideas, in which Kalai’s Future B dominates for a few decades (decades that focus upon efficient quantum simulation with classical computational resources), then Harrow’s Future A dominates for a few decades (in which practical quantum computers are realized), following which Kalai’s Future B is reborn (as M theory, what else!).

      Has there ever been a more exciting time for mathematicians, scientists, and engineers alike, to embark on wonderful quests and conceive great enterprises?   🙂

    • John Sidles permalink
      July 11, 2012 1:11 pm

      For further references, the concise review “Faddeev-Popov ghost” in the Encyclopedia of Mathematics (European Mathematical Society/Springer) provides a reasonable starting-point. Needless to say, the best and/or most natural approach to generalizing these beautiful geometric ideas to the finite-dimensional algebraic state-spaces of quantum computation and/or quantum simulation is far from evident to me (or anyone AFAICT).

  4. July 11, 2012 8:22 am

    I have a problem with how one can count possibly exponentially many solutions in polynomial time, if symmetries are not involved. I even have a problem expressing myself. In the algebraic case a monomial basis should encode all possible solutions in the null space, and to distinguish the solutions one needs to expand the basis so that each solution has a unique encoding, i.e. one needs an exponential-dimensional basis, unless symmetries (or a direct product, or separation of variables, which leads to a direct product) are involved. For example, the system x_k(x_k-1)= 0, k=1..n has exponentially many recognizable solutions because each equation has 2 solutions, and the solution space is a direct product of 1D solutions. I have the same problem for the quantum computer: encoding exponentially many possible solutions in the different paths with polynomially many states, unless again some (possibly hidden) symmetries are involved. Therefore, it seems to be important to recognize symmetries in both quantum and classical cases. Question: do people consider automating the finding of symmetries in a system of polynomial equations? I do not see how to formulate the analogous question about quantum computers. The former question can be reformulated in the following way: is there any known efficient procedure to rewrite a system of polynomial equations that is guaranteed to recognize separation of variables, or almost a separation of variables? By almost separation, I mean an exponentially small number of “bridge” variables, the variables which, when removed, lead to a separable set of equations. If yes, then one can solve the separable systems for each set of values of the bridge variables, and use the procedure recursively.

    • July 12, 2012 11:25 pm

      mkatkov,
      Somewhat of a sidetrack: do you believe a symmetry exists in the set of perfect matchings of any bipartite graph? After all, each instance is a subset of a symmetric group.

      • July 13, 2012 7:04 am

        Javaid, the symmetry certainly exists for some bipartite graphs, in the sense that permuting vertices in one part leads to the same graph. It is possible that not all graphs have symmetries. The interesting question here is whether there are nonsymmetric instances of some problem that have exponentially many solutions, where the number of solutions can be counted in polynomial time. Do you have something specific in mind?

    • October 11, 2012 6:24 am

      Side comment about symmetries. There is an interesting presentation by David A. Cox http://www.cs.amherst.edu/~dac/lectures/EACA.pdf Read to the end to get the point. It is not hard to encode many decision problems where optimization is not involved (satisfiability problems) into a system of polynomials over \mathbb{C} . For instance one can encode binary variables as x_k^2-x_k=0 if one needs 0 and 1, or x_k^2-1= 0 if one needs \pm 1. Next, one can encode the problem by adding constraint equations. For example, the partition problem: given a vector a \in \mathbb{Z}^n we add \sum a_k x_k=0 ; SAT: for each clause add \prod t_{k, n}= 1 where t_{k,n}= x_k or t_{k,n}= 1-x_k for negation in clause n. The set of solutions is given by the Gröbner basis, which is hard to compute. On the other hand, if some symmetries exist, they somehow should help in the computation. In particular, it is interesting in the David A. Cox presentation that one can automatically construct a theorem, i.e., discover a symmetry. That is not that expensive for a very small system. The question here is whether it is possible to construct a hierarchy of theorems automatically, and what the computational cost of this is. By the way, 2-SAT would have many quadratic equations, and in particular for any constraint x_k x_m-1 or x_k (1- x_m) -1 there are 2 complementary equations x_k^2- x_k = x_m^2- x_m=0, so the full square can be formed, from which one can get a linear constraint, from which it would be easy to compute the Gröbner basis.

  5. Javaid Aslam permalink
    July 13, 2012 10:53 pm

    mkatkov,
    I was trying to relate people’s belief in NP!=P and enumerability of bipartite perfect matchings.
    I suppose the main reason that permutation groups can be enumerated in polynomial time is because of the symmetries inherent in these groups.
    Those who believe NP != P must also believe that certain subsets of these groups exhibit asymmetry responsible for the exponential enumeration time for the associated perfect matchings in those bipartite graphs.
    Or else, #P=[F]P.

  6. July 16, 2012 1:44 am

    Of course, the question of classically simulating some fragments of quantum computation is very basic and exciting. Since it is not known even that BQP is in NP (and it is quite reasonable to believe that BQP is not in NP), I wonder if there are some results on fragments of quantum computers which can be simulated in NP.

  7. July 16, 2012 2:09 am

    The following paper by Dorit Aharonov and Umesh Vazirani, Is Quantum Mechanics Falsifiable? A computational perspective on the foundations of Quantum Mechanics, is related to the discussion in the post “Is this a quantum computer.” Dorit and Umesh argue that in order to test QM in a regime where classical computers are too weak to simulate it, one needs to extend the scientific paradigm to include interactive experiments.
