Lucas Kocia

s3miclassical blog

Disclaimer: Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

Opinions expressed are solely my own and do not express the views or opinions of my employer.


IR Remote Control and Receiver for Home Theater PC

I have a home theater PC connected to an audio-video receiver (AVR) and a TV. It is just a normal PC, so after turning my TV and AVR on with their remotes, I would have to walk over to the PC and turn it on by pressing its power button. This was a bit annoying: since I have to walk over to the TV and AVR anyway to turn on the PC, I might as well turn everything on by hand and forgo the convenience of remote control entirely.

So I decided to build an IR receiver and remote (transmitter) to remedy this admittedly mild inconvenience. My setup requires that the new remote's signal not trigger the other receivers in the TV and AVR. I didn't want to deal with digital circuit logic (and coding up a PAL, FPGA, Arduino, etc.) and preferred an analog circuit. At first I thought this would be difficult to do with analog components, but after examining how IR remotes generally work and how different signals are encoded, I discovered it is a lot more straightforward than I expected.

Almost all IR remotes transmit at a "carrier" frequency of 30-40 kHz and then modulate the amplitude on/off to encode messages. We're going to use this to our advantage in producing a simple analog circuit that transmits a signal that none of the other receivers will decode to a meaningful message; we're just going to transmit the carrier frequency, i.e. a null signal according to all the other receivers.

This strategy also allows us to build a particularly simple receiver. We can simply put the received signal through a low-pass filter, which acts as an integrator. Any IR signal that is sending a message to the other devices will be modulating its amplitude, and so will send fewer "on" pulses at the carrier frequency in a given interval of time. Hence, that signal will integrate to a smaller area over the interval. If we "bleed" off the accumulated area at the correct rate (by adding a grounded resistor to our RC integrator), then we can ensure that only a "clean" unmodulated carrier signal builds up enough area to surpass a comparator's threshold, which can then send a logical "ON" level to our computer's power supply.
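The detection scheme can be sketched with a toy simulation; the charge and bleed constants here are arbitrary illustrative values, not the actual circuit parameters:

```python
# Toy model of the receiver's leaky RC integrator: each "on" pulse at
# the carrier frequency deposits some charge, and the bleed resistor
# continually drains a fraction of whatever has accumulated.

def leaky_integrate(pulses, charge=0.2, bleed=0.05):
    """Return the peak level a leaky integrator reaches for a pulse train."""
    level, peak = 0.0, 0.0
    for p in pulses:
        level += charge * p     # an "on" pulse deposits charge
        level *= (1 - bleed)    # the bleed resistor drains a fraction
        peak = max(peak, level)
    return peak

n = 1000
bare_carrier = [1] * n  # unmodulated carrier: pulsing the whole time
# A modulated signal gates the carrier on/off to encode a message:
modulated = [1 if (i // 20) % 2 == 0 else 0 for i in range(n)]

# Only the bare carrier accumulates enough "area" to cross a threshold
# sitting between the two peak levels.
assert leaky_integrate(bare_carrier) > 3.3 > leaky_integrate(modulated)
```

A comparator whose threshold sits between those two peak levels then fires only for the unmodulated carrier, which is exactly the job the receiver's op amp performs in hardware.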

Take a look at the overall IR transmitter schematic [PDF], IR receiver schematic [PDF], and pictures of both:

IR Transmitter (remote control) Pictures

[pictures: breadboard, overhead shot, profile shot]

IR Receiver Pictures

[pictures: breadboard, overhead shot, profile shot, breadboard top, PC power-button wire]

So how does this work?

Let's start with the IR transmitter. The 555 timer is wired in astable mode, which oscillates at the following frequency (see the schematic PDFs for RA, RB, and C):

frequency = 1/(0.7 × (RA + 2RB) × C) Hz
= 1/(0.7 × ((1 × 10³) + 2 × (20 × 10³)) × (0.001 × 10⁻⁶)) Hz
≈ 35 kHz
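A quick numerical check of this formula with the component values quoted above:

```python
# 555 astable frequency with RA = 1 kOhm, RB = 20 kOhm, C = 0.001 uF.

RA = 1e3        # ohms
RB = 20e3       # ohms
C = 0.001e-6    # farads

# Standard 555 astable formula: f = 1.44 / ((RA + 2*RB) * C),
# i.e. 1 / (0.693 * (RA + 2*RB) * C).
f = 1.44 / ((RA + 2 * RB) * C)
print(f"{f / 1e3:.1f} kHz")  # roughly 35 kHz
```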

You can read about this formula's derivation in many excellent sources online, so I won't get into that here. The output of the 555 is passed to a 3904 NPN transistor that acts as a switch to run sufficient current through the IR LED, which flashes on and off at the carrier frequency of 35 kHz.

I also added a red LED to the circuit that lights up when you power the circuit, so you know something is happening when you press the remote's button.

The IR receiver uses the TL1838 integrated sensor to pick up the IR signal. It actually prefers a carrier of 38 kHz, but our 35 kHz signal is close enough. It needs 5 V at pin 3 and ground at pin 1, and it outputs the amplitude it senses at pin 2. Note that it outputs the signal with the carrier frequency stripped off. The resulting signal still consists of square (ON/OFF) waves, so I pass pin 2's output to a PNP 3906 transistor switch, which acts as a follower and can pull more current for the rest of the circuit.

The signal on the other side of the transistor is sent through an RC low-pass filter with a 1/(2πRC) cutoff of 160 Hz. The filter attenuates 160 Hz sinusoidal signals to about 70% of their amplitude, and higher-frequency sinusoids are attenuated more. Now, our signals consist of square waves, which have high-frequency components at their abrupt (discontinuous) slope changes. The bare carrier signal is output by the TL1838 as a steady DC high and so is devoid of such high frequencies. You can think of the modulated carrier signal as possessing more high-frequency sinusoidal components because it changes slope more often; as a result, modulated carrier signals are filtered more. The integrated area is continually "bled" off by the 100 kΩ resistor to ground. If the TL1838 sensor receives a sufficiently pure 35 kHz carrier signal, then it drives the area produced by the RC integrator high enough to overcome this bleed. I chose the RC constant and the bleed resistor value by trial and error.
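The filter's frequency response can be sanity-checked numerically. The R and C values below are assumptions chosen only to reproduce the ~160 Hz cutoff mentioned above; the schematic may use different components:

```python
import math

# First-order RC low-pass: cutoff fc = 1/(2*pi*R*C), amplitude
# response |H(f)| = 1/sqrt(1 + (f/fc)^2).

R = 10e3    # ohms (assumed)
C = 0.1e-6  # farads (assumed) -> fc of roughly 160 Hz

fc = 1 / (2 * math.pi * R * C)

def gain(f):
    """Amplitude gain of the filter at frequency f (Hz)."""
    return 1 / math.sqrt(1 + (f / fc) ** 2)

print(f"fc = {fc:.0f} Hz")                    # ~159 Hz
print(f"gain at fc: {gain(fc):.3f}")          # ~0.707, i.e. down to ~70%
print(f"gain at 10*fc: {gain(10 * fc):.3f}")  # ~0.1: higher frequencies cut hard
```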

Op amp A serves as a comparator. When enough signal passes through the low-pass filter that the integrated voltage rises above 2.5 V (as set by the voltage divider with the two 10 kΩ resistors), the inverting op amp ratchets to low (0 V at pin 1). We pass this signal to the power button wire to turn the computer on (or off).

Op amp B acts as a simple follower that monitors how much signal passes the filter and lights the red LED according to the amplitude that gets through. This LED gives some indication of how much signal is passing when we flash the IR remote at the receiver. It lights up a bit when I use my other remotes, but only fully lights up when I send it a bare carrier signal.

You want the receiver to share the same +5 V and ground lines as the computer's motherboard. It turns out that under the modern ATX power standard, the (usually red) +5 V line is off when the computer is off. However, the (usually purple) +5 VSB line (the standby voltage) is always on, so that's what you want to connect to. It's usually most easily accessible on the 24-pin ATX connector that runs from your power supply unit (PSU) to the motherboard. The PC's ground can be found in the same connector at the multiple pins corresponding to its black wires.

And that's it. As you can see, it's a pretty simple set of circuits. I think the remote looks a bit gnarly, but it's the process of making it that counts. You can get away with using some cell phones as "IR blasters" if you don't want to build that part. The receiver also has a bit of a scary look in the PC (see the last picture). I might cover it with a panel that has two small holes for the red LED and the TL1838 sensor to give it a more "professional" look. I've been using it for over half a year now and enjoy this little device's compact simplicity.


Symmetries and entanglement of stabilizer states

I'd like to highlight a paper recently posted on the arXiv (arXiv:1902.04764 v2) by Matthias Englbrecht and Barbara Kraus at the University of Innsbruck in Austria.

Stabilizer states are an important subject in fault tolerance and in understanding the power and cost of universal quantum computation, and this paper seems to advance their characterization for qubits in terms of their local symmetries, providing an algorithm for finding those symmetries.

The main foundational tool that Englbrecht and Kraus use in their characterization is the fact that it suffices to study graph states, since any stabilizer state can be related to a graph state by a local Clifford transformation (and these transformations are easy to characterize separately). So the graph states define equivalence classes of stabilizer states. To be more precise, given an abelian subgroup of the n-qubit Pauli group that does not include −I, a stabilizer state is the unique state that is a +1 eigenstate of every element of the subgroup. It can be shown that this abelian subgroup is generated by n Pauli operators.

Given an undirected graph G(V,E) with vertices V and edges E, the corresponding graph state is given by
|G> = ∏_{{i,j} ∈ E} UCZ^{ij} |+>⊗n,
where UCZ^{ij} denotes the controlled-phase gate on qubits i and j. This is a stabilizer state with generators X_j ∏_{k ∈ N_j} Z_k, where N_j is the neighborhood of vertex j.

So this defines the equivalence class of stabilizer states that are equivalent up to local Clifford transformations.
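To make the definition concrete, here is a small numerical sketch (using a 3-vertex path graph as an arbitrary example) that builds |G> and verifies that each stabilizer generator X_j ∏_{k ∈ N_j} Z_k fixes it:

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
plus = np.array([1, 1]) / np.sqrt(2)

def kron_all(ops):
    return reduce(np.kron, ops)

def cz(n, i, j):
    """Controlled-Z between qubits i and j on n qubits (diagonal matrix)."""
    d = 2 ** n
    U = np.eye(d)
    for b in range(d):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[i] == 1 and bits[j] == 1:
            U[b, b] = -1
    return U

# Example: 3-vertex path graph with edges {0,1} and {1,2}.
n, edges = 3, [(0, 1), (1, 2)]
state = kron_all([plus] * n)
for (i, j) in edges:
    state = cz(n, i, j) @ state

# Stabilizer generator for vertex j: X on j, Z on each neighbor.
for j in range(n):
    ops = [I] * n
    ops[j] = X
    for (a, b) in edges:
        if a == j: ops[b] = Z @ ops[b]
        if b == j: ops[a] = Z @ ops[a]
    assert np.allclose(kron_all(ops) @ state, state)  # eigenvalue +1
```

For the middle vertex of the path, the generator is Z ⊗ X ⊗ Z, matching the formula above.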

They then proceed to show that there are two different types of graph states:

  1. graph states that possess a symmetry of order 3 of the form
    σ1 exp(iβσ2) exp(iγσ3), with β, γ ∈ {±π/4}, where σ1, σ2, σ3 ∈ {X,Y,Z} and σ2 ≠ σ3;
  2. those which do not have such a symmetry.

The symmetry group of the first type of graph state has to be a subgroup of the local Clifford group and is therefore discrete and finite.

Graph states of the second type have a continuous unitary symmetry iff the corresponding graph has a leaf up to local complementation.

I would be interested to see how this plays out for odd-dimensional qudit stabilizer states where the analysis might be simpler. I also wonder how non-local symmetries characterize the stabilizer states. Nevertheless, this gives a nice handle on how these stabilizer states "look" in terms of their (local) symmetries, which is especially important from the perspective of studying their entanglement properties.

SAND No.: SAND2020-1612 W


Comment on Bravyi et al.

On page 12 of PRX 6, 021043 (2016), Bravyi et al. discuss a Monte Carlo algorithm they use to try to find the smallest set of stabilizer states that spans a given "magic" state. They write:

"Since our target state has real amplitudes in the computational basis, one can easily show that the optimal decomposition Eq.(B1) can be chosen such that all stabilizer states have real amplitudes as well (the real part of a stabilizer state is either zero or proportional to a stabilizer state). Accordingly, we restrict the random walk to the subset corresponding to real stabilizer states. Clearly, a move maps real states to real states if P contains even number of Y’s. The move is accepted only if this condition is satisfied."

Despite their claim that this is clear, I thought this could use some clarification. In particular, I want to clear up what I think they mean by "real part of a stabilizer state".

I believe that the authors are just expressing the fact that, in terms of the computational basis states, you can write any qubit stabilizer state as a linear combination of two stabilizer states, where the first has only real coefficients and the second has purely imaginary coefficients. So, for instance (ignoring normalization):
a stabilizer state with stabilizer generators YI & IY can be written as |ψA> = |00> + i|01> + i|10> − |11> = (|00> − |11>) + i(|01> + |10>), and
a stabilizer state with stabilizer generators XI & IY can be written as |ψB> = |00> + i|01> + |10> + i|11> = (|00> + |10>) + i(|01> + |11>).

Any stabilizer state can be expressed as
|ψstab> = c_real |ψstab,1> + c_imag |ψstab,2>,
where c_real is real, c_imag is purely imaginary, and the stabilizers of |ψstab,1> and |ψstab,2> are written purely in terms of Xs and Zs.

Therefore, in order to maintain that the stabilizers of |ψstab,1> and |ψstab,2> are composed only of Xs and Zs, you can only use Pauli projectors that contain pairs of Ys. In other words, to project from one state that has c_imag = 0 to another with c_imag = 0, you must use Paulis with an even number of Ys.
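This decomposition is easy to verify numerically for the first example above:

```python
import numpy as np

# |psi_A> = |00> + i|01> + i|10> - |11>  (unnormalized), the state
# stabilized by YI and IY from the example above.
psi_A = np.array([1, 1j, 1j, -1])

real_part = psi_A.real  # |00> - |11>: proportional to a real stabilizer state
imag_part = psi_A.imag  # |01> + |10>: also proportional to a real stabilizer state

assert np.allclose(real_part, [1, 0, 0, -1])
assert np.allclose(imag_part, [0, 1, 1, 0])

# A Pauli with an even number of Y's is a real matrix, so it maps real
# vectors to real vectors. For example, Y (x) Y has no imaginary entries:
Y = np.array([[0, -1j], [1j, 0]])
YY = np.kron(Y, Y)
assert np.allclose(YY.imag, 0)
```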

Maybe this clarification proves useful for some future reader.

SAND No.: SAND2019-11569 W


Quantum Machine Learning with Boltzmann Machines

Nathan Wiebe gave a talk at a conference on "Quantum Machine Learning" at Purdue on work he's done on Boltzmann machines, a class of neural networks. These are characterized by each network configuration x occurring with probability P(x) ∝ exp(−E(x)). Typically E is chosen to be the Ising Hamiltonian, and you want to change its coefficients such that the energy is minimized for what you're trying to identify:
E(x) = ∑_{i,j} w_{i,j} n_i n_j,
w = argmin_w DKL(P(x) ‖ Ptrain(x)),
E(x) → H = ∑_{i,j} w_{i,j} (1 − Z_i)(1 − Z_j)/4,
where DKL is the KL divergence,
DKL(P ‖ Q) = ∑_{x ∈ X} P(x) log(P(x)/Q(x)).
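To make the classical setup concrete, here is a small sketch that computes the Boltzmann distribution for a three-unit Ising-type energy and its KL divergence from a training distribution. The weights and the (uniform) training distribution are made up for illustration:

```python
import math
from itertools import product

# Arbitrary example weights for E(x) = sum_{i,j} w_ij * n_i * n_j.
w = {(0, 1): -1.0, (0, 2): 0.5, (1, 2): -0.3}

def energy(x):
    return sum(wij * x[i] * x[j] for (i, j), wij in w.items())

# Boltzmann distribution P(x) proportional to exp(-E(x)) over all
# 2^3 configurations of three binary units.
states = list(product([0, 1], repeat=3))
weights = [math.exp(-energy(x)) for x in states]
Zpart = sum(weights)
P = [v / Zpart for v in weights]

# Made-up training target: uniform over the 8 configurations.
Ptrain = [1 / 8] * 8

# D_KL(P || Ptrain); training adjusts w to shrink this.
dkl = sum(p * math.log(p / q) for p, q in zip(P, Ptrain))
assert dkl >= 0  # KL divergence is nonnegative
```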

Quantizing this gives a quantum Ising model, which is thought to have no classical analog once a transverse X_i term is added (the transverse-field Ising model).

Unlike previous studies that have followed the classical route and trained using the KL divergence, Nathan measured training by using the quantum relative entropy between the two quantum states. The relative entropy is defined as
S(ρ ‖ σ) = Tr(ρ log ρ − ρ log σ).
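As a sanity check, here is a small sketch (with arbitrary example density matrices, not ones from the talk) that evaluates this formula numerically via eigendecomposition:

```python
import numpy as np

def mat_log(A):
    """Matrix logarithm of a positive-definite Hermitian matrix."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.log(vals)) @ vecs.conj().T

def rel_entropy(rho, sigma):
    """S(rho || sigma) = Tr(rho log rho - rho log sigma)."""
    return float(np.real(np.trace(rho @ mat_log(rho) - rho @ mat_log(sigma))))

# Arbitrary single-qubit example states.
rho = np.array([[0.7, 0.0], [0.0, 0.3]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])

S = rel_entropy(rho, sigma)
assert S >= 0                              # Klein's inequality
assert abs(rel_entropy(rho, rho)) < 1e-10  # vanishes when the states coincide
```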

Though this solves some issues related to using the KL divergence, it becomes very difficult when you have "hidden" layers in your neural network. Hidden layers consist of units that are distinguished from "visible" units by not being related to physical quantities; they generate correlations between the visible units. This is problematic in the quantum realm because it means they can produce entanglement which, when traced out to look at just the visible units, necessarily drives the remaining states toward maximally mixed states. Since the eigenbasis index is what these quantum Boltzmann machines use as the classifier index (e.g., if you're training your network to pick out cats from a set of pictures, the second eigenstate might correspond to a picture of a dog, and so you would want it to have a larger eigenvalue than the eigenstates that correspond to cats), a maximally mixed state is useless for classification.

Nathan largely addresses this problem by assuming that the hidden-layer Hamiltonians commute, though he also has a more general method (which is correspondingly much more expensive). There are also mathematical issues with using the relative entropy and difficulties with taking its gradient. To address this, he used Jensen's inequality to get a simpler variational upper bound on the objective function and used this in the gradient calculations.

On the subject of the power of quantum machine learning compared to classical machine learning, Nathan noted that with classical machine learning you can only change the energy surface, while with quantum machine learning you can also change the eigenbasis. This, along with other reasons, forms the crux of his argument for why quantum machine learning has the potential to solve more problems than classical machine learning: quantum Boltzmann machines are more expressive than classical Boltzmann machines unless BQP=BPP. This seems to largely follow the argument he published some time ago (arXiv:1412.3489).

This is a pretty interesting approach to machine learning that introduces a novel approach to training when using quantum Boltzmann machines.


Fine-grained quantum computational supremacy

There is an interesting paper on the arXiv (arXiv:1902.04764 v2) by Tomoyuki Morimae and Suguru Tamaki at Kyoto University.

They demonstrate, by relying on the exponential-time hypothesis instead of the traditional appeal to the non-collapse of the polynomial hierarchy, that strong simulation of the Clifford+T gate set up to an exponentially small additive error cannot be done in time 2^o(t), where t is the number of T gates. They also show that the corresponding weak simulation (sampling) problem within a constant multiplicative error cannot be done in time 2^o(t). They then suggest that this implies strong simulation up to a constant multiplicative error cannot be subexponential (so, for instance, a runtime like 2^√t is not possible).

This is interesting because, in general, weak simulation is not necessarily less costly than strong simulation. So it is surprising that they are using a lower bound on weak simulation results to lower bound a strong simulation result. However, this is probably fine since weak simulation of the Clifford+T gateset is thought to be easier than its strong simulation (certainly all classical algorithms have so far shown this to be the case).

NISQ Computers and Semiclassics

We are in the age of intermediate-size quantum computers without error correction---so-called noisy intermediate-scale quantum (NISQ) computers---and we are reaching the limit of quantum circuits that can be validated by classical computers [1]. All the major industrial players in classical computing---IBM, Microsoft, Google, Intel, etc.---and many start-ups are trying to build such a NISQ computer. They have publicly expressed the need for new and better classical algorithms for simulating quantum circuits, to validate and verify whether their efforts truly result in a "quantum" machine. Moreover, it is important to establish whether their eventual machines offer quantum "advantage" or "supremacy" over their classical counterparts---i.e., can they perform their task faster than our current computers.

Moreover, classical algorithms are necessary for breaking up quantum algorithms in the near term, due to the small size of NISQ devices. It will be very helpful to identify as many parts of a quantum algorithm as possible that are classically efficiently simulable and so can be run on a classical computer, producing a hybrid classical-quantum computation that still demonstrates quantum "advantage". This is important for algorithms prevalent in chemistry, such as the variational quantum eigensolver (VQE) [2-4], for calculating properties of condensed matter [5], and many others [6,7].

One of the most compelling reasons for using a semiclassical approach is that it directly relates quantum phenomena to classical mechanics and provides a formal and exact relationship between the two, governed by ħ. Calculating or computing quantum evolution by including only low-order ħ terms is usually more computationally tractable than full quantum evolution as it is representable by a classical probability theory with integrable dynamics.

Continuous systems are characterized by infinite-dimensional Hilbert spaces (i.e. a particle's position can take on any value on the real line) while discrete systems are characterized by finite-dimensional Hilbert spaces (i.e. a particle's position can only take on discrete values, like the natural numbers up to some bound). Quantum computers are typically defined on the latter and define their fundamental unit of information to be a qubit---a quantum state that can only take two discrete values.

Whereas the application of semiclassical path-integral approaches to continuous (infinite-dimensional) systems is long established, their use for discrete systems is at a far more nascent stage. The notion of classicality in quantum information and related topics, including contextuality, entanglement, and quantum "coherence", has until now resided within the language of complexity classes developed in computer science and entropic (or "thermodynamic") definitions [8-11]. Similarly, the practical side of the simulation of quantum algorithms has generally been accomplished by quantum Monte Carlo methods or tensor network approaches. I think it is time for the discrete versions of methods that have been developed and found useful in continuous Hilbert spaces, such as WKB methods and their higher-order corrections [12] and level-spacing trace formulas [13], to be developed and introduced. There has also been a lot of work in the continuous setting on developing "minimally quantum" corrections to classically efficient propagators (so-called Feynman-Vernon influence functionals) that, when discretized, serve as incredibly promising quantum-algorithm candidates for NISQ computers to handle in a hybrid classical-quantum manner. I have taken the first few steps in the direction of bringing such semiclassical methods to bear in the discrete world with my work on the classical simulation of quantum circuits composed of Clifford+T gates [14], but there is far more to do.

[1] J. Preskill. Quantum Computing in the NISQ era and beyond. ArXiv e-prints, January 2018.
[2] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J. Love, Alán Aspuru-Guzik, and Jeremy L. O'Brien. A variational eigenvalue solver on a photonic quantum processor. Nature Communications, 5:4213, 2014.
[3] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, 2016.
[4] Dave Wecker, Matthew B Hastings, and Matthias Troyer. Progress towards practical quantum variational algorithms. Physical Review A, 92(4):042303, 2015.
[5] Bela Bauer, Dave Wecker, Andrew J Millis, Matthew B Hastings, and Matthias Troyer. Hybrid quantum-classical approach to correlated materials. Physical Review X, 6(3):031045, 2016.
[6] Juha M Kreula, Laura Garcı́a-Álvarez, Lucas Lamata, Stephen R Clark, Enrique Solano, and Dieter Jaksch. Few-qubit quantum-classical simulation of strongly correlated lattice fermions. EPJ Quantum Technology, 3(1):11, 2016.
[7] Ryan Babbush, Nathan Wiebe, Jarrod McClean, James McClain, Hartmut Neven, and Garnet Kin Chan. Low depth quantum simulation of electronic structure. arXiv preprint arXiv:1706.00023, 2017.
[8] Andrew M Gleason. Measures on the closed subspaces of a hilbert space. Journal of mathematics and mechanics, 6(6):885–893, 1957.
[9] Simon Kochen and E.P. Specker. The problem of hidden variables in quantum mechanics. Journal of Mathematics and Mechanics, 17(0):1, 1967.
[10] Harold Ollivier and Wojciech H. Zurek. Quantum discord: A measure of the quantumness of correlations. Phys. Rev. Lett., 88:017901, Dec 2001.
[11] Mark Howard, Joel Wallman, Victor Veitch, and Joseph Emerson. Contextuality supplies the ’magic’ for quantum computation. Nature, 510(7505):351–355, 2014.
[12] C Chester, B Friedman, and F Ursell. An extension of the method of steepest descents. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 53, pages 599–611. Cambridge Univ Press, 1957.
[13] M. C. Gutzwiller. Periodic orbits and classical quantization conditions. J. Math. Phys., 12:343, 1971.
[14] Lucas Kocia and Peter Love. Stationary phase method in discrete Wigner functions and classical simulation of quantum circuits. arXiv preprint arXiv:1810.03622, 2018.