
Challenging Claims of Quantum Supremacy

A Tensor + RL Approach

Xiao-Yang Liu

April 19, 2024


[Overview diagram: Tensor Networks + Reinforcement Learning (L2O/L2A algorithms) for classically simulating QC + QA]

Publications:
● NeurIPS 2023; NeurIPS 2022 (×2)
● Machine Learning, Springer-Nature, 2024
● IEEE Trans. Computers (2)
● IEEE Trans. Signal Processing, 2023
● IEEE Trans. on Parallel and Distributed Systems (2)
● IEEE Trans. Neural Networks and Learning Systems (2)
● IEEE Trans. Information Theory, 2020

Two libraries: ElegantRL, FinRL

[Book] Tensor for Data Processing, Elsevier, 2021
[Book] Reinforcement Learning for Cyber-Physical Systems, Chapman & Hall/CRC, 2019
Outline
● Background
○ Quantum supremacy: speedups

● Contributions
○ Tensor networks: representation
○ Reinforcement learning: search
○ Classical simulation: our results

● Future Plan
○ Educating Quantum
○ Large language models for quantum simulation
○ Open Model Framework (OMF)
The Quest for “Quantum Supremacy”
● Quantum Computing: speedups
○ Shor’s factoring: exponential
○ Grover’s search: quadratic

Do quantum algorithms live up to the claims?


The Quest for “Quantum Supremacy”
● Google’s Sycamore circuits (Nature 2019): 53 qubits, running 20 cycles
○ Noisy intermediate-scale quantum (NISQ) era (small #qubits, highly noisy)
○ Task: Sampling the output of random quantum circuits
○ “200s vs 10K years”
The Debate over “Quantum Supremacy”
● “Hasty” claim: the 10K-year estimate relied on an inefficient simulation method.

Achieved, or yet to be achieved?

○ Nature 2019: 10K years
○ Our results at NeurIPS 2023: about 4 days
A Suitable Computational Task: Sampling the Output of
Random Quantum Circuits

NISQ era
Goal: empirical quantum advantage

Quantum supremacy aims to demonstrate that a programmable quantum device


can solve a problem that no classical computer can solve in any feasible amount of
time, irrespective of the usefulness of the problem.

Challenges:
● Limited number of qubits
● High error rate of qubits and gates
Metric: Linear Cross-Entropy Benchmarking Fidelity (XEB)

F_XEB = 2^n ⟨P(x_i)⟩_i − 1

● F_XEB ≈ 1: no errors in the quantum circuit and an ideally distributed output.
● F_XEB ≈ 0: a uniformly random distribution.

● 2^n: state-space dimension of n qubits
● x_i: index of a bit string
● ⟨P(x_i)⟩_i: average probability over all sampled bit strings
● P(x_i): probability of a specific bit string

Arute F, Arya K, Babbush R, et al. Quantum supremacy using a programmable superconducting processor. Nature, 2019.
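As a sketch, the XEB metric above can be computed directly from sampled bit strings, given the circuit's ideal output probabilities. A minimal NumPy illustration; `linear_xeb` and the toy 2-qubit distribution are hypothetical, not from the talk:

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear XEB fidelity: F_XEB = 2^n * <P(x_i)>_i - 1,
    averaged over the sampled bit-string indices."""
    dim = 2 ** n_qubits
    mean_p = np.mean([ideal_probs[x] for x in samples])
    return dim * mean_p - 1.0

# Toy 2-qubit check: sampling from a uniform distribution gives F_XEB = 0,
# since every sampled bit string has probability exactly 1/2^n.
probs = np.full(4, 0.25)
rng = np.random.default_rng(0)
samples = rng.integers(0, 4, size=10_000)
print(linear_xeb(probs, samples, 2))  # exactly 0.0 for uniform probs
```

A noiseless device sampling from the ideal (Porter-Thomas) distribution would instead score near 1.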
Sycamore Circuits (Random Quantum Circuit)

● 53 qubits, 20 cycles
● Single-qubit gates, chosen uniformly at random from {√X, √Y, √W}
● Two-qubit gates: fSim
● Cycles: m (in ABCDCDAB sequence)

Arute F, Arya K, Babbush R, et al. Quantum supremacy using a programmable superconducting processor. Nature, 2019.
Sycamore Circuits (Random Quantum Circuit)

Arute F, Arya K, Babbush R, et al. Quantum supremacy using a programmable superconducting processor. Nature, 2019.
Debate: Classical Simulation

Xiao-Yang Liu. Classical Simulation of Quantum Circuits: Parallel Environments and Benchmark. NeurIPS 2023.
Classical Simulations of Random Quantum Circuits



What are the mainstream methods by which classical
simulations of quantum circuits can be realized?

● State vector method (Schrödinger method)


● Decision diagram method
● Tensor network method

Why is the tensor network method good?

1) The state vector method faces a storage bottleneck.
2) The tensor network method has advantages over the other two methods.
3) The efficiency of the tensor network method relies on the contraction path.
An Example: GHZ State
● Showcases quantum entanglement
● Highlights non-classicality
● Wide utility across quantum computing
● Suitable for experimental validation

GHZ (Greenberger-Horne-Zeilinger) State


Method 1. State Vector Method

[Figure: left-multiplication of gate matrices, in both matrix-vector representation and computational representation]
Initial State

All-zero initial state |000⟩, determining the α000 amplitude of the resulting state.

[Figure: the initial state in bra-ket notation and computational form, written as a tensor product]


First Gate: Hadamard Gate

Two tensor products with the I-gate form a 2^n × 2^n matrix: H ⊗ I ⊗ I.

The identity gate is a logic gate in quantum computing that, essentially, does nothing.
Second Gate

[Figure: the CNOT gate, acting on a control qubit and a target qubit]
Third Gate

Calculation of a Particular Amplitude of the Final State
Method 1. Final State

[Figure: the final GHZ state obtained by left-multiplication, in matrix-vector and computational representations]
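The walkthrough above (all-zero initial state, then H ⊗ I ⊗ I, then two CNOTs) can be sketched as a minimal NumPy state vector simulation. Gate placement follows the slides' left-multiplication convention; the helper `kron_all` is illustrative, not the talk's code:

```python
import numpy as np

# Gates as dense matrices (state vector / Schrodinger method).
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def kron_all(*mats):
    """Tensor product of gates, building the full 2^n x 2^n operator."""
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

# |000>: amplitude 1 at index 0.
state = np.zeros(8)
state[0] = 1.0

# Left-multiply the full operators, one circuit layer at a time:
state = kron_all(H, I, I) @ state   # Hadamard on qubit 0
state = kron_all(CNOT, I) @ state   # CNOT: control qubit 0, target qubit 1
state = kron_all(I, CNOT) @ state   # CNOT: control qubit 1, target qubit 2

print(np.round(state, 3))  # amplitudes 1/sqrt(2) at |000> and |111>
```

The result is the GHZ state (|000⟩ + |111⟩)/√2, matching the slides' final amplitudes.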
Method 1. State Vector Method

1. The state vector method faces a storage bottleneck.

n qubits require a state vector of size 2^n and gate matrices of size 2^n × 2^n.

This exponential growth makes quantum circuit simulation on classical computers
impractical for larger systems, even with powerful supercomputing clusters.
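The bottleneck is easy to quantify. A back-of-the-envelope sketch, assuming 16-byte complex128 amplitudes (the function name is illustrative):

```python
# Memory for an n-qubit state vector: 2^n complex amplitudes, 16 bytes each.
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (20, 30, 53):
    print(n, statevector_bytes(n), "bytes")
# n = 53 (Sycamore-sized) needs 2^53 * 16 = 2^57 bytes = 128 PiB,
# far beyond the memory of any supercomputing cluster.
```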
Method 1. State Vector Method

Advantages:
● Mathematical simplicity: the state vector method offers an intuitive and concise way to describe quantum systems, aiding comprehension and computation.
● Ease of implementation: standard linear algebra operations simplify implementation using existing libraries and algorithms.

Disadvantages:
● Exponential complexity: the dimension of the vector representation grows exponentially with qubit count.
● High memory requirements: storing complex amplitudes demands significant memory, limiting scalability.
● High computational complexity: larger quantum systems require more computational resources and time for state vector multiplication.

Burgholzer L, Ploier A, Wille R. Tensor Networks or Decision Diagrams? Guidelines for Classical Quantum Circuit Simulation[J]. arXiv preprint arXiv:2302.06616, 2023.
Tensor Network Diagram

Initial Input

Single-qubit Gate

Two-qubit Gate
Method 2. Tensor Network Method

Tensor network representation


Method 2. Tensor Network Method

Efficiency of classically simulating a quantum circuit
= contracting the tensor network along a path with minimum cost.
Method 2. Tensor Network Contraction

[Figure: contracting the tensor network step by step]

Intermediate storage stays small; only the final result needs to be stored.
The state vector method requires the full 2^n state space (storage) for each operation.
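A minimal illustration of why the contraction path matters, using a matrix chain (toy dimensions, chosen for illustration): the same network contracted in two orders yields the same tensor at very different multiplication counts.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 100))  # a three-tensor chain A-B-C
B = rng.standard_normal((100, 5))
C = rng.standard_normal((5, 50))

# Two contraction orderings of the same network:
left = (A @ B) @ C    # cost: 10*100*5 + 10*5*50  =  7,500 multiplications
right = A @ (B @ C)   # cost: 100*5*50 + 10*100*50 = 75,000 multiplications

assert np.allclose(left, right)  # identical result, 10x cost difference
```

For general tensor networks, `np.einsum` with an explicit contraction path plays the same role as the parenthesization above.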
Method 3. Decision Diagram Method
(relatively new, not stable yet)
Comparison of Three Methods

State vector
● Advantages: fits well for low-level classical circuit simulation; complexity similar to straightforward state vector calculation
● Disadvantages: exponential memory

Decision diagrams
● Advantages: exploit structural redundancies in the underlying representation; efficiency depends on redundancy in representations
● Disadvantages: worst-case exponential complexity independent of the simulation ordering; difficult to estimate resulting diagram size without computation

Tensor networks
● Advantages: preferable for computing scalar quantities in quantum circuit simulation; performance is independent of the actual content of the individual tensors, depending only on their size and shape
● Disadvantages: infeasible for computing the complete output state vector; performance depends on topological structure
Burgholzer L, Ploier A, Wille R. Tensor Networks or Decision Diagrams? Guidelines for Classical Quantum Circuit Simulation[J]. arXiv preprint arXiv:2302.06616, 2023.
RL/ML Shows Great Potential

Algorithms: Classical vs. Quantum

● Operations
○ Classical: matrix multiplication (AlphaTensor, Nature 2023); fast Fourier transform; BLAS. Can we further reduce #multiplications?
○ Quantum: tensor contraction (CUDA); Quantum Fourier Transform (QFT); qBLAS. Can we find gate-efficient circuits?

● Tensor Networks
○ Classical: tensor network structure search (ICML 2022/2023, ICLR 2023)
○ Quantum: tensor network contraction ordering (TNCO) for quantum circuits (NeurIPS 2023)
Sycamore Circuits
Tensor Network Representation

Quantum circuit simulation = tensor network contraction

Example of two orderings:
the ordering in blue uses 976 multiplications; the ordering in red uses 5,056 multiplications.
Tensor Network Contraction Ordering (TNCO) Problem

Goal: find an ordering P with minimum contraction cost


A Simple Example
Encoding the Contraction Ordering

[Figure: a four-tensor network with indices s, i, j, k, m; contracting one pair of tensors at each step (e.g. tensors 3 and 4 over index m, then tensors 1 and 2) encodes the contraction ordering step by step]
L2O and L2A Algorithms

L2O for continuous optimization (TSP 2023): non-convex optimization.

L2A for discrete optimization (NeurIPS 2023): a huge discrete space; transitions
follow a policy π that approximates a Boltzmann distribution p(x).
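A minimal sketch of the target distribution: sampling a discrete state x with probability p(x) ∝ exp(−E(x)/T), the Boltzmann distribution the L2A policy π approximates. The energies and temperature below are made up for illustration:

```python
import numpy as np

def boltzmann_sample(energies, temperature, rng):
    """Sample a state index with p(x) proportional to exp(-E(x)/T)."""
    logits = -np.asarray(energies, dtype=float) / temperature
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
# Low temperature concentrates probability on the minimum-energy state.
draws = [boltzmann_sample([1.0, 5.0, 9.0], 0.5, rng) for _ in range(1000)]
counts = np.bincount(draws, minlength=3)
print(counts)  # almost all mass on state 0, the energy minimum
```

In L2A the "energy" is the contraction cost of an ordering, and the policy samples the discrete space autoregressively instead of enumerating it.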
L2A Algorithm Using Transformer Policy Network
● Sampling the conditional probability (autoregressive)

[Figure: RNN (LSTM) vs. Transformer policy networks]


Build Massive Parallel TNCO Environment
Parallelism of RL Algorithms (in ElegantRL Library)

Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, Michael I. Jordan. ElegantRL-Podracer:
Scalable and elastic library for cloud-native deep reinforcement learning. NeurIPS Workshop on Deep Reinforcement Learning, 2022.
RL Pipeline on GPU

[Figure: traditional RL pipeline vs. our RL pipeline. Traditional: CPU cores run the environment loop (select edge, contraction step, reward calculation), with data transfer to the GPU for policy-gradient computation. Ours: action selection, contraction step, reward calculation, and policy-gradient computation all run on the GPU, eliminating data transfer.]

Mapping TNCO Environment onto GPU

[Figure: the RL method samples orderings across N parallel environments created on the GPU; each environment holds an undirected graph (adjacency matrix), performs contraction and cost calculation in parallel, and feeds replay buffers used to optimize the policy]
Massive Parallel Simulation on GPU
Verification on Maxcut Problem

The Gset dataset (from Stanford) is a standard benchmark.

L2A improves on the state-of-the-art results.


Experimental Results (#Multiplications): Sycamore Circuits
Mapping Tensor Contraction onto GPU
Experimental Results (searching time and running time)
Take-home Messages
● Example: Google’s supremacy circuits
Our results push back the quantum supremacy claim.

Quantum computers are still far away.

● The benefit of our work: squeezing more efficiency out of classical computers, when done the right way.

Open-source code and results serve as benchmark curves.

● In the NISQ era (limited #qubits, highly noisy),
classical simulation will allow testing, designing, and evaluating
QC/QA before large-scale physical quantum computers become available.
Future Plan
Build A SandBox for Educating AI&Quantum: from classical to quantum
Classical vs. Quantum stack:
● Applications: recommendation, RL solver | communication
● Simulator: cloud | quantum simulator
● Algorithms: privacy, deep learning, optimization | quantum ML
● Tensor: tensor factorization | tensor network
● Parallel computing: BLAS | qBLAS
● Hardware: Tensor Core (NVIDIA), TPU (Google) | Sycamore (Google), Amazon Braket (Amazon)
1). Quantum Initiative@RPI: Educating Quantum

● Collaborating with Columbia Quantum Initiative


○ MS program, starting Fall 2024
● Course 1: Quantum sensing and detection, Spring 2023@Columbia (TA)
● Course 2: Quantum optimization and machine learning, Spring 2024@Columbia
● Course 3: Quantum error correction codes, Fall 2024@Columbia

● Introduction to quantum computing, Summer 2024@RPI


About 30 lecture notes and 6 lab sessions

● $10K for quantum services on AWS and Microsoft Azure √


GUI-based Simulator of Quantum Circuits and Algorithms
Snapshots: Demo for Operating Qubits
Snapshots: Demo for Grover’s Search Algo
Quantum Fourier Transform (QFT)

Fast Fourier Transform (FFT): tensor network representation.
Rank-3 input tensor; each index represents a 1 × N vector.

Traditional FFT vs. tensor contraction method:

Recurrence: T(n) = 2T(n/2) + O(n);
by the Master Theorem, T(n) = O(n log n).
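The recurrence above is the classic radix-2 Cooley-Tukey scheme: two half-size subproblems plus a linear combine step. A minimal recursive sketch (illustrative, not the talk's implementation):

```python
import numpy as np

def fft(x):
    """Radix-2 Cooley-Tukey FFT: T(n) = 2*T(n/2) + O(n) => O(n log n).
    Assumes len(x) is a power of two."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])            # two T(n/2) subproblems
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # O(n) combine step
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(fft(x), np.fft.fft(x))  # matches NumPy's FFT
```

The QFT circuit realizes the same divide-and-conquer structure with Hadamard and controlled-phase gates, which is what the tree-tensor-network representation below captures.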


Representing QFT as a Tensor Network

Each level of the tree corresponds to which qubit dictates the phase shift of the other qubits in the system.

The tree represents the repetitive and reductive nature of the QFT circuit.
Tree-Tensor Network (Mapping onto GPU)

Xiao-Yang Liu, IEEE Trans. Computers, 2023


DMRG Algorithm (Mapping onto GPU)

Xiao-Yang Liu, IEEE Trans. Parallel and Distributed Systems, 2022


2). Language Model for Quantum Simulation

Roger G. Melko & Juan Carrasquilla. Language Model for Quantum Simulation. Nature Computational Science, Jan., 2024.
3). Open Model Framework (OMF) with Linux Foundation
Funding Opportunities
Quantum computing
● IBM Research
● U.S. National Quantum Initiative
● U.S. NSF Quantum Leap
● U.S. NSF Quantum Information Science
FinTech:
● $5K for FinRL contest 2023, by Vatic Ventures and NYU √
● $300K per year, by Linux Foundation (LF AI & Data, FinOS)
● $100K per year, CRAFT Center
Computing power
● $10K for quantum services on AWS √
● $700K per year by IDEA and NVIDIA, DGX SuperPOD (1500 GPUs A100) √
● $100K computing by Falcon LLM Foundation √
Appendix
Tensor Network Contraction Ordering Environment
Massive Parallel Tensor Network Contraction Ordering Environment

[Figure: sampled orderings are fed into parallel environments, each returning a scalar reward]
Circuit of Quantum Fourier Transform (QFT)

Controlled Phase Shift Rotation (CPSR) Gates:

Example of Highlighted Gate:
