Benchmarks¶
Here we show benchmarks comparing the simulation times of LinAlg running on CPU and GPU. The CPU version is timed on a single socket (Intel(R) Xeon(R) Platinum 8153 CPU @ 2.00GHz) with 16 cores. The GPU used in the benchmarks is an NVIDIA Tesla V100 (Volta architecture, 16 GB memory). All simulations are performed in double precision.
a) Sampling from random circuits¶
We simulate a random circuit of 5000 gates (drawn uniformly from H, X, Y, Z, CNOT, SWAP, T, PH, RX, RY, RZ, X.ctrl()) and then draw 1000 measurement samples from the final state.
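The structure of this benchmark can be sketched with a minimal NumPy statevector simulator. This is not LinAlg's API: the qubit count `n = 5`, the restriction to a single-qubit subset of the gate pool, and the helper `apply_1q` are all illustrative assumptions; the real benchmark uses the full gate set above and sweeps larger qubit counts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # hypothetical small qubit count; the benchmark sweeps larger sizes

# Single-qubit subset of the benchmark's gate pool (two-qubit gates omitted for brevity)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
gates = [H, X, Y, Z, T]

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate on qubit q of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

# Start in |0...0> and apply 5000 random gates, as in the benchmark
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0
for _ in range(5000):
    g = gates[rng.integers(len(gates))]
    state = apply_1q(state, g, int(rng.integers(n)), n)

# Draw 1000 measurement samples from the final state
probs = np.abs(state) ** 2
samples = rng.choice(2**n, size=1000, p=probs)
```

The timed quantity in the benchmark corresponds to the gate-application loop plus the final sampling step.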
b) Solving maxcut problem with QAOA¶
We time a single call to the QPU (i.e. evolution of the QAOA ansatz plus observable evaluation) for a Maxcut problem on a random Erdos-Renyi graph.
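What one such "QPU call" computes can be sketched in NumPy for a depth-1 QAOA ansatz. The graph size `n = 8`, edge probability `p_edge = 0.5`, the angles `(gamma, beta)`, and the function name `qpu_call` are illustrative assumptions, not the benchmark's actual code.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, p_edge = 8, 0.5  # hypothetical Erdos-Renyi graph parameters
edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
         if rng.random() < p_edge]

# Diagonal of the Maxcut cost operator C = sum_{(i,j)} (1 - Z_i Z_j) / 2
ks = np.arange(2**n)
zs = 1 - 2 * ((ks[:, None] >> np.arange(n)) & 1)  # Z eigenvalues per basis state
cost = sum((1 - zs[:, i] * zs[:, j]) / 2 for i, j in edges)

def qpu_call(gamma, beta):
    """One 'QPU call': evolve the depth-1 QAOA ansatz and evaluate <C>."""
    state = np.full(2**n, 2 ** (-n / 2), dtype=complex)  # |+>^n initial state
    state = state * np.exp(-1j * gamma * cost)           # cost layer (diagonal)
    # Mixer layer: exp(-i beta X) applied to every qubit
    c, s = np.cos(beta), -1j * np.sin(beta)
    rx = np.array([[c, s], [s, c]])
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
    state = psi.reshape(-1)
    return np.real(np.vdot(state, cost * state))  # observable evaluation

val = qpu_call(0.4, 0.7)
```

A full QAOA optimization repeats such calls many times inside a classical optimizer, so the per-call time dominates the overall runtime.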
c) Fidelity¶
Here we benchmark the time to obtain the fidelity of a noisy Quantum Fourier Transform, assuming a noise level comparable to that of IBM's chips. For the stochastic simulations we use 200 samples, which suffices to estimate the fidelity to within 2% accuracy.
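The stochastic fidelity estimate can be sketched as follows: run the ideal QFT once, then average the overlap with 200 noisy trajectories in which a random Pauli is inserted after each gate with some probability. The qubit count `n = 4`, the per-gate error probability `p = 0.01` (a stand-in for the IBM-like noise level), and the single-Pauli error model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4      # hypothetical small qubit count
p = 0.01   # assumed per-gate Pauli error probability

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply_1q(state, gate, q):
    """Apply a 2x2 gate on qubit q of the n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def cphase(state, ctrl, targ, theta):
    """Controlled-phase gate, diagonal in the computational basis."""
    ks = np.arange(2**n)
    mask = ((ks >> ctrl) & 1) & ((ks >> targ) & 1)
    return state * np.where(mask, np.exp(1j * theta), 1.0)

def qft(state, noisy=False):
    """QFT circuit (final qubit swaps omitted); optionally insert Pauli errors."""
    for q in range(n):
        state = apply_1q(state, H, q)
        if noisy and rng.random() < p:
            state = apply_1q(state, [X, Y, Z][rng.integers(3)], q)
        for k, c in enumerate(range(q + 1, n), start=2):
            state = cphase(state, c, q, 2 * np.pi / 2**k)
            if noisy and rng.random() < p:
                state = apply_1q(state, [X, Y, Z][rng.integers(3)], q)
    return state

# Random normalized input state, ideal output, and a 200-sample fidelity estimate
psi0 = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi0 /= np.linalg.norm(psi0)
ideal = qft(psi0)
fid = np.mean([abs(np.vdot(ideal, qft(psi0, noisy=True))) ** 2
               for _ in range(200)])
```

The benchmark times the 200 noisy-trajectory simulations; the statistical error of the mean over 200 samples is what bounds the fidelity estimate to the quoted 2% accuracy.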