Using MPS and MPO tensor networks¶
As in the other notebooks, we will use a simple random circuit to showcase some MPS and MPO features. Instead of relying on the QPUs themselves, we will see how to compute matrix product states (MPS) and matrix product operators (MPO) directly, and how to use this factorized representation of the quantum state.
We start by defining a Haar-random circuit, meaning that all quantum gates in the circuit are 2-qubit Haar-random unitary matrices. Most features available for MPS are also available for MPO. We will first detail the MPS features, and then add the features specific to MPO at the end of the notebook.
Using MPS tensor networks¶
Let's generate the random circuit.
from qat.mps.random_circuits import random_circuit_generator
circ = random_circuit_generator(12, 20)
We then compute the MPS associated with this circuit. To do so, we first define the MPS QPU and then use the compute_mps function.
from qat.mps import compute_mps
from qat.qpus import MPS
qpu = MPS(bond_dimension=16)
mps, run_info = compute_mps(circ, qpu)
run_info contains all the metadata you usually find in your Result.meta_data when using the QPU framework.
print("Truncation fidelity product is : %s " % run_info["truncation_fidelity_product"])
Truncation fidelity product is : 0.8939957319814157
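The exact set of keys depends on the simulator version; assuming run_info behaves like a standard dictionary, you can list everything it contains:
# Inspect all metadata returned by the simulation (a sketch; available keys may vary between versions)
for key, value in run_info.items():
    print(key, ":", value)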
Extracting the bond dimensions¶
Here, the circuit is large enough to require truncations. The MPS is truncated to a maximum bond dimension of 16. We can display the bond dimensions of the MPS using the .bond_dimensions property.
print(f"Bond dimensions: {mps.bond_dimensions} for a MPS of size {len(mps)}")
Bond dimensions: (2, 4, 8, 16, 16, 16, 16, 16, 8, 4, 2) for a MPS of size 12
Contracting an MPS to its statevector representation¶
MPS are very memory-efficient. However, one may wish to contract the MPS to recover the statevector representation. Keep in mind that this scales exponentially with the number of qubits and is therefore limited by default. Change the default max_qubit_memory_limit to set the restriction to a higher threshold and avoid a memory error.
statevector = mps.contract()
print(statevector)
[ 0.00579379-0.0038765j -0.00558693-0.00368392j 0.00146634+0.00660909j ... 0.00065571-0.01601414j -0.00838224-0.00257625j -0.00313782-0.00454166j]
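As a quick sanity check (a sketch using NumPy only), the contracted statevector should have a norm close to one:
import numpy as np
# The dense statevector returned by contract() should be approximately normalized
print(f"Statevector norm: {np.linalg.norm(statevector)}")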
Sampling from an MPS¶
You can efficiently compute the probability of a given bitstring. To do so, you must provide the corresponding state index.
sample_0 = mps.sample(0)
print(f"Probability for state |0>*N = {sample_0}")
Probability for state |0>*N = 4.859529556874823e-05
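For a non-trivial bitstring, the state index is simply its integer value. The bit-ordering convention used below (leftmost qubit as most significant bit) is an assumption and should be checked against your installation:
# Probability of a specific 12-qubit bitstring, assuming the leftmost qubit is the most significant bit
bitstring = "101000000000"
sample_bs = mps.sample(int(bitstring, 2))
print(f"Probability for state |{bitstring}> = {sample_bs}")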
Computing an observable¶
To do so, use the method .expectation_value(observable):
from qat.core import Observable, Term
observable = Observable(circ.nbqbits, pauli_terms=[Term(1., "ZZZ", [1, 3, 4])])
print(f"Expectation value : {mps.expectation_value(observable)}")
Expectation value : (0.012019416057278041-1.8595610826844775e-17j)
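Observables with several Pauli terms work the same way. As a small sketch reusing the Observable and Term classes above, here is the total magnetization, a sum of single-qubit Z terms:
# Expectation value of the total magnetization sum_q Z_q
obs_z = Observable(circ.nbqbits, pauli_terms=[Term(1., "Z", [q]) for q in range(circ.nbqbits)])
print(f"Expectation value of sum_q Z_q : {mps.expectation_value(obs_z)}")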
Checking the memory usage of an MPS in megabytes (MB)¶
print(f"Memory usage of the MPS: {mps.get_memory_used()} MB")
Memory usage of the MPS: 0.043648 MB
Saving and loading an MPS¶
Since MPS are memory-efficient representations, it can be useful to save them after a simulation and load them later. To do this, use the .save(filename) and .load(filename) methods.
mps.save("mps_1")
from qat.mps import StateMPS
mps = StateMPS.load("mps_1.npz")
mps.bond_dimensions
(2, 4, 8, 16, 16, 16, 16, 16, 8, 4, 2)
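As an optional check (a sketch assuming the state is small enough to contract, as above), the reloaded MPS should describe the same state as before saving:
import numpy as np
# The reloaded MPS should reproduce the statevector contracted earlier
print(np.allclose(statevector, mps.contract()))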
Computing the norm of an MPS¶
The norm $\sqrt{\langle\psi|\psi\rangle}$ can be computed using the .norm() method.
mps.norm()
np.float64(1.0000000000000004)
Using MPO tensor networks¶
Let us now see the features specific to MPO. Again, we define a Haar-random circuit and a noise model.
# Define a circuit
import numpy as np
from qat.hardware import make_depolarizing_hardware_model
from qat.mps import compute_mpo
from qat.qpus import MPO
# Generate the circuit
nqbits = 5
depth = 10
circ = random_circuit_generator(nqbits, depth)
# Define a hardware noise model
hw_model = make_depolarizing_hardware_model(0.01, 0.01)
# Compute the MPO for defined circuit
qpu = MPO(hardware_model=hw_model, bond_dimension=8)
mpo, run_info = compute_mpo(circ, qpu)
print("Truncation fidelity product is : %s " % run_info["truncation_fidelity_product"])
Truncation fidelity product is : 0.9885427526453385
The contraction of an MPO gives back the quantum state in its density matrix representation.
density_matrix = mpo.contract()
print(f"Density matrix shape: {density_matrix.shape}")
Density matrix shape: (32, 32)
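From the dense matrix, quantities such as the purity can be evaluated with NumPy (a sketch; for larger systems such quantities should be obtained from the MPO itself rather than from the dense matrix):
# Purity Tr(rho^2) of the noisy, truncated state, computed from the dense density matrix
purity = np.trace(density_matrix @ density_matrix)
print(f"Purity: {purity.real}")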
The trace of the density matrix can be computed efficiently in the MPO representation using the .trace() method.
print(f"Density matrix trace = {mpo.trace()}")
Density matrix trace = (0.8928106734956162+8.845734474766107e-17j)
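Since the full density matrix was contracted above, the MPO trace can be cross-checked against the dense computation:
# Cross-check: the dense trace should match the value returned by mpo.trace()
print(f"Dense density matrix trace = {np.trace(density_matrix)}")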
The MPO simulator can also be used to compute the expectation value of an observable $O$ in the Heisenberg picture.
Unlike the conventional Schrödinger picture, where the initial state is evolved forward in time by applying gates directly to the state, here the process is reversed: the observable is evolved backward in time through a circuit $U$ (with or without noise) by computing $U^\dagger O U$ and then projected onto the initial state.
This contraction approach can be beneficial for simulating quantum circuits with a larger number of qubits, especially when the qubit topology is not restricted to one dimension, when circuits are close to Clifford circuits, or when circuit depths permit light-cone structures.
We take a random circuit and an observable:
# The circuit parameters
nqbits = 12
depth = 5
# defining a random circuit
np.random.seed(42)
circ = random_circuit_generator(nqbits, depth)
# and an observable
observable = Observable(circ.nbqbits, pauli_terms=[Term(1., "ZZZ", [1, 3, 4])])
We can now use the MPO QPU for the observable computation by setting heisenberg_picture=True:
mpo_qpu = MPO(heisenberg_picture=True, bond_dimension=8)
From this, we can compute the expectation value on the zero state $|000\dots\rangle$:
job = circ.to_job(observable=observable)
result = mpo_qpu.submit(job)
print(f"Expectation value = {result.value}")
Expectation value = 0.08849424604949553
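As a cross-check (a sketch reusing compute_mps and the MPS QPU imported at the beginning of the notebook, and assuming a bond dimension of 64 is enough to avoid any truncation for this shallow 12-qubit circuit), the same expectation value can be computed in the Schrödinger picture:
# Schrödinger-picture cross-check with the noiseless MPS simulator
mps_check, _ = compute_mps(circ, MPS(bond_dimension=64))
print(f"MPS expectation value = {mps_check.expectation_value(observable)}")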