Plugins
The qat.fermion module contains several plugins.

qat.fermion features an ADAPT-VQE plugin, called AdaptVQEPlugin. Its purpose is to build ansatze for VQE by automatically selecting operators from a given pool. This selection is done by computing the commutator between each candidate operator and the observable, which indicates which operator would have the most influence on the resulting energy. While this selection can be slower than directly building other standard ansatze, ADAPT-VQE automatically builds efficient ansatze and can lead to improved overall VQE performance.
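To make the selection criterion concrete, here is a minimal numpy sketch of how one could rank a pool of operators by the magnitude of the commutator expectation value \(|\langle \psi | [H, A] | \psi \rangle|\). This is only an illustration of the idea, not the plugin's internal code; H, pool and psi are plain numpy arrays here rather than qat objects.
import numpy as np

def rank_pool(H, pool, psi):
    """Return the index of the pool operator A maximizing |<psi|[H, A]|psi>|."""
    scores = []
    for A in pool:
        commutator = H @ A - A @ H
        # The commutator expectation value estimates the energy gradient
        # obtained by appending this operator to the ansatz
        scores.append(abs(np.vdot(psi, commutator @ psi)))
    return int(np.argmax(scores))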
Let us see how to initialize and use AdaptVQEPlugin. We are interested in computing the ground state of the \(H_2\) molecule.
For the sake of simplicity, we will assume we already have the variable cluster_operators, containing a list of Observable objects, and hamiltonian, the SpinHamiltonian object containing the Hamiltonian of our system. We also assume we have computed the correct initial Hartree-Fock state in binary representation, stored in the variable hf_init. These steps are shown in more detail in our two tutorials VQE for a H2 molecule using the UCC ansatz and VQE for a LiH molecule using the UCC ansatz.
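If you only want to run the snippets below end to end without the chemistry pre-processing, a toy stand-in for these variables can be built by hand. The operators below are arbitrary placeholders, not the actual cluster operators from the tutorials, and we assume SpinHamiltonian can be built directly from a list of Pauli terms as in the qat.fermion API.
from qat.core import Observable, Term
from qat.fermion import SpinHamiltonian

nbqbits = 4

# Placeholder Hamiltonian
hamiltonian = SpinHamiltonian(nbqbits, [Term(1.0, "ZZ", [0, 1]), Term(0.5, "XX", [1, 2])])

# Placeholder pool of cluster operators
cluster_operators = [
    Observable(nbqbits, pauli_terms=[Term(1.0, "XY", [0, 1])]),
    Observable(nbqbits, pauli_terms=[Term(1.0, "YX", [2, 3])]),
]

# Hartree-Fock state |0011> in binary (integer) representation
hf_init = 0b0011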
We first need to initialize the circuit with a Hartree-Fock state and generate a variational Job.
from qat.lang.AQASM import Program, X
# Initialize a Program
prog = Program()
reg = prog.qalloc(hamiltonian.nbqbits)
# Define the circuit which prepares a Hartree-Fock state
for j, char in enumerate(format(hf_init, "0" + str(hamiltonian.nbqbits) + "b")):
    if char == "1":
        prog.apply(X, reg[j])
# Generate the associated circuit
circuit = prog.to_circ()
# Define the variational Job we need to optimize
job = circuit.to_job(observable=hamiltonian)
We now need to define the stack for the computation.
from qat.plugins import AdaptVQEPlugin, ScipyMinimizePlugin
from qat.qpus import get_default_qpu

# Define the stack (theta_init, the initial parameter values, is assumed to be
# already defined, like cluster_operators and hamiltonian)
adaptvqe_plugin = AdaptVQEPlugin(cluster_operators, n_iterations=5)
optimizer = ScipyMinimizePlugin(method="COBYLA", tol=1e-7, options={"maxiter": 100}, x0=theta_init)
qpu = get_default_qpu()
stack = adaptvqe_plugin | optimizer | qpu
Everything is set up! The qat.plugins.AdaptVQEPlugin will generate the variational ansatz to be optimized by qat.plugins.ScipyMinimizePlugin, the latter going back and forth with the QPU to optimize the ansatz parameters. Let's submit the job:
import numpy as np

# Submit the job
result = stack.submit(job)
# Print the energy value and compare it to the exact diagonalization
print(f"Computed energy = {result.value}")
print(f"Expected energy = {min(np.linalg.eigvalsh(hamiltonian.get_matrix()))}")
>>> Computed energy = -1.1372701679264894
Expected energy = -1.1372701679265027
Gradient descent optimizers (GradientDescentOptimizer) represent a very standard class of optimizers. To minimize a given function, the gradient of this function with respect to each of its parameters is computed. This allows the parameters to be updated such that the final value of the function decreases.
To minimize a function \(L(\theta)\), we update \(\theta\) iteratively such that:
\(\theta_{t+1} = \theta_t - \eta \nabla_{\theta} L(\theta_t)\)
where \(\eta\) is the learning parameter.
Doing this iteratively, we get a set of parameters which minimizes the given function. Many algorithms, such as the stochastic gradient descent algorithm (SGD) or its adaptive extension Adam, work this way.
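As a minimal illustration of this update rule, here is vanilla gradient descent on a simple quadratic loss in plain numpy (unrelated to the plugin's internals):
import numpy as np

def loss(theta):
    """Quadratic loss with minimum at theta = (1, -2)."""
    return (theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2

def grad_loss(theta):
    """Analytic gradient of the loss."""
    return np.array([2.0 * (theta[0] - 1.0), 2.0 * (theta[1] + 2.0)])

eta = 0.1             # learning parameter
theta = np.zeros(2)   # initial parameters
for _ in range(100):
    theta = theta - eta * grad_loss(theta)

print(theta)  # close to [1, -2]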
However, these optimizers assume the parameter space is Euclidean, meaning that during the parameter update, each parameter is updated by the same Euclidean distance. This assumption is generally unfounded, and at best a good approximation, since the loss function might change at different rates depending on the parameter considered. To correct for this behaviour, we can use the *Fisher information matrix*, which acts as a metric tensor by transforming the steepest descent in the parameter space into a steepest descent in the distribution space.
The same issue arises when it comes to optimizing the variational parameters of a quantum circuit. By using the Fubini-Study metric tensor \(g\), one can devise a quantum analog of the classical natural gradient descent defined earlier:
\(\theta_{t+1} = \theta_t - \eta \, g^{+}(\theta_t) \nabla_{\theta} L(\theta_t)\)
Here, \(g^{+}\) represents the pseudo-inverse of the Fubini-Study metric tensor \(g\).
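The natural-gradient step itself is a one-liner once \(g\) is known. Here is a sketch using numpy's pseudo-inverse; the metric below is a made-up placeholder, not an actual Fubini-Study computation (in practice \(g\) is estimated on the QPU):
import numpy as np

eta = 0.3
theta = np.array([0.1, 0.2, 0.3])
grad = np.array([0.5, -0.1, 0.2])  # gradient of the cost (placeholder values)

# Placeholder Fubini-Study metric tensor
g = np.array([[0.25, 0.00, 0.00],
              [0.00, 0.21, 0.02],
              [0.00, 0.02, 0.23]])

# Natural gradient update: theta <- theta - eta * g^+ grad
theta = theta - eta * np.linalg.pinv(g) @ grad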
Note
For more information, see this publication.
The quantum natural gradient descent algorithm is available on Qaptiva and can be accessed via the GradientDescentOptimizer plugin. It features both the standard and the natural gradient descent algorithms.
Let us use the natural gradient-based optimizer to solve a variational problem. We want to compute the expectation value of an observable by VQE, using a custom ansatz.
import numpy as np
from qat.core import Observable, Term
from qat.lang.AQASM import Program, RX, RY, RZ, CNOT
nbqbits = 3
# Define the observable
obs = Observable(nbqbits, pauli_terms=[Term(1, "Y", [0])])
# Build a parameterized circuit
prog = Program()
reg = prog.qalloc(nbqbits)
RY(np.pi / 3)(reg[0])
RZ(prog.new_var(float, "\\theta_0"))(reg[0])
RZ(prog.new_var(float, "\\theta_1"))(reg[1])
RZ(prog.new_var(float, "\\theta_2"))(reg[2])
RY(np.pi / 4)(reg[1])
CNOT(reg[0], reg[1])
RY(prog.new_var(float, "\\theta_3"))(reg[1])
RY(np.pi / 4)(reg[2])
CNOT(reg[0], reg[1])
RY(prog.new_var(float, "\\theta_4"))(reg[2])
CNOT(reg[1], reg[2])
circ = prog.to_circ()
We initialize the plugin.
from qat.qpus import get_default_qpu
from qat.plugins import GradientDescentOptimizer
# Initialize Optimizer
natgrad_opt = GradientDescentOptimizer(maxiter=50, learning_parameter=0.3, natural_gradient=True, tol=1e-5)
# Define which QPU to use
qpu = get_default_qpu()
# Define the stack
stack = natgrad_opt | qpu
The stack is defined! Let us run the job on the QPU:
result = stack.submit(circ.to_job(job_type="OBS", observable=obs))
print(f"Expected value for the observable = {result.value}")
>>> Expected value for the observable = -0.8660254037652464
More information on how to use this plugin is available in the following Jupyter notebook: Natural gradient-based optimizer.
The sequential minimization optimization algorithm is a hybrid classical-quantum algorithm which leverages the parameter shift rule to locally optimize the angles of a certain class of circuits with 3 energy measurements. The full algorithm is described in an article by Nakanishi et al. (2020) and also in an article by Ostaszewski et al. (2021).
This algorithm only applies to circuits containing gates of the form \(G(\theta) = e^{-ic\theta\hat{P}/2}\), with \(\hat{P}\) a tensor product of Pauli matrices and \(c\) a constant coefficient.
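For such gates, the energy is a sinusoid in each individual angle, so three measurements suffice to reconstruct it and jump directly to its minimum. Here is a sketch of one such local update, following the closed-form expression from Ostaszewski et al. (2021); the three energy values stand in for actual QPU measurements:
import numpy as np

def sequential_update(theta_0, e_0, e_plus, e_minus):
    """One sequential-minimization step for a single angle.

    e_0, e_plus and e_minus are the energies measured at theta_0,
    theta_0 + pi/2 and theta_0 - pi/2 respectively.
    """
    return theta_0 - np.pi / 2 - np.arctan2(2 * e_0 - e_plus - e_minus, e_plus - e_minus)

# Example on a synthetic sinusoidal landscape E(t) = 0.8 * sin(t + 0.3) - 0.2
E = lambda t: 0.8 * np.sin(t + 0.3) - 0.2
t0 = 1.0
t_min = sequential_update(t0, E(t0), E(t0 + np.pi / 2), E(t0 - np.pi / 2))
print(E(t_min))  # -1.0, the exact minimum of the sinusoid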
The sequential optimization algorithm has been implemented as a plugin called qat.plugins.SeqOptim. For the class of circuits described previously, it usually outperforms more standard optimizers such as the methods implemented in the ScipyMinimizePlugin plugin, while having a low sensitivity to shot noise.
For more information, please refer to the notebook Optimizing circuits with the sequential optimization plugin.
The ZeroNoiseExtrapolator plugin helps mitigate multiqubit gate noise by means of an extrapolation to the zero-noise regime. It is used in the following notebook: Mitigating multiqubit noise (Qaptiva users only).
The idea is to measure the observable of interest \(\hat{O}\) under varying noise intensities, so that a noise-free value can be inferred. The noise is artificially increased by multiplying the number of \(CNOT\) gates: since \(CNOT^2=I\), one can replace each \(CNOT\) gate in the circuit with a number \(2n_{\mathrm{pairs}}+1\) of identical \(CNOT\) gates without changing the logical function of the circuit. Since two-qubit gates are considerably more faulty than one-qubit gates, this boils down to globally increasing the noise picked up during the execution of the circuit: one can show that, as a first approximation, if the gate noise can be modelled by a depolarizing channel, the equivalent noise level will correspond to a \((2n_{\mathrm{pairs}}+1)\)-fold increase of the original noise level (see Hybrid quantum-classical algorithms and quantum error mitigation by Endo et al. (2021), p. 23, for the detailed calculation).
By choosing a fit \(f\) so that \(\langle \hat{O} \rangle_{\mathrm{noisy}} = f(n_\mathrm{pairs})\), one can thus estimate the noise-free expectation value of the observable as \(\langle \hat{O} \rangle_{\mathrm{noise-free, inferred}} = f(-0.5)\).
The plugin allows for either a linear fit (Richardson extrapolation):
\(f(n_{\mathrm{pairs}}) = a \, n_{\mathrm{pairs}} + b\)
or an exponential one:
\(f(n_{\mathrm{pairs}}) = a \, e^{-b \, n_{\mathrm{pairs}}} + c\)
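A sketch of the extrapolation step itself, using a linear (Richardson) fit on synthetic data; the noisy expectation values below are placeholders for actual measurements at increasing \(n_{\mathrm{pairs}}\):
import numpy as np

# Noisy expectation values measured at n_pairs = 0, 1, 2 (placeholder numbers)
n_pairs = np.array([0, 1, 2])
noisy_values = np.array([-0.90, -0.75, -0.62])

# Linear (Richardson) fit: f(n) = a * n + b
a, b = np.polyfit(n_pairs, noisy_values, deg=1)

# The zero-noise limit corresponds to f(-0.5), since the effective noise
# multiplication factor 2 * n_pairs + 1 vanishes there
noise_free_estimate = a * (-0.5) + b
print(noise_free_estimate)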
Note: The plugin also allows increasing the noise level with a gate that is different from the \(CNOT\) gate. This corresponds to the more general “unitary-folding” technique, in which we replace each occurrence of \(G\) by \(G(GG^{\dagger})^n\). See for example Digital zero noise extrapolation for quantum error mitigation by Giurgica-Tiron et al. (2020).
The MultipleLaunchesAnalyzer plugin is a very simple plugin allowing a variational process to be optimized multiple times at once. This can be useful when it is unclear which initial parameters to use for an ansatz or a set of ansatze: the plugin runs several optimizations with different sets of random initial parameters, keeps the best result, and also computes the variance of the results obtained, as sketched below.
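A minimal stack using this plugin might look as follows; the n_runs keyword is an assumption based on the notebook, so check the plugin's reference documentation for the exact signature:
from qat.plugins import MultipleLaunchesAnalyzer, ScipyMinimizePlugin
from qat.qpus import get_default_qpu

# Run 5 independent optimizations with random initial parameters
# (n_runs is assumed here; see the plugin documentation)
launches_analyzer = MultipleLaunchesAnalyzer(n_runs=5)
optimizer = ScipyMinimizePlugin(method="COBYLA", options={"maxiter": 100})
stack = launches_analyzer | optimizer | get_default_qpu()

# `job` is a variational job such as the one built earlier in this section
# result = stack.submit(job)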
For more information on how to use this plugin, refer to the notebook Running several optimizations and keeping the best one with MultipleLaunchesAnalyzer.