qat.qpus.LinAlg

Warning

This is a newer version of the previous LinAlg simulator. The older simulator can still be imported as LinAlgLegacy. As a result, the following parameters are deprecated: gpu_group_size, gpu_buffer_size, group_size, feynman_factor, extract_causal_cones, and density_matrix. LinAlgLegacy is still used as the main backend of NoisyQProc.

class qat.qpus.LinAlg(gpu: bool = False, gpu_index: int | None = None, precision: int = 2, sparse: bool | None = None, light_circuit: bool = False, tqdm: bool = False, fusion: bool = True, readonly_statevector: bool = False, individual_terms_values: bool = False, seed: int | None = None, **kwargs)

LinAlg simulator

Parameters:
  • gpu (bool) – use the GPU backend (may not be available on your version of NewLinAlg) Default: False

  • gpu_index (int, optional) – index of the GPU device to use

  • precision (int) – precision used for the simulation, either 1 (single) or 2 (double) Default: 2

  • sparse (bool, optional) – whether or not to return a sparse result Default: None

  • light_circuit (bool) – deprecated, only works for some named gates Default: False

  • tqdm (bool) – use tqdm to display progress in simulating the circuit Default: False

  • fusion (bool) – use the FusionPlugin as a pre-processing step, merging gates together (only applied to circuits of at least 23 qubits) Default: True

  • readonly_statevector (bool) – the statevector included in the result is returned directly from the underlying C++ array in memory. The statevector is then in lsb-first convention. This option only makes sense for perfect sampling, where all qubits are measured. Default: False

  • individual_terms_values (bool) – if True, the expectation value of each observable term will be individually filled into the value_data and error_data maps, at a slight performance cost. This is not compatible with GPU mode. Default: False

  • seed (int) – seed of the random number generator Default: None (will use a random seed)