Public API¶
Graph¶
Abstract class for NetKet graph objects. |
|
A simple implementation of Graph based on an external graph library. |
|
A lattice built by periodic arrangement of a given unit cell. |
|
Construct a set graph (collection of unconnected vertices). |
|
Constructs a hypercubic lattice with equal side length in all dimensions. |
|
Contains information about a single lattice site. |
|
Constructs a chain of length sites. |
|
Constructs a hypercubic lattice given its extent in all dimensions. |
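For illustration, a minimal construction sketch (argument and attribute names follow the standard netket.graph interface; exact defaults may vary between versions):

```python
import netket as nk

# A 1D chain of 10 sites with periodic boundary conditions
chain = nk.graph.Chain(length=10, pbc=True)

# A 10x10 square lattice (a hypercubic lattice with equal side lengths)
square = nk.graph.Hypercube(length=10, n_dim=2, pbc=True)

print(square.n_nodes)       # number of vertices
print(len(square.edges()))  # number of edges
```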
Hilbert¶
Abstract class for NetKet Hilbert space objects. |
|
Abstract class for the Hilbert space of particles in continuous space. |
|
Abstract class for a Hilbert space defined on a lattice. |
|
The abstract base class for homogeneous Hilbert spaces. |
|
A custom Hilbert space with discrete local quantum numbers. |
|
Tensor product of several Discrete sub-spaces, representing their combined space. |
|
Superoperator Hilbert space for states living in the tensorized space \(\hat{H}\otimes \hat{H}\), encoded according to Choi’s isomorphism. |
|
Hilbert space obtained as tensor product of local spin states. |
|
Hilbert space obtained as tensor product of local qubit states. |
|
Hilbert space obtained as tensor product of local Fock states. |
|
Hilbert space derived from AbstractParticle for Fermions. |
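A minimal sketch of how these spaces are typically constructed and composed (the `*` composition into a tensor-product space is an assumption that holds for recent NetKet versions):

```python
import netket as nk

# 10 spin-1/2 degrees of freedom
hi_spin = nk.hilbert.Spin(s=1/2, N=10)

# 4 bosonic modes with at most 3 bosons per mode
hi_fock = nk.hilbert.Fock(n_max=3, N=4)

# Discrete spaces can be combined into a tensor-product space
hi_combined = hi_spin * hi_fock
```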
Operators¶
Abstract class for quantum Operators. |
|
This class is the base class for operators defined on a discrete Hilbert space. |
|
An extended Bose-Hubbard model Hamiltonian operator, containing both on-site interactions and nearest-neighbor density-density interactions. |
|
A graph-based quantum operator. |
|
A custom local operator. |
|
The Transverse-Field Ising Hamiltonian \(-h\sum_i \sigma_i^{(x)} +J\sum_{\langle i,j\rangle} \sigma_i^{(z)}\sigma_j^{(z)}\). |
|
The Heisenberg Hamiltonian on a lattice. |
|
A Hamiltonian consisting of the sum of products of Pauli operators. |
|
LocalLiouvillian super-operator, acting on the DoubledHilbert (tensor product) space ℋ⊗ℋ. |
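For example, the built-in Hamiltonians are constructed from a Hilbert space and a graph (a minimal sketch with default-style couplings):

```python
import netket as nk

g = nk.graph.Chain(length=10, pbc=True)
hi = nk.hilbert.Spin(s=1/2, N=g.n_nodes)

# Transverse-field Ising model on the chain
ising = nk.operator.Ising(hilbert=hi, graph=g, h=1.0)

# Heisenberg model on the same lattice
heisenberg = nk.operator.Heisenberg(hilbert=hi, graph=g)
```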
Pre-defined operators¶
Builds the boson creation operator \(\hat{a}^\dagger\) acting on the site-th site of the Hilbert space hilbert. |
|
Builds the boson destruction operator \(\hat{a}\) acting on the site-th site of the Hilbert space hilbert. |
|
Builds the number operator \(\hat{a}^\dagger\hat{a}\) acting on the site-th site of the Hilbert space hilbert. |
|
Builds the projector operator \(|n\rangle\langle n |\) acting on the site-th site of the Hilbert space hilbert, projecting onto the state with n bosons. |
|
Builds the \(\sigma^x\) operator acting on the site-th site of the Hilbert space hilbert. |
|
Builds the \(\sigma^y\) operator acting on the site-th site of the Hilbert space hilbert. |
|
Builds the \(\sigma^z\) operator acting on the site-th site of the Hilbert space hilbert. |
|
Builds the \(\sigma^{+} = \frac{1}{2}(\sigma^x + i \sigma^y)\) operator acting on the site-th site of the Hilbert space hilbert. |
|
Builds the \(\sigma^{-} = \frac{1}{2}(\sigma^x - i \sigma^y)\) operator acting on the site-th site of the Hilbert space hilbert. |
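These building blocks can be summed and multiplied to assemble custom Hamiltonians. A rough sketch reproducing the transverse-field Ising Hamiltonian from its local terms:

```python
import netket as nk

hi = nk.hilbert.Spin(s=1/2, N=4)
h, J = 1.0, 1.0

# Start from an empty LocalOperator and accumulate the local terms
ha = nk.operator.LocalOperator(hi)
for i in range(hi.size):
    ha -= h * nk.operator.spin.sigmax(hi, i)
    ha += J * nk.operator.spin.sigmaz(hi, i) * nk.operator.spin.sigmaz(hi, (i + 1) % hi.size)
```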
Continuous space operators¶
This class is the abstract base class for operators defined on a continuous Hilbert space. |
|
This is the kinetic energy operator (\(\hbar = 1\)). |
|
Returns the local potential energy defined in afun. |
|
This class implements the action of the _expect_kernel()-method of ContinuousOperator for a sum of ContinuousOperator objects. |
Exact solvers¶
Computes all eigenvalues and, optionally, eigenvectors of a Hermitian operator by full diagonalization. |
|
Computes first_n smallest eigenvalues and, optionally, eigenvectors of a Hermitian operator using the Lanczos method. |
|
Computes the numerically exact steady-state of a Lindblad master equation. |
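A short sketch of how the exact solvers are typically called (default arguments only; options such as computing eigenvectors are version-dependent):

```python
import netket as nk

g = nk.graph.Chain(length=8)
hi = nk.hilbert.Spin(s=1/2, N=g.n_nodes)
ha = nk.operator.Ising(hilbert=hi, graph=g, h=1.0)

evals_full = nk.exact.full_ed(ha)        # full diagonalization: all eigenvalues
evals_lanczos = nk.exact.lanczos_ed(ha)  # Lanczos: lowest eigenvalue(s) only
```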
Sampler¶
Generic API¶
These functions can be used to interact with samplers:
|
Creates the structure holding the state of the sampler. |
|
Resets the state of the sampler. |
|
Samples the next state in the Markov chain. |
|
Samples chain_length batches of samples along the chains. |
|
Returns a generator sampling chain_length batches of samples along the chains. |
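A rough usage sketch of this interface (method names follow the entries above; exact signatures and return conventions may differ slightly between NetKet versions):

```python
import jax
import netket as nk

hi = nk.hilbert.Spin(s=1/2, N=8)
sampler = nk.sampler.MetropolisLocal(hi, n_chains=16)

# Any Flax module mapping configurations to log-amplitudes can act as the "machine"
model = nk.models.RBM(alpha=1)
params = model.init(jax.random.PRNGKey(0), hi.random_state(jax.random.PRNGKey(1), 3))

state = sampler.init_state(model, params, seed=0)   # create the sampler state
state = sampler.reset(model, params, state=state)   # reset the chains
samples, state = sampler.sample(model, params, state=state, chain_length=10)
```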
List of Samplers¶
This is a list of all available samplers.
Please note that samplers with Numpy in their name are implemented in NumPy rather than in pure Jax, and they convert the state between NumPy and Jax arrays at every sampling step.
If you are using GPUs, this conversion can be very costly. On CPUs the conversion is cheap, but the dispatch cost of Jax is considerable for small systems.
In general, these samplers have the same asymptotic cost as the Jax samplers, but a much higher overhead for small to moderate (for GPUs) system sizes.
This is because it is not possible to implement all transition rules in Jax.
Abstract base class for all samplers. |
|
This sampler generates i.i.d. samples from the distribution \(|\Psi(\sigma)|^2\). |
|
Metropolis-Hastings sampler for a Hilbert space according to a specific transition rule. |
|
Metropolis-Hastings sampler for a Hilbert space according to a specific transition rule executed on CPU through Numpy. |
|
Metropolis-Hastings with Parallel Tempering sampler. |
|
Sampler acting on one local degree of freedom. |
|
This sampler acts locally only on two local degrees of freedom \(s_i\) and \(s_j\), and proposes a new state: \(s_1 \dots s^\prime_i \dots s^\prime_j \dots s_N\), where in general \(s^\prime_i \neq s_i\) and \(s^\prime_j \neq s_j\). |
|
Sampling based on the off-diagonal elements of a Hamiltonian (or a generic Operator). |
|
Sampler acting on one local degree of freedom. |
|
This sampler acts locally only on two local degrees of freedom \(s_i\) and \(s_j\), and proposes a new state: \(s_1 \dots s^\prime_i \dots s^\prime_j \dots s_N\), where in general \(s^\prime_i \neq s_i\) and \(s^\prime_j \neq s_j\). |
|
Direct sampler for autoregressive neural networks. |
Transition Rules¶
These are the transition rules that can be used with the Metropolis sampler. Rules with Numpy in their name can only be used with netket.sampler.MetropolisSamplerNumpy. A short usage sketch follows the list below.
|
Base class for transition rules of Metropolis, such as Local, Exchange, Hamiltonian and several others. |
|
A transition rule acting on the local degree of freedom. |
|
A Rule exchanging the state on a random couple of sites, chosen from a list of possible couples (clusters). |
|
Rule proposing moves according to the terms in an operator. |
|
Rule for Numpy sampler backend proposing moves according to the terms in an operator. |
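A minimal sketch of using a transition rule explicitly (MetropolisLocal is essentially this combination):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1/2, N=8)

# A Metropolis sampler built from an explicit transition rule
rule = nk.sampler.rules.LocalRule()
sampler = nk.sampler.MetropolisSampler(hi, rule, n_chains=16)
```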
Internal State¶
These structures hold the state of the sampler.
|
Base class holding the state of a sampler. |
State for a Metropolis sampler. |
Pre-built models¶
This sub-module contains several pre-built models to be used as neural quantum states.
A restricted Boltzmann machine, equivalent to a 2-layer FFNN with a nonlinear activation function in between. |
|
A fully connected Restricted Boltzmann Machine (RBM) with real-valued parameters. |
|
A fully connected Restricted Boltzmann Machine (see netket.models.RBM) suitable for large local Hilbert spaces. |
|
A symmetrized RBM using the netket.nn.DenseSymm layer internally. |
|
Jastrow wave function \(\Psi(s) = \exp(\sum_{ij} s_i W_{ij} s_j)\). |
|
A periodic Matrix Product State (MPS) for a quantum state of discrete degrees of freedom, wrapped as Jax machine. |
|
Encodes a Positive-Definite Neural Density Matrix using the ansatz from Torlai and Melko, PRL 120, 240503 (2018). |
|
Implements a Group Convolutional Neural Network (G-CNN) that outputs a wavefunction that is invariant over a specified symmetry group. |
|
Base class for autoregressive neural networks. |
|
Autoregressive neural network with dense layers. |
|
Autoregressive neural network with 1D convolution layers. |
|
Autoregressive neural network with 2D convolution layers. |
|
Fast autoregressive neural network with 1D convolution layers. |
|
Fast autoregressive neural network with 2D convolution layers. |
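Since all models are Flax modules, they are constructed and initialized in the usual Flax way. A minimal sketch:

```python
import jax
import netket as nk

# A restricted Boltzmann machine with hidden-unit density alpha = 1
model = nk.models.RBM(alpha=1)

# Initialize parameters on a batch of configurations and evaluate log-amplitudes
hi = nk.hilbert.Spin(s=1/2, N=8)
x = hi.random_state(jax.random.PRNGKey(0), 2)
params = model.init(jax.random.PRNGKey(1), x)
log_psi = model.apply(params, x)
```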
Model tools¶
This sub-module wraps and re-exports flax.nn. Read more about the design goals of this module in the Flax README.
Base class for all neural network modules. |
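As a rough sketch of what a custom module looks like (written here with flax.linen directly; the same symbols are re-exported under netket.nn):

```python
import flax.linen as nn
import jax.numpy as jnp

class MyModel(nn.Module):
    features: int = 16

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(features=self.features)(x)
        x = nn.relu(x)
        # one (log-)amplitude per input configuration
        return jnp.sum(x, axis=-1)
```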
Linear Modules¶
A linear transformation applied over the last dimension of the input. |
|
A linear transformation with flexible axes. |
|
Implements a projection onto a symmetry group. |
|
A group convolution operation that is equivariant over a symmetry group. |
|
Convolution Module wrapping lax.conv_general_dilated. |
|
Embedding Module. |
|
1D linear transformation module with mask for autoregressive NN. |
|
1D convolution module with mask for autoregressive NN. |
|
2D convolution module with mask for autoregressive NN. |
Activation functions¶
|
Continuously-differentiable exponential linear unit activation. |
|
Exponential linear unit activation function. |
|
Gaussian error linear unit activation function. |
|
Gated linear unit activation function. |
|
Log-sigmoid activation function. |
|
Log-Softmax function. |
|
Rectified linear unit activation function. |
|
Sigmoid activation function. |
|
Soft-sign activation function. |
|
Softmax function. |
|
Softplus activation function. |
|
SiLU activation function. |
|
Variational State Interface¶
Abstract class for variational states representing either pure states or mixed quantum states. |
|
Variational State for a variational quantum state computed on the whole Hilbert space without Monte Carlo sampling. |
|
Variational State for a Variational Neural Quantum State. |
|
Variational State for a Mixed Variational Neural Quantum State. |
|
|
Returns the function computing the local estimator for the given variational state and operator. |
|
Returns the samples of vstate used to compute the expectation value of the operator O, and the connected elements and matrix elements. |
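A minimal sketch of constructing a Monte Carlo variational state and estimating an expectation value:

```python
import netket as nk

g = nk.graph.Chain(length=10)
hi = nk.hilbert.Spin(s=1/2, N=g.n_nodes)
ha = nk.operator.Ising(hilbert=hi, graph=g, h=1.0)

sampler = nk.sampler.MetropolisLocal(hi)
model = nk.models.RBM(alpha=1)
vstate = nk.vqs.MCState(sampler, model, n_samples=1024)

energy = vstate.expect(ha)                  # Monte Carlo estimate of <H>
energy, grad = vstate.expect_and_grad(ha)   # ... and of its gradient
```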
Optimizer Module¶
This module provides some optimisers, implementations of the Quantum Geometric Tensor, and preconditioners such as SR (Stochastic Reconfiguration).
Optimizers¶
Optimizers in NetKet are simple wrappers of optax optimizers. If you want to write a custom optimizer or use more advanced ones, we suggest you have a look at the optax documentation.
Check it out for up-to-date information on available optimisers.
Warning
Even though the optimisers in netket.optimizer are optax optimisers, they have slightly different names (they are capitalised) and their arguments have been rearranged and renamed. This was chosen in order not to break our API from previous versions.
In general, we advise you to use optax directly, as it is much more powerful, provides more optimisers, and makes it extremely easy to use step-dependent schedulers.
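For example, the two constructions below are equivalent, and the optax form makes it straightforward to attach a schedule (a minimal sketch):

```python
import netket as nk
import optax

opt_nk = nk.optimizer.Sgd(learning_rate=0.01)   # NetKet wrapper
opt_ox = optax.sgd(learning_rate=0.01)          # plain optax equivalent

# A step-dependent learning rate is easy with optax schedules
schedule = optax.linear_schedule(init_value=0.01, end_value=0.001, transition_steps=500)
opt_scheduled = optax.sgd(learning_rate=schedule)
```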
Adam Optimizer. |
|
AdaGrad Optimizer. |
|
Stochastic Gradient Descent Optimizer. |
|
Momentum-based Optimizer. |
|
RMSProp optimizer. |
Preconditioners¶
This module also provides an implementation of the Stochastic Reconfiguration/Natural gradient preconditioner.
Construct the structure holding the parameters for using the Stochastic Reconfiguration/Natural gradient method. |
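A minimal sketch of constructing the preconditioner (the qgt keyword selecting the geometric-tensor implementation is an assumption valid for recent NetKet versions; see the next subsection):

```python
import netket as nk

# Stochastic Reconfiguration with a small diagonal shift for regularization
sr = nk.optimizer.SR(diag_shift=0.01)

# Optionally select a specific Quantum Geometric Tensor implementation
sr_dense = nk.optimizer.SR(qgt=nk.optimizer.qgt.QGTJacobianDense, diag_shift=0.01)
```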
Quantum Geometric Tensor¶
It also provides the following implementation of the quantum geometric tensor:
Automatically select the ‘best’ Quantum Geometric Tensor computing format according to some rather untested heuristic. |
|
Lazy representation of an S Matrix computed by performing 2 jvp and 1 vjp products, using the variational state’s model, the samples that have already been computed, and the vector. |
|
Semi-lazy representation of an S Matrix where the Jacobian O_k is precomputed and stored as a PyTree. |
|
Semi-lazy representation of an S Matrix where the Jacobian O_k is precomputed and stored as a dense matrix. |
Dense solvers¶
And the following dense solvers for Stochastic Reconfiguration:
Solve the linear system using Singular Value Decomposition. |
|
Optimization drivers¶
These are the optimization drivers already implemented in NetKet:
Abstract base class for NetKet Variational Monte Carlo drivers. |
|
Energy minimization using Variational Monte Carlo (VMC). |
|
Steady-state driver minimizing \(L^\dagger L\). |
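A minimal end-to-end sketch of running the VMC driver:

```python
import netket as nk

g = nk.graph.Chain(length=10)
hi = nk.hilbert.Spin(s=1/2, N=g.n_nodes)
ha = nk.operator.Ising(hilbert=hi, graph=g, h=1.0)

vstate = nk.vqs.MCState(nk.sampler.MetropolisLocal(hi), nk.models.RBM(alpha=1), n_samples=1024)
optimizer = nk.optimizer.Sgd(learning_rate=0.05)

gs = nk.VMC(ha, optimizer, variational_state=vstate,
            preconditioner=nk.optimizer.SR(diag_shift=0.01))
gs.run(n_iter=300)
```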
Logging output¶
These are the loggers that can be used with the optimization drivers.
Runtime logger that can be passed to Monte Carlo drivers in order to serialize the output data of the simulation. |
|
JSON logger that can be passed to Monte Carlo drivers in order to serialize the output data of the simulation. |
|
A logger which serializes the variables of the variational state during a run. |
|
Creates a TensorBoard logger using tensorboardX’s SummaryWriter. |
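A minimal sketch of constructing a logger and handing it to a driver (the run() call is shown as a comment and assumes a driver gs built as in the previous section):

```python
import netket as nk

log = nk.logging.JsonLog("ising_run")   # serializes the run data to ising_run.log
# log = nk.logging.RuntimeLog()         # or keep the data in memory instead

# Pass the logger to a driver's run() call, e.g.:
# gs.run(n_iter=300, out=log)
```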
Utils¶
Utility functions and classes.
This class wraps a numpy or jax array in order to make it hashable and equality comparable (which is necessary since a well-defined hashable object must give equal hashes for objects that compare equal). |
Callbacks¶
These callbacks can be used with the optimisation drivers.
A simple callback to stop NetKet if there are no more improvements in the training. |
|
A simple callback to stop NetKet after some time has passed. |
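A rough sketch of constructing these callbacks (the argument names min_delta, patience, and timeout in seconds, as well as passing a list to the driver's callback keyword, are assumptions that may vary between versions):

```python
import netket as nk

# Stop when the monitored quantity has not improved by at least 1e-3 for 20 iterations
early_stop = nk.callbacks.EarlyStopping(min_delta=1e-3, patience=20)

# Stop after 10 minutes of wall-clock time
timeout = nk.callbacks.Timeout(timeout=600)

# Callbacks are handed to a driver's run() call, e.g.:
# gs.run(n_iter=1000, callback=[early_stop, timeout])
```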