`tau_corr` does not estimate autocorrelation in all cases
Created by: femtobit
Take the following example:
```python
import netket as nk
import numpy as np

# 32 chains of 500 samples each; alternating chains drift linearly
# upwards/downwards, so every chain is non-stationary
xs = np.random.normal(size=(32, 500))
xs[0::2] += 2 * np.linspace(-1, 1, 500)
xs[1::2] -= 2 * np.linspace(-1, 1, 500)
```
These are 32 chains which are all non-stationary and have non-zero autocorrelation. According to emcee, each individual chain has an integrated autocorrelation time of about 60. For reference, here is the autocorrelation function computed for each chain using `arviz.autocorr`, which uses an FFT internally:
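For readers without emcee or arviz installed, the figure of roughly 60 can be reproduced with a small standalone estimator. This is a sketch re-implementing the standard FFT-based autocorrelation function together with Sokal-style automatic windowing (the approach emcee uses); the function names `autocorr_fft` and `integrated_time` are mine, not library API:

```python
import numpy as np

def autocorr_fft(x):
    """Normalized autocorrelation function of a 1D chain via FFT."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # zero-pad to 2n so the circular FFT correlation becomes a linear one
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]

def integrated_time(x, c=5):
    """Integrated autocorrelation time with automatic windowing."""
    rho = autocorr_fft(x)
    # tau estimate as a function of the window size M
    taus = 2.0 * np.cumsum(rho) - 1.0
    # pick the smallest window M satisfying M >= c * tau(M)
    mask = np.arange(len(taus)) < c * taus
    window = len(taus) - 1 if mask.all() else np.argmin(mask)
    return taus[window]

rng = np.random.default_rng(0)
xs = rng.normal(size=(32, 500))
xs[0::2] += 2 * np.linspace(-1, 1, 500)
xs[1::2] -= 2 * np.linspace(-1, 1, 500)

# each drifting chain should show a large integrated autocorrelation time
taus = np.array([integrated_time(row) for row in xs])
```

For these chains the per-chain estimates come out in the tens, consistent with the value reported by emcee above.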
According to `netket.stats.statistics`, however:
```python
>>> stats = nk.stats.statistics(xs)
>>> print(stats)
0.0110 ± 0.0071 [σ²=2.3175, R̂=1.1950]
>>> print(stats.tau_corr)
0.0
```
(This was run on current master, so note that split-R̂ does detect the non-stationarity.)
For illustration, the chains look like this (with a boxplot of each chain's distribution below):
Note that by default this does not happen with fewer than 32 chains, because `statistics` then switches to the `tau_block` estimate (which, to my understanding, concatenates all chains and then estimates tau_corr, yielding a quantity that can indeed detect this and other convergence issues, but is not related to the individual chains' autocorrelation).
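To illustrate that distinction, here is a hedged sketch (my own code, not NetKet's actual `tau_block` implementation) comparing the integrated autocorrelation time of individual chains with that of their concatenation. Two chains that are each pure white noise (tau near 1) but centered at different values, i.e. not converged to the same distribution, produce a large tau when concatenated:

```python
import numpy as np

def integrated_time(x, c=5):
    """FFT-based integrated autocorrelation time with automatic windowing."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    f = np.fft.rfft(x, n=2 * n)          # zero-pad for linear correlation
    rho = np.fft.irfft(f * np.conj(f))[:n]
    rho /= rho[0]
    taus = 2.0 * np.cumsum(rho) - 1.0
    mask = np.arange(n) < c * taus       # smallest window M >= c * tau(M)
    window = n - 1 if mask.all() else np.argmin(mask)
    return taus[window]

rng = np.random.default_rng(0)
# two white-noise chains with different means: each is trivially
# autocorrelated, but together they signal a convergence problem
a = rng.normal(loc=-1.0, size=500)
b = rng.normal(loc=+1.0, size=500)

tau_each = [integrated_time(a), integrated_time(b)]   # both close to 1
tau_concat = integrated_time(np.concatenate([a, b]))  # much larger
```

So a concatenation-based estimate is a useful convergence diagnostic, but it measures something different from the autocorrelation within each chain.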