Remove custom code for LdagL/SteadyState gradient computation
Created by: PhilipVinc
The special code for the steady-state computation is quite old. During the big operator rewrite for chunking I implemented a more efficient version of the squared-operator gradient, though I did not immediately realise that it made the custom path redundant.
This PR simply removes the old special code and falls back to using AD on nkjax.expect to compute the gradient of LdagL.
The AD-based path is also 10% faster.
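For context, here is a minimal sketch of the idea this relies on: an expect-style helper estimates an expectation over samples drawn from a parameter-dependent distribution, while defining a gradient that also includes the score-function term from that distribution. This is not NetKet's actual implementation of nkjax.expect; the names `expect_and_grad`, `log_pdf`, and `local_fun` are hypothetical stand-ins, and it assumes real-valued parameters and log-densities for simplicity.

```python
import jax
import jax.numpy as jnp


def expect_and_grad(log_pdf, local_fun, pars, samples):
    """Sketch of an expect-style estimator (hypothetical, not NetKet's code).

    Estimates <local_fun> over p(x) ∝ exp(log_pdf(pars, x)) and returns a
    gradient that accounts for the parameter dependence of the sampling
    distribution: d/dθ <f> = <df/dθ> + <(f - <f>) dlogp/dθ>.
    """

    def surrogate(p):
        f = local_fun(p, samples)    # local estimator, e.g. |L_loc|^2 for LdagL
        logp = log_pdf(p, samples)   # (possibly unnormalized) log-probabilities
        # Subtracting the mean as a baseline reduces variance and cancels
        # the log-normalization term in expectation.
        baseline = jax.lax.stop_gradient(jnp.mean(f))
        score = jax.lax.stop_gradient(f - baseline) * logp
        # mean(f) yields the pathwise term under AD; mean(score) yields the
        # score-function term, since the weights are held fixed.
        return jnp.mean(f) + jnp.mean(score)

    value = jnp.mean(local_fun(pars, samples))
    grad = jax.grad(surrogate)(pars)
    return value, grad
```

With a primitive of this shape, plain AD through the expectation already produces the correct gradient of ⟨L†L⟩, so there is no need to maintain a hand-written gradient for the squared operator alongside it.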