
Added SR_solving.py with code for benchmarks

Vicentini Filippo requested to merge github/fork/bharathr98/master into master

Created by: bharathr98

Sorry for the previous PR. This one adds a file "SR_solving.py" to the Benchmarks folder. It mostly reuses the code from this comment, wrapped in pytest-benchmark-style tests. This is what running it looks like:

pytest -n0 SR_solving.py 
=========================================================================================================== test session starts ============================================================================================================
platform darwin -- Python 3.7.9, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /Users/bharath/Dev/OpenSourceContributions/netket/.venv/bin/python3
cachedir: .pytest_cache
benchmark: 3.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /Users/bharath/Dev/OpenSourceContributions/netket, configfile: pyproject.toml
plugins: xdist-2.2.1, benchmark-3.2.3, forked-1.3.0
collected 2 items                                                                                                                                                                                                                          

SR_solving.py::test_sr_solver[example0-True] PASSED                                                                                                                                                                                  [ 50%]
SR_solving.py::test_sr_solver[example0-False] PASSED                                                                                                                                                                                 [100%]


-------------------------------------------------------------------------------------- benchmark: 2 tests -------------------------------------------------------------------------------------
Name (time in ms)                     Min               Max              Mean            StdDev            Median               IQR            Outliers       OPS            Rounds  Iterations
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_sr_solver[example0-False]     3.3662 (1.0)      3.8675 (1.0)      3.6413 (1.0)      0.1631 (1.37)     3.5968 (1.0)      0.2572 (2.09)          3;0  274.6307 (1.0)          10           1
test_sr_solver[example0-True]      4.4655 (1.33)     4.8771 (1.26)     4.7138 (1.29)     0.1187 (1.0)      4.7145 (1.31)     0.1230 (1.0)           2;1  212.1427 (0.77)         10           1
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
============================================================================================================ slowest durations =============================================================================================================
5.45s call     Benchmarks/SR_solving.py::test_sr_solver[example0-True]
2.67s setup    Benchmarks/SR_solving.py::test_sr_solver[example0-True]
0.55s call     Benchmarks/SR_solving.py::test_sr_solver[example0-False]

(3 durations < 0.005s hidden.  Use -vv to show these durations.)
============================================================================================================ 2 passed in 11.50s ============================================================================================================
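For reference, the file follows the usual pytest-benchmark pattern: a parametrized fixture plus the benchmark fixture wrapping the call to be timed. The sketch below is not the actual file contents; it substitutes a plain linear solve of a random SPD system for the NetKet SR call, and the "example0" fixture, the system size, and the use_iterative flag are placeholders.

import numpy as np
import pytest
from scipy.sparse.linalg import cg


@pytest.fixture(params=["example0"])
def example(request):
    # Stand-in problem: a small random symmetric positive-definite system A x = b.
    rng = np.random.default_rng(0)
    n = 256
    m = rng.normal(size=(n, n))
    a = m @ m.T + n * np.eye(n)
    b = rng.normal(size=n)
    return a, b


@pytest.mark.parametrize("use_iterative", [True, False])
def test_sr_solver(benchmark, example, use_iterative):
    a, b = example

    def solve():
        # Toggle between an iterative and a dense solve, mirroring the
        # True/False parametrisation visible in the output above.
        if use_iterative:
            x, info = cg(a, b)
            return x
        return np.linalg.solve(a, b)

    x = benchmark(solve)
    # Placeholder check, as in the actual file (see point 2 below).
    assert True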

A couple of minor things still need to be ironed out:

  1. pytest-benchmark does not really support heterogeneous benchmarks: the results table assumes all collected tests measure the same thing, so when several tests are run it orders them by mean time rather than by execution order. Case in point: the tests ran with True first and then False, but the table lists False before True. It also colour-codes the rows accordingly, red for the slowest and green for the fastest (not visible here). This is mostly an aesthetic issue, but it would be good to sort out.
  2. The final assert statement is currently just `assert True`, because I am not sure what the result should be compared against. It would be good if someone could pitch in on a sensible reference value (one possible comparison is sketched after this list).
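On point 2, one option (just a sketch, reusing the stand-in names a, b and x from the snippet above, not tied to the actual NetKet API) would be to compare the benchmarked solution against an independent dense solve of the same system:

    # Hypothetical replacement for the placeholder assert.
    x_ref = np.linalg.solve(a, b)
    np.testing.assert_allclose(x, x_ref, rtol=1e-6, atol=1e-8)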
