This reduces the futility_margin if our opponent's last move was bad,
by around 1/3 when not improving and around 1/2.7 when improving. The
idea is to retroactively futility prune moves that were played but
turned out to be bad. A bad move is defined as the opponent's
staticEval before their move being lower than our staticEval now. If
the depth is 2 and we are improving, the opponent-worsening flag is
not set, in order not to risk too low a futility_margin, since when
these conditions are met the futility_margin already drops quite low.
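A minimal sketch of the combined margin calculation, with assumed
constants and a simplified flag (the actual patch's values and flag
computation differ):

```cpp
using Value = int;
using Depth = int;

Value futility_margin(Depth d, bool improving, bool oppWorsening) {
    Value futilityMult = 118;              // assumed base multiplier
    Value margin       = futilityMult * d;
    if (improving)
        margin -= 3 * futilityMult / 2;    // pre-existing improving deduction
    if (oppWorsening)                      // new: opponent's last move was bad
        margin -= improving ? margin * 10 / 27  // ~ margin / 2.7
                            : margin / 3;       // ~ margin / 3
    // Per the description above, callers leave oppWorsening unset at
    // depth 2 when improving.
    return margin;
}
```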
Passed STC:
https://tests.stockfishchess.org/tests/live_elo/65e3977bf2ef6c733362aae3
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 122432 W: 31884 L: 31436 D: 59112
Ptnml(0-2): 467, 14404, 31035, 14834, 476
Passed LTC:
https://tests.stockfishchess.org/tests/live_elo/65e47f40f2ef6c733362b6d2
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 421692 W: 106572 L: 105452 D: 209668
Ptnml(0-2): 216, 47217, 114865, 48327, 221
closes https://github.com/official-stockfish/Stockfish/pull/5092
Bench: 1565939
Created by retraining the previous main net `nn-b1a57edbea57.nnue` with:
- some of the same options as before:
  - ranger21, more WDL skipping, 15% more loss when Q is too high
- removal of the huge 514G pre-interleaved binpack
- removal of SF-generated dfrc data (dfrc99-16tb7p-filt-v2.min.binpack)
- interleaving many binpacks at training time
- training with some bestmove capture positions where SEE < 0
- increased usage of torch.compile to speed up training by up to 40%
```yaml
experiment-name: 2560--S10-dfrc0-to-dec2023-skip-more-wdl-15p-more-loss-high-q-see-ge0-sk28
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more
start-from-engine-test-net: True
early-fen-skipping: 28
training-dataset:
# similar, not the exact same as:
# https://github.com/official-stockfish/Stockfish/pull/4635
- /data/S5-5af/leela96.v2.min.binpack
- /data/S5-5af/test60-2021-11-12-novdec-12tb7p.v6-dd.min.binpack
- /data/S5-5af/test77-2021-12-dec-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test78-2022-01-to-05-jantomay-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test78-2022-06-to-09-juntosep-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test79-2022-04-apr-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test79-2022-05-may-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test80-2022-06-jun-16tb7p.v6-dd.min.unmin.binpack
- /data/S5-5af/test80-2022-07-jul-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test80-2022-08-aug-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test80-2022-09-sep-16tb7p.v6-dd.min.unmin.binpack
- /data/S5-5af/test80-2022-10-oct-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test80-2022-11-nov-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test80-2023-01-jan-16tb7p.v6-sk20.min.binpack
- /data/S5-5af/test80-2023-02-feb-16tb7p.v6-dd.min.binpack
- /data/S5-5af/test80-2023-03-mar-2tb7p.min.unmin.binpack
- /data/S5-5af/test80-2023-04-apr-2tb7p.binpack
- /data/S5-5af/test80-2023-05-may-2tb7p.min.dd.binpack
# https://github.com/official-stockfish/Stockfish/pull/4782
- /data/S6-1ee1aba5ed/test80-2023-06-jun-2tb7p.binpack
- /data/S6-1ee1aba5ed/test80-2023-07-jul-2tb7p.min.binpack
# https://github.com/official-stockfish/Stockfish/pull/4972
- /data/S8-baff1edbea57/test80-2023-08-aug-2tb7p.v6.min.binpack
- /data/S8-baff1edbea57/test80-2023-09-sep-2tb7p.binpack
- /data/S8-baff1edbea57/test80-2023-10-oct-2tb7p.binpack
# https://github.com/official-stockfish/Stockfish/pull/5056
- /data/S9-b1a57edbea57/test80-2023-11-nov-2tb7p.binpack
- /data/S9-b1a57edbea57/test80-2023-12-dec-2tb7p.binpack
num-epochs: 800
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```
This particular net was reached at epoch 759. Use of more torch.compile decorators
in nnue-pytorch model.py than in the previous main net training run sped up training
by up to 40% on Tesla GPUs when using recent pytorch compiled with cuda 12:
https://github.com/linrock/nnue-tools/blob/7fb9831/Dockerfile
Skipping positions with bestmove captures where static exchange evaluation is >= 0
is based on the implementation from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 293 - only skip captures with see>=0
Positions with bestmove captures where score == 0 are always skipped for
compatibility with minimized binpacks, since the original minimizer sets
scores to 0 for slight improvements in compression.
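A minimal sketch of the resulting filter, assuming a simplified
interface (the real filtering happens in the nnue-pytorch data loader):

```cpp
// Returns true if a training position should be skipped.
bool skip_training_position(bool bestmoveIsCapture, int see, int score) {
    if (!bestmoveIsCapture)
        return false;    // positions with quiet bestmoves are kept
    if (score == 0)
        return true;     // minimized-binpack compatibility, as noted above
    return see >= 0;     // skip captures that don't lose material
}
```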
The trainer branch used was:
https://github.com/linrock/nnue-pytorch/tree/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more
Binpacks were renamed so that sorting them by name orders them
chronologically by default. The binpack data are otherwise the same as
binpacks with similar names under the prior naming convention.
Training data can be found at:
https://robotmoon.com/nnue-training-data/
Passed STC:
https://tests.stockfishchess.org/tests/view/65e3ddd1f2ef6c733362ae5c
LLR: 2.92 (-2.94,2.94) <0.00,2.00>
Total: 149792 W: 39153 L: 38661 D: 71978
Ptnml(0-2): 675, 17586, 37905, 18032, 698
Passed LTC:
https://tests.stockfishchess.org/tests/view/65e4d91c416ecd92c162a69b
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 64416 W: 16517 L: 16135 D: 31764
Ptnml(0-2): 38, 7218, 17313, 7602, 37
closes https://github.com/official-stockfish/Stockfish/pull/5090
Bench: 1373183
Based on 130M positions from 2.1M games.
```
Look recursively in directory pgns for games from SPRT tests using books
matching "UHO_4060_v..epd|UHO_Lichess_4852_v1.epd" for SF revisions
between 8e75548f2a (from 2024-02-17
17:11:46 +0100) and HEAD (from 2024-02-17 17:13:07 +0100). Based on
127920843 positions from 2109240 games, NormalizeToPawnValue should
change from 345 to 356.
```
The patch only affects the UCI-reported cp and wdl values.
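For context, a sketch of how the constant enters the reported score,
based on Stockfish's cp normalization (simplified): an internal eval
equal to NormalizeToPawnValue is shown as 100 cp, i.e. exactly one pawn.

```cpp
constexpr int NormalizeToPawnValue = 356;  // this patch: 345 -> 356

int to_cp(int internalValue) {
    return 100 * internalValue / NormalizeToPawnValue;
}
```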
closes https://github.com/official-stockfish/Stockfish/pull/5070
No functional change
Created by retraining the previous main net `nn-baff1edbea57.nnue` with:
- some of the same options as before: ranger21, more WDL skipping
- the addition of T80 nov+dec 2023 data
- increasing loss by 15% when prediction is too high, up from 10%
- use of torch.compile to speed up training by over 25%
```yaml
experiment-name: 2560--S9-514G-T80-augtodec2023-more-wdl-skip-15p-more-loss-high-q-sk28
training-dataset:
# https://github.com/official-stockfish/Stockfish/pull/4782
- /data/S6-514G-1ee1aba5ed.binpack
- /data/test80-aug2023-2tb7p.v6.min.binpack
- /data/test80-sep2023-2tb7p.binpack
- /data/test80-oct2023-2tb7p.binpack
- /data/test80-nov2023-2tb7p.binpack
- /data/test80-dec2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-15p-more-loss-high-q-torch-compile
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```
The net from epoch 819 of training with the above config led to this PR.
Use of torch.compile decorators in nnue-pytorch model.py was found to
speed up training by at least 25% on Ampere GPUs when using recent
pytorch compiled with cuda 12:
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
See recent main net PRs for more info on
- ranger21 and more WDL skipping: https://github.com/official-stockfish/Stockfish/pull/4942
- increasing loss when Q is too high: https://github.com/official-stockfish/Stockfish/pull/4972
Training data can be found at:
https://robotmoon.com/nnue-training-data/
Passed STC:
https://tests.stockfishchess.org/tests/view/65cd76151d8e83c78bfd2f52
LLR: 2.98 (-2.94,2.94) <0.00,2.00>
Total: 78336 W: 20504 L: 20115 D: 37717
Ptnml(0-2): 317, 9225, 19721, 9562, 343
Passed LTC:
https://tests.stockfishchess.org/tests/view/65ce5be61d8e83c78bfd43e9
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 41016 W: 10492 L: 10159 D: 20365
Ptnml(0-2): 22, 4533, 11071, 4854, 28
closes https://github.com/official-stockfish/Stockfish/pull/5056
Bench: 1351997
This introduces a form of node counting which can
be used to further tweak the usage of our search
time.
The current approach stops the search when almost
all nodes are searched on a single move.
The idea originally came from Koivisto, but the
implementation is a bit different: Koivisto
scales the optimal time by the nodes effort and
then determines if the search should be stopped.
We just scale down the `totalTime` and stop the
search if we exceed it and the effort is large
enough.
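A rough sketch of the stop condition, with assumed names and
illustrative constants (the real patch uses tuned values):

```cpp
#include <cstdint>

bool should_stop_early(std::uint64_t bestMoveNodes,  // nodes on the best root move
                       std::uint64_t totalNodes,     // all nodes searched so far
                       double elapsed,
                       double totalTime)
{
    if (totalNodes == 0)
        return false;
    double effort = double(bestMoveNodes) / double(totalNodes);
    // Stop once we exceed a scaled-down totalTime while almost all
    // nodes were spent on a single move.
    return elapsed > 0.5 * totalTime && effort > 0.95;
}
```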
Passed STC:
https://tests.stockfishchess.org/tests/view/65c8e0661d8e83c78bfcd5ec
LLR: 2.97 (-2.94,2.94) <0.00,2.00>
Total: 88672 W: 22907 L: 22512 D: 43253
Ptnml(0-2): 310, 10163, 23041, 10466, 356
Passed LTC:
https://tests.stockfishchess.org/tests/view/65ca632b1d8e83c78bfcf554
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 170856 W: 42910 L: 42320 D: 85626
Ptnml(0-2): 104, 18337, 47960, 18919, 108
closes https://github.com/official-stockfish/Stockfish/pull/5053
Bench: 1198939
This patch does a similar thing to what is done
for qsearch: in case of a fail high, adjust the
result to a lower value. The difference is that
it is done only for non-PV nodes and it is depth
dependent, so lower depth entries will get a
bigger adjustment and higher depth entries a
smaller adjustment.
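One depth-dependent way to realize this, as a sketch (the weights are
assumed, not the exact formula from the patch): interpolate the TT
value toward beta, pulling low-depth entries more strongly toward beta.

```cpp
using Value = int;
using Depth = int;

// Assumes ttValue >= beta (fail high); the result lies between beta and
// ttValue, closer to beta for small d and closer to ttValue for large d.
Value adjust_tt_fail_high(Value ttValue, Value beta, Depth d) {
    return (ttValue * d + beta) / (d + 1);
}
```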
Passed STC:
https://tests.stockfishchess.org/tests/view/65c3c0cbc865510db0283b21
LLR: 2.96 (-2.94,2.94) <0.00,2.00>
Total: 112032 W: 29142 L: 28705 D: 54185
Ptnml(0-2): 479, 13152, 28326, 13571, 488
Passed LTC:
https://tests.stockfishchess.org/tests/view/65c52e62c865510db02855d5
LLR: 2.96 (-2.94,2.94) <0.50,2.50>
Total: 132480 W: 33457 L: 32936 D: 66087
Ptnml(0-2): 67, 14697, 36222, 15156, 98
closes https://github.com/official-stockfish/Stockfish/pull/5047
Bench: 1168241
In both search and qsearch, there are instances
where we do unadjustedStaticEval = ss->staticEval
= eval/bestValue = tte->eval(), but immediately
after re-assign ss->staticEval and eval/bestValue
to some new value, which makes the initial
assignment redundant.
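A simplified illustration of the pattern (not the actual engine code):

```cpp
// The first assignment is dead because the variable is unconditionally
// overwritten immediately afterwards.
int dead_store_example(int ttEval, int freshEval) {
    int eval = ttEval;    // redundant initial assignment ...
    eval = freshEval;     // ... overwritten here, so the line above can go
    return eval;
}
```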
closes https://github.com/official-stockfish/Stockfish/pull/5045
No functional change
Move the divisor from capture scoring to the
good capture check and slightly increase it (see
the sketch below).
This has several effects:
- it's a speedup, because for quiescence and
probcut search the division now never happens.
For main search it's delayed and can be avoided
if a good capture triggers a cutoff
- through the higher resolution of scores we get
a more granular sorting
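A sketch with assumed names and an assumed divisor:

```cpp
struct Move { int value = 0; };

// Trivial stubs for illustration; the engine derives the bonus from piece
// values plus capture history and uses its SEE implementation for the test.
int  capture_bonus(const Move&) { return 0; }
bool see_ge(const Move&, int threshold) { return threshold <= 0; }

void score_capture(Move& m) {
    m.value = capture_bonus(m);        // full resolution, no division here
}

bool is_good_capture(const Move& m) {
    return see_ge(m, -m.value / 18);   // divisor applied lazily (18 is assumed)
}
```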
STC: https://tests.stockfishchess.org/tests/view/65bf2a93c865510db027dc27
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 470016 W: 122150 L: 121173 D: 226693
Ptnml(0-2): 2133, 55705, 118374, 56644, 2152
LTC: https://tests.stockfishchess.org/tests/view/65c1d16dc865510db0281339
LLR: 2.96 (-2.94,2.94) <0.50,2.50>
Total: 98988 W: 25121 L: 24667 D: 49200
Ptnml(0-2): 77, 10998, 26884, 11464, 71
closes https://github.com/official-stockfish/Stockfish/pull/5036
Bench: 1233867
This replaces the PvNode condition and tte Pv call previously with using
the precomputed ttPv, and also removes the multiplier of 2. This new
depth condition occurs with approximately equal frequency (47%) to the
old depth condition (measured when the other conditions in the if are
true), so non-linear scaling behaviour isn't expected.
Passed Non-Reg STC:
https://tests.stockfishchess.org/tests/view/65b0e132c865510db026da27
LLR: 2.97 (-2.94,2.94) <-1.75,0.25>
Total: 243232 W: 62432 L: 62437 D: 118363
Ptnml(0-2): 910, 28937, 61900, 28986, 883
Passed Non-Reg LTC:
https://tests.stockfishchess.org/tests/view/65b2053bc865510db026eea1
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 190596 W: 47666 L: 47618 D: 95312
Ptnml(0-2): 115, 21710, 51596, 21766, 111
closes https://github.com/official-stockfish/Stockfish/pull/5015
Bench: 1492957
This splits the logic of search and perft. Before, threads were started,
which then constructed a search object, which then started perft and
returned immediately. All of this is unnecessary;
instead, uci should start perft right away.
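A sketch of a standalone perft that the uci loop can call directly
(Position and the move interface are placeholders for the engine's
own types):

```cpp
#include <cstdint>
#include <vector>

struct Position;                                // placeholder position type
std::vector<int> legal_moves(const Position&);  // placeholder move generation
void do_move(Position&, int);
void undo_move(Position&, int);

// Counts leaf nodes of the legal move tree to the given depth.
std::uint64_t perft(Position& pos, int depth) {
    if (depth == 0)
        return 1;
    std::uint64_t nodes = 0;
    for (int m : legal_moves(pos)) {
        do_move(pos, m);
        nodes += perft(pos, depth - 1);
        undo_move(pos, m);
    }
    return nodes;
}
```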
closes https://github.com/official-stockfish/Stockfish/pull/5008
No functional change
Update the internal WDL model. After the dual net merge, the internal
evaluations have drifted upwards a bit. With this PR
`NormalizeToPawnValue` changes from `328` to `345`.
The new model was fitted based on about 200M positions extracted from
3.4M fishtest LTC games from the last two weeks, involving SF versions
from 6deb88728f to current master.
Apart from the WDL model parameter update, this PR implements the
following changes:
WDL Model:
- an incorrect 8-move shift in master's WDL model has been fixed
- the polynomials `p_a` and `p_b` are fitted over the move range [8, 120]
- the coefficients for `p_a` and `p_b` are optimized by maximizing the
probability of predicting the observed outcome (credits to @vondele)
SF code:
- for wdl values, move will be clamped to `max(8, min(120, move))`
- no longer clamp the internal eval to [-4000,4000]
- compute `NormalizeToPawnValue` with `round`, not `trunc`
The PR only affects displayed `cp` and `wdl` values.
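For reference, a sketch of the model's shape with placeholder
coefficients (the fitted values live in the engine; only the structure
is shown): the win probability is a logistic in the eval, with a and b
given by the cubic polynomials `p_a` and `p_b` in the clamped move
counter, and `NormalizeToPawnValue` derived from `p_a` with round
rather than trunc.

```cpp
#include <algorithm>
#include <cmath>

// Win probability in per mille for the side to move.
int win_rate_model(int v /* internal eval */, int moveNumber) {
    double m = std::clamp(moveNumber, 8, 120) / 32.0;  // clamp to [8, 120]

    // Placeholder coefficients, NOT the fitted values from this PR.
    constexpr double as[] = {-2.0, 20.0, -70.0, 350.0};
    constexpr double bs[] = {-4.0, 30.0, -60.0, 90.0};
    double a = ((as[0] * m + as[1]) * m + as[2]) * m + as[3];
    double b = ((bs[0] * m + bs[1]) * m + bs[2]) * m + bs[3];

    return int(0.5 + 1000.0 / (1.0 + std::exp((a - double(v)) / b)));
}
```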
closes https://github.com/official-stockfish/Stockfish/pull/5002
No functional change
The idea of this is to unroll the futility_margin calculation to allow
the improving flag to have a greater effect on the futility margin.
The factor is now 1.5 instead of the previous 1, resulting in the
deduction of an extra margin/2 from futility_margin when improving. The
chosen value was not tuned, meaning that there is room for tweaking it.
This patch is partially inspired by @Vizvezdenec, who tested another
idea, quite different in execution, where the futility_margin is
lowered further when improving [1].
[1]: (first take) https://tests.stockfishchess.org/tests/view/65a56b1879aa8af82b97164b
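A minimal sketch of the unrolled form, with an assumed base multiplier:
previously the margin was effectively mult * (d - improving); unrolling
it lets the improving deduction grow to 1.5 * mult, i.e. an extra
mult / 2.

```cpp
using Value = int;
using Depth = int;

Value futility_margin(Depth d, bool improving) {
    Value mult = 117;  // assumed base multiplier
    return mult * d - (improving ? 3 * mult / 2 : 0);
}
```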
Passed STC:
https://tests.stockfishchess.org/tests/live_elo/65a8bfc179aa8af82b974e3c
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 161152 W: 41321 L: 40816 D: 79015
Ptnml(0-2): 559, 19030, 40921, 19479, 587
Passed rebased LTC:
https://tests.stockfishchess.org/tests/live_elo/65a8b9ef79aa8af82b974dc0
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 96024 W: 24172 L: 23728 D: 48124
Ptnml(0-2): 56, 10598, 26275, 11012, 71
closes https://github.com/official-stockfish/Stockfish/pull/5000
Bench: 1281703