The idea of this patch is to unroll the futility_margin calculation so
that the improving flag has a greater effect on the futility margin.
The factor is now 1.5 instead of the previous 1, deducting an extra
margin/2 from futility_margin when improving. The chosen value was not
tuned, so there is room for tweaking it.
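As a rough sketch (not the actual diff; the types and the constant are
illustrative stand-ins for Stockfish's), the unrolled computation looks
like this:

```cpp
// Minimal sketch of the unrolled margin; 117 is an illustrative constant.
using Value = int;
using Depth = int;

// Before: futilityMult * (depth - improving) deducted one full futilityMult
// when improving. Unrolled with a factor of 1.5, an extra futilityMult / 2
// is deducted when improving.
Value futility_margin(Depth depth, bool improving) {
    constexpr int futilityMult = 117;
    return futilityMult * depth - improving * (futilityMult * 3 / 2);
}
```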
This patch is partially inspired by @Vizvezdenec, who tested another
idea, quite different in execution, where the futility_margin is
lowered further when improving [1].
[1]: (first take) https://tests.stockfishchess.org/tests/view/65a56b1879aa8af82b97164b
Passed STC:
https://tests.stockfishchess.org/tests/live_elo/65a8bfc179aa8af82b974e3c
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 161152 W: 41321 L: 40816 D: 79015
Ptnml(0-2): 559, 19030, 40921, 19479, 587
Passed rebased LTC:
https://tests.stockfishchess.org/tests/live_elo/65a8b9ef79aa8af82b974dc0
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 96024 W: 24172 L: 23728 D: 48124
Ptnml(0-2): 56, 10598, 26275, 11012, 71
closes https://github.com/official-stockfish/Stockfish/pull/5000
Bench: 1281703
This addresses the issue where Stockfish may output non-proven checkmate
scores if the search is prematurely halted, either due to a time control
or node limit, before it explores other possibilities where the
checkmate score could have been delayed or refuted.
The fix also changes how proven mated-in scores are handled in a
multithreaded environment: the threads are now made use of instead of
having a negative effect (previously 1 thread was better at proving
mated-in scores than more threads).
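A minimal sketch of the first part (not the actual patch; names and the
threshold are illustrative): when the search is halted mid-iteration,
an unproven mate/TB-win score from the unfinished iteration is
discarded in favor of the last completed iteration's result.

```cpp
#include <vector>

using Value = int;
constexpr Value VALUE_TB_WIN_IN_MAX_PLY = 31000;  // illustrative bound

struct RootMove {
    Value            score;
    std::vector<int> pv;
};

// If the search stopped before the current iteration finished, rootMoves[0]
// may hold an unproven mate/TB-win score; fall back to the result of the
// last fully searched iteration instead of reporting it.
void sanitize_on_abort(bool abortedSearch, std::vector<RootMove>& rootMoves,
                       const RootMove& lastCompleted) {
    if (abortedSearch && !rootMoves.empty()
        && rootMoves[0].score >= VALUE_TB_WIN_IN_MAX_PLY)
        rootMoves[0] = lastCompleted;
}
```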
Issue reported on the mate tracker repo; this PR is co-authored with
@robertnurnberg. Special thanks to @AndyGrant for outlining that a fix
is possible.
Passed Adj off SMP STC:
https://tests.stockfishchess.org/tests/view/65a125d779aa8af82b96c3eb
LLR: 2.96 (-2.94,2.94) <-1.75,0.25>
Total: 303256 W: 75823 L: 75892 D: 151541
Ptnml(0-2): 406, 35269, 80395, 35104, 454
Passed Adj off SMP LTC:
https://tests.stockfishchess.org/tests/view/65a37add79aa8af82b96f0f7
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 56056 W: 13951 L: 13770 D: 28335
Ptnml(0-2): 11, 5910, 16002, 6097, 8
Passed all tests in matetrack, with no better mate for the opponent
found with either 1 thread or multiple threads.
Fixed bugs in https://github.com/official-stockfish/Stockfish/pull/4976
closes https://github.com/official-stockfish/Stockfish/pull/4990
Bench: 1308279
Co-Authored-By: Robert Nürnberg <28635489+robertnurnberg@users.noreply.github.com>
Also remove dead code: `rootSimpleEval` is no longer used since the
introduction of the dual net, and `iterBestValue` is no longer used in
evaluate and can be reduced to a local variable.
closes https://github.com/official-stockfish/Stockfish/pull/4979
No functional change
This aims to remove some of the annoying global structures which
Stockfish has. Overall, no major Elo regression is expected.
Non regression SMP STC (paused, early version):
https://tests.stockfishchess.org/tests/view/65983d7979aa8af82b9608f1
LLR: 0.23 (-2.94,2.94) <-1.75,0.25>
Total: 76232 W: 19035 L: 19096 D: 38101
Ptnml(0-2): 92, 8735, 20515, 8690, 84
Non regression STC (early version):
https://tests.stockfishchess.org/tests/view/6595b3a479aa8af82b95da7f
LLR: 2.93 (-2.94,2.94) <-1.75,0.25>
Total: 185344 W: 47027 L: 46972 D: 91345
Ptnml(0-2): 571, 21285, 48943, 21264, 609
Non regression SMP STC:
https://tests.stockfishchess.org/tests/view/65a0715c79aa8af82b96b7e4
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 142936 W: 35761 L: 35662 D: 71513
Ptnml(0-2): 209, 16400, 38135, 16531, 193
These global structures/variables add hidden dependencies and allow
data to be mutated from where it shouldn't be (e.g. options). They also
prevent internal self-play, which would be a nice thing to be able to
do: instantiate two Stockfish instances and let them play against each
other. It will also allow us to make Stockfish a library, which can be
used more easily on other platforms.
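As a purely hypothetical sketch of that end goal (the Engine class
below is a stand-in, not an existing Stockfish API):

```cpp
#include <string>

// Hypothetical: each instance owns its own options, threads, TT, networks.
struct Engine {
    std::string bestmove(const std::string& fen) {
        (void)fen;
        return "e2e4";  // placeholder for a real search
    }
};

int main() {
    Engine white, black;  // impossible with mutable globals, trivial without
    std::string fen = "startpos";
    std::string wm = white.bestmove(fen);  // two engines, one process
    std::string bm = black.bestmove(fen);
}
```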
For consistency with the old search code, `thisThread` has been kept,
even though it is not strictly necessary anymore. This is the first
major refactor of this kind (in recent times), and future changes are
required to achieve the previously described goals. This includes
cleaning up the dependencies, making the network self-contained, and
coming up with a plan for proper tablebase memory management (see
comments for more information on this).
The removal of these global structures has been discussed in parts with
Vondele and Sopel.
closes https://github.com/official-stockfish/Stockfish/pull/4968
No functional change
Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL
skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best
dataset
- increasing training loss for positions where predicted win rates were
higher than estimated match results from training data position scores
```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28
training-dataset:
# https://github.com/official-stockfish/Stockfish/pull/4782
- /data/S6-1ee1aba5ed.binpack
- /data/test80-aug2023-2tb7p.v6.min.binpack
- /data/test80-sep2023-2tb7p.binpack
- /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```
Training data can be found at:
https://robotmoon.com/nnue-training-data/
Training loss was increased by 10% for positions where the predicted
win rate was higher than the win rate suggested by the model based on
the training data, by multiplying the loss by ((qf > pt) * 0.1 + 1).
This was a variant of experiments from Sopel's NNUE training &
experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 302 - increase loss when prediction too high, vondele’s idea
Experiment 309 - increase loss when prediction too high, normalize in a
batch
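In code form, the multiplier is simply (qf and pt as named above: the
net's predicted win rate and the win rate implied by the data):

```cpp
// Sketch of the loss scaling: +10% loss when the prediction is too high.
double loss_multiplier(double qf, double pt) {
    return (qf > pt ? 0.1 : 0.0) + 1.0;
}
```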
Passed STC:
https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 148320 W: 37960 L: 37475 D: 72885
Ptnml(0-2): 542, 17565, 37383, 18206, 464
Passed LTC:
https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 55188 W: 13955 L: 13592 D: 27641
Ptnml(0-2): 34, 6162, 14834, 6535, 29
closes https://github.com/official-stockfish/Stockfish/pull/4972
Bench: 1219824
Created by training an L1-128 net from scratch with a wider range of
evals in the training data and wld-fen-skipping disabled during
training. The differences in this training data compared to the first
dual nnue PR are:
- removal of all positions with 3 pieces
- when piece count >= 16, keep positions with simple eval above 750
- when piece count < 16, remove positions with simple eval above 3000
The asymmetric data filtering was meant to flatten the training data
piece count distribution, which was previously heavily skewed towards
positions with low piece counts.
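A sketch of these filtering rules (assuming simple eval is compared by
absolute value; the thresholds are the ones listed above, and the real
filter is linked further below):

```cpp
#include <cstdlib>

// Keep or drop a position based on piece count and simple eval.
bool keep_position(int pieceCount, int simpleEval) {
    int e = std::abs(simpleEval);
    if (pieceCount == 3)  return false;    // remove all 3-piece positions
    if (pieceCount >= 16) return e > 750;  // keep only high simple eval
    return e <= 3000;                      // drop extreme simple evals
}
```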
Additionally, the simple eval range where the smallnet is used was
widened to cover more positions previously evaluated by the big net and
simple eval.
```yaml
experiment-name: 128--S1-hse-S7-v4-S3-v1-no-wld-skip
training-dataset:
- /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack
- /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-v4.binpack
- /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
- /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-v4.binpack
- /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-v4.binpack
- /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-v4.binpack
- /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-v4.binpack
wld-fen-skipping: False
start-from-engine-test-net: False
nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128
num-epochs: 500
start-lambda: 1.0
end-lambda: 1.0
```
Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py
Binpacks interleaved at training time with:
https://github.com/official-stockfish/nnue-pytorch/pull/259
FT weights permuted with 10k positions from fishpack32.binpack with:
https://github.com/official-stockfish/nnue-pytorch/pull/254
Data filtered for high simple eval positions (v4) with:
https://github.com/linrock/Stockfish/blob/b9c8440/src/tools/transform.cpp#L640-L675
Training data can be found at:
https://robotmoon.com/nnue-training-data/
Local elo at 25k nodes per move of
L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch319.nnue : -241.7 +/- 3.2
Passed STC vs. 36db936:
https://tests.stockfishchess.org/tests/view/6576b3484d789acf40aabbfe
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 21920 W: 5680 L: 5381 D: 10859
Ptnml(0-2): 82, 2488, 5520, 2789, 81
Passed LTC vs. DualNNUE #4915:
https://tests.stockfishchess.org/tests/view/65775c034d789acf40aac7e3
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 147606 W: 36619 L: 36063 D: 74924
Ptnml(0-2): 98, 16591, 39891, 17103, 120
closes https://github.com/official-stockfish/Stockfish/pull/4919
Bench: 1438336
Add a `.git-blame-ignore-revs` file which can be used to skip specified
commits when blaming. This is useful to ignore formatting commits, like
the clang-format commit #4790.
GitHub's blame view supports this file automatically, as do various
third-party tools. Git itself needs to be told about the file name; the
following command adds it to the current repo:
`git config blame.ignoreRevsFile .git-blame-ignore-revs`.
Alternatively, one has to specify it with every blame:
`git blame --ignore-revs-file .git-blame-ignore-revs search.cpp`
Supported since git 2.23.
closes https://github.com/official-stockfish/Stockfish/pull/4969
No functional change
Only the Direction type uses two of the enable-overload macros, and
aside from this, only two of the overloads are even used. Therefore, we
can just define the needed overloads and remove the macros.
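Schematically (types simplified; which two overloads survive is
illustrative here):

```cpp
// Sketch: define only the overloads actually needed, no macros.
enum Square : int { SQ_A1 /* ... */ };
enum Direction : int { NORTH = 8 /* ... */ };

constexpr Square operator+(Square s, Direction d) { return Square(int(s) + int(d)); }
constexpr Square operator-(Square s, Direction d) { return Square(int(s) - int(d)); }
```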
closes https://github.com/official-stockfish/Stockfish/pull/4966
No functional change.
Remove a redundant int cast in the calculation of fwdOut. OutputType is
already defined as std::int32_t, an integer type, making the cast
unnecessary.
closes https://github.com/official-stockfish/Stockfish/pull/4961
No functional change
The primary rationale is that enums were never designed to be used the
way we currently use them. The Value enum was used like a type alias
throughout the code and was often misused. Furthermore, changing the
underlying size of the enum to int16_t broke everything, mostly because
the operator overloads for the Value enum were causing data to be
truncated. Since Value is now a type alias, the operator overloads are
no longer required.
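Schematically, the change looks like this (simplified; the real patch
touches many call sites):

```cpp
// Before (simplified): an enum plus arithmetic operator overloads, which
// truncated data once the underlying type was narrowed to int16_t.
//   enum Value : int { VALUE_ZERO = 0, /* ... */ };
//   Value operator+(Value v, int i);  // and friends

// After: a plain integer alias; ordinary int arithmetic, no overloads.
using Value = int;
constexpr Value VALUE_ZERO = 0;
constexpr Value VALUE_DRAW = 0;
```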
Passed Non-Regression STC:
https://tests.stockfishchess.org/tests/view/6593b8bb79aa8af82b95b401
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 235296 W: 59919 L: 59917 D: 115460
Ptnml(0-2): 743, 27085, 62054, 26959, 807
closes https://github.com/official-stockfish/Stockfish/pull/4960
No functional change
As some have noticed, a security alert has been complaining about a for
loop in our TB code for quite some time now. It was never a real issue,
though, so it was not of high importance.
A few lines earlier the symlen vector is resized with
`d->symlen.resize(number<uint16_t, LittleEndian>(data));`. While this
code seems odd at first, it resizes the vector to at most
(1 << 16) - 1 elements, making the supposed infinite loop impossible.
closes https://github.com/official-stockfish/Stockfish/pull/4953
No functional change
Idea from the Caissa (https://github.com/Witek902/Caissa) chess engine.
For a given pawn structure, collect statistics on how often, and by how
much, the search result was better or worse than the static evaluation
of the position, and use them to adjust the static evaluation of
positions with that pawn structure. Details:
1. positions where a fail high was produced by a capture are excluded;
2. the update value is a function not only of the difference between
the best value and the static evaluation, but is also multiplied by a
linear function of depth;
3. the maximum update value is the maximum value of the correction
history divided by 2;
4. the correction history itself is divided by 32 when applied, so the
maximum static evaluation adjustment is 32 internal units.
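A minimal sketch of the mechanism (table size, indexing, and the depth
weighting are illustrative; the /2 cap and the /32 scaling are points 3
and 4 above):

```cpp
#include <algorithm>
#include <cstdint>

constexpr int CORR_SIZE  = 16384;  // entries per side, indexed by pawn key
constexpr int CORR_LIMIT = 1024;   // max stored magnitude -> 1024/32 = 32 units

int correctionHistory[2][CORR_SIZE];  // [side to move][pawn-structure hash]

// Apply: shift the static eval by history / 32 (point 4).
int adjusted_eval(int rawEval, int stm, std::uint64_t pawnKey) {
    return rawEval + correctionHistory[stm][pawnKey % CORR_SIZE] / 32;
}

// Update: move toward (bestValue - rawEval), weighted linearly by depth,
// with the single update capped at half the history limit (points 2 and 3).
void update_correction(int stm, std::uint64_t pawnKey,
                       int bestValue, int rawEval, int depth) {
    int& entry = correctionHistory[stm][pawnKey % CORR_SIZE];
    int bonus  = std::clamp((bestValue - rawEval) * depth / 8,
                            -CORR_LIMIT / 2, CORR_LIMIT / 2);
    entry = std::clamp(entry + bonus, -CORR_LIMIT, CORR_LIMIT);
}
```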
Passed STC:
https://tests.stockfishchess.org/tests/view/658fc7b679aa8af82b955cac
LLR: 2.96 (-2.94,2.94) <0.00,2.00>
Total: 128672 W: 32757 L: 32299 D: 63616
Ptnml(0-2): 441, 15241, 32543, 15641, 470
Passed LTC:
https://tests.stockfishchess.org/tests/view/65903f6979aa8af82b9566f1
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 97422 W: 24626 L: 24178 D: 48618
Ptnml(0-2): 41, 10837, 26527, 11245, 61
closes https://github.com/official-stockfish/Stockfish/pull/4950
Bench: 1157852
For developing an Android GUI it can be helpful to use the emulator on
Windows. Therefore, an android_x86-64 library of Stockfish is needed,
and it would be nice to compile it "out of the box".
This change was originally suggested by Craftyawesome.
closes https://github.com/official-stockfish/Stockfish/pull/4927
No functional change
This fixes futility pruning return values after recent tweaks. `eval`
is guaranteed to be below the mate-in range, but it can be low enough
that the average of eval and beta still falls in the mated-in range
when beta itself is deep in the mated range, i.e. (eval + beta) / 2 can
land at a mated-in score, which can break mates.
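One way to express the guard (a sketch, not necessarily the exact
committed condition; VALUE_TB_LOSS_IN_MAX_PLY stands for the lower edge
of the mated-in/TB-loss range):

```cpp
using Value = int;
constexpr Value VALUE_TB_LOSS_IN_MAX_PLY = -31000;  // illustrative bound

// Average eval and beta only when beta is safely above the mated-in range;
// otherwise return eval directly so no spurious mated-in score is produced.
Value futility_value(Value eval, Value beta) {
    return beta > VALUE_TB_LOSS_IN_MAX_PLY ? (eval + beta) / 2 : eval;
}
```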
Passed non-regression STC:
https://tests.stockfishchess.org/tests/view/658f3eed79aa8af82b955139
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 117408 W: 29891 L: 29761 D: 57756
Ptnml(0-2): 386, 13355, 31120, 13429, 414
Passed non-regression LTC:
https://tests.stockfishchess.org/tests/view/658f8b7a79aa8af82b9557bd
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 60240 W: 14962 L: 14786 D: 30492
Ptnml(0-2): 22, 6257, 17390, 6425, 26
Changes signature at higher depth, e.g. `128 1 15`.
closes https://github.com/official-stockfish/Stockfish/pull/4944
Bench: 1304666