This updates the estimates from two years ago (#2401) and adds missing terms.
All tests were run at 10+0.1 (STC), 20000 games, error bars +/- 1.8 Elo, book 8moves_v3.pgn.
A table of Elo values with links to the corresponding tests can be found in the PR.
closes https://github.com/official-stockfish/Stockfish/pull/3868
Non-functional Change
This is a reintroduction of an idea that was simplified away approximately 1 year ago.
There are some tweaks to it:
a) exclude promotions;
b) exclude PV nodes from it - the logic for captures in PV nodes is quite different from non-PV nodes, so excluding them makes a lot of sense;
c) use a big grain of capture history - the idea is taken from my recent futility pruning patches. See the sketch below.
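The following is only an illustrative sketch of the shape of the condition, with hypothetical names, margins and divisor (the actual values and types live in search.cpp):

// Illustrative sketch only: prune a capture in non-PV nodes when its
// futility value, including a coarse grain (large divisor) of capture
// history, cannot reach alpha. Margins and the divisor are placeholders.
bool pruneCapture(bool pvNode, bool isPromotion, bool givesCheck,
                  int lmrDepth, int staticEval, int capturedPieceValue,
                  int captureHistory, int alpha) {
    if (pvNode || isPromotion || givesCheck)   // tweaks a) and b)
        return false;
    int futilityValue = staticEval + capturedPieceValue
                      + 200 + 200 * lmrDepth   // placeholder margin
                      + captureHistory / 32;   // tweak c): big grain
    return futilityValue <= alpha;
}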
passed STC
https://tests.stockfishchess.org/tests/view/61bd90f857a0d0f327c373b7
LLR: 2.96 (-2.94,2.94) <0.00,2.50>
Total: 86640 W: 22474 L: 22110 D: 42056
Ptnml(0-2): 268, 9732, 22963, 10082, 275
passed LTC
https://tests.stockfishchess.org/tests/view/61be094457a0d0f327c38aa3
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 23240 W: 6079 L: 5838 D: 11323
Ptnml(0-2): 14, 2261, 6824, 2512, 9
https://github.com/official-stockfish/Stockfish/pull/3864
bench 4493723
This patch is a follow-up to the previous two patches that introduced more reductions for PV nodes with low delta and more pruning for nodes with low delta. Instead of writing separate heuristics, reductions are now adjusted based on delta / rootDelta. This allows removing three separate adjustments of pruning/LMR in different places, and it also makes the dependence of reductions on delta and rootDelta smoother. In addition, it now applies to all pruning heuristics, not just two.
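A rough sketch of the single adjustment, with placeholder constants (the real reduction formula in search.cpp uses different values and scaling):

// Illustrative sketch: one term makes the reduction depend smoothly on
// delta / rootDelta, where delta = beta - alpha at the node and rootDelta
// is the aspiration window width at the root. A small delta relative to
// rootDelta increases the reduction. Constants are placeholders.
int adjustedReduction(int baseReduction1024, int delta, int rootDelta) {
    int r = baseReduction1024 + 1024 - 1024 * delta / rootDelta;  // 1/1024 units
    return r / 1024;                                              // back to plies
}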
Passed STC
https://tests.stockfishchess.org/tests/view/61ba9b6c57a0d0f327c2d48b
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 79192 W: 20513 L: 20163 D: 38516
Ptnml(0-2): 238, 8900, 21024, 9142, 292
passed LTC
https://tests.stockfishchess.org/tests/view/61baf77557a0d0f327c2eb8e
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 158400 W: 41134 L: 40572 D: 76694
Ptnml(0-2): 101, 16372, 45745, 16828, 154
closes https://github.com/official-stockfish/Stockfish/pull/3862
bench 4651538
This idea is somewhat similar to extensions in LMR but has a different flavour.
If the result of LMR was really good - that is, it exceeded alpha by some fairly
big margin - we can extend the move in the full-depth, zero-window search after LMR.
The idea is that such a move is a fail high with a fairly large probability,
so extending it makes a lot of sense. See the sketch below.
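A minimal sketch of the idea, with a hypothetical zwSearch callback and margin standing in for the real search machinery:

#include <functional>

// Illustrative sketch: after the reduced (LMR) search scores well above
// alpha, re-search at full depth with a zero window, going one ply deeper
// if the LMR score exceeded alpha by a big margin.
int researchAfterLMR(const std::function<int(int /*depth*/)>& zwSearch,
                     int lmrValue, int alpha, int newDepth) {
    const int margin = 80;                                 // placeholder
    bool doDeeperSearch = lmrValue > alpha + margin;       // LMR did very well
    return zwSearch(newDepth + (doDeeperSearch ? 1 : 0));  // zero-window re-search
}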
passed STC
https://tests.stockfishchess.org/tests/view/61ad45ea56fcf33bce7d74b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 59680 W: 15531 L: 15215 D: 28934
Ptnml(0-2): 193, 6711, 15734, 6991, 211
passed LTC
https://tests.stockfishchess.org/tests/view/61ad9ff356fcf33bce7d8646
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 59104 W: 15321 L: 14992 D: 28791
Ptnml(0-2): 53, 6023, 17065, 6364, 47
closes https://github.com/official-stockfish/Stockfish/pull/3838
bench 4881329
This patch optimizes the NEON implementation in two ways.
The activation layer after the feature transformer is rewritten to make it easier for the compiler to see through dependencies and unroll. This in itself is a minimal but positive improvement. Other architectures could benefit from this too in the future. This is not an algorithmic change.
The affine transform for large matrices (first layer after FT) on NEON now utilizes the same optimized code path as >=SSSE3, which makes the memory accesses more sequential and makes better use of the available registers, which allows for code that has longer dependency chains.
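As a rough illustration of the first point only, here is a self-contained clipped-ReLU loop written so that the loads and arithmetic of each 16-element chunk are independent, which makes it easy for the compiler to unroll (this is not the actual code in nnue/, which works on the engine's own vector wrappers):

#include <arm_neon.h>
#include <cstdint>

// Illustrative sketch (assumes n is a multiple of 16): clamp int16 inputs
// to [0, 127] and narrow to uint8. vqmovun saturates negative values to 0,
// completing the clamp, and the two 8-lane halves are independent.
void clippedReluNeon(const int16_t* in, uint8_t* out, int n) {
    const int16x8_t maxv = vdupq_n_s16(127);
    for (int i = 0; i < n; i += 16) {
        int16x8_t a = vminq_s16(vld1q_s16(in + i),     maxv);
        int16x8_t b = vminq_s16(vld1q_s16(in + i + 8), maxv);
        vst1q_u8(out + i, vcombine_u8(vqmovun_s16(a), vqmovun_s16(b)));
    }
}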
Benchmarks from Redshift#161, profile-build with apple clang
george@Georges-MacBook-Air nets % ./stockfish-b82d93 bench 2>&1 | tail -4 (current master)
===========================
Total time (ms) : 2167
Nodes searched : 4667742
Nodes/second : 2154011
george@Georges-MacBook-Air nets % ./stockfish-7377b8 bench 2>&1 | tail -4 (this patch)
===========================
Total time (ms) : 1842
Nodes searched : 4667742
Nodes/second : 2534061
This is a solid 18% improvement overall; it is larger in an NNUE-only bench than in a mixed one.
An improvement is also observed on armv7-neon (Raspberry Pi and older phones), around a 5% speedup.
No changes for architectures other than NEON.
closes https://github.com/official-stockfish/Stockfish/pull/3837
No functional changes.
Initialize continuation history with a slightly negative value (-71) instead of zero.
The idea is that, because most history entries will end up negative anyway, we shift
the starting values a little bit in the "correct" direction. Of course the effect of
the initialization diminishes with greater depth, so I was apprehensive that the LTC
test would be difficult to pass, but it passed. A sketch of the change is shown below.
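A minimal sketch of the change, assuming a simple array-based piece-to history table (the real nested continuationHistory tables are filled in Search::clear()):

#include <array>
#include <cstdint>

// Illustrative sketch: fill continuation-history entries with -71 instead
// of 0, so they start slightly in the direction most of them will move.
constexpr int16_t ContHistInit = -71;

using PieceToHistory = std::array<std::array<int16_t, 64>, 16>;  // [piece][to]

void clearContinuationHistory(PieceToHistory& h) {
    for (auto& byPiece : h)
        byPiece.fill(ContHistInit);   // previously fill(0)
}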
STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 34520 W: 9076 L: 8803 D: 16641
Ptnml(0-2): 136, 3837, 9047, 4098, 142
https://tests.stockfishchess.org/tests/view/61aa52e39e8855bba1a3776b
LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.00>
Total: 75568 W: 19620 L: 19254 D: 36694
Ptnml(0-2): 44, 7773, 21796, 8115, 56
https://tests.stockfishchess.org/tests/view/61aa87d39e8855bba1a383a5
closes https://github.com/official-stockfish/Stockfish/pull/3834
Bench: 4674029
In their infinite wisdom, Intel axed AVX512 from Alder Lake
chips (well, not entirely, but we kind of want to use the Gracemont
cores for chess!) but still added VNNI support.
Confusingly enough, this is not the same as VNNI256 support.
This adds a specific AVX-VNNI target that will use this AVX-VNNI
mode, by prefixing the VNNI instructions with the appropriate VEX
prefix, and avoiding AVX512 usage.
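Roughly, the 8-bit dot-product step selects the VEX-encoded intrinsic when building for this target; the guarding macro names below are placeholders, not the ones Stockfish uses:

#include <immintrin.h>

// Illustrative sketch: the same dot-product-accumulate, choosing the
// VEX-encoded AVX-VNNI intrinsic (no AVX512 state) for the new target.
static inline __m256i dpbusd(__m256i acc, __m256i a, __m256i b) {
#if defined(SKETCH_USE_AVX_VNNI)
    return _mm256_dpbusd_avx_epi32(acc, a, b);   // VEX-encoded, AVX-VNNI
#elif defined(SKETCH_USE_AVX512_VNNI)
    return _mm256_dpbusd_epi32(acc, a, b);       // EVEX-encoded, AVX512 VNNI+VL
#else
    __m256i prod = _mm256_maddubs_epi16(a, b);   // fallback without VNNI
    prod = _mm256_madd_epi16(prod, _mm256_set1_epi16(1));
    return _mm256_add_epi32(acc, prod);
#endif
}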
This is about 1% faster on P cores:
Result of 20 runs
==================
base (./clang-bmi2 ) = 3306337 +/- 7519
test (./clang-vnni ) = 3344226 +/- 7388
diff = +37889 +/- 4153
speedup = +0.0115
P(speedup > 0) = 1.0000
But a nice 3% faster on E cores:
Result of 20 runs
==================
base (./clang-bmi2 ) = 1938054 +/- 28257
test (./clang-vnni ) = 1994606 +/- 31756
diff = +56552 +/- 3735
speedup = +0.0292
P(speedup > 0) = 1.0000
This was measured on Clang 13. GCC 11.2 appears to generate
worse code for Alder Lake, though the speedup on the E cores
is similar.
It is possible to run the engine specifically on the P or E cores using binding;
for example, on Linux one can use (for an 8 P + 8 E setup like the i9-12900K):
taskset -c 0-15 ./stockfish
taskset -c 16-23 ./stockfish
where the first call binds to the P-cores and the second to the E-cores.
closes https://github.com/official-stockfish/Stockfish/pull/3824
No functional change
Improve compatibility of the last NUMA patch when running under older versions of Windows,
for instance Windows Server 2003. Reported by user "g3g6" in the following comments:
7218ec4df9
Closes https://github.com/official-stockfish/Stockfish/pull/3821
No functional change
This patch is the result of refining a tuning that vondele did after the
new net passed, plus some hand-made value adjustments - excluding the
changes to other pruning heuristics and rounding the history divisor
to the nearest power of 2.
With this patch futility pruning becomes more aggressive and the
influence of history on it is doubled again.
passed STC
https://tests.stockfishchess.org/tests/view/61a2c4c1a26505c2278c150d
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 33848 W: 8841 L: 8574 D: 16433
Ptnml(0-2): 100, 3745, 8988, 3970, 121
passed LTC
https://tests.stockfishchess.org/tests/view/61a327ffa26505c2278c26d9
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 22272 W: 5856 L: 5614 D: 10802
Ptnml(0-2): 12, 2230, 6412, 2468, 14
closes https://github.com/official-stockfish/Stockfish/pull/3814
bench 6302543
This idea is somewhat of a respin of something we had in futility pruning that was simplified away - making it depend not only on the static evaluation of the position but also on move history heuristics.
Instead of skipping the pruning when the history values are high, we now use a fraction of their sum to adjust the static-eval pruning criterion. See the sketch below.
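A rough sketch of the shape of the condition, with placeholder constants (the actual margins and history terms live in the futility-pruning block of search.cpp):

// Illustrative sketch: instead of skipping futility pruning when history is
// high, fold a fraction of the summed continuation histories into the margin.
bool futilityPrune(int lmrDepth, int staticEval, int alpha,
                   int contHist0, int contHist1) {
    int historyTerm   = (contHist0 + contHist1) / 256;  // fraction of the sum
    int futilityValue = staticEval + 150 + 125 * lmrDepth + historyTerm;
    return lmrDepth < 8 && futilityValue <= alpha;      // prune if true
}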
passed STC
https://tests.stockfishchess.org/tests/view/619bd438c0a4ea18ba95a27d
LLR: 2.93 (-2.94,2.94) <0.00,2.50>
Total: 113704 W: 29284 L: 28870 D: 55550
Ptnml(0-2): 357, 12884, 30044, 13122, 445
passed LTC
https://tests.stockfishchess.org/tests/view/619cb8f0c0a4ea18ba95a334
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 147136 W: 37307 L: 36770 D: 73059
Ptnml(0-2): 107, 15279, 42265, 15804, 113
closes https://github.com/official-stockfish/Stockfish/pull/3805
bench 6777918
Revert 9048ac00db due to a core-spread problem and fix compatibility with the new OS using another method.
This code assumes that if one NUMA node has more than one processor group, the groups are created equal (with an equal number of cores assigned to each group), and also that the total number of available cores contained in such groups equals the number of available cores within one NUMA node, because of how the best_node function works.
closes https://github.com/official-stockfish/Stockfish/pull/3798
fixes https://github.com/official-stockfish/Stockfish/pull/3787
No functional change.
Retire msvc support and the corresponding CI. No active development happens on msvc,
and the build is much slower or wrong.
gcc (mingw) is our toolchain of choice on Windows as well, and that one is tested.
No functional change
Current master implements a scaling of the raw NNUE output value with a formula
equivalent to 'eval = alpha * NNUE_output', where the scale factor alpha varies
between 1.8 (for early middle game) and 0.9 (for pure endgames). This feature
allows Stockfish to keep material on the board when she thinks she has the advantage,
and to seek exchanges and simplifications when she thinks she has to defend.
This patch slightly offsets the turning point between these two strategies, by adding
to Stockfish's evaluation a small "optimism" value before actually doing the scaling.
The effect is that SF will play a little bit more risky, trying to keep the tension a
little bit longer when she is defending, and keeping even more material on the board
when she has an advantage.
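A rough sketch of the scaling with the added optimism term, with placeholder constants (the actual sigmoid parameters and scale factors are in search.cpp and evaluate.cpp):

#include <cmath>

// Illustrative sketch: a small optimism value, derived from the previous
// search score via a sigmoid, is added to the raw NNUE output before the
// material-dependent scaling (which varies roughly between 0.9 and 1.8).
double scaledEval(double nnueOutput, double prevScore, double materialScale) {
    double optimism = 20.0 * (2.0 / (1.0 + std::exp(-prevScore / 100.0)) - 1.0);
    return materialScale * (nnueOutput + optimism);
}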
We note that this patch is similar in spirit to the old "Contempt" idea we used to have
in classical Stockfish, but this implementation differs in two key points:
a) it has been tested as an Elo-gainer against master;
b) the values output by the search are not changed on average by the implementation
(in other words, the optimism value changes the tension/exchange strategy, but a
displayed value of 1.0 pawn has the same meaning before and after the patch).
See the old comment https://github.com/official-stockfish/Stockfish/pull/1361#issuecomment-359165141
for some images illustrating the ideas.
-------
finished yellow at STC:
LLR: -2.94 (-2.94,2.94) <0.00,2.50>
Total: 165048 W: 41705 L: 41611 D: 81732
Ptnml(0-2): 565, 18959, 43245, 19327, 428
https://tests.stockfishchess.org/tests/view/61942a3dcd645dc8291c876b
passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 121656 W: 30762 L: 30287 D: 60607
Ptnml(0-2): 87, 12558, 35032, 13095, 56
https://tests.stockfishchess.org/tests/view/61962c58cd645dc8291c8877
-------
How to continue from there?
a) the shape (slope and amplitude) of the sigmoid used to compute the optimism value
could be tweaked to try to gain more Elo, so the parameters of the sigmoid function
in line 391 of search.cpp could be tuned with SPSA. Manual tweaking is also possible
using this Desmos page: https://www.desmos.com/calculator/jhh83sqq92
b) in a similar vein, with two recent patches affecting the scaling of the NNUE
evaluation in evaluate.cpp, now could be a good time to try a round of SPSA tuning
of the NNUE network;
c) this patch will tend to keep tension in middlegame a little bit longer, so any
patch improving the defensive aspect of play via search extensions in risky,
tactical positions would be welcome.
-------
closes https://github.com/official-stockfish/Stockfish/pull/3797
Bench: 6184852
Starting with Windows Build 20348 the behavior of the numa API has been changed:
https://docs.microsoft.com/en-us/windows/win32/procthread/numa-support
The old code only worked because there was probably a limit on how many
cores/threads could reside within one NUMA node, with the OS creating extra NUMA
nodes when necessary; however, the actual mechanism of core binding is
done via "Processor Groups" (https://docs.microsoft.com/en-us/windows/win32/procthread/processor-groups).
With a newer OS, one NUMA node can have many such "Processor Groups", and we
should just consistently use the number of groups to bind the threads instead
of deriving the topology from the number of NUMA nodes.
This change is required to spread threads on all cores on Windows 11 with
a 3990X CPU. It has only 1 NUMA node with 2 groups of 64 threads each.
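A minimal, hypothetical sketch of binding a thread by processor group on Windows, spreading by group count rather than NUMA node count (the real code in misc.cpp handles errors, older OS versions and the existing affinity logic):

#ifdef _WIN32
#include <windows.h>

// Illustrative sketch: bind the calling thread to processor group
// idx % groupCount and allow all cores of that group.
bool bindThisThreadToGroup(int idx) {
    WORD groupCount = GetActiveProcessorGroupCount();
    GROUP_AFFINITY affinity = {};
    affinity.Group = static_cast<WORD>(idx % groupCount);
    DWORD cores    = GetActiveProcessorCount(affinity.Group);
    affinity.Mask  = cores >= 64 ? ~KAFFINITY(0)
                                 : (KAFFINITY(1) << cores) - 1;
    return SetThreadGroupAffinity(GetCurrentThread(), &affinity, nullptr) != 0;
}
#endif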
closes https://github.com/official-stockfish/Stockfish/pull/3787
No functional change.
In case the evaluation at root is large, discourage the use of lazyEval.
This fixes https://github.com/official-stockfish/Stockfish/issues/3772
or at least improves it significantly. In such cases, poor play with large
odds can be observed, in extreme cases leading to a loss despite a large
advantage:
r1bq1b1r/ppp3p1/3p1nkp/n3p3/2B1P2N/2NPB3/PPP2PPP/R3K2R b KQ - 5 9
With this patch the poor move is only considered up to depth 13, in master
up to depth 28.
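A minimal sketch of the gating, with hypothetical names and thresholds (the real check sits in Eval::evaluate in evaluate.cpp):

#include <cstdlib>

// Illustrative sketch: the lazy (material/psq-only) shortcut is taken only
// when the simple eval exceeds a threshold, and that threshold grows with
// the evaluation at the root, discouraging lazy eval in lopsided games.
bool useLazyEval(int simpleEval, int rootSimpleEval) {
    const int lazyThreshold = 1400;  // placeholder
    return std::abs(simpleEval) > lazyThreshold + std::abs(rootSimpleEval) / 4;
}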
The patch did not pass LTC with Elo-gainer bounds, but it nevertheless showed
slightly positive Elo (95% LOS).
STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 40368 W: 10318 L: 10041 D: 20009
Ptnml(0-2): 103, 4493, 10725, 4750, 113
https://tests.stockfishchess.org/tests/view/61800ad259e71df00dcc420d
LTC:
LLR: -2.94 (-2.94,2.94) <0.50,3.00>
Total: 212288 W: 52997 L: 52692 D: 106599
Ptnml(0-2): 112, 22038, 61549, 22323, 122
https://tests.stockfishchess.org/tests/view/618050d959e71df00dcc426d
closes https://github.com/official-stockfish/Stockfish/pull/3780
Bench: 7127040