- set the variable only for the required tests to keep the yml file simple
- use NDK 21.x until the Stockfish static build problem with NDK 23.x is fixed
- set up the tests for armv7, armv7-neon, armv8 builds:
- use the armv7a-linux-androideabi21-clang++ compiler for armv7 and armv7-neon
- enforce a static build
- silence the warning about the compilation flag "-pie", unused with
the static build, otherwise the GitHub workflow stops
- use qemu to bench the build and get the signature
Many thanks to @pschneider1968, who did all the hard work with the NDK :)
closes https://github.com/official-stockfish/Stockfish/pull/3924
No functional change
This patch is a result of tuning done by user @candirufish after 150k games.
Since the tuned values were really interesting and touched heuristics
that are known for their non-linear scaling, I decided to run a limited-games
LTC match, even though the STC test was really bad (which was expected).
After seeing the results of the LTC match, I also ran a VLTC (very long
time control) SPRT test, which passed.
The main difference is in extensions: this patch allows many more
singular/double extensions, both in terms of allowing them at lower
depths and with smaller margins.
Failed STC:
https://tests.stockfishchess.org/tests/view/620d66643ec80158c0cd3b46
LLR: -2.94 (-2.94,2.94) <0.00,2.50>
Total: 4968 W: 1194 L: 1398 D: 2376
Ptnml(0-2): 47, 633, 1294, 497, 13
Performed well at LTC in a fixed-length match:
https://tests.stockfishchess.org/tests/view/620d66823ec80158c0cd3b4a
ELO: 3.36 +-1.8 (95%) LOS: 100.0%
Total: 30000 W: 7966 L: 7676 D: 14358
Ptnml(0-2): 36, 2936, 8755, 3248, 25
Passed VLTC SPRT test:
https://tests.stockfishchess.org/tests/view/620da11a26f5b17ec884f939
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 4400 W: 1326 L: 1127 D: 1947
Ptnml(0-2): 13, 309, 1348, 526, 4
closes https://github.com/official-stockfish/Stockfish/pull/3937
Bench: 6318903
Architecture:
The diagram of the "SFNNv4" architecture:
https://user-images.githubusercontent.com/8037982/153455685-cbe3a038-e158-4481-844d-9d5fccf5c33a.png
The most important architectural changes are the following:
* 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details, see diagram), which introduces a non-linearity that exhibits similar benefits to the previously tested sigmoid activation (quantmoid4), while being slightly faster. (A sketch follows this list.)
* The following layer therefore has 2x fewer inputs, which we compensate for by having 2x more outputs. It is possible that reducing the number of outputs might be beneficial (we had it as low as 8 before). The layer is now 1024->16.
* The 16 outputs are split into 15 and 1. The 1-wide output is added to the network output (after some necessary scaling due to quantization differences). The 15-wide part is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and possibly allows an additional stage of lazy evaluation to be introduced in the future.
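For illustration, a minimal scalar sketch of the pairwise multiplication step (identifiers, the clipping range, and the requantization shift are assumptions, not the actual SIMD code):
```
#include <algorithm>
#include <cstdint>

// Sketch: one perspective's 1024 accumulator values are clipped, and the
// two 512-wide halves are multiplied elementwise, giving 512 outputs per
// perspective (1024 total for the following 1024->16 layer).
void pairwise_multiply(const std::int16_t* acc /*1024 values*/,
                       std::uint8_t* out /*512 values*/)
{
    for (int i = 0; i < 512; ++i)
    {
        int a = std::clamp<int>(acc[i],       0, 127);
        int b = std::clamp<int>(acc[i + 512], 0, 127);
        out[i] = static_cast<std::uint8_t>((a * b) >> 7);  // requantize product
    }
}
```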
Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result into two, which could not be done with the old implementation without ugly hacks. This is hopefully for the better overall.
First session:
The first session was training a network from scratch (random initialization). The exact trainer used was slightly different (older) from the one used in the second session, but it should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session. Small deviations in strength do not harm the learnability in the second session.
The training was done using the following command:
python3 train.py \
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 4 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--gamma=0.992 \
--lr=8.75e-4 \
--max_epochs=400 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
Every 20th net was saved and its playing strength measured against some baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it's consistent. The purpose is to sift good candidates from bad ones.
The dataset can be found at https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view
Second session:
The second training session was done starting from the best network (as determined by strength testing) from the first session. It is important that it's resumed from a .pt model and NOT a .ckpt model. The conversion can be performed directly using serialize.py
The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training. The training was then running for 800 epochs instead of 400 (though it's possibly mostly noise after around epoch 600).
The training was done using the following command:
python3 train.py \
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 4 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--gamma=0.995 \
--lr=4.375e-4 \
--max_epochs=800 \
--resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp295/nn-epoch399.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$run_id
In particular, note that we now use lambda=1.0 instead of lambda=0.8 (previous nets), because tests show that the WDL-skipping introduced by vondele performs better with lambda=1.0. Nets were saved every 20th epoch. In total 16 runs were made with these settings, and the best nets were chosen according to playing strength at 25k nodes per move with pure NNUE evaluation - these are the 4 nets that have been put on fishtest.
The dataset can be found either at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack in its entirety (the download might be painfully slow because it is hosted in China) or can be assembled in the following way:
Get the 5640ad48ae/script/interleave_binpacks.py script.
Download T60T70wIsRightFarseer.binpack https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
Download farseerT74.binpack http://trainingdata.farseer.org/T74-May13-End.7z
Download farseerT75.binpack http://trainingdata.farseer.org/T75-June3rd-End.7z
Download farseerT76.binpack http://trainingdata.farseer.org/T76-Nov10th-End.7z
Run python3 interleave_binpacks.py T60T70wIsRightFarseer.binpack farseerT74.binpack farseerT75.binpack farseerT76.binpack T60T70wIsRightFarseerT60T74T75T76.binpack
Tests:
STC: https://tests.stockfishchess.org/tests/view/6203fb85d71106ed12a407b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16952 W: 4775 L: 4521 D: 7656
Ptnml(0-2): 133, 1818, 4318, 2076, 131
LTC: https://tests.stockfishchess.org/tests/view/62041e68d71106ed12a40e85
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 14944 W: 4138 L: 3907 D: 6899
Ptnml(0-2): 21, 1499, 4202, 1728, 22
closes https://github.com/official-stockfish/Stockfish/pull/3927
Bench: 4919707
Most credits for this patch should go to @candirufish.
Based on his big search tuning (1M games at 20+0.1s)
https://tests.stockfishchess.org/tests/view/61fc7a6ed508ec6a1c9f4b7d
with some hand polishing on top of it, which includes :
a) correcting the trend sigmoid - for some reason the original tuning resulted in it being negative. This heuristic has been proven to be worth some Elo for years, so reversing its sign is probably a random artefact;
b) removing changes to continuation-history-based pruning - historically this heuristic was really good at producing green STCs and then failing miserably at LTC whenever we tried to make it more strict. The original tuning was done at short time control and thus made it more strict, which doesn't scale to longer time controls;
c) removing changes to improvement - not really intended :).
passed STC
https://tests.stockfishchess.org/tests/view/6203526e88ae2c84271c2ee2
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16840 W: 4604 L: 4363 D: 7873
Ptnml(0-2): 82, 1780, 4449, 2033, 76
passed LTC
https://tests.stockfishchess.org/tests/view/620376e888ae2c84271c35d4
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 17232 W: 4771 L: 4542 D: 7919
Ptnml(0-2): 14, 1655, 5048, 1886, 13
closes https://github.com/official-stockfish/Stockfish/pull/3926
bench 5030992
have maximal compatibility on the legacy target arch, now supporting AMD Athlon
The old behavior can still be selected by the user if needed, for example:
make -j profile-build ARCH=x86-32 sse=yes
fixes #3904
closes https://github.com/official-stockfish/Stockfish/pull/3918
No functional change
For cross-compiling to Android on windows, the Makefile needs some tweaks.
Tested with Android NDK 23.1.7779620 and 21.4.7075529, using
Windows 10 with clean MSYS2 environment (i.e. no MINGW/GCC/Clang
toolchain in PATH) and Fedora 35, with build target:
build ARCH=armv8 COMP=ndk
The resulting binary runs fine inside Droidfish on my Samsung
Galaxy Note20 Ultra and Samsung Galaxy Tab S7+
Other builds tested to exclude regressions: MINGW64/Clang64 build
on Windows; MINGW64 cross build, native Clang and GCC builds on Fedora.
wiki docs https://github.com/glinscott/fishtest/wiki/Cross-compiling-Stockfish-for-Android-on-Windows-and-Linux
closes https://github.com/official-stockfish/Stockfish/pull/3901
No functional change
A Windows Native Build (WNB) can be done:
- on Windows, using a recent mingw-w64 g++/clang compiler
distributed by msys2, cygwin and others
- on Linux, using mingw-w64 g++ to cross compile
Improvements:
- check for a WNB in a proper way and set a variable to simplify the code
- set the proper EXE for a WNB
- use the proper name for the mingw-w64 clang compiler
- use static linking for a WNB
- use wine to make a PGO cross compile on Linux (also with Intel SDE)
- enable the LTO build for mingw-w64 g++ compiler
- set `lto=auto` to use make's job server, if available, or otherwise
fall back to autodetection of the number of CPU threads
- clean up all the temporary LTO files saved in the local directory
Tested on:
- msys2 MINGW64 (g++), UCRT64 (g++), MINGW32 (g++), CLANG64 (clang)
environments
- cygwin mingw-w64 g++
- Ubuntu 18.04 & 21.10 mingw-w64 PGO cross compile (also with Intel SDE)
closes #3891
No functional change
This updates estimates from 2 years ago (#2401), and adds missing terms.
All tests run at 10+0.1 (STC), 20000 games, error bars +- 1.8 Elo, book 8moves_v3.pgn.
A table of Elo values with the links to the corresponding tests can be found at the PR
closes https://github.com/official-stockfish/Stockfish/pull/3868
Non-functional Change
This is a reintroduction of an idea that was simplified away approximately 1 year ago.
There are some tweaks to it :
a) exclude promotions;
b) exclude PV nodes from it - the PV-node logic for captures is really different from non-PV nodes, so this makes a lot of sense;
c) use a big grain of capture history - the idea is taken from my recent patches in futility pruning; a sketch follows.
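As a rough illustration of c), a hedged sketch with plain integers (the margin, divisor, and shape of the condition are assumptions, not the actual search.cpp code):
```
// Sketch: futility-style pruning of a capture - the expected gain plus a
// coarse-grained capture history must still fail to reach alpha.
bool prune_capture(bool pvNode, bool promotion, int staticEval, int alpha,
                   int depth, int capturedValue, int captureHistory)
{
    if (pvNode || promotion)                              // exclusions a) and b)
        return false;
    int futilityValue = staticEval + capturedValue + 200 * depth
                      + captureHistory / 32;              // c) big grain
    return futilityValue <= alpha;
}
```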
passed STC
https://tests.stockfishchess.org/tests/view/61bd90f857a0d0f327c373b7
LLR: 2.96 (-2.94,2.94) <0.00,2.50>
Total: 86640 W: 22474 L: 22110 D: 42056
Ptnml(0-2): 268, 9732, 22963, 10082, 275
passed LTC
https://tests.stockfishchess.org/tests/view/61be094457a0d0f327c38aa3
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 23240 W: 6079 L: 5838 D: 11323
Ptnml(0-2): 14, 2261, 6824, 2512, 9
https://github.com/official-stockfish/Stockfish/pull/3864
bench 4493723
This patch is a follow-up to the previous 2 patches that introduced more reductions for PV nodes with low delta and more pruning for nodes with low delta. Instead of writing separate heuristics, it now adjusts reductions based on delta / rootDelta - this allows removing 3 separate adjustments of pruning/LMR in different places and also makes the dependence of reductions on delta and rootDelta smoother. It now also works for all pruning heuristics, not just 2.
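For illustration, a hedged sketch of the shape of this adjustment (the real formula and constants in search.cpp differ):
```
// Sketch: a single reduction formula driven by delta / rootDelta replaces
// several ad-hoc pruning/LMR adjustments. Reductions are kept in units of
// 1/1024 plies; constants are illustrative and rootDelta must be nonzero.
int reduction(int base1024, int delta, int rootDelta)
{
    return (base1024 + 512 - 1024 * delta / rootDelta) / 1024;  // in plies
}
```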
Passed STC
https://tests.stockfishchess.org/tests/view/61ba9b6c57a0d0f327c2d48b
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 79192 W: 20513 L: 20163 D: 38516
Ptnml(0-2): 238, 8900, 21024, 9142, 292
passed LTC
https://tests.stockfishchess.org/tests/view/61baf77557a0d0f327c2eb8e
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 158400 W: 41134 L: 40572 D: 76694
Ptnml(0-2): 101, 16372, 45745, 16828, 154
closes https://github.com/official-stockfish/Stockfish/pull/3862
bench 4651538
This idea is somewhat similar to extensions in LMR but has a different flavour.
If the result of LMR was really good - i.e. it exceeded alpha by some pretty
big given margin - we can extend the move after LMR in a full-depth search with a zero window.
The idea is that this move is probably going to fail high with fairly high
probability, so extending it makes a lot of sense.
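Schematically (a sketch; the names and the margin are assumptions):
```
// Sketch: if the reduced (LMR) search came back well above alpha, the
// follow-up full-depth zero-window search gets one extra ply.
int post_lmr_search_depth(int newDepth, int lmrValue, int alpha)
{
    const int margin = 80;                       // illustrative
    bool doDeeperSearch = lmrValue > alpha + margin;
    return newDepth + (doDeeperSearch ? 1 : 0);
}
```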
passed STC
https://tests.stockfishchess.org/tests/view/61ad45ea56fcf33bce7d74b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 59680 W: 15531 L: 15215 D: 28934
Ptnml(0-2): 193, 6711, 15734, 6991, 211
passed LTC
https://tests.stockfishchess.org/tests/view/61ad9ff356fcf33bce7d8646
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 59104 W: 15321 L: 14992 D: 28791
Ptnml(0-2): 53, 6023, 17065, 6364, 47
closes https://github.com/official-stockfish/Stockfish/pull/3838
bench 4881329
This patch optimizes the NEON implementation in two ways.
The activation layer after the feature transformer is rewritten to make it easier for the compiler to see through dependencies and unroll. This in itself is a minimal but positive improvement. Other architectures could benefit from this too in the future. This is not an algorithmic change.
The affine transform for large matrices (first layer after FT) on NEON now utilizes the same optimized code path as >=SSSE3, which makes the memory accesses more sequential and makes better use of the available registers, which allows for code that has longer dependency chains.
Benchmarks from Redshift#161, profile-build with apple clang
george@Georges-MacBook-Air nets % ./stockfish-b82d93 bench 2>&1 | tail -4 (current master)
===========================
Total time (ms) : 2167
Nodes searched : 4667742
Nodes/second : 2154011
george@Georges-MacBook-Air nets % ./stockfish-7377b8 bench 2>&1 | tail -4 (this patch)
===========================
Total time (ms) : 1842
Nodes searched : 4667742
Nodes/second : 2534061
This is a solid 18% improvement overall, larger in a bench with NNUE-only, not mixed.
Improvement is also observed on armv7-neon (Raspberry Pi, and older phones), around 5% speedup.
No changes for architectures other than NEON.
closes https://github.com/official-stockfish/Stockfish/pull/3837
No functional changes.
Initialize continuation history with a slightly negative value -71 instead of zero.
The idea is, because most history entries will end up negative anyway, to shift
the starting values a little bit in the "correct" direction. Of course the effect of
the initialization diminishes with greater depth, so I had the apprehension that the LTC test
would be difficult to pass, but it passed.
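A minimal sketch of the initialization, assuming a simple 2D table layout:
```
#include <array>
#include <cstddef>
#include <cstdint>

// Sketch: seed every continuation-history entry with -71 instead of 0.
template <std::size_t Pieces, std::size_t Squares>
void init_continuation_history(std::array<std::array<std::int16_t, Squares>, Pieces>& table)
{
    for (auto& row : table)
        row.fill(-71);
}
```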
STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 34520 W: 9076 L: 8803 D: 16641
Ptnml(0-2): 136, 3837, 9047, 4098, 142
https://tests.stockfishchess.org/tests/view/61aa52e39e8855bba1a3776b
LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.00>
Total: 75568 W: 19620 L: 19254 D: 36694
Ptnml(0-2): 44, 7773, 21796, 8115, 56
https://tests.stockfishchess.org/tests/view/61aa87d39e8855bba1a383a5
closes https://github.com/official-stockfish/Stockfish/pull/3834
Bench: 4674029
In their infinite wisdom, Intel axed AVX512 from Alder Lake
chips (well, not entirely, but we kind of want to use the Gracemont
cores for chess!) but still added VNNI support.
Confusingly enough, this is not the same as VNNI256 support.
This adds a specific AVX-VNNI target that will use this AVX-VNNI
mode, by prefixing the VNNI instructions with the appropriate VEX
prefix, and avoiding AVX512 usage.
This is about 1% faster on P cores:
Result of 20 runs
==================
base (./clang-bmi2 ) = 3306337 +/- 7519
test (./clang-vnni ) = 3344226 +/- 7388
diff = +37889 +/- 4153
speedup = +0.0115
P(speedup > 0) = 1.0000
But a nice 3% faster on E cores:
Result of 20 runs
==================
base (./clang-bmi2 ) = 1938054 +/- 28257
test (./clang-vnni ) = 1994606 +/- 31756
diff = +56552 +/- 3735
speedup = +0.0292
P(speedup > 0) = 1.0000
This was measured on Clang 13. GCC 11.2 appears to generate
worse code for Alder Lake, though the speedup on the E cores
is similar.
It is possible to run the engine specifically on the P or E using binding,
for example in linux it is possible to use (for an 8 P + 8 E setup like i9-12900K):
taskset -c 0-15 ./stockfish
taskset -c 16-23 ./stockfish
where the first call binds to the P-cores and the second to the E-cores.
closes https://github.com/official-stockfish/Stockfish/pull/3824
No functional change
Improve compatibility of the last NUMA patch when running under older versions of Windows,
for instance Windows Server 2003. Reported by user "g3g6" in the following comments:
7218ec4df9
Closes https://github.com/official-stockfish/Stockfish/pull/3821
No functional change
This patch is a result of the refinement of a tuning vondele ran after
the new net passed, plus some hand-made value adjustments - excluding
changes in other pruning heuristics and rounding the value of the history
divisor to the nearest power of 2.
With this patch futility pruning becomes more aggressive and the
history influence on it is doubled again.
passed STC
https://tests.stockfishchess.org/tests/view/61a2c4c1a26505c2278c150d
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 33848 W: 8841 L: 8574 D: 16433
Ptnml(0-2): 100, 3745, 8988, 3970, 121
passed LTC
https://tests.stockfishchess.org/tests/view/61a327ffa26505c2278c26d9
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 22272 W: 5856 L: 5614 D: 10802
Ptnml(0-2): 12, 2230, 6412, 2468, 14
closes https://github.com/official-stockfish/Stockfish/pull/3814
bench 6302543
This idea is somewhat of a respin of something we had in futility pruning that was later simplified away - making it depend not only on the static evaluation of the position but also on move history heuristics.
Instead of aborting the pruning when they are high, we now use a fraction of their sum to adjust the static-eval pruning criteria.
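A hedged sketch with plain integers (the margin slope and the history grain are assumptions):
```
// Sketch: history-adjusted futility pruning - a fraction of the summed
// history scores shifts the margin instead of switching pruning off.
bool futility_prune(int staticEval, int alpha, int lmrDepth, int historySum)
{
    return staticEval + 150 + 125 * lmrDepth + historySum / 64 <= alpha;
}
```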
passed STC
https://tests.stockfishchess.org/tests/view/619bd438c0a4ea18ba95a27d
LLR: 2.93 (-2.94,2.94) <0.00,2.50>
Total: 113704 W: 29284 L: 28870 D: 55550
Ptnml(0-2): 357, 12884, 30044, 13122, 445
passed LTC
https://tests.stockfishchess.org/tests/view/619cb8f0c0a4ea18ba95a334
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 147136 W: 37307 L: 36770 D: 73059
Ptnml(0-2): 107, 15279, 42265, 15804, 113
closes https://github.com/official-stockfish/Stockfish/pull/3805
bench 6777918
revert 9048ac00db due to a core-spread problem, and fix compatibility with the new OS with another method.
This code assumes that if one NUMA node has more than one processor group, the groups are created equal (each group is assigned an equal number of cores), and that the total number of available cores contained in such groups equals the number of available cores within one NUMA node, because of how the best_node function works.
closes https://github.com/official-stockfish/Stockfish/pull/3798
fixes https://github.com/official-stockfish/Stockfish/pull/3787
No functional change.
retire msvc support and the corresponding CI. No active development happens on msvc,
and the build is much slower or wrong.
gcc (mingw) is our toolchain of choice on windows as well, and it is tested.
No functional change
Current master implements a scaling of the raw NNUE output value with a formula
equivalent to 'eval = alpha * NNUE_output', where the scale factor alpha varies
between 1.8 (for early middle game) and 0.9 (for pure endgames). This feature
allows Stockfish to keep material on the board when she thinks she has the advantage,
and to seek exchanges and simplifications when she thinks she has to defend.
This patch slightly offsets the turning point between these two strategies, by adding
to Stockfish's evaluation a small "optimism" value before actually doing the scaling.
The effect is that SF will play a little bit more risky, trying to keep the tension a
little bit longer when she is defending, and keeping even more material on the board
when she has an advantage.
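Schematically (a sketch in floating point for clarity; the engine uses integer arithmetic, and alpha here stands for the material-dependent scale factor):
```
// Sketch: optimism is added to the raw NNUE output before scaling, so the
// turning point between "keep material" and "simplify" shifts slightly.
double hybrid_eval(double nnueOutput, double optimism, double alpha /*0.9..1.8*/)
{
    return alpha * (nnueOutput + optimism);
}
```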
We note that this patch is similar in spirit to the old "Contempt" idea we used to have
in classical Stockfish, but this implementation differs in two key points:
a) it has been tested as an Elo-gainer against master;
b) the values output by the search are not changed on average by the implementation
(in other words, the optimism value changes the tension/exchange strategy, but a
displayed value of 1.0 pawn has the same meaning before and after the patch).
See the old comment https://github.com/official-stockfish/Stockfish/pull/1361#issuecomment-359165141
for some images illustrating the ideas.
-------
finished yellow at STC:
LLR: -2.94 (-2.94,2.94) <0.00,2.50>
Total: 165048 W: 41705 L: 41611 D: 81732
Ptnml(0-2): 565, 18959, 43245, 19327, 428
https://tests.stockfishchess.org/tests/view/61942a3dcd645dc8291c876b
passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 121656 W: 30762 L: 30287 D: 60607
Ptnml(0-2): 87, 12558, 35032, 13095, 56
https://tests.stockfishchess.org/tests/view/61962c58cd645dc8291c8877
-------
How to continue from there?
a) the shape (slope and amplitude) of the sigmoid used to compute the optimism value
could be tweaked to try to gain more Elo, so the parameters of the sigmoid function
in line 391 of search.cpp could be tuned with SPSA. Manual tweaking is also possible
using this Desmos page: https://www.desmos.com/calculator/jhh83sqq92
b) in a similar vein, with two recent patches affecting the scaling of the NNUE
evaluation in evaluate.cpp, now could be a good time to try a round of SPSA tuning
of the NNUE network;
c) this patch will tend to keep tension in middlegame a little bit longer, so any
patch improving the defensive aspect of play via search extensions in risky,
tactical positions would be welcome.
-------
closes https://github.com/official-stockfish/Stockfish/pull/3797
Bench: 6184852
Starting with Windows Build 20348 the behavior of the numa API has been changed:
https://docs.microsoft.com/en-us/windows/win32/procthread/numa-support
The old code only worked because there was probably a limit on how many
cores/threads can reside within one NUMA node, and the OS creates extra NUMA
nodes when necessary; however, the actual mechanism of core binding is
done by "Processor Groups" (https://docs.microsoft.com/en-us/windows/win32/procthread/processor-groups). With a newer OS, one NUMA node can have many
such "Processor Groups" and we should just consistently use the number
of groups to bind the threads instead of deriving the topology from
the number of NUMA nodes.
This change is required to spread threads on all cores on Windows 11 with
a 3990X CPU. It has only 1 NUMA node with 2 groups of 64 threads each.
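For illustration, a hedged sketch of binding a thread by processor group (error handling omitted; the actual code differs):
```
#ifdef _WIN32
#include <windows.h>

// Sketch: assign thread 'idx' to processor groups in round-robin order,
// consulting the group count rather than the NUMA node count.
void bind_thread_to_group(int idx)
{
    WORD groupCount = GetActiveProcessorGroupCount();
    GROUP_AFFINITY affinity = {};
    affinity.Group = WORD(idx % groupCount);
    DWORD cores    = GetActiveProcessorCount(affinity.Group);
    affinity.Mask  = cores >= 64 ? ~0ULL : (1ULL << cores) - 1;
    SetThreadGroupAffinity(GetCurrentThread(), &affinity, nullptr);
}
#endif
```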
closes https://github.com/official-stockfish/Stockfish/pull/3787
No functional change.
In case the evaluation at root is large, discourage the use of lazyEval.
This fixes https://github.com/official-stockfish/Stockfish/issues/3772
or at least improves it significantly. In this case, poor play with large
odds can be observed, in extreme cases leading to a loss despite large
advantage:
r1bq1b1r/ppp3p1/3p1nkp/n3p3/2B1P2N/2NPB3/PPP2PPP/R3K2R b KQ - 5 9
With this patch the poor move is only considered up to depth 13, in master
up to depth 28.
The patch did not pass at LTC with Elo gainer bounds, but with slightly
positive Elo nevertheless (95% LOS).
STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 40368 W: 10318 L: 10041 D: 20009
Ptnml(0-2): 103, 4493, 10725, 4750, 113
https://tests.stockfishchess.org/tests/view/61800ad259e71df00dcc420d
LTC:
LLR: -2.94 (-2.94,2.94) <0.50,3.00>
Total: 212288 W: 52997 L: 52692 D: 106599
Ptnml(0-2): 112, 22038, 61549, 22323, 122
https://tests.stockfishchess.org/tests/view/618050d959e71df00dcc426d
closes https://github.com/official-stockfish/Stockfish/pull/3780
Bench: 7127040
Maintain for each root move an exponential average of the search value with a weight ratio of 2:1 (new value vs old values). The average score is then used as the center of the initial aspiration window instead of the previous score.
Stats indicate (see PR) that the deviation of the previous score is in general greater than that of the average score, so the latter seems a better estimate of the next search value. This is probably the reason this patch succeeded, besides smoothing the sometimes wild swings in search score. An additional observation is that at higher depth the previous score is above zero but the average score is below it, so with the average score more fail highs and fewer fail lows should occur than with the previous score.
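A minimal sketch of the update rule (names are illustrative):
```
// Sketch: exponential average with 2:1 weighting of new vs accumulated.
// On the first iteration there is no history, so just take the new value.
int update_average_score(int averageScore, int newValue, bool firstTime)
{
    return firstTime ? newValue : (2 * newValue + averageScore) / 3;
}
```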
STC:
LLR: 2.97 (-2.94,2.94) <0.00,2.50>
Total: 59792 W: 15106 L: 14792 D: 29894
Ptnml(0-2): 144, 6718, 15869, 7010, 155
https://tests.stockfishchess.org/tests/view/61841612d7a085ad008eef06
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 46448 W: 11835 L: 11537 D: 23076
Ptnml(0-2): 21, 4756, 13374, 5050, 23
https://tests.stockfishchess.org/tests/view/618463abd7a085ad008eef3e
closes https://github.com/official-stockfish/Stockfish/pull/3776
Bench: 6719976
Currently we handle UCI_Elo with a double randomization. This
seems unnecessary and a bit convoluted.
This patch removes the first randomization and unifies the 2 cases.
closes https://github.com/official-stockfish/Stockfish/pull/3769
No functional change.
To help with debugging, the worker sends the output of
stderr (suitably truncated) to the action log on the
server in case a build fails. For this to work it is
important that there is no spurious output to stderr.
closes https://github.com/official-stockfish/Stockfish/pull/3773
No functional change
Official release version of Stockfish 14.1
Bench: 6334068
---
Today, we have the pleasure to announce Stockfish 14.1.
As usual, downloads will be freely available at stockfishchess.org/download [1].
With Stockfish 14.1 our users get access to the strongest chess engine
available today. In the period leading up to this release, Stockfish
convincingly won several chess engine tournaments, including the TCEC 21
superfinal, the TCEC Cup 9, and the Computer Chess Championship for
Fischer Random Chess (Chess960). In the latter tournament, Stockfish
was undefeated in 599 out of 600 games played.
Compared to Stockfish 14, this release introduces a more advanced NNUE
architecture and various search improvements. In self play testing, using
a book of balanced openings, Stockfish 14.1 wins three times more game
pairs than it loses [2]. At this high level, draws are very common, so the
Elo difference to Stockfish 14 is about 17 Elo. The NNUE evaluation method,
introduced to top level chess with Stockfish 12 about one year ago [3],
has now been adopted by several other strong CPU based chess engines.
The Stockfish project builds on a thriving community of enthusiasts
(thanks everybody!) that contribute their expertise, time, and resources
to build a free and open-source chess engine that is robust,
widely available, and very strong. We invite our chess fans to join the
fishtest testing framework and programmers to contribute to the project [4].
Stay safe and enjoy chess!
The Stockfish team
[1] https://stockfishchess.org/download/
[2] https://tests.stockfishchess.org/tests/view/6175c320af70c2be1788fa2b
[3] https://github.com/official-stockfish/Stockfish/discussions/3628
[4] https://stockfishchess.org/get-involved/
Idea of this patch is the following: in case we already have four moves that
exceeded alpha in the current node, the probability of finding a fifth should
be reasonably low. Note that four is completely arbitrary - there could and
probably should be some tweaks, both to the best-move-count threshold
for more reductions and to how they work - for example, reducing more
linearly with the best move count; a sketch follows.
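A hedged sketch of the mechanism (the threshold of 4 comes from the description above; the increment is illustrative):
```
// Sketch: reduce remaining moves more once several moves have already
// exceeded alpha in this node.
int reduction_after_best_moves(int baseReduction, int bestMoveCount)
{
    return baseReduction + (bestMoveCount >= 4 ? 1 : 0);
}
```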
passed STC:
https://tests.stockfishchess.org/tests/view/615f614783dd501a05b0aee2
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 141816 W: 36056 L: 35686 D: 70074
Ptnml(0-2): 499, 15131, 39273, 15511, 494
passed LTC:
https://tests.stockfishchess.org/tests/view/615fdff683dd501a05b0af35
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 68536 W: 17221 L: 16891 D: 34424
Ptnml(0-2): 38, 6573, 20725, 6885, 47
closes https://github.com/official-stockfish/Stockfish/pull/3736
Bench: 6131513
This patch updates the stat_bonus() function (used in the history tables to
help move ordering), keeping the same quadratic for small depths but changing
the values for depth >= 9:
The old bonus formula increased from zero at depth 1 to 4100 at depth 14,
then used the strange, small value of 73 for all depths >= 15.
The new bonus formula increases from 0 at depth 1 to 2000 at depth 8, then
keeps 2000 for all depths >= 8; a sketch follows.
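A sketch of the new shape (the quadratic coefficients are illustrative; the cap and the crossover at depth 8 follow the description above):
```
#include <algorithm>

// Sketch: quadratic growth at low depth, clamped to 2000 from depth 8 on.
int stat_bonus(int depth)
{
    return std::min((6 * depth + 229) * depth - 215, 2000);
}
```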
passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 169624 W: 42875 L: 42454 D: 84295
Ptnml(0-2): 585, 19340, 44557, 19729, 601
https://tests.stockfishchess.org/tests/view/615bd69e9d256038a969b97c
passed LTC:
LLR: 3.07 (-2.94,2.94) <0.50,3.50>
Total: 37336 W: 9456 L: 9191 D: 18689
Ptnml(0-2): 20, 3810, 10747, 4067, 24
https://tests.stockfishchess.org/tests/view/615c75d99d256038a969b9b2
closes https://github.com/official-stockfish/Stockfish/pull/3731
Bench: 6261865
When playing games in MultiPV mode we must take care to only track the
best move changing for the first PV line. Otherwise, SF will spend most
of its time on the initial moves after the book exit.
This has been observed and reported on Discord, but can also be seen in
games played in Stefan Pohl's MultiPV experiment.
Tested with MultiPV=4.
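A minimal sketch of the guard (identifiers are illustrative):
```
#include <cstddef>

// Sketch: in MultiPV mode, only the first PV line contributes to the
// best-move-change statistic used by time management.
void on_best_move_change(std::size_t pvIdx, double& bestMoveChanges)
{
    if (pvIdx == 0)
        ++bestMoveChanges;
}
```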
STC:
https://tests.stockfishchess.org/tests/view/615c24b59d256038a969b990
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 1744 W: 694 L: 447 D: 603
Ptnml(0-2): 32, 125, 358, 278, 79
LTC:
https://tests.stockfishchess.org/tests/view/615c31769d256038a969b993
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 2048 W: 723 L: 525 D: 800
Ptnml(0-2): 10, 158, 511, 314, 31
closes https://github.com/official-stockfish/Stockfish/pull/3729
Bench: 5714575
Respin of the multi-thread idea that was simplified away recently: basically doing
more reductions with higher thread counts, since Lazy SMP naturally widens the search.
With a drawish book this idea got simplified away, but with a less drawish book it
again gains Elo; maybe trying to reinstate other ideas that were simplified away
previously can be beneficial.
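A hedged sketch (the threshold and increment are assumptions):
```
// Sketch: reduce a bit more when many threads are searching, since
// Lazy SMP already widens the effective search.
int reduction_with_threads(int baseReduction, int threadCount)
{
    return baseReduction + (threadCount > 4 ? 1 : 0);
}
```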
passed STC
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 39736 W: 10205 L: 9986 D: 19545
Ptnml(0-2): 45, 4254, 11064, 4447, 58
https://tests.stockfishchess.org/tests/view/615750702d02f48db3961b00
passed LTC
LLR: 2.97 (-2.94,2.94) <0.50,3.50>
Total: 60352 W: 15530 L: 15218 D: 29604
Ptnml(0-2): 24, 5900, 18016, 6212, 24
https://tests.stockfishchess.org/tests/view/6157d8935488e26ea5eace7f
closes https://github.com/official-stockfish/Stockfish/pull/3724
Bench 5714575
Idea is to extend some quiet ttMoves if a lot of things indicate that
the transposition table move is going to be a good move:
1) the move being a killer - i.e. the best move in a nearby node;
2) the reply continuation history being really good.
This is basically saying that the move is good "in general" in this position,
that it is a good reply to the opponent's move, and that it was the best in
this position somewhere in the search - so extending it makes a lot of sense.
In general in the past year we had a lot of extensions of different types;
maybe there is something more in it :)
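A hedged sketch combining the two signals (the history threshold is an assumption):
```
// Sketch: extend a quiet ttMove one ply when it is a killer and its
// reply continuation history is very good.
int quiet_tt_move_extension(bool quietTtMove, bool isKiller, int replyContHist)
{
    return (quietTtMove && isKiller && replyContHist >= 20000) ? 1 : 0;
}
```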
passed STC
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 42944 W: 10932 L: 10695 D: 21317
Ptnml(0-2): 141, 4869, 11210, 5116, 136
https://tests.stockfishchess.org/tests/view/614cca8e7bdc23e77ceb89f0
passed LTC
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 156848 W: 39473 L: 38893 D: 78482
Ptnml(0-2): 125, 16327, 44913, 16961, 98
https://tests.stockfishchess.org/tests/view/614cf93d7bdc23e77ceb8a13
closes https://github.com/official-stockfish/Stockfish/pull/3719
Bench: 5714575
In master, during singular move analysis, when both the transposition value
and a reduced search for the other moves seem to indicate a fail high, we
heuristically prune the whole subtree and return a fail-high score.
This patch is a little bit more cautious in this case: instead of the
risky cutoff, we now search the ttMove with a reduced depth (by two plies).
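Schematically (a sketch; names are illustrative and the surrounding singular-extension logic is omitted):
```
// Sketch: when both signals point to a fail high, re-search the ttMove
// two plies shallower instead of returning a heuristic cutoff.
// Returns the re-search depth, or -1 when the case does not apply.
int fail_high_research_depth(int depth, int singularValue, int ttValue, int beta)
{
    if (singularValue >= beta && ttValue >= beta)
        return depth - 2;
    return -1;
}
```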
STC:
https://tests.stockfishchess.org/tests/view/614dafe07bdc23e77ceb8a89
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 46728 W: 11909 L: 11666 D: 23153
Ptnml(0-2): 181, 5288, 12168, 5561, 166
LTC:
https://tests.stockfishchess.org/tests/view/614dc84abe4c07e0ecac3c95
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 74520 W: 18809 L: 18450 D: 37261
Ptnml(0-2): 45, 7735, 21346, 8084, 50
closes https://github.com/official-stockfish/Stockfish/pull/3718
Bench: 5499262
This patch relaxes a little bit the condition for doubly singular moves
(i.e. moves that are so forced that we think they deserve a local
double extension of the search). We lower the margin and allow up to
six such double extensions in the path between the root and the critical
node.
Original idea by Siad Daboul (@TopoIogist) in PR #3709
Tested with the previous commit:
passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 33048 W: 8458 L: 8236 D: 16354
Ptnml(0-2): 120, 3701, 8660, 3923, 120
https://tests.stockfishchess.org/tests/view/614b24347bdc23e77ceb88fe
passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 54176 W: 13712 L: 13406 D: 27058
Ptnml(0-2): 36, 5653, 15399, 5969, 31
https://tests.stockfishchess.org/tests/view/614b3b727bdc23e77ceb8911
closes https://github.com/official-stockfish/Stockfish/pull/3714
Bench: 5792377
This patch detects some search explosions (due to double extensions in
search.cpp) which can happen in some pathological positions, and takes
measures to ensure progress in search even for these pathological situations.
While a small number of double extensions can be useful during search
(for example to resolve a tactical sequence), a sustained regime of
double extensions leads to search explosion and a non-finishing search.
See the discussion in https://github.com/official-stockfish/Stockfish/pull/3544
and the issue https://github.com/official-stockfish/Stockfish/issues/3532 .
The implemented algorithm is the following:
a) at each node during search, store the current depth in the stack.
Double extensions are by definition levels of the stack where the
depth at ply N is strictly higher than depth at ply N-1.
b) during search, calculate for each thread a running average of the
number of double extensions in the last 4096 visited nodes.
c) if one thread has more than 2% of double extensions for a sustained
period of time (6 million consecutive nodes, or about 4 seconds on
my iMac), we decide that this thread is in an explosion state and
we calm it down by preventing it from doing any double extension
for the next 6 million nodes.
To calculate the running averages, we also introduce an auxiliary class
generalizing the computation of the ttHitAverage variable we already had in
the code. The implementation uses an exponential moving average of period 4096
and resolution 1/1024, and all computations are done with integers for
efficiency; a sketch of the idea follows.
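A hedged sketch of such a class (member names and the exact integer update differ from the real one):
```
#include <cstdint>

// Sketch: integer exponential moving average, period 4096, resolution
// 1/1024. Feed it 0/1 samples ("did this node double-extend?").
class RunningAverage {
    static constexpr std::int64_t Period = 4096;
    static constexpr std::int64_t Resolution = 1024;
    std::int64_t average = 0;    // stored in units of 1/Resolution

public:
    void update(std::int64_t sample)    // sample is 0 or 1
    { average += (sample * Resolution - average) / Period; }

    // true if the average exceeds num/den (e.g. 2/100 for the 2% check)
    bool is_greater(std::int64_t num, std::int64_t den) const
    { return average * den > num * Resolution; }
};
```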
-----------
Example where the patch solves a search explosion:
```
./stockfish
ucinewgame
position fen 8/Pk6/8/1p6/8/P1K5/8/6B1 w - - 37 130
go infinite
```
This algorithm does not affect search in normal, non-pathological positions.
We verified, for instance, that the usual bench is unchanged up to depth 20
at least, and that the node numbers are unchanged for a search of the starting
position at depth 32.
-------------
See https://github.com/official-stockfish/Stockfish/pull/3714
Bench: 5575265
This patch introduces an extension for captures and promotions. Every capture or
promotion that is not the first move in the list gets extended at PV nodes and
cut nodes. Special thanks to @locutus2 - all my previous attempts at
this idea that failed were done only for PV nodes; the idea to also include
cut nodes was based on his latest passed patch.
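A minimal sketch of the condition (identifiers are illustrative):
```
// Sketch: extend non-first captures/promotions at PV and cut nodes.
int capture_extension(bool captureOrPromotion, int moveCount,
                      bool pvNode, bool cutNode)
{
    return (captureOrPromotion && moveCount > 1 && (pvNode || cutNode)) ? 1 : 0;
}
```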
STC
https://tests.stockfishchess.org/tests/view/6134abf325b9b35584838574
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 188920 W: 47754 L: 47304 D: 93862
Ptnml(0-2): 595, 21754, 49344, 22140, 627
LTC
https://tests.stockfishchess.org/tests/view/613521de25b9b355848385d7
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 8768 W: 2283 L: 2098 D: 4387
Ptnml(0-2): 7, 866, 2452, 1053, 6
closes https://github.com/official-stockfish/Stockfish/pull/3692
bench: 5564555
The new network caused some issues initially due to the very narrow neuron set between the first two FC layers. Necessary changes were hacked together to make it work. This patch is a mature approach to make the affine transform code faster, more readable, and easier to maintain should the layer sizes change again.
The following changes were made:
* ClippedReLU always produces a multiple of 32 outputs. This is about as good of a solution for AffineTransform's SIMD requirements as it can get without a bigger rewrite.
* All self-contained simd helpers are moved to a separate file (simd.h). Inline asm is utilized to work around GCC's issues with code generation and register assignment. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101693, https://godbolt.org/z/da76fY1n7
* AffineTransform has 2 specializations. While it's more lines of code due to the boilerplate, the logic in both is significantly reduced, as these two are impossible to nicely combine into one.
1) The first specialization is for cases when there's >=128 inputs. It uses a different approach to perform the affine transform and can make full use of AVX512 without any edge cases. Furthermore, it has higher theoretical throughput because less loads are needed in the hot path, requiring only a fixed amount of instructions for horizontal additions at the end, which are amortized by the large number of inputs.
2) The second specialization is made to handle smaller layers where performance is still necessary but edge cases need to be handled. The AVX512 implementation for this was omitted by mistake, a remnant from the temporary implementation for the new... This could be easily reintroduced if needed. A slightly more detailed description of both implementations is in the code.
Overall it should be a minor speedup, as shown on fishtest:
passed STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 51520 W: 4074 L: 3888 D: 43558
Ptnml(0-2): 111, 3136, 19097, 3288, 128
and various tests shown in the pull request
closes https://github.com/official-stockfish/Stockfish/pull/3663
No functional change
Introduces a new NNUE network architecture and associated network parameters
The summary of the changes:
* Position for each perspective mirrored such that the king is on e..h files. Cuts the feature transformer size in half, while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is possibly mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8, to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq
The AffineTransform code did not work out-of-the-box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Also, additional 0 padding is added to the output for some archs that cannot process inputs by <=8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in the registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (and perhaps this is why the current way the affine transform is done even passed SPRT) (see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details) and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.
The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143
The training utilized 2 datasets.
dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
dataset B - as described in ba01f4b954
The training process was as following:
train on dataset A for 350 epochs, take the best net in terms of elo at 20k nodes per move (it's fine to take anything from later stages of training).
convert the .ckpt to .pt
--resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.
The first training command:
python3 train.py \
../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
--gpus "$3," \
--threads 1 \
--num-workers 1 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--smart-fen-skipping \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--max_epochs=600 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
The second training command:
python3 serialize.py \
--features=HalfKAv2_hm^ \
../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
../nnue-pytorch-training/experiment_$1/base/base.pt
python3 train.py \
../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
--gpus "$3," \
--threads 1 \
--num-workers 1 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--smart-fen-skipping \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=0.8 \
--max_epochs=600 \
--resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4
LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128
LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6
closes https://github.com/official-stockfish/Stockfish/pull/3646
bench: 5189338
This patch improves the codegen in the AffineTransform::forward function for architectures >=SSSE3. Current code works directly on memory and the compiler cannot see that the stores through outptr do not alias the loads through weights and input32. The solution implemented is to perform the affine transform with local variables as accumulators and only store the result to memory at the end. The number of accumulators required is OutputDimensions / OutputSimdWidth, which means that for the 1024->16 affine transform it requires 4 registers with SSSE3, 2 with AVX2, 1 with AVX512. It also cuts the number of stores required by NumRegs * 256 for each node evaluated. The local accumulators are expected to be assigned to registers, but even if this cannot be done in some case due to register pressure it will help the compiler to see that there is no aliasing between the loads and stores and may still result in better codegen.
See https://godbolt.org/z/59aTKbbYc for codegen comparison.
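A scalar illustration of the pattern (a sketch; the real code keeps NumRegs SIMD accumulators rather than one int):
```
#include <cstdint>

// Sketch: accumulate in a local so the compiler can prove the output
// stores do not alias the weight/input loads; store once per output.
void affine_forward(const std::int8_t* input, const std::int8_t* weights,
                    const std::int32_t* biases, std::int32_t* output,
                    int inDims, int outDims)
{
    for (int o = 0; o < outDims; ++o)
    {
        std::int32_t acc = biases[o];              // local accumulator
        for (int i = 0; i < inDims; ++i)
            acc += std::int32_t(weights[o * inDims + i]) * input[i];
        output[o] = acc;                           // single store at the end
    }
}
```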
passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140328 W: 10635 L: 10358 D: 119335
Ptnml(0-2): 302, 8339, 52636, 8554, 333
closes https://github.com/official-stockfish/Stockfish/pull/3634
No functional change
combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGod), based on top of previous developments, with restarts from good nets.
Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:
The initial net nn-d8609abe8caf.nnue is trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then trained, continuing from the master net, with lambda=0.2 and sampling ratio of 1. Starting with LR=2e-3, dropping LR with a factor of 0.5 until it reaches LR=5e-4. in_scaling is set to 361. No other significant changes made to the pytorch trainer.
Training data gen command (generates in chunks of 200k positions):
generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack
PyTorch trainer command (Note that this only trains for 20 epochs, repeatedly train until convergence):
python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val
See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.
Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:
The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch
python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt
This run was thus started from Sergio Vieri's net nn-d8609abe8caf.nnue
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack, so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.
model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5 calculated as follows: 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro optimization, I had set the period to "5" in train.py. This changes the checkpoint output so that every 5th checkpoint file is created
The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.
passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17
passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100
closes https://github.com/official-stockfish/Stockfish/pull/3626
Bench: 5169957
The main idea is that illegal moves influencing search or
qsearch obviously can't be any sort of good. The only reason
the legality checks for search and qsearch were initially done
after the move could already influence some heuristics is that
the legality check is computationally expensive. Eventually in
search it was moved to a place that ensures
illegal moves can't influence the search.
This patch shows that the same can be done for qsearch + it
passed STC with elo-gaining bounds + it removes 3 lines of code
because one no longer needs to increment/decrement the movecount
on illegal moves; a sketch follows.
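A minimal sketch of the reordering with plain types (the real loop lives in qsearch and uses Stockfish's MovePicker and Position):
```
#include <vector>

// Sketch: legality is tested before anything else, so illegal moves can
// no longer touch moveCount or any other heuristic state.
int move_loop(const std::vector<int>& moves, bool (*legal)(int))
{
    int moveCount = 0;
    for (int move : moves)
    {
        if (!legal(move))
            continue;              // nothing was updated for this move
        ++moveCount;
        // ... pruning heuristics and recursive search go here ...
    }
    return moveCount;
}
```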
passed STC with elo-gaining bounds
https://tests.stockfishchess.org/tests/view/60f20aefd1189bed71812da0
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 61512 W: 4688 L: 4492 D: 52332
Ptnml(0-2): 139, 3730, 22848, 3874, 165
The same version functionally, but with the condition moved even earlier,
passed LTC with simplification bounds.
https://tests.stockfishchess.org/tests/view/60f292cad1189bed71812de9
LLR: 2.98 (-2.94,2.94) <-2.50,0.50>
Total: 60944 W: 1724 L: 1685 D: 57535
Ptnml(0-2): 11, 1556, 27298, 1597, 10
closes https://github.com/official-stockfish/Stockfish/pull/3618
bench 4709569
Not all linux users will have libatomic installed.
When using clang as the system compiler with compiler-rt as the default
runtime library instead of libgcc, atomic builtins may be provided by compiler-rt.
This change allows such users to pass RTLIB=compiler-rt to make sure
the build doesn't error out on the missing (unnecessary) libatomic.
closes https://github.com/official-stockfish/Stockfish/pull/3597
No functional change
Official release version of Stockfish 14
Bench: 4770936
---
Today, we have the pleasure to announce Stockfish 14.
As usual, downloads will be freely available at https://stockfishchess.org
The engine is now significantly stronger than just a few months ago,
and wins four times more game pairs than it loses against the previous
release version [0]. Stockfish 14 is now at least 400 Elo ahead of
Stockfish 7, a top engine in 2016 [1]. During the last five years,
Stockfish has thus gained about 80 Elo per year.
Stockfish 14 evaluates positions more accurately than Stockfish 13 as
a result of two major steps forward in defining and training the
efficiently updatable neural network (NNUE) that provides the evaluation
for positions.
First, the collaboration with the Leela Chess Zero team - announced
previously [2] - has come to fruition. The LCZero team has provided a
collection of billions of positions evaluated by Leela that we have
combined with billions of positions evaluated by Stockfish to train the
NNUE net that powers Stockfish 14. The fact that we could use and combine
these datasets freely was essential for the progress made and demonstrates
the power of open source and open data [3].
Second, the architecture of the NNUE network was significantly updated:
the new network is not only larger, but more importantly, it deals better
with large material imbalances and can specialize for multiple phases of
the game [4]. A new project, kick-started by Gary Linscott and
Tomasz Sobczyk, led to a GPU accelerated net trainer written in
pytorch.[5] This tool allows for training high-quality nets in a couple
of hours.
Finally, this release features some search refinements, minor bug
fixes and additional improvements. For example, Stockfish is now about
90 Elo stronger for chess960 (Fischer random chess) at short time control.
The Stockfish project builds on a thriving community of enthusiasts
(thanks everybody!) that contribute their expertise, time, and resources
to build a free and open-source chess engine that is robust, widely
available, and very strong. We invite our chess fans to join the fishtest
testing framework and programmers to contribute to the project on
github [6].
Stay safe and enjoy chess!
The Stockfish team
[0] https://tests.stockfishchess.org/tests/view/60dae5363beab81350aca077
[1] https://nextchessmove.com/dev-builds
[2] https://stockfishchess.org/blog/2021/stockfish-13/
[3] https://lczero.org/blog/2021/06/the-importance-of-open-data/
[4] https://github.com/official-stockfish/Stockfish/commit/e8d64af1
[5] https://github.com/glinscott/nnue-pytorch/
[6] https://stockfishchess.org/get-involved/
In the so-called "hybrid" method of evaluation of current master, we use the
classical eval (because of its speed) instead of the NNUE eval when the classical
material balance approximation hints that the position is "winning enough" to
rely on the classical eval.
This trade-off idea between speed and accuracy works well in general, but in
some fortress positions the classical eval is just bad. So in shuffling branches
of the search tree, we (slowly) increase the threshold so that eventually we
don't trust classical anymore and switch to NNUE evaluation.
This patch increases that threshold faster, so that we switch to NNUE quicker
in shuffling branches. The idea is to incite Stockfish to spend less time in
fortress lines in the search tree, and more time searching the critical lines;
a sketch follows.
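A hedged sketch of the mechanism (the constants and the exact material measure are assumptions):
```
#include <cstdlib>

// Sketch: classical eval is used only when the material balance exceeds a
// threshold that now grows faster with the rule-50 counter, so shuffling
// branches fall back to NNUE sooner.
bool use_classical_eval(int materialBalance, int rule50Count)
{
    int threshold = 1400 + 20 * rule50Count;   // grows faster than before
    return std::abs(materialBalance) > threshold;
}
```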
passed STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 47872 W: 3908 L: 3720 D: 40244
Ptnml(0-2): 122, 3053, 17419, 3199, 143
https://tests.stockfishchess.org/tests/view/60cef34b457376eb8bcab79d
passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 73616 W: 2326 L: 2143 D: 69147
Ptnml(0-2): 21, 1940, 32705, 2119, 23
https://tests.stockfishchess.org/tests/view/60cf6d842114332881e73528
Retested at LTC against latest master:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 18264 W: 642 L: 532 D: 17090
Ptnml(0-2): 6, 479, 8055, 583, 9
https://tests.stockfishchess.org/tests/view/60d18cd540925195e7a6c351
closes https://github.com/official-stockfish/Stockfish/pull/3578
Bench: 5139233
This patch removes the UCI option for setting Contempt in classical evaluation.
It is exactly equivalent to using Contempt=0 for the UCI contempt value and keeping
the dynamic part in the algo (renaming this dynamic part `trend` to better describe
what it does). We have tried quite hard to implement a working Contempt feature for
NNUE but nothing really worked, so it is probably time to give up.
Interested chess fans wishing to keep playing with the UCI option for Contempt and
use it with the classical eval are urged to download the version tagged "SF_Classical"
of Stockfish (dated 31 July 2020), as it was the last version where our search
algorithm was tuned for the classical eval and is probably our strongest classical
player ever: https://github.com/official-stockfish/Stockfish/tags
Passed STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 72904 W: 6228 L: 6175 D: 60501
Ptnml(0-2): 221, 5006, 25971, 5007, 247
https://tests.stockfishchess.org/tests/view/60c98bf9457376eb8bcab18d
Passed LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 45168 W: 1601 L: 1547 D: 42020
Ptnml(0-2): 38, 1331, 19786, 1397, 32
https://tests.stockfishchess.org/tests/view/60c9c7fa457376eb8bcab1bb
closes https://github.com/official-stockfish/Stockfish/pull/3575
Bench: 4947716
This patch increases the weight of pawns and pieces from 28 to 32
in the scaling formula we apply to the output of the NNUE pure eval.
Increasing this gradient for pawns and pieces means that Stockfish
will try a little harder to keep material when she has the advantage,
and try a little bit harder to escape into an endgame when she is
under pressure.
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 53168 W: 4371 L: 4177 D: 44620
Ptnml(0-2): 160, 3389, 19283, 3601, 151
https://tests.stockfishchess.org/tests/view/60cefd1d457376eb8bcab7ab
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 10888 W: 386 L: 288 D: 10214
Ptnml(0-2): 3, 260, 4821, 356, 4
https://tests.stockfishchess.org/tests/view/60cf709d2114332881e7352b
closes https://github.com/official-stockfish/Stockfish/pull/3571
Bench: 4965430
trained with the Python command
c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_10 --resume-from-model ./pt/nn-3b20abec10c1.pt
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack, so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.
Net nn-3b20abec10c1.nnue was chosen as the --resume-from-model with the idea that through learning, the manually hex edited values will be learned and will not need to be manually adjusted going forward. They would also be fine tuned by the learning process.
passed STC:
https://tests.stockfishchess.org/tests/view/60cdf91e457376eb8bcab66f
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18256 W: 1639 L: 1479 D: 15138
Ptnml(0-2): 59, 1179, 6505, 1313, 72
passed LTC:
https://tests.stockfishchess.org/tests/view/60ce2166457376eb8bcab6e1
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 18792 W: 654 L: 542 D: 17596
Ptnml(0-2): 9, 490, 8291, 592, 14
closes https://github.com/official-stockfish/Stockfish/pull/3570
Bench: 5020972
The Cygwin environment has two g++ compilers, each with a different problem
for compiling Stockfish at the moment:
(a) g++.exe : full posix build compiler, linked to cygwin dll.
=> This one has a problem embedding the net.
(b) x86_64-w64-mingw32-g++.exe : native Windows build compiler.
=> This one manages to embed the net, but has a problem related to libgcov
when we use the profile-build target of Stockfish.
This patch solves the problem for compiler (b), so that our recommended command line
if you want to build an optimized version of Stockfish on Cygwin becomes something
like the following (you can change the ARCH value to whatever you want, but note
the COMP and CXX variables pointing at the right compiler):
```
make -j profile-build ARCH=x86-64-modern COMP=mingw CXX=x86_64-w64-mingw32-c++.exe
```
closes https://github.com/official-stockfish/Stockfish/pull/3569
No functional change
Move to GitHub Actions to replace Travis CI.
First version, testing on Linux using gcc and clang.
The gcc build runs with sanitizers and valgrind.
No functional change
Optimization of vondele's nn-33c9d39e5eb6.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60ca68be457376eb8bcab28b
Settings: the ck values were left at their defaults, which are based on how large the parameters are
The new values for this net are the raw values at the end of the tuning (80k games)
The significant changes are in buckets 1 and 2 (5-12 pieces) so the main difference is in playing endgames if we compare it to nn-33c9. There is also change in bucket 7 (29-32 pieces) but not as substantial as the changes in buckets 1 and 2. If we interpret the changes based on an experiment a few months ago, this new net plays more optimistically during endgames and less optimistically during openings.
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 49504 W: 4246 L: 4053 D: 41205
Ptnml(0-2): 140, 3282, 17749, 3407, 174
https://tests.stockfishchess.org/tests/view/60cbd752457376eb8bcab478
LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 88720 W: 4926 L: 4651 D: 79143
Ptnml(0-2): 105, 4048, 35793, 4295, 119
https://tests.stockfishchess.org/tests/view/60cc7828457376eb8bcab4fa
closes https://github.com/official-stockfish/Stockfish/pull/3566
Bench: 4758885
This net was created by @pleomati, who used a hex editor to manually edit
10 randomly chosen values in the LCSFNet10 net (nn-6ad41a9207d0.nnue) to
create this one. The LCSFNet10 net was trained by Joost VandeVondele from
a dataset combining Stockfish games and Leela games (16x10^9 positions from
SF self-play at depth 9, and 6.3x10^9 positions from Leela games, so overall
72% of Stockfish positions and 28% of Leela positions).
passed STC 10+0.1:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 50888 W: 5881 L: 5654 D: 39353
Ptnml(0-2): 281, 4290, 16085, 4497, 291
https://tests.stockfishchess.org/tests/view/60cbfa68457376eb8bcab49a
passed LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 25480 W: 1498 L: 1338 D: 22644
Ptnml(0-2): 36, 1155, 10193, 1325, 31
https://tests.stockfishchess.org/tests/view/60cc4af8457376eb8bcab4d4
closes https://github.com/official-stockfish/Stockfish/pull/3564
Bench: 4904930
This reverts commit "Fix for Cygwin's environment build-profile", as it was
giving errors for "make clean" on some Windows environments. See comments in
68bf362ea2
Possibly somebody can propose a solution that fixes Cygwin builds without
breaking other systems too, stay tuned! :-)
No functional change
The Cygwin environment has two g++ compilers, each with a different problem
for compiling Stockfish at the moment:
(a) g++.exe : full posix build compiler, linked to cygwin dll.
=> This one has a problem embedding the net.
(b) x86_64-w64-mingw32-g++.exe : native Windows build compiler.
=> This one manages to embed the net, but has a problem related to libgcov
when we use the profile-build target of Stockfish.
This patch solves the problem for compiler (b), so that our recommended command line
if you want to build an optimized version of Stockfish on Cygwin becomes something
like the following (you can change the ARCH value to whatever you want, but note
the COMP and CXX variables pointing at the right compiler):
```
make -j profile-build ARCH=x86-64-modern COMP=mingw CXX=x86_64-w64-mingw32-c++.exe
```
closes https://github.com/official-stockfish/Stockfish/pull/3463
No functional change
This patch fixes the ranking of a root move leading to a 3-fold repetition.
With this small fix a draw ranking, and thus a draw score, is applied.
This works both for ranking by DTZ and by WDL tables.
Fixes https://github.com/official-stockfish/Stockfish/issues/3542
(No functional change without TBs.)
Bench: 4877339
This net is the result of training on data used by the Leela project. More precisely,
we shuffled T60 and T74 data kindly provided by borg (for different Tnn, the data is
a result of Leela selfplay with differently sized Leela nets).
The data is available at vondele's google drive:
https://drive.google.com/drive/folders/1mftuzYdl9o6tBaceR3d_VBQIrgKJsFpl.
The Leela data comes in small chunks of .binpack files. To shuffle them, we simply
used a small python script to randomly rename the files, and then concatenated them
using `cat`. As validation data we picked a file of T60 data. We will further investigate
T74 data.
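A sketch of that shuffling step (hypothetical, not the actual script used; assumes the chunks sit in the current directory):
```
#include <cstdio>
#include <filesystem>
#include <random>
#include <vector>

namespace fs = std::filesystem;

// Rename every .binpack chunk to a random hex name so that a subsequent
// `cat *.binpack > shuffled.binpack` concatenates the chunks in random
// order (the shell expands the glob in lexical order of the new names).
int main() {
    std::mt19937_64 rng(std::random_device{}());
    std::vector<fs::path> chunks;
    for (const auto& entry : fs::directory_iterator("."))
        if (entry.path().extension() == ".binpack")
            chunks.push_back(entry.path());
    for (const auto& p : chunks) {
        char name[32];
        std::snprintf(name, sizeof name, "%016llx.binpack",
                      static_cast<unsigned long long>(rng()));
        fs::rename(p, p.parent_path() / name);
    }
}
```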
The training for the NNUE architecture used 200 epochs with the Python trainer from
the Stockfish project. Unlike the previous run we tried with this data, this run does
not have adjusted scaling (not because we didn't want to, but because we forgot).
However, this training randomly skips 40% more positions than the previous run. The loss
was very spiky and decreased more slowly than usual.
Training loss: https://github.com/official-stockfish/images/blob/main/training-loss-8e47cf062333.png
Validation loss: https://github.com/official-stockfish/images/blob/main/validation-loss-8e47cf062333.png
This is the exact training command:
python train.py --smart-fen-skipping --random-fen-skipping 14 --batch-size 16384 --threads 4 --num-workers 4 --gpus 1 trainingdata\training_data.binpack validationdata\val.binpack
---
10k STC result:
ELO: 3.61 +-3.3 (95%) LOS: 98.4%
Total: 10000 W: 1241 L: 1137 D: 7622
Ptnml(0-2): 68, 841, 3086, 929, 76
https://tests.stockfishchess.org/tests/view/60c67e50457376eb8bcaae70
10k LTC result:
ELO: 2.71 +-2.4 (95%) LOS: 98.8%
Total: 10000 W: 659 L: 581 D: 8760
Ptnml(0-2): 22, 485, 3900, 579, 14
https://tests.stockfishchess.org/tests/view/60c69deb457376eb8bcaae98
Passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9648 W: 685 L: 545 D: 8418
Ptnml(0-2): 22, 448, 3740, 596, 18
https://tests.stockfishchess.org/tests/view/60c6d41c457376eb8bcaaecf
---
closes https://github.com/official-stockfish/Stockfish/pull/3550
Bench: 4877339
Compute optimal register count for feature transformer accumulation dynamically.
This also introduces a change where AVX512 would only use 8 registers instead of 16
(now possible due to a 2x increase in feature transformer size).
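A sketch of how such a computation can look (the signature and parameters are assumptions, not the exact source):
```
// Choose the number of SIMD registers for one accumulation pass: take the
// ideal count needed to hold a full chunk if it fits the register file,
// otherwise the largest divisor of the ideal count that does fit.
constexpr int best_register_count(int registerWidthBytes, int laneSizeBytes,
                                  int numLanes, int maxRegisters) {
    const int ideal = numLanes * laneSizeBytes / registerWidthBytes;
    if (ideal <= maxRegisters)
        return ideal;
    for (int count = maxRegisters; count > 1; --count)
        if (ideal % count == 0)
            return count;
    return 1;
}

// E.g. a 256-lane int16 tile with 64-byte AVX-512 registers needs just
// 8 registers, matching the AVX512 change mentioned above.
static_assert(best_register_count(64, 2, 256, 32) == 8, "illustrative check");
```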
closes https://github.com/official-stockfish/Stockfish/pull/3543
No functional change
This patch prevents LMR extensions (of non-transposition-table moves) from being
used when the transposition-table move was extended by two plies via singular
extension. This may serve to limit search explosions in certain positions.
This makes a lot of sense because the precondition for the tt-move to have been
singularly extended by two plies is that the result of the alternate search (with
the tt-move excluded) was a hard fail low: it is natural to later search less
for non-tt-moves in this situation.
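As a hedged sketch of the condition (names assumed, not the actual search.cpp variables):
```
// Allow the LMR extension for a move only if it is the tt-move itself, or
// if the tt-move was not already singularly extended by two plies here:
// a double singular extension implies the excluded search failed hard low,
// so the remaining moves are probably worse and need no extra depth.
bool allow_lmr_extension(bool isTtMove, int ttMoveExtension) {
    return isTtMove || ttMoveExtension < 2;
}
```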
The current state of depth/extensions/reductions management is getting quite tricky
in our search algo, see https://github.com/official-stockfish/Stockfish/pull/3546#issuecomment-860174549
for some discussion. Suggestions welcome!
Passed STC
https://tests.stockfishchess.org/tests/view/60c3f293457376eb8bcaac8d
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 117984 W: 9698 L: 9430 D: 98856
Ptnml(0-2): 315, 7708, 42703, 7926, 340
passed LTC
https://tests.stockfishchess.org/tests/view/60c46ea5457376eb8bcaacc7
LLR: 2.97 (-2.94,2.94) <0.50,3.50>
Total: 11280 W: 401 L: 302 D: 10577
Ptnml(0-2): 2, 271, 4998, 364, 5
closes https://github.com/official-stockfish/Stockfish/pull/3546
Bench: 4709974
Load feature transformer weights in bulk on little-endian machines.
This is in particular useful to test new nets with c-chess-cli,
see https://github.com/lucasart/c-chess-cli/issues/44
```
$ time ./stockfish.exe uci
Before : 0m0.914s
After : 0m0.483s
```
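The gist of the change, as a hedged sketch rather than the exact routine:
```
#include <cstdint>
#include <istream>

// On a little-endian host the on-disk layout of the int16 weights already
// matches memory, so the whole block can be read in one call instead of
// value by value; big-endian hosts still need the per-value path with
// byte swapping.
bool read_weights_bulk(std::istream& stream, std::int16_t* weights,
                       std::size_t count) {
    stream.read(reinterpret_cast<char*>(weights),
                static_cast<std::streamsize>(count * sizeof(std::int16_t)));
    return bool(stream);
}
```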
No functional change
Cleaner vector code structure in feature transformer. This patch just
regroups the parts of the inner loop for each SIMD instruction set.
Tested for non-regression:
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 115760 W: 9835 L: 9831 D: 96094
Ptnml(0-2): 326, 7776, 41715, 7694, 369
https://tests.stockfishchess.org/tests/view/60b96b39457376eb8bcaa26e
It would be nice if a future patch could use some of the macros at
the top of the file to unify the code between the distinct SIMD
instruction sets (of course, unifying the ReLU will be the challenge).
closes https://github.com/official-stockfish/Stockfish/pull/3506
No functional change
This simplification patch implements two changes:
1. it simplifies away the so-called "lazy" path in the NNUE evaluation internals,
where we trusted the psqt head alone to avoid the costly "positional" head in
some cases;
2. it raises NNUEThreshold1 in evaluate.cpp a little (from 682 to 800),
which raises the limit beyond which we switch from NNUE eval to Classical eval.
Both effects increase the number of positional evaluations done by our new net
architecture, but the results of our tests below seem to indicate that the loss
of speed is compensated by the gain in eval quality.
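A hedged sketch of the remaining switch (helper name and imbalance measure are assumptions):
```
#include <cstdlib>

constexpr int NNUEThreshold1 = 800;  // raised from 682 by this patch

// Fall back to the classical eval only when the simple material/psq
// imbalance is already large; a higher threshold means more positions
// go through the network.
bool use_classical_eval(int psqImbalance) {
    return std::abs(psqImbalance) > NNUEThreshold1;
}
```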
STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 26280 W: 2244 L: 2137 D: 21899
Ptnml(0-2): 72, 1755, 9405, 1810, 98
https://tests.stockfishchess.org/tests/view/60ae73f112066fd299795a51
LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 20592 W: 750 L: 677 D: 19165
Ptnml(0-2): 9, 614, 8980, 681, 12
https://tests.stockfishchess.org/tests/view/60ae88e812066fd299795a82
closes https://github.com/official-stockfish/Stockfish/pull/3503
Bench: 3817907
Definition of the lazy threshold moved to evaluate.cpp where all others are.
Lazy threshold only used for real searches, not used for the "eval" call.
This preserves the purity of NNUE evaluation, which is useful to verify
consistency between the engine and the NNUE trainer.
closes https://github.com/official-stockfish/Stockfish/pull/3499
No functional change
Our new nets output two values for the side to move in the last layer.
We can interpret the first value as a material evaluation of the
position, and the second one as the dynamic, positional value of the
location of pieces.
This patch changes the balance for the (materialist, positional) parts
of the score from (128, 128) to (121, 135) when the piece material is
equal between the two players, but keeps the standard (128, 128) balance
when one player is at least an exchange up.
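A minimal sketch of the blending, with assumed names for the two heads (weights in units of 1/128):
```
// (materialist, positional) weights: (121, 135) when piece material is
// equal, the standard (128, 128) when one side is at least an exchange up.
int blend_nnue_outputs(int materialist, int positional, bool materialIsEqual) {
    const int mw = materialIsEqual ? 121 : 128;
    const int pw = materialIsEqual ? 135 : 128;
    return (mw * materialist + pw * positional) / 128;
}
```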
Passed STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 15936 W: 1421 L: 1266 D: 13249
Ptnml(0-2): 37, 1037, 5694, 1134, 66
https://tests.stockfishchess.org/tests/view/60a82df9ce8ea25a3ef0408f
Passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13904 W: 516 L: 410 D: 12978
Ptnml(0-2): 4, 374, 6088, 484, 2
https://tests.stockfishchess.org/tests/view/60a8bbf9ce8ea25a3ef04101
closes https://github.com/official-stockfish/Stockfish/pull/3492
Bench: 3856635
The Tempo variable was introduced 10 years ago in our search because the
classical evaluation function was antisymmetrical in White and Black by design
to gain speed:
Eval(White to play) = -Eval(Black to play)
Nowadays our neural networks know which side is to move when they evaluate
a position and are trained on real games, so the advantage of having the
move is already encoded in the network's output. This patch shows that
the Tempo variable is not necessary anymore.
STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 33512 W: 2805 L: 2709 D: 27998
Ptnml(0-2): 80, 2209, 12095, 2279, 93
https://tests.stockfishchess.org/tests/view/60a44ceace8ea25a3ef03d30
LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 53920 W: 1807 L: 1760 D: 50353
Ptnml(0-2): 16, 1617, 23650, 1658, 19
https://tests.stockfishchess.org/tests/view/60a477f0ce8ea25a3ef03d49
We also tried a match (20000 games) at STC using purely classical, result was neutral:
https://tests.stockfishchess.org/tests/view/60a4eebcce8ea25a3ef03db5
Note: there are two locations left in search.cpp where we assume antisymmetry
of evaluation (in relation to a speed optimization for null moves in lines
770 and 1439), but as the values are just used for heuristic pruning this
approximation should not hurt too much, because the order of magnitude is
still correct most of the time.
closes https://github.com/official-stockfish/Stockfish/pull/3481
Bench: 4015864
This improves the speed of NNUE by a bit on the old hardware this code path
is intended for, like a Pentium III 1.13 GHz:
10 repeats of "./stockfish bench 16 1 13 default depth NNUE":
Before:
54 642 504 897 cycles (± 0.12%)
62 301 937 829 instructions (± 0.03%)
After:
54 320 821 928 cycles (± 0.13%)
62 084 742 699 instructions (± 0.02%)
Speed of go depth 20 from startpos:
Before: 53103 nps
After: 53856 nps
closes https://github.com/official-stockfish/Stockfish/pull/3476
No functional change.
This patch increases the lmrDepth threshold for continuation-history-based pruning in search.
This part of the code has long been known to be really TC-sensitive: decreasing
this threshold easily passed lower time controls but failed badly at LTC;
on the other hand, increasing it was part of a tuning that was
negative at STC but +12 Elo at 180+1.8.
After the recent simplification of the special conditions that sometimes
increased it from 4 to 5, it was logical to test at longer
time controls whether 5 is overall better than 4 with deeper searches.
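The pruning condition, as a hedged sketch with assumed names:
```
// Skip a quiet move when the reduced search depth is small and its two
// continuation histories look poor; this patch raises the depth bound
// from 4 to 5, so the pruning now also applies one ply deeper.
bool prune_by_continuation_history(int lmrDepth, int contHist0, int contHist1,
                                   int threshold) {
    return lmrDepth < 5  // was: lmrDepth < 4
        && contHist0 + contHist1 < threshold;
}
```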
reduces strength on STC
https://tests.stockfishchess.org/tests/view/60a3a8bbce8ea25a3ef03c74
ELO: -2.57 +-2.0 (95%) LOS: 0.6%
Total: 20000 W: 1820 L: 1968 D: 16212
Ptnml(0-2): 68, 1582, 6836, 1458, 56
Passed LTC with STC bounds
https://tests.stockfishchess.org/tests/view/60a027395085663412d090ce
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 175256 W: 6774 L: 6548 D: 161934
Ptnml(0-2): 91, 5808, 75604, 6034, 91
Passed VLTC with LTC bounds
https://tests.stockfishchess.org/tests/view/60a2bccce229097940a037a7
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 65736 W: 1224 L: 1092 D: 63420
Ptnml(0-2): 5, 1012, 30706, 1136, 9
closes https://github.com/official-stockfish/Stockfish/pull/3473
bench 3689330
- Comment for Countermove pruning -> Continuation history
- Fix comment in input_slice.h
- Shorter lines in Makefile
- Comment for scale factor
- Fix comment for pinners in see_ge()
- Change Thread.id() signature to size_t
- Trailing space in reprosearch.sh
- Add Douglas Matos Gomes to the AUTHORS file
- Introduce comment for undo_null_move()
- Use Stockfish coding style for export_net()
- Change date in AUTHORS file
closes https://github.com/official-stockfish/Stockfish/pull/3416
No functional change
e2k (Elbrus 2000) - this is a VLIW/EPIC architecture,
much like the Intel Itanium (IA-64) architecture.
The architecture has half native / half software support
for most Intel/AMD SIMD (e.g. MMX/SSE/SSE2/SSE3/SSSE3/SSE4.1/SSE4.2/AES/AVX/AVX2 & 3DNow!/SSE4a/XOP/FMA4) via intrinsics.
https://en.wikipedia.org/wiki/Elbrus_2000
closes https://github.com/official-stockfish/Stockfish/pull/3425
No functional change
This PR adds an ability to export any currently loaded network.
The export_net command now takes an optional filename parameter.
If the loaded net is not the embedded net the filename parameter is required.
Two changes were required to support this:
* the "architecture" string, which is really just a some kind of description in the net, is now saved into netDescription on load and correctly saved on export.
* the AffineTransform scrambles weights for some architectures and sparsifies them, such that retrieving the index is hard. This is solved by having a temporary scrambled<->unscrambled index lookup table when loading the network, and the actual index is saved for each individual weight that makes it to canSaturate16. This increases the size of the canSaturate16 entries by 6 bytes.
closes https://github.com/official-stockfish/Stockfish/pull/3456
No functional change
This patch broadens and simplifies definition of PvNode that is likely to fail low.
The new definition can be described as follows: "the node was already searched
at depth >= current depth and failed low there", which is more logical than the
previous version, takes less space, and avoids recomputing the value during the move loop.
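A hedged sketch of the new definition (field names assumed):
```
#include <cstdint>

enum Bound : std::uint8_t { BOUND_NONE = 0, BOUND_UPPER = 1, BOUND_LOWER = 2 };

// Likely to fail low: the node was already searched at least as deep and
// the stored result there was an upper bound (i.e. it failed low).
bool likely_fail_low(bool pvNode, bool hasTtMove, Bound ttBound,
                     int ttDepth, int depth) {
    return pvNode && hasTtMove && (ttBound & BOUND_UPPER) && ttDepth >= depth;
}
```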
Passed simplification STC
https://tests.stockfishchess.org/tests/view/609148bf95e7f1852abd2e82
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 20128 W: 1865 L: 1751 D: 16512
Ptnml(0-2): 63, 1334, 7165, 1430, 72
Passed simplification LTC
https://tests.stockfishchess.org/tests/view/6091691295e7f1852abd2e8b
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 95128 W: 3498 L: 3481 D: 88149
Ptnml(0-2): 41, 2956, 41549, 2981, 37
closes https://github.com/official-stockfish/Stockfish/pull/3455
Bench: 3933037
Introduce variable tempo for nnue depending on logarithm of estimated
strength, where strength is the product of time and number of threads.
The original idea here was that NNUE is best with a slightly different
tempo value to classical, since its style of play is slightly different.
It turns out that the best tempo for NNUE varies with strength of play,
so a formula is used which gives about 19 for STC and 24 for LTC under
current fishtest settings.
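For illustration only, a formula of that shape with constants back-fitted to the quoted 19/24 values (the actual constants in the patch may differ):
```
#include <cmath>

// Tempo grows with the logarithm of estimated strength = time * threads.
int variable_tempo(double timeMs, int threads) {
    return static_cast<int>(std::round(-6.7 + 2.8 * std::log(timeMs * threads)));
}
// variable_tempo(10000, 1) == 19 (~STC), variable_tempo(60000, 1) == 24 (~LTC)
```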
STC 10+0.1:
LLR: 2.94 (-2.94,2.94) {-0.20,1.10}
Total: 120816 W: 11155 L: 10861 D: 98800
Ptnml(0-2): 406, 8728, 41933, 8848, 493
https://tests.stockfishchess.org/tests/view/60735b3a8141753378960534
LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) {0.20,0.90}
Total: 35688 W: 1392 L: 1234 D: 33062
Ptnml(0-2): 23, 1079, 15473, 1255, 14
https://tests.stockfishchess.org/tests/view/6073ffbc814175337896057f
Passed non-regression SMP test at LTC 20+0.2 (8 threads):
LLR: 2.95 (-2.94,2.94) {-0.70,0.20}
Total: 11008 W: 317 L: 267 D: 10424
Ptnml(0-2): 2, 245, 4962, 291, 4
https://tests.stockfishchess.org/tests/view/60749ea881417533789605a4
closes https://github.com/official-stockfish/Stockfish/pull/3426
Bench 4075325
A lot of optimizations have happened since NNUE was introduced,
and since then some parts of the code were left unused. This
got to the point where asserts had to be added just to
let people know that modifying something will not have any
effect or may even break everything due to the assumptions
being made. Removing these parts removes those nonexistent
"false dependencies". Additionally:
* append_changed_indices now takes the king pos and stateinfo
explicitly, no more misleading pos parameter
* IndexList is removed in favor of a generic ValueList.
Feature transformer just instantiates the type it needs.
* The update cost and refresh requirement is deferred to the
feature set once again, but now doesn't go through the whole
FeatureSet machinery and just calls HalfKP directly.
* accumulator no longer has a singular dimension.
* The PS constants and the PieceSquareIndex array are made local
to the HalfKP feature set because they are specific to it and
DO differ for other feature sets.
* A few names are changed to be more descriptive
Passed STC non-regression:
https://tests.stockfishchess.org/tests/view/608421dd95e7f1852abd2790
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 180008 W: 16186 L: 16258 D: 147564
Ptnml(0-2): 587, 12593, 63725, 12503, 596
closes https://github.com/official-stockfish/Stockfish/pull/3441
No functional change
NNUE evaluation is incapable of recognizing trivially drawn bishop endgames
(the wrong-colored rook pawn), which are in fact ubiquitous and stock standard
in chess analysis. Switching off NNUE evaluation in KBPs vs KPs endgames is
a measure that stops Stockfish from trading down to a drawn version of these
endings when we presumably have advantage. The patch is able to edge over master
in endgame positions.
Patch tested for Elo gain with the "endgame.epd" book, and verified for
non-regression with our usual book (see the pull request for details).
STC:
LLR: 2.93 (-2.94,2.94) {-0.20,1.10}
Total: 33232 W: 6655 L: 6497 D: 20080
Ptnml(0-2): 4, 2342, 11769, 2494, 7
https://tests.stockfishchess.org/tests/view/6074a52981417533789605b8
LTC:
LLR: 2.93 (-2.94,2.94) {0.20,0.90}
Total: 159056 W: 29799 L: 29378 D: 99879
Ptnml(0-2): 7, 9004, 61085, 9425, 7
https://tests.stockfishchess.org/tests/view/6074c39a81417533789605ca
Closes https://github.com/official-stockfish/Stockfish/pull/3427
Bench: 4503918
This patch changes the pop_lsb() signature from Square pop_lsb(Bitboard*) to
Square pop_lsb(Bitboard&). This is more idiomatic for C++-style signatures.
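The changed signature in a self-contained sketch (Stockfish has its own lsb helpers; std::countr_zero from C++20 stands in here):
```
#include <bit>
#include <cstdint>

using Bitboard = std::uint64_t;
using Square   = int;

// After this patch the bitboard is taken by reference, so call sites
// write pop_lsb(b) instead of pop_lsb(&b). Precondition: b != 0.
Square pop_lsb(Bitboard& b) {
    const Square s = std::countr_zero(b);  // least significant set bit
    b &= b - 1;                            // clear that bit
    return s;
}
```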
Passed a non-regression STC test:
LLR: 2.93 (-2.94,2.94) {-1.25,0.25}
Total: 21280 W: 1928 L: 1847 D: 17505
Ptnml(0-2): 71, 1427, 7558, 1518, 66
https://tests.stockfishchess.org/tests/view/6053a1e22433018de7a38e2f
We have verified that the generated binary is identical on gcc-10.
Closes https://github.com/official-stockfish/Stockfish/pull/3404
No functional change.
Our net currently is not trained on FRC games, and so doesn't know about the important pattern of a bishop that is cornered in FRC.
This patch introduces a term we have in the classical evaluation for this case, and adds it to the NNUE eval.
Since fishtest doesn't support FRC right now, the patch was tested locally at STC conditions,
starting from the book of FRC starting positions.
Score of master vs patch: 993 - 2226 - 6781 [0.438] 10000,
which corresponds to approximately 40 Elo in favor of the patch.
The patch passes non-regression testing for traditional chess (where it adds one branch).
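A hedged sketch of the idea behind the term (inputs and magnitudes are illustrative, not the patch's exact conditions):
```
// Penalize a bishop shut in its own corner behind a friendly pawn
// (e.g. Ba1 with a pawn on b2), more so when that pawn is itself blocked.
int cornered_bishop_correction(bool bishopInOwnCorner,
                               bool friendlyPawnShutsItIn,
                               bool thatPawnIsBlocked) {
    if (!bishopInOwnCorner || !friendlyPawnShutsItIn)
        return 0;
    const int CorneredBishop = 50;  // illustrative magnitude
    return -(thatPawnIsBlocked ? 4 : 3) * CorneredBishop;
}
```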
passed STC:
https://tests.stockfishchess.org/tests/view/604fa2532433018de7a38b67
LLR: 2.95 (-2.94,2.94) {-1.25,0.25}
Total: 30560 W: 2701 L: 2636 D: 25223
Ptnml(0-2): 88, 2056, 10921, 2133, 82
passed STC also in an earlier version:
https://tests.stockfishchess.org/tests/view/604f61282433018de7a38b4d
closes https://github.com/official-stockfish/Stockfish/pull/3398
No functional change
We remark that in current master, most of our use cases for between_bb() can be
optimized if the second parameter of the function is added to the segment. So we
change the definition of between_bb(s1, s2) such that it excludes s1 but includes s2.
We also use a precomputed array for between_bb() for another small speed gain
(see https://tests.stockfishchess.org/tests/view/604d09f72433018de7a389fb).
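A sketch of the new semantics for two aligned squares (0 = a1 .. 63 = h8); the real implementation is the precomputed array, this loop only illustrates what it contains:
```
#include <cstdint>

using Bitboard = std::uint64_t;

// Walk from s1 toward s2, excluding s1 but including s2.
Bitboard between_bb(int s1, int s2) {
    const int df = ((s2 & 7)  > (s1 & 7))  - ((s2 & 7)  < (s1 & 7));   // file step
    const int dr = ((s2 >> 3) > (s1 >> 3)) - ((s2 >> 3) < (s1 >> 3));  // rank step
    Bitboard b = 0;
    for (int s = s1; s != s2; ) {
        s += df + 8 * dr;
        b |= Bitboard(1) << s;
    }
    return b;
}
// between_bb(0 /*a1*/, 24 /*a4*/) covers a2, a3 and a4, so a test like
// between_bb(ksq, checker) & pieces now handles blocking and capturing
// the checker with a single intersection.
```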
Passed STC:
LLR: 2.96 (-2.94,2.94) {-0.25,1.25}
Total: 18736 W: 1746 L: 1607 D: 15383
Ptnml(0-2): 61, 1226, 6644, 1387, 50
https://tests.stockfishchess.org/tests/view/60428c84ddcba5f0627bb6e4
Yellow LTC:
LLR: -3.00 (-2.94,2.94) {0.25,1.25}
Total: 39144 W: 1431 L: 1413 D: 36300
Ptnml(0-2): 13, 1176, 17184, 1178, 21
https://tests.stockfishchess.org/tests/view/605128702433018de7a38ca1
Closes https://github.com/official-stockfish/Stockfish/pull/3397
---------
Verified for correctness by running perft on the following position:
./stockfish
position fen 4rrk1/1p1nq3/p7/2p1P1pp/3P2bp/3Q1Bn1/PPPB4/1K2R1NR w - - 40 21
go perft 6
Nodes searched: 6136386434
--------
No functional change
The codebase contains multiple functions returning by const value.
This patch is a small cleanup making those functions return
by value instead, removing the const specifier.
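A before/after sketch of the cleanup:
```
#include <string>

std::string engineName = "Stockfish";

// Before: returning by const value; the const buys nothing for callers
// and prevents moving from the returned temporary.
const std::string name_before() { return engineName; }

// After: plain return by value.
std::string name_after() { return engineName; }
```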
closes https://github.com/official-stockfish/Stockfish/pull/3328
No functional change
We introduce a metric for each internal node in search, called DistanceFromPV.
This distance indicates how far the current node is from the principal variation.
We then use this distance to search nodes which are close to the PV a little
deeper (up to 4 plies deeper than the PV): this improves the quality of the search
at these nodes and brings better updates for the PV during search.
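Illustrative only (the exact formula is in the patch): the depth bonus could decay with the distance and cap at 4 plies, e.g.:
```
#include <algorithm>

// Nodes on or near the PV (small distance) are searched deeper; the
// bonus fades to zero four plies away from the principal variation.
int pv_proximity_bonus(int distanceFromPV) {
    return std::max(0, 4 - distanceFromPV);
}
```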
STC:
LLR: 2.96 (-2.94,2.94) {-0.25,1.25}
Total: 54936 W: 5047 L: 4850 D: 45039
Ptnml(0-2): 183, 3907, 19075, 4136, 167
https://tests.stockfishchess.org/tests/view/6037b88e7f517a561bc4a392
LTC:
LLR: 2.95 (-2.94,2.94) {0.25,1.25}
Total: 49608 W: 1880 L: 1703 D: 46025
Ptnml(0-2): 22, 1514, 21555, 1691, 22
https://tests.stockfishchess.org/tests/view/6038271b7f517a561bc4a3cb
Closes https://github.com/official-stockfish/Stockfish/pull/3369
Bench: 5037279
- git log HEAD | grep "\b[Bb]ench[ :]\+[0-9]\{7\}" | head -n 1 | sed "s/[^0-9]*\([0-9]*\).*/\1/g" > git_sig
- export benchref=$(cat git_sig)
- echo "Reference bench:" $benchref
# Compiler version string
- $COMPILER -v
# test help target
- make help
# Verify bench number against various builds
- export CXXFLAGS="-Werror -D_GLIBCXX_DEBUG"
- make clean && make -j2 ARCH=x86-64-modern optimize=no debug=yes build && ../tests/signature.sh $benchref
- export CXXFLAGS="-Werror"
- make clean && make -j2 ARCH=x86-64-modern build && ../tests/signature.sh $benchref
- make clean && make -j2 ARCH=x86-64-ssse3 build && ../tests/signature.sh $benchref
- make clean && make -j2 ARCH=x86-64-sse3-popcnt build && ../tests/signature.sh $benchref
- make clean && make -j2 ARCH=x86-64 build && ../tests/signature.sh $benchref
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=general-64 build && ../tests/signature.sh $benchref; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=x86-32 optimize=no debug=yes build && ../tests/signature.sh $benchref; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=x86-32-sse41-popcnt build && ../tests/signature.sh $benchref; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=x86-32-sse2 build && ../tests/signature.sh $benchref; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=x86-32 build && ../tests/signature.sh $benchref; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=general-32 build && ../tests/signature.sh $benchref; fi
# workaround: exclude a custom version of llvm+clang, which doesn't find llvm-profdata on ubuntu
- if [[ "$TRAVIS_OS_NAME" != "linux" || "$COMP" == "gcc" ]]; then make clean && make -j2 ARCH=x86-64-modern profile-build && ../tests/signature.sh $benchref; fi
# compile only for some more advanced architectures (might not run in travis)
- make clean && make -j2 ARCH=x86-64-avx2 build
- make clean && make -j2 ARCH=x86-64-bmi2 build
- make clean && make -j2 ARCH=x86-64-avx512 build
- make clean && make -j2 ARCH=x86-64-vnni512 build
- make clean && make -j2 ARCH=x86-64-vnni256 build
#
# Check perft and reproducible search
- make clean && make -j2 ARCH=x86-64-modern build
- ../tests/perft.sh
- ../tests/reprosearch.sh
#
# Valgrind
#
- export CXXFLAGS="-O1 -fno-inline"
- if [ -x "$(command -v valgrind )" ]; then make clean && make -j2 ARCH=x86-64-modern debug=yes optimize=no build > /dev/null && ../tests/instrumented.sh --valgrind; fi
- if [ -x "$(command -v valgrind )" ]; then ../tests/instrumented.sh --valgrind-thread; fi
#
# Sanitizer
#
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=x86-64-modern sanitize=undefined optimize=no debug=yes build > /dev/null && ../tests/instrumented.sh --sanitizer-undefined; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make clean && make -j2 ARCH=x86-64-modern sanitize=thread optimize=no debug=yes build > /dev/null && ../tests/instrumented.sh --sanitizer-thread; fi
[Stockfish](https://stockfishchess.org) is a free, powerful UCI chess engine
@@ -23,21 +23,28 @@ intrinsics available on most CPUs (sse2, avx2, neon, or similar).
 This distribution of Stockfish consists of the following files:
-  * Readme.md, the file you are currently reading.
-  * Copying.txt, a text file containing the GNU General Public License version 3.
-  * AUTHORS, a text file with the list of authors for the project
-  * src, a subdirectory containing the full source code, including a Makefile
+  * [Readme.md](https://github.com/official-stockfish/Stockfish/blob/master/README.md), the file you are currently reading.
+  * [Copying.txt](https://github.com/official-stockfish/Stockfish/blob/master/Copying.txt), a text file containing the GNU General Public License version 3.
+  * [AUTHORS](https://github.com/official-stockfish/Stockfish/blob/master/AUTHORS), a text file with the list of authors for the project
+  * [src](https://github.com/official-stockfish/Stockfish/tree/master/src), a subdirectory containing the full source code, including a Makefile
     that can be used to compile Stockfish on Unix-like systems.
-  * a file with the .nnue extension, storing the neural network for the NNUE
+  * a file with the .nnue extension, storing the neural network for the NNUE
     evaluation. Binary distributions will have this file embedded.
-## UCI options
+## The UCI protocol and available options
-Currently, Stockfish has the following UCI options:
+The Universal Chess Interface (UCI) is a standard protocol used to communicate with
+a chess engine, and is the recommended way to do so for typical graphical user interfaces
+(GUI) or chess tools. Stockfish implements the majority of it options as described
+in [the UCI protocol](https://www.shredderchess.com/download/div/uci.zip).
+Developers can see the default values for UCI options available in Stockfish by typing
+`./stockfish uci` in a terminal, but the majority of users will typically see them and
+change them via a chess GUI. This is a list of available UCI options in Stockfish:
   * #### Threads
   The number of CPU threads used for searching a position. For best performance, set
@@ -115,14 +122,6 @@ Currently, Stockfish has the following UCI options:
   Limit Syzygy tablebase probing to positions with at most this many pieces left
   (including kings and pawns).
-  * #### Contempt
-  A positive value for contempt favors middle game positions and avoids draws,
-  effective for the classical evaluation only.
-  * #### Analysis Contempt
-  By default, contempt is set to prefer the side to move. Set this option to "White"
-  or "Black" to analyse with contempt for that side, or "Off" to disable contempt.
   * #### Move Overhead
   Assume a time delay of x ms due to network and GUI overheads. This is useful to
   avoid losses on time in those cases.
@@ -138,6 +137,34 @@ Currently, Stockfish has the following UCI options:
   * #### Debug Log File
   Write all communication to and from the engine into a text file.
+For developers the following non-standard commands might be of interest, mainly useful for debugging: