The speedup is around 0.25% using gcc 11.3.1 (bmi2, nnue bench, depth 16
and 23) while it is neutral using clang (same conditions).
According to `perf`, that integer division was one of the most time-consuming
instructions in the search (gcc disassembly).
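For background (an illustration, not the actual diff): when the divisor is a compile-time constant, compilers already replace the division with multiplications and shifts, but a division by a runtime value compiles to the slow idiv instruction. One common remedy for small, bounded divisors is a precomputed reciprocal table:
```
#include <cstdint>

// Illustration only, not the actual patch: replace x / d for small runtime
// divisors d with a multiplication by a precomputed, scaled reciprocal.
constexpr int MaxD = 64;
uint32_t recip[MaxD + 1];

void init_recip() {
    for (int d = 1; d <= MaxD; ++d)
        recip[d] = (1u << 16) / d + 1;       // reciprocal scaled by 2^16, rounded up
}

int fast_div(int x, int d) {                 // exact whenever x * d < 65536
    return int((uint32_t(x) * recip[d]) >> 16);
}
```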
Passed STC:
https://tests.stockfishchess.org/tests/view/628a17fe24a074e5cd59b3aa
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 22232 W: 5992 L: 5751 D: 10489
Ptnml(0-2): 88, 2235, 6218, 2498, 77
yellow LTC:
https://tests.stockfishchess.org/tests/view/628a35d7ccae0450e35106f7
LLR: -2.95 (-2.94,2.94) <0.50,3.00>
Total: 320168 W: 85853 L: 85326 D: 148989
Ptnml(0-2): 185, 29698, 99821, 30165, 215
This patch also suggests that UHO STC tests are sensitive to small speedups (< 0.50%).
closes https://github.com/official-stockfish/Stockfish/pull/4032
No functional change
This patch provides the command-line flags `--help` and `--license`, as well as the corresponding `help` and `license` commands.
```
$ ./stockfish --help
Stockfish 200522 by the Stockfish developers (see AUTHORS file)
Stockfish is a powerful chess engine and free software licensed under the GNU GPLv3.
Stockfish is normally used with a separate graphical user interface (GUI).
Stockfish implements the universal chess interface (UCI) to exchange information.
For further information see https://github.com/official-stockfish/Stockfish#readme
or the corresponding README.md and Copying.txt files distributed with this program.
```
The idea is to provide minimal help text that links to the README.md file
rather than replicating information that is already available elsewhere.
We use this opportunity to explicitly report the license as well.
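A minimal sketch of how such commands could be dispatched (hypothetical helper; the actual patch wires the text into Stockfish's existing command-line and UCI handling):
```
#include <iostream>
#include <string>

// Hypothetical helper: one text block serves both the command-line flags
// and the interactive commands.
bool handle_help(const std::string& token) {
    if (token == "--help" || token == "help"
     || token == "--license" || token == "license") {
        std::cout <<
            "Stockfish is a powerful chess engine and free software licensed under the GNU GPLv3.\n"
            "Stockfish is normally used with a separate graphical user interface (GUI).\n"
            "For further information see https://github.com/official-stockfish/Stockfish#readme\n";
        return true;   // the caller can then skip normal UCI processing
    }
    return false;
}
```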
closes https://github.com/official-stockfish/Stockfish/pull/4027
No functional change.
Train a net using training data with a heavier weight on positions having
around 16 pieces on the board. More specifically, with a relative weight of
`i * (32 - i) / (16 * 16) + 1` (where i is the number of pieces on the board).
This is done with the trainer branch https://github.com/glinscott/nnue-pytorch/pull/173
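For reference, the weight peaks at 2.0 for i = 16 and tapers toward 1.0 at the extremes; a tiny illustration (hypothetical helper, not part of the trainer):
```
// Hypothetical helper: relative weight of a training position with
// i pieces on the board (2 <= i <= 32).
double position_weight(int i) {
    return i * (32 - i) / (16.0 * 16.0) + 1.0;
}
// position_weight(16) == 2.0, position_weight(24) == 1.75, position_weight(2) == 1.234375
```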
The command used is:
```
python train.py $datafile $datafile $restarttype $restartfile --gpus 1 --threads 4 --num-workers 12 --random-fen-skipping=3 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --features=HalfKAv2_hm^ --lambda=1.00 --max_epochs=$epochs --seed $RANDOM --default_root_dir exp/run_$i
```
The datafile is T60T70wIsRightFarseerT60T74T75T76.binpack, the restart is from the master net.
passed STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 22728 W: 6197 L: 5945 D: 10586
Ptnml(0-2): 105, 2453, 6001, 2695, 110
https://tests.stockfishchess.org/tests/view/625cf944ff677a888877cd90
passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 35664 W: 9535 L: 9264 D: 16865
Ptnml(0-2): 30, 3524, 10455, 3791, 32
https://tests.stockfishchess.org/tests/view/625d3c32ff677a888877d7ca
closes https://github.com/official-stockfish/Stockfish/pull/3989
Bench: 7269563
Official release version of Stockfish 15
Bench: 8129754
---
A new major release of Stockfish is now available at https://stockfishchess.org
Stockfish 15 continues to push the boundaries of chess, providing unrivalled
analysis and playing strength. In our testing, Stockfish 15 is ahead of
Stockfish 14 by 36 Elo points and wins nine times more game pairs than it
loses[1].
Improvements to the engine have made it possible for Stockfish to end up
victorious in tournaments at all sorts of time controls ranging from bullet to
classical and even at Fischer random chess[2]. At CCC, Stockfish won all of
the latest tournaments: CCC 16 Bullet, Blitz and Rapid, CCC 960 championship,
and the CCC 17 Rapid. At TCEC, Stockfish won the Season 21, Cup 9, FRC 4 and
in the current Season 22 superfinal, at the time of writing, has won 16 game
pairs and not yet lost a single one.
This progress is the result of a dedicated team of developers that comes up
with new ideas and improvements. For Stockfish 15, we tested nearly 13000
different changes and retained the best 200. These include the fourth
generation of our NNUE network architecture, as well as various search
improvements. To perform these tests, contributors provide CPU time for
testing, and in the last year, they have collectively played roughly a
billion chess games. In the last few years, our distributed testing
framework, Fishtest, has been operated superbly and has been developed and
improved extensively. This work by Pasquale Pigazzini, Tom Vijlbrief, Michel
Van den Bergh, and various other developers[3] is an essential part of the
success of the Stockfish project.
Indeed, the Stockfish project builds on a thriving community of enthusiasts
to offer a free and open-source chess engine that is robust, widely
available, and very strong. We invite our chess fans to join the Fishtest
testing framework and programmers to contribute to the project[4].
The Stockfish team
[1] https://tests.stockfishchess.org/tests/view/625d156dff677a888877d1be
[2] https://en.wikipedia.org/wiki/Stockfish_(chess)#Competition_results
[3] https://github.com/glinscott/fishtest/blob/master/AUTHORS
[4] https://stockfishchess.org/get-involved/
This patch chooses the delta value (which skews the NNUE evaluation between positional and materialistic)
depending on the material: if the material is low, delta is higher and the evaluation is shifted
toward the positional component; if the material is high, the evaluation is shifted toward the PSQT component.
I don't think slightly negative values of delta should be a concern.
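A minimal sketch of the idea with made-up constants (the real formula lives in evaluate.cpp and differs in detail):
```
using Value = int;

// Made-up constants, for illustration only: delta grows as non-pawn material
// shrinks, so the blend favors the positional NNUE component in endgames and
// the material/PSQT component in piece-heavy positions. A slightly negative
// delta at very high material just nudges the blend further toward PSQT.
Value blend(Value psqt, Value positional, int nonPawnMaterial) {
    int delta = 1000 - nonPawnMaterial / 16;              // illustrative scaling
    return (psqt * (1024 - delta) + positional * (1024 + delta)) / 2048;
}
```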
Passed STC:
https://tests.stockfishchess.org/tests/view/62418513b3b383e86185766f
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 28808 W: 7832 L: 7564 D: 13412
Ptnml(0-2): 147, 3186, 7505, 3384, 182
Passed LTC:
https://tests.stockfishchess.org/tests/view/62419137b3b383e861857842
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 58632 W: 15776 L: 15450 D: 27406
Ptnml(0-2): 42, 5889, 17149, 6173, 63
closes https://github.com/official-stockfish/Stockfish/pull/3971
Bench: 7588855
This idea is a mix of Koivisto's threat-history idea and a heuristic that
was simplified out of LMR some time ago: decreasing the reduction for moves that evade a capture.
Instead of doing so in LMR, this patch does it in the movepicker: it computes the
squares that are attacked by the different enemy piece types and the pieces located
on those squares, and boosts the weight of moves that make such a piece land on a square that is not under threat.
The boost is greater for pieces with bigger material values (see the sketch below).
Special thanks to the Koivisto and Seer authors for explaining the ideas behind threat history to me.
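A hedged sketch of the scheme in Stockfish-style bitboard code (assuming helpers like attacks_by/pieces/moved_piece, with Us/Them being the side to move and the opponent; bonus values and exact conditions are illustrative):
```
// Squares attacked by enemy pieces, grouped by attacker value (cheapest included first).
Bitboard threatenedByPawn  = pos.attacks_by<PAWN>(Them);
Bitboard threatenedByMinor = pos.attacks_by<KNIGHT>(Them)
                           | pos.attacks_by<BISHOP>(Them) | threatenedByPawn;
Bitboard threatenedByRook  = pos.attacks_by<ROOK>(Them)   | threatenedByMinor;

// Our pieces standing on squares attacked by cheaper enemy pieces.
Bitboard threatened =  (pos.pieces(Us, QUEEN)          & threatenedByRook)
                     | (pos.pieces(Us, ROOK)           & threatenedByMinor)
                     | (pos.pieces(Us, KNIGHT, BISHOP) & threatenedByPawn);

// While scoring a quiet move m: boost it if it moves a threatened piece to a
// square not attacked by a cheaper piece; bigger boost for bigger piece values.
if (threatened & from_sq(m))
    m.value +=  type_of(pos.moved_piece(m)) == QUEEN && !(threatenedByRook  & to_sq(m)) ? 50000
              : type_of(pos.moved_piece(m)) == ROOK  && !(threatenedByMinor & to_sq(m)) ? 25000
              :                                         !(threatenedByPawn  & to_sq(m)) ? 15000
              :                                                                           0;
```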
Passed STC:
https://tests.stockfishchess.org/tests/view/62406e473b32264b9aa1478b
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 19816 W: 5320 L: 5072 D: 9424
Ptnml(0-2): 86, 2165, 5172, 2385, 100
Passed LTC:
https://tests.stockfishchess.org/tests/view/62407f2e3b32264b9aa149c8
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 51200 W: 13805 L: 13500 D: 23895
Ptnml(0-2): 44, 5023, 15164, 5322, 47
closes https://github.com/official-stockfish/Stockfish/pull/3970
bench 7736491
Via the ttPv flag, an implicit tree of current and former PV nodes is maintained. In addition, this tree is grown or shrunk at the leaves depending on the search results. This patch removes the shrinking step.
As the frequency of ttPv nodes decreases with depth, the scaling behavior shown by the tests (STC barely passed, but LTC scales well) was expected.
STC:
LLR: 2.93 (-2.94,2.94) <-2.25,0.25>
Total: 270408 W: 71593 L: 71785 D: 127030
Ptnml(0-2): 1339, 31024, 70630, 30912, 1299
https://tests.stockfishchess.org/tests/view/622fbf9dc9e950cbfc2376d6
LTC:
LLR: 2.96 (-2.94,2.94) <-2.25,0.25>
Total: 34368 W: 9135 L: 8992 D: 16241
Ptnml(0-2): 28, 3423, 10135, 3574, 24
https://tests.stockfishchess.org/tests/view/62305257c9e950cbfc238964
closes https://github.com/official-stockfish/Stockfish/pull/3963
Bench: 7044203
This commit generalizes the feature transform to use vec_t macros
that are defined per architecture, instead of using a separate code path for each one.
It should make some old architectures (MMX, including improvements by Fanael) faster
and make further such improvements easier in the future.
Includes some corrections to CI for mingw.
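The pattern looks roughly like this (simplified; the real macro set covers more operations and architectures), so the accumulator update loop is written once against the vec_* names:
```
#if defined(USE_AVX2)
    #include <immintrin.h>
    typedef __m256i vec_t;
    #define vec_load(a)      _mm256_load_si256(a)
    #define vec_store(a, b)  _mm256_store_si256(a, b)
    #define vec_add_16(a, b) _mm256_add_epi16(a, b)
    #define vec_sub_16(a, b) _mm256_sub_epi16(a, b)
#elif defined(USE_SSE2)
    #include <emmintrin.h>
    typedef __m128i vec_t;
    #define vec_load(a)      (*(a))
    #define vec_store(a, b)  *(a) = (b)
    #define vec_add_16(a, b) _mm_add_epi16(a, b)
    #define vec_sub_16(a, b) _mm_sub_epi16(a, b)
#endif
```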
closes https://github.com/official-stockfish/Stockfish/pull/3955
closes https://github.com/official-stockfish/Stockfish/pull/3928
No functional change
- set the variable only for the required tests to keep the yml file simple
- use NDK 21.x until the Stockfish static build problem with NDK 23.x is fixed
- set up the test for the armv7, armv7-neon, and armv8 builds:
  - use the armv7a-linux-androideabi21-clang++ compiler for armv7 and armv7-neon
  - enforce a static build
  - silence the warning for the unused compilation flag "-pie" with the
    static build, otherwise the GitHub workflow stops
  - use qemu to bench the build and get the signature
Many thanks to @pschneider1968, who did all the hard work with the NDK :)
closes https://github.com/official-stockfish/Stockfish/pull/3924
No functional change
This patch is the result of tuning done by user @candirufish after 150k games.
Since the tuned values were really interesting and touched heuristics
that are known for their non-linear scaling, I decided to run a limited-games
LTC match, even though the STC test was really bad (which was expected).
After seeing the results of the LTC match, I also ran a VLTC (very long
time control) SPRT test, which passed.
The main difference is in extensions: this patch allows many more
singular/double extensions, both by allowing them at lower depths and
with smaller margins (see the sketch below).
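For orientation, a hedged Stockfish-style sketch of the singular-extension test that these margins control (constants illustrative; the tuned values lower the depth threshold and shrink the margins so extensions fire more often):
```
if (    depth >= 7                                // lower after the patch
    &&  move == ttMove
    && !excludedMove
    &&  abs(ttValue) < VALUE_KNOWN_WIN)
{
    // Search all other moves at reduced depth with a window just below ttValue.
    Value singularBeta  = ttValue - 3 * depth;    // smaller margin after the patch
    Depth singularDepth = (depth - 1) / 2;

    ss->excludedMove = move;
    value = search<NonPV>(pos, ss, singularBeta - 1, singularBeta, singularDepth, cutNode);
    ss->excludedMove = MOVE_NONE;

    // If nothing else comes close, the ttMove is singular and gets extended,
    // by two plies when it is singular by a wide margin.
    if (value < singularBeta)
        extension = value < singularBeta - 25 ? 2 : 1;   // illustrative double-extension margin
}
```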
Failed STC:
https://tests.stockfishchess.org/tests/view/620d66643ec80158c0cd3b46
LLR: -2.94 (-2.94,2.94) <0.00,2.50>
Total: 4968 W: 1194 L: 1398 D: 2376
Ptnml(0-2): 47, 633, 1294, 497, 13
Performed well at LTC in a fixed-length match:
https://tests.stockfishchess.org/tests/view/620d66823ec80158c0cd3b4a
ELO: 3.36 +-1.8 (95%) LOS: 100.0%
Total: 30000 W: 7966 L: 7676 D: 14358
Ptnml(0-2): 36, 2936, 8755, 3248, 25
Passed VLTC SPRT test:
https://tests.stockfishchess.org/tests/view/620da11a26f5b17ec884f939
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 4400 W: 1326 L: 1127 D: 1947
Ptnml(0-2): 13, 309, 1348, 526, 4
closes https://github.com/official-stockfish/Stockfish/pull/3937
Bench: 6318903
Architecture:
The diagram of the "SFNNv4" architecture:
https://user-images.githubusercontent.com/8037982/153455685-cbe3a038-e158-4481-844d-9d5fccf5c33a.png
The most important architectural changes are the following:
* 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details, see the diagram), which introduces a non-linearity that exhibits similar benefits to the previously tested sigmoid activation (quantmoid4) while being slightly faster; see the sketch after this list.
* The following layer therefore has half as many inputs, which we compensate for by having twice as many outputs. It is possible that reducing the number of outputs might be beneficial (we had it as low as 8 before). The layer is now 1024->16.
* The 16 outputs are split into 15 and 1. The 1-wide output is added to the network output (after some necessary scaling due to quantization differences). The 15-wide part is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, but has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and possibly allows an additional stage of lazy evaluation to be introduced in the future.
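A minimal scalar sketch of that activation, assuming one plausible pairing layout (the real code is vectorized and pairs elements according to its quantization layout):
```
#include <algorithm>
#include <cstdint>

// For each perspective: clip the 1024 accumulator values, split them into two
// halves, and multiply elementwise, giving 512 outputs per perspective
// (1024 total) for the 1024->16 layer.
void pairwise_multiply(const int16_t acc[1024], uint8_t out[512]) {
    for (int i = 0; i < 512; ++i) {
        int a = std::clamp<int>(acc[i],       0, 127);
        int b = std::clamp<int>(acc[i + 512], 0, 127);
        out[i] = uint8_t((a * b) >> 7);      // rescale back into the 0..127 range
    }
}
```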
Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result into two, which was impossible to do in the old implementation without ugly hacks. This is hopefully for the better overall.
First session:
The first session was training a network from scratch (random initialization). The exact trainer used was slightly different (older) from the one used in the second session, but it should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session. Small deviations in strength do not harm the learnability in the second session.
The training was done using the following command:
```
python3 train.py \
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 4 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--gamma=0.992 \
--lr=8.75e-4 \
--max_epochs=400 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```
Every 20th net was saved and its playing strength measured against some baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it's consistent. The purpose is to sift good candidates from bad ones.
The dataset can be found at https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view
Second session:
The second training session was done starting from the best network (as determined by strength testing) from the first session. It is important that it is resumed from a .pt model and NOT a .ckpt model. The conversion can be performed directly using serialize.py.
The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training. The training was then running for 800 epochs instead of 400 (though it's possibly mostly noise after around epoch 600).
The training was done using the following command:
```
python3 train.py \
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 4 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--gamma=0.995 \
--lr=4.375e-4 \
--max_epochs=800 \
--resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp295/nn-epoch399.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$run_id
```
In particular, note that we now use lambda=1.0 instead of lambda=0.8 (as for the previous nets), because tests show that the WDL-skipping introduced by vondele performs better with lambda=1.0. Nets were saved every 20th epoch. In total, 16 runs were made with these settings, and the best nets were chosen according to playing strength at 25k nodes per move with pure NNUE evaluation; these are the 4 nets that have been put on fishtest.
The dataset can be found either at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack in its entirety (the download might be painfully slow because it is hosted in China) or can be assembled in the following way:
- Get the 5640ad48ae/script/interleave_binpacks.py script.
- Download T60T70wIsRightFarseer.binpack: https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
- Download farseerT74.binpack: http://trainingdata.farseer.org/T74-May13-End.7z
- Download farseerT75.binpack: http://trainingdata.farseer.org/T75-June3rd-End.7z
- Download farseerT76.binpack: http://trainingdata.farseer.org/T76-Nov10th-End.7z
- Run:
```
python3 interleave_binpacks.py T60T70wIsRightFarseer.binpack farseerT74.binpack farseerT75.binpack farseerT76.binpack T60T70wIsRightFarseerT60T74T75T76.binpack
```
Tests:
STC: https://tests.stockfishchess.org/tests/view/6203fb85d71106ed12a407b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16952 W: 4775 L: 4521 D: 7656
Ptnml(0-2): 133, 1818, 4318, 2076, 131
LTC: https://tests.stockfishchess.org/tests/view/62041e68d71106ed12a40e85
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 14944 W: 4138 L: 3907 D: 6899
Ptnml(0-2): 21, 1499, 4202, 1728, 22
closes https://github.com/official-stockfish/Stockfish/pull/3927
Bench: 4919707
Most credits for this patch should go to @candirufish.
It is based on his big search tuning (1M games at 20+0.1s)
https://tests.stockfishchess.org/tests/view/61fc7a6ed508ec6a1c9f4b7d
with some hand polishing on top of it, which includes:
a) correcting the trend sigmoid: for some reason the original tuning resulted in it being negative. This heuristic has been proven to be worth some Elo for years, so its reversed sign is probably a random artefact;
b) removing the changes to continuation-history-based pruning: historically this heuristic was really good at producing green STCs and then failing miserably at LTC whenever we tried to make it more strict. The original tuning was done at short time control, so it became more strict, which doesn't scale to longer time controls;
c) removing the changes to improvement: not really intended :).
passed STC
https://tests.stockfishchess.org/tests/view/6203526e88ae2c84271c2ee2
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16840 W: 4604 L: 4363 D: 7873
Ptnml(0-2): 82, 1780, 4449, 2033, 76
passed LTC
https://tests.stockfishchess.org/tests/view/620376e888ae2c84271c35d4
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 17232 W: 4771 L: 4542 D: 7919
Ptnml(0-2): 14, 1655, 5048, 1886, 13
closes https://github.com/official-stockfish/Stockfish/pull/3926
bench 5030992