Commit graph

47 commits

Author SHA1 Message Date
Disservin
2d0237db3f add clang-format
This introduces clang-format to enforce a consistent code style for Stockfish.

Having a documented and consistent style across the code will make contributing easier
for new developers, and will make larger changes to the codebase easier to make.

To facilitate formatting, this PR includes a Makefile target (`make format`) to format the code;
this requires clang-format (currently version 17) to be installed locally.

Installing clang-format is straightforward on most OSes and distros
(e.g. with https://apt.llvm.org/, brew install clang-format, etc.), as it is part of a quite commonly
used suite of tools and compilers (llvm / clang).

Additionally, a CI action is present that will verify whether the code requires formatting
and comment on the PR as needed. Initially, correct formatting is not required; it will be
done by maintainers as part of the merge or in later commits, but it is obviously encouraged.

fixes https://github.com/official-stockfish/Stockfish/issues/3608
closes https://github.com/official-stockfish/Stockfish/pull/4790

Co-Authored-By: Joost VandeVondele <Joost.VandeVondele@gmail.com>
2023-10-22 16:06:27 +02:00
FauziAkram
edb4ab924f Standardize Comments
use double slashes (//) only for comments.

closes #4820

No functional change.
2023-10-21 10:25:03 +02:00
cj5716
fce4cc1829 Make casting styles consistent
Make casting styles consistent with the rest of the code.

closes https://github.com/official-stockfish/Stockfish/pull/4793

No functional change
2023-09-22 19:14:29 +02:00
Disservin
3c0e86a91e Cleanup includes
Reorder a few includes, include "position.h" where it was previously missing
and apply include-what-you-use suggestions. Also make the order of the includes
consistent, in the following way:

1. Related header (for .cpp files)
2. A blank line
3. C/C++ headers
4. A blank line
5. All other header files
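
A hedged illustration of that ordering for a hypothetical .cpp file (file and header names are examples, not the exact edits of this commit):

```
#include "evaluate.h"   // 1. related header

#include <algorithm>    // 3. C/C++ headers
#include <cstdint>

#include "position.h"   // 5. all other header files
#include "types.h"
```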

closes https://github.com/official-stockfish/Stockfish/pull/4763
fixes https://github.com/official-stockfish/Stockfish/issues/4707

No functional change
2023-09-03 08:24:51 +02:00
Stéphane Nicolet
f84eb1f3ef Improve some comments
- clarify the examples for the bench command
- fix a typo in search.cpp

closes https://github.com/official-stockfish/Stockfish/pull/4710

No functional change
2023-07-28 23:38:49 +02:00
Joost VandeVondele
e8a5c64988 Consolidate to centipawns conversion
avoid doing these calculations in multiple places.
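
As a rough, hedged sketch of the intent (the helper name and the divisor below are illustrative, not the exact code of this patch), the conversion lives in one place and every output site calls it:

```
using Value = int;
constexpr Value ValuePerPawn = 208;  // illustrative figure; the engine defines its own constant

// single owner of the Value -> centipawn arithmetic
inline int to_cp(Value v) { return 100 * v / ValuePerPawn; }
```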

closes https://github.com/official-stockfish/Stockfish/pull/4686

No functional change
2023-07-16 15:14:50 +02:00
Joost VandeVondele
af110e02ec Remove classical evaluation
since the introduction of NNUE (first released with Stockfish 12), we
have maintained the classical evaluation as part of SF in frozen form.
The idea that this code could lead to further inputs to the NN or
search did not materialize. Now, after five releases, this PR removes
the classical evaluation from SF. Even though this evaluation is
probably the best of its class, it has become unimportant for the
engine's strength, and there is little need to maintain this
code (roughly 25% of SF) going forward, or to expend resources on
trying to improve its integration in the NNUE eval.

Indeed, it still had a very limited use in the current SF, namely
for the evaluation of positions that are nearly decided based on
material difference, where the speed of the classical evaluation
outweighs its inaccuracies. The impact on strength is small,
roughly 2 Elo, and probably decreases in importance as the TC grows.

Potentially, removal of this code could lead to the development of
techniques to have faster, but less accurate NN evaluation,
for certain positions.

STC
https://tests.stockfishchess.org/tests/view/64a320173ee09aa549c52157
Elo: -2.35 ± 1.1 (95%) LOS: 0.0%
Total: 100000 W: 24916 L: 25592 D: 49492
Ptnml(0-2): 287, 12123, 25841, 11477, 272
nElo: -4.62 ± 2.2 (95%) PairsRatio: 0.95

LTC
https://tests.stockfishchess.org/tests/view/64a320293ee09aa549c5215b
 Elo: -1.74 ± 1.0 (95%) LOS: 0.0%
Total: 100000 W: 25010 L: 25512 D: 49478
Ptnml(0-2): 44, 11069, 28270, 10579, 38
nElo: -3.72 ± 2.2 (95%) PairsRatio: 0.96

VLTC SMP
https://tests.stockfishchess.org/tests/view/64a3207c3ee09aa549c52168
 Elo: -1.70 ± 0.9 (95%) LOS: 0.0%
Total: 100000 W: 25673 L: 26162 D: 48165
Ptnml(0-2): 8, 9455, 31569, 8954, 14
nElo: -3.95 ± 2.2 (95%) PairsRatio: 0.95

closes https://github.com/official-stockfish/Stockfish/pull/4674

Bench: 1444646
2023-07-11 22:56:49 +02:00
MinetaS
587bc647d7 Remove non_pawn_material in NNUE::evaluate
After "Use NNUE complexity in search, retune related parameters" commit,
the effect of non-pawn material adjustment has been nearly diminished.
This patch removes pos.non_pawn_material as a simplification, which
passed non-regression tests with both STC and LTC.

Passed non-regression STC:
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 75152 W: 20030 L: 19856 D: 35266
Ptnml(0-2): 215, 8281, 20459, 8357, 264
https://tests.stockfishchess.org/tests/view/641ab471db43ab2ba6f7bc58

Passed non-regression LTC:
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 193864 W: 51870 L: 51829 D: 90165
Ptnml(0-2): 86, 18968, 58794, 18987, 97
https://tests.stockfishchess.org/tests/view/641b4fe6db43ab2ba6f7db96

closes https://github.com/official-stockfish/Stockfish/pull/4461

Bench: 5020718
2023-03-25 09:25:49 +01:00
Joost VandeVondele
08385527dd Introduce a function to compute NNUE accumulator
This patch introduces `hint_common_parent_position()` to signal that potentially several child nodes will require an NNUE eval. By populating explicitly the accumulator, these subsequent evaluations can be performed more efficiently.
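
A hedged sketch of the intended usage (engine types are stubbed out here; only the function name comes from this patch):

```
struct Position { /* ... */ };

namespace Eval::NNUE {
void hint_common_parent_position(const Position&) { /* populate the accumulator */ }
}

void before_searching_children(const Position& pos) {
    Eval::NNUE::hint_common_parent_position(pos);  // one explicit refresh at the parent
    // ... each child evaluation can now update the accumulator incrementally
}
```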

This was based on the observation that calculating the evaluation in an excluded move position yielded a significant Elo gain, even though the evaluation itself was already available (work by pb00067).

Sopel wrote the code to perform just the accumulator update. This PR is based on cleaned up code that

passed STC:
https://tests.stockfishchess.org/tests/view/63f62f9be74a12625bcd4aa0
 LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 110368 W: 29607 L: 29167 D: 51594
Ptnml(0-2): 41, 10551, 33572, 10967, 53

and in the earlier (equivalent) version

passed STC:
https://tests.stockfishchess.org/tests/view/63f3c3fee74a12625bcce2a6
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 47552 W: 12786 L: 12467 D: 22299
Ptnml(0-2): 120, 5107, 12997, 5438, 114

passed LTC:
https://tests.stockfishchess.org/tests/view/63f45cc2e74a12625bccfa63
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 110368 W: 29607 L: 29167 D: 51594
Ptnml(0-2): 41, 10551, 33572, 10967, 53

closes https://github.com/official-stockfish/Stockfish/pull/4402

Bench: 3726250
2023-02-23 13:25:35 +01:00
Sebastian Buchwald
e9e7a7b83f Replace some std::string occurrences with std::string_view
std::string_view is more lightweight than std::string. Furthermore,
std::string_view variables can be declared constexpr.
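
A minimal sketch of the constexpr point (the variable name is illustrative):

```
#include <string_view>

constexpr std::string_view engine_name = "Stockfish";  // fine: a string_view can be constexpr
// a std::string with the same contents could not be declared constexpr like this
```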

closes https://github.com/official-stockfish/Stockfish/pull/4328

No functional change
2023-01-09 20:28:24 +01:00
Stefano Di Martino
5a88c5bb9b Modernize code base a little bit
Remove sprintf(), which generated a security warning.
Replace NULL with nullptr
Replace typedef with using (see the sketch below)
Do not inherit from std::vector. Use composition instead.
Optimize mutex unlocking
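
A hedged before/after sketch of two of the items above (the bounded sprintf replacement shown is one possible alternative, not necessarily the one used in the patch):

```
#include <cstdint>
#include <cstdio>

// typedef -> using
using Bitboard = std::uint64_t;   // previously: typedef uint64_t Bitboard;

int main() {
    char buf[16];
    // sprintf -> a bounded alternative, avoiding the security warning
    std::snprintf(buf, sizeof(buf), "%d", 42);   // previously: std::sprintf(buf, "%d", 42);
    return 0;
}
```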

closes https://github.com/official-stockfish/Stockfish/pull/4327

No functional change
2023-01-09 20:25:13 +01:00
Sebastian Buchwald
31acd6bab7 Warn if a global function has no previous declaration
If a global function has no previous declaration, either the declaration
is missing in the corresponding header file or the function should be
declared static. Static functions are local to the translation unit,
which allows the compiler to apply some optimizations earlier (when
compiling the translation unit rather than during link-time
optimization).

The commit enables the warning for gcc, clang, and mingw. It also fixes
the reported warnings by declaring the functions static or by adding a
header file (benchmark.h).
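
A hedged illustration of the warning in question (the exact flag name is assumed to be -Wmissing-declarations; check the Makefile change for the option actually added):

```
// compile with: g++ -Wmissing-declarations -c example.cpp

int helper(int x) { return x + 1; }               // warning: no previous declaration for 'int helper(int)'

static int local_helper(int x) { return x + 1; }  // fine: static, local to this translation unit

// alternatively, declare helper() in a header that example.cpp includes
```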

closes https://github.com/official-stockfish/Stockfish/pull/4325

No functional change
2023-01-09 20:18:39 +01:00
Sebastian Buchwald
b60f9cc451 Update copyright years
Happy New Year!

closes https://github.com/official-stockfish/Stockfish/pull/4315

No functional change
2023-01-02 19:07:38 +01:00
Joost VandeVondele
ad2aa8c06f Normalize evaluation
Normalizes the internal value as reported by evaluate or search
to the UCI centipawn result used in output. This value is derived from
the win_rate_model() such that Stockfish outputs an advantage of
"100 centipawns" for a position if the engine has a 50% probability to win
from this position in selfplay at fishtest LTC time control.

The reason to introduce this normalization is that our evaluation is, since NNUE,
no longer related to the classical parameter PawnValueEg (=208). This leads to
the current evaluation changing quite a bit from release to release, for example,
the eval needed to have 50% win probability at fishtest LTC (in cp and internal Value):

June 2020  :   113cp (237)
June 2021  :   115cp (240)
April 2022 :   134cp (279)
July 2022  :   167cp (348)

With this patch, a 100cp advantage will have a fixed interpretation,
i.e. a 50% win chance. To keep this value steady, the win_rate_model() will need to be updated
from time to time, based on fishtest data. This analysis can be performed with
a set of scripts currently available at https://github.com/vondele/WLD_model
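
A hedged sketch of the mechanism (the constant name and its value are illustrative; the value shown is the July 2022 internal figure quoted above):

```
using Value = int;

// internal value corresponding to a 50% self-play win rate at fishtest LTC,
// derived from the win_rate_model() fit and updated from time to time
constexpr int NormalizeToPawnValue = 348;  // illustrative

inline int to_normalized_cp(Value v) { return 100 * v / NormalizeToPawnValue; }
```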

fixes https://github.com/official-stockfish/Stockfish/issues/4155
closes https://github.com/official-stockfish/Stockfish/pull/4216

No functional change
2022-11-05 09:15:53 +01:00
Dubslow
442c40b43d Use NNUE complexity in search, retune related parameters
This builds on ideas of xoto10 and mstembera to use more output from NNUE in the search algorithm.

passed STC:
https://tests.stockfishchess.org/tests/view/62ae454fe7ee5525ef88a957
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 89208 W: 24127 L: 23753 D: 41328
Ptnml(0-2): 400, 9886, 23642, 10292, 384

passed LTC:
https://tests.stockfishchess.org/tests/view/62acc6ddd89eb6cf1e0750a1
LLR: 2.93 (-2.94,2.94) <0.50,3.00>
Total: 56352 W: 15430 L: 15115 D: 25807
Ptnml(0-2): 44, 5501, 16782, 5794, 55

closes https://github.com/official-stockfish/Stockfish/pull/4088

bench 5332964
2022-06-20 08:30:57 +02:00
xoto10
7f1333ccf8 Blend nnue complexity with classical.
Following mstembera's test of the complexity value derived from nnue values,
this change blends that idea with the old complexity calculation.

STC 10+0.1:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 42320 W: 11436 L: 11148 D: 19736
Ptnml(0-2): 209, 4585, 11263, 4915, 188
https://tests.stockfishchess.org/tests/live_elo/6295c9239c8c2fcb2bad7fd9

LTC 60+0.6:
LLR: 2.98 (-2.94,2.94) <0.50,3.00>
Total: 34600 W: 9393 L: 9125 D: 16082
Ptnml(0-2): 32, 3323, 10319, 3597, 29
https://tests.stockfishchess.org/tests/view/6295fd5d9c8c2fcb2bad88cf

closes https://github.com/official-stockfish/Stockfish/pull/4046

Bench 6078140
2022-06-02 07:47:23 +02:00
Topologist
471d93063a Play more positional in endgames
This patch chooses the delta value (which skews the nnue evaluation between positional and materialistic)
depending on the material: if the material is low, delta will be higher and the evaluation is shifted
toward the positional value. If the material is high, the evaluation is shifted toward the psqt value.
I don't think slightly negative values of delta should be a concern.

Passed STC:
https://tests.stockfishchess.org/tests/view/62418513b3b383e86185766f
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 28808 W: 7832 L: 7564 D: 13412
Ptnml(0-2): 147, 3186, 7505, 3384, 182

Passed LTC:
https://tests.stockfishchess.org/tests/view/62419137b3b383e861857842
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 58632 W: 15776 L: 15450 D: 27406
Ptnml(0-2): 42, 5889, 17149, 6173, 63

closes https://github.com/official-stockfish/Stockfish/pull/3971

Bench: 7588855
2022-03-28 22:43:52 +02:00
mstembera
5f781d366e Clean up and simplify some nnue code.
Remove some unnecessary code and its execution during inference. Also, the change on line 49 in nnue_architecture.h results in a more efficient SIMD code path through ClippedReLU::propagate().

passed STC:
https://tests.stockfishchess.org/tests/view/6217d3bfda649bba32ef25d5
LLR: 2.94 (-2.94,2.94) <-2.25,0.25>
Total: 12056 W: 3281 L: 3092 D: 5683
Ptnml(0-2): 55, 1213, 3312, 1384, 64

passed STC SMP:
https://tests.stockfishchess.org/tests/view/6217f344da649bba32ef295e
LLR: 2.94 (-2.94,2.94) <-2.25,0.25>
Total: 27376 W: 7295 L: 7137 D: 12944
Ptnml(0-2): 52, 2859, 7715, 3003, 59

closes https://github.com/official-stockfish/Stockfish/pull/3944

No functional change

bench: 6820724
2022-02-25 08:37:57 +01:00
Tomasz Sobczyk
cb9c2594fc Update architecture to "SFNNv4". Update network to nn-6877cd24400e.nnue.
Architecture:

The diagram of the "SFNNv4" architecture:
https://user-images.githubusercontent.com/8037982/153455685-cbe3a038-e158-4481-844d-9d5fccf5c33a.png

The most important architectural changes are the following:

* 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details, see diagram), which introduces a non-linearity that exhibits similar benefits to the previously tested sigmoid activation (quantmoid4), while being slightly faster (a scalar sketch follows this list).
* The following layer therefore has half as many inputs, which we compensate for by having twice as many outputs. It is possible that reducing the number of outputs might be beneficial (as we had it as low as 8 before). The layer is now 1024->16.
* The 16 outputs are split into 15 and 1. The 1-wide output is added to the network output (after some necessary scaling due to quantization differences). The 15-wide is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, but has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and allows possibly an additional stage of lazy evaluation to be introduced in the future.
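
A hedged, scalar sketch of the first bullet (the real code is SIMD, handles quantization scaling, and pairs the values slightly differently for implementation reasons, as noted above):

```
#include <array>
#include <cstddef>

// values i and i+512 of one perspective's 1024 activated outputs are multiplied
// elementwise, so the next layer receives half as many inputs per perspective
std::array<int, 512> pairwise_mul(const std::array<int, 1024>& activated) {
    std::array<int, 512> out{};
    for (std::size_t i = 0; i < 512; ++i)
        out[i] = activated[i] * activated[i + 512];  // the real code rescales this product
    return out;
}
```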

Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result into two, which could not be done with the old implementation without ugly hacks. This is hopefully for the better overall.

First session:

The first session was training a network from scratch (random initialization). The exact trainer used was slightly different (older) from the one used in the second session, but it should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session. Small deviations in strength do not harm the learnability in the second session.

The training was done using the following command:

python3 train.py \
    /home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    /home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    --gpus "$3," \
    --threads 4 \
    --num-workers 4 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --gamma=0.992 \
    --lr=8.75e-4 \
    --max_epochs=400 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

Every 20th net was saved and its playing strength measured against some baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it's consistent. The purpose is to sift good candidates from bad ones.

The dataset can be found https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view

Second session:

The second training session was done starting from the best network (as determined by strength testing) from the first session. It is important that it's resumed from a .pt model and NOT a .ckpt model. The conversion can be performed directly using serialize.py

The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training. The training was then running for 800 epochs instead of 400 (though it's possibly mostly noise after around epoch 600).

The training was done using the following command:

python3 train.py \
        /data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
        /data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
        --gpus "$3," \
        --threads 4 \
        --num-workers 4 \
        --batch-size 16384 \
        --progress_bar_refresh_rate 20 \
        --random-fen-skipping 3 \
        --features=HalfKAv2_hm^ \
        --lambda=1.0 \
        --gamma=0.995 \
        --lr=4.375e-4 \
        --max_epochs=800 \
        --resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp295/nn-epoch399.pt \
        --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$run_id

In particular note that we now use lambda=1.0 instead of lambda=0.8 (previous nets), because tests show that WDL-skipping introduced by vondele performs better with lambda=1.0. Nets were being saved every 20th epoch. In total 16 runs were made with these settings and the best nets chosen according to playing strength at 25k nodes per move with pure NNUE evaluation - these are the 4 nets that have been put on fishtest.

The dataset can be found either at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack in its entirety (download might be painfully slow because hosted in China) or can be assembled in the following way:

Get the 5640ad48ae/script/interleave_binpacks.py script.
Download T60T70wIsRightFarseer.binpack https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
Download farseerT74.binpack http://trainingdata.farseer.org/T74-May13-End.7z
Download farseerT75.binpack http://trainingdata.farseer.org/T75-June3rd-End.7z
Download farseerT76.binpack http://trainingdata.farseer.org/T76-Nov10th-End.7z
Run python3 interleave_binpacks.py T60T70wIsRightFarseer.binpack farseerT74.binpack farseerT75.binpack farseerT76.binpack T60T70wIsRightFarseerT60T74T75T76.binpack

Tests:

STC: https://tests.stockfishchess.org/tests/view/6203fb85d71106ed12a407b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16952 W: 4775 L: 4521 D: 7656
Ptnml(0-2): 133, 1818, 4318, 2076, 131

LTC: https://tests.stockfishchess.org/tests/view/62041e68d71106ed12a40e85
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 14944 W: 4138 L: 3907 D: 6899
Ptnml(0-2): 21, 1499, 4202, 1728, 22

closes https://github.com/official-stockfish/Stockfish/pull/3927

Bench: 4919707
2022-02-10 19:54:31 +01:00
Brad Knox
ad926d34c0 Update copyright years
Happy New Year!

closes https://github.com/official-stockfish/Stockfish/pull/3881

No functional change
2022-01-06 15:45:45 +01:00
Stéphane Nicolet
74776dbcd5 Simplification in evaluate_nnue.cpp
Removes the test on non-pawn-material before applying the positional/materialistic bonus.

Passed STC:
LLR: 2.94 (-2.94,2.94) <-2.25,0.25>
Total: 46904 W: 12197 L: 12059 D: 22648
Ptnml(0-2): 170, 5243, 12479, 5399, 161
https://tests.stockfishchess.org/tests/view/61be57cf57a0d0f327c3999d

Passed LTC:
LLR: 2.95 (-2.94,2.94) <-2.25,0.25>
Total: 18760 W: 4958 L: 4790 D: 9012
Ptnml(0-2): 14, 1942, 5301, 2108, 15
https://tests.stockfishchess.org/tests/view/61bed1fb57a0d0f327c3afa9

closes https://github.com/official-stockfish/Stockfish/pull/3866

Bench: 4826206
2021-12-19 15:44:01 +01:00
hengyu
64f21ecdae Small clean-up
remove unneeded calculation.

closes https://github.com/official-stockfish/Stockfish/pull/3807

No functional change.
2021-12-01 17:59:20 +01:00
Stefano Cardanobile
2214fcecf7 Rewrite NNUE evaluation adjustments
Make the eval code in evaluate_nnue.cpp more similar to the rest of the codebase:

* remove multiple variable assignment
* make if conditions explicit and indent on multiple lines

passed STC
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 59032 W: 14834 L: 14751 D: 29447
Ptnml(0-2): 176, 6310, 16459, 6397, 174
https://tests.stockfishchess.org/tests/view/616f250540f619782fd4f76d

closes https://github.com/official-stockfish/Stockfish/pull/3753

No functional change
2021-10-23 12:22:02 +02:00
xoto10
f21a66f70d Small clean-up, Sept 2021
Closes https://github.com/official-stockfish/Stockfish/pull/3485

No functional change
2021-10-07 09:41:57 +02:00
Michael Chaly
e8788d1b32 Combo of various parameter tweaks
Combination of parameter tweaks in search, evaluation and time management.
Original patches by snicolet xoto10 lonfom169 and Vizvezdenec.

Includes:

* Use bigger grain of positional evaluation more frequently (up to 1 exchange difference in non-pawn-material);
* More extra time according to increment;
* Increase margin for singular extensions;
* Do more aggressive parent node futility pruning.

Passed STC
https://tests.stockfishchess.org/tests/view/6147deab3733d0e0dd9f313d
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 45488 W: 11691 L: 11450 D: 22347
Ptnml(0-2): 145, 5208, 11824, 5395, 172

Passed LTC
https://tests.stockfishchess.org/tests/view/6147f1d53733d0e0dd9f3141
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 62520 W: 15808 L: 15482 D: 31230
Ptnml(0-2): 43, 6439, 17960, 6785, 33

closes https://github.com/official-stockfish/Stockfish/pull/3710

bench 5575265
2021-09-21 19:48:40 +02:00
Stéphane Nicolet
b51b094419 Simplify format_cp_aligned_dot()
closes https://github.com/official-stockfish/Stockfish/pull/3583

No functional change
2021-07-03 09:25:16 +02:00
Joost VandeVondele
2e2865d34b Fix build error on OSX
directly use integer version for cp calculation.

fixes https://github.com/official-stockfish/Stockfish/issues/3573

closes https://github.com/official-stockfish/Stockfish/pull/3574

No functional change
2021-06-21 23:14:58 +02:00
Tomasz Sobczyk
2e745956c0 Change trace with NNUE eval support
This patch adds some more output to the `eval` command. It adds a board display
with estimated piece values (method is remove-piece, evaluate, put-piece), and
splits the NNUE evaluation with (psqt,layers) for each bucket for the NNUE net.

Example:

```

./stockfish
position fen 3Qb1k1/1r2ppb1/pN1n2q1/Pp1Pp1Pr/4P2p/4BP2/4B1R1/1R5K b - - 11 40
eval

 Contributing terms for the classical eval:
+------------+-------------+-------------+-------------+
|    Term    |    White    |    Black    |    Total    |
|            |   MG    EG  |   MG    EG  |   MG    EG  |
+------------+-------------+-------------+-------------+
|   Material |  ----  ---- |  ----  ---- | -0.73 -1.55 |
|  Imbalance |  ----  ---- |  ----  ---- | -0.21 -0.17 |
|      Pawns |  0.35 -0.00 |  0.19 -0.26 |  0.16  0.25 |
|    Knights |  0.04 -0.08 |  0.12 -0.01 | -0.08 -0.07 |
|    Bishops | -0.34 -0.87 | -0.17 -0.61 | -0.17 -0.26 |
|      Rooks |  0.12  0.00 |  0.08  0.00 |  0.04  0.00 |
|     Queens |  0.00  0.00 | -0.27 -0.07 |  0.27  0.07 |
|   Mobility |  0.84  1.76 |  0.01  0.66 |  0.83  1.10 |
|King safety | -0.99 -0.17 | -0.72 -0.10 | -0.27 -0.07 |
|    Threats |  0.27  0.27 |  0.73  0.86 | -0.46 -0.59 |
|     Passed |  0.00  0.00 |  0.79  0.82 | -0.79 -0.82 |
|      Space |  0.61  0.00 |  0.24  0.00 |  0.37  0.00 |
|   Winnable |  ----  ---- |  ----  ---- |  0.00 -0.03 |
+------------+-------------+-------------+-------------+
|      Total |  ----  ---- |  ----  ---- | -1.03 -2.14 |
+------------+-------------+-------------+-------------+

 NNUE derived piece values:
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |   Q   |   b   |       |   k   |       |
|       |       |       | +12.4 | -1.62 |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   r   |       |       |   p   |   p   |   b   |       |
|       | -3.89 |       |       | -0.84 | -1.19 | -3.32 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   p   |   N   |       |   n   |       |       |   q   |       |
| -1.81 | +3.71 |       | -4.82 |       |       | -5.04 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   P   |   p   |       |   P   |   p   |       |   P   |   r   |
| +1.16 | -0.91 |       | +0.55 | +0.12 |       | +0.50 | -4.02 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   P   |       |       |   p   |
|       |       |       |       | +2.33 |       |       | +1.17 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |   P   |       |       |
|       |       |       |       | +4.79 | +1.54 |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |       |   R   |       |
|       |       |       |       | +4.54 |       | +6.03 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   R   |       |       |       |       |       |   K   |
|       | +4.81 |       |       |       |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+

 NNUE network contributions (Black to move)
+------------+------------+------------+------------+
|   Bucket   |  Material  | Positional |   Total    |
|            |   (PSQT)   |  (Layers)  |            |
+------------+------------+------------+------------+
|  0         |  +  0.32   |  -  1.46   |  -  1.13   |
|  1         |  +  0.25   |  -  0.68   |  -  0.43   |
|  2         |  +  0.46   |  -  1.72   |  -  1.25   |
|  3         |  +  0.55   |  -  1.80   |  -  1.25   |
|  4         |  +  0.48   |  -  1.77   |  -  1.29   |
|  5         |  +  0.40   |  -  2.00   |  -  1.60   |
|  6         |  +  0.57   |  -  2.12   |  -  1.54   | <-- this bucket is used
|  7         |  +  3.38   |  -  2.00   |  +  1.37   |
+------------+------------+------------+------------+

Classical evaluation   -1.00 (white side)
NNUE evaluation        +1.54 (white side)
Final evaluation       +2.38 (white side) [with scaled NNUE, hybrid, ...]

```

Also renames the export_net() function to save_eval() while there.

closes https://github.com/official-stockfish/Stockfish/pull/3562

No functional change
2021-06-19 11:57:01 +02:00
Stéphane Nicolet
f193778446 Do not use lazy evaluation inside NNUE
This simplification patch implements two changes:

1. it simplifies away the so-called "lazy" path in the NNUE evaluation internals,
   where we trusted the psqt head alone to avoid the costly "positional" head in
   some cases;
2. it raises the NNUEThreshold1 in evaluate.cpp a little (from 682 to 800),
   which increases the limit where we switch from NNUE eval to Classical eval.

Both effects increase the number of positional evaluations done by our new net
architecture, but the results of our tests below seem to indicate that the loss
of speed will be compensated for by the gain in eval quality.

STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 26280 W: 2244 L: 2137 D: 21899
Ptnml(0-2): 72, 1755, 9405, 1810, 98
https://tests.stockfishchess.org/tests/view/60ae73f112066fd299795a51

LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 20592 W: 750 L: 677 D: 19165
Ptnml(0-2): 9, 614, 8980, 681, 12
https://tests.stockfishchess.org/tests/view/60ae88e812066fd299795a82

closes https://github.com/official-stockfish/Stockfish/pull/3503

Bench: 3817907
2021-05-27 01:21:56 +02:00
Tomasz Sobczyk
9d53129075 Expose the lazy threshold for the feature transformer PSQT as a parameter.
Definition of the lazy threshold moved to evaluate.cpp where all others are.
Lazy threshold only used for real searches, not used for the "eval" call.
This preserves the purity of NNUE evaluation, which is useful to verify
consistency between the engine and the NNUE trainer.

closes https://github.com/official-stockfish/Stockfish/pull/3499

No functional change
2021-05-25 21:40:51 +02:00
Stéphane Nicolet
a2f01c07eb Sometimes change the (materialist, positional) balance
Our new nets output two values for the side to move in the last layer.
We can interpret the first value as a material evaluation of the
position, and the second one as the dynamic, positional value of the
location of pieces.

This patch changes the balance for the (materialist, positional) parts
of the score from (128, 128) to (121, 135) when the piece material is
equal between the two players, but keeps the standard (128, 128) balance
when one player is at least an exchange up.
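
A hedged sketch using the numbers quoted above (the weighting/scaling convention is an assumption; the real patch also handles the exchange-up test and rounding differently):

```
using Value = int;

Value blend(Value materialist, Value positional, bool pieceMaterialEqual) {
    int wM = pieceMaterialEqual ? 121 : 128;
    int wP = pieceMaterialEqual ? 135 : 128;
    return (materialist * wM + positional * wP) / 128;
}
```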

Passed STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 15936 W: 1421 L: 1266 D: 13249
Ptnml(0-2): 37, 1037, 5694, 1134, 66
https://tests.stockfishchess.org/tests/view/60a82df9ce8ea25a3ef0408f

Passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13904 W: 516 L: 410 D: 12978
Ptnml(0-2): 4, 374, 6088, 484, 2
https://tests.stockfishchess.org/tests/view/60a8bbf9ce8ea25a3ef04101

closes https://github.com/official-stockfish/Stockfish/pull/3492

Bench: 3856635
2021-05-22 21:09:22 +02:00
Tomasz Sobczyk
e8d64af123 New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters,
as obtained by a new pytorch trainer.

The network is already very strong at short TC, without regression at longer TC,
and has potential for further improvements.

https://tests.stockfishchess.org/tests/view/60a159c65085663412d0921d
TC: 10s+0.1s, 1 thread
ELO: 21.74 +-3.4 (95%) LOS: 100.0%
Total: 10000 W: 1559 L: 934 D: 7507
Ptnml(0-2): 38, 701, 2972, 1176, 113

https://tests.stockfishchess.org/tests/view/60a187005085663412d0925b
TC: 60s+0.6s, 1 thread
ELO: 5.85 +-1.7 (95%) LOS: 100.0%
Total: 20000 W: 1381 L: 1044 D: 17575
Ptnml(0-2): 27, 885, 7864, 1172, 52

https://tests.stockfishchess.org/tests/view/60a2beede229097940a03806
TC: 20s+0.2s, 8 threads
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 34272 W: 1610 L: 1452 D: 31210
Ptnml(0-2): 30, 1285, 14350, 1439, 32

https://tests.stockfishchess.org/tests/view/60a2d687e229097940a03c72
TC: 60s+0.6s, 8 threads
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 45544 W: 1262 L: 1214 D: 43068
Ptnml(0-2): 12, 1129, 20442, 1177, 12

The network has been trained (by vondele) using the https://github.com/glinscott/nnue-pytorch/ trainer (started by glinscott),
specifically the branch https://github.com/Sopel97/nnue-pytorch/tree/experiment_56.
The data used consist of 64 billion positions (193GB total) generated and scored with the current master net
d8: https://drive.google.com/file/d/1hOOYSDKgOOp38ZmD0N4DV82TOLHzjUiF/view?usp=sharing
d9: https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
d10: https://drive.google.com/file/d/1ZC5upzBYMmMj1gMYCkt6rCxQG0GnO3Kk/view?usp=sharing
fishtest_d9: https://drive.google.com/file/d/1GQHt0oNgKaHazwJFTRbXhlCN3FbUedFq/view?usp=sharing

This network also contains a few architectural changes with respect to the current master:

    Size changed from 256x2-32-32-1 to 512x2-16-32-1
        ~15-20% slower
        ~2x larger
        adds a special path for 16 valued ClippedReLU
        fixes affine transform code for 16 inputs/outputs, by using InputDimensions instead of PaddedInputDimensions
            this is safe now because the inputs are processed in groups of 4 in the current affine transform code
    The feature set changed from HalfKP to HalfKAv2
        Includes information about the kings like HalfKA
        Packs king features better, resulting in 8% size reduction compared to HalfKA
    The board is flipped for black's perspective, instead of rotated as in the current master
    PSQT values for each feature
        the feature transformer now outputs a part that is forwarded directly to the output and allows learning piece values more directly than the previous network architecture. The effect is visible for high-imbalance positions, where the current master network outputs evaluations skewed towards zero.
        8 PSQT values per feature, chosen based on (popcount(pos.pieces()) - 1) / 4
        initialized to classical material values on the start of the training
    8 subnetworks (512x2->16->32->1), chosen based on (popcount(pos.pieces()) - 1) / 4 (a short sketch of this index follows the list)
        only one subnetwork is evaluated for any position, no or marginal speed loss
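
A hedged sketch of that bucket index (std::popcount stands in for popcount(pos.pieces()); with 2 to 32 pieces on the board the index falls in 0..7):

```
#include <bit>
#include <cstdint>

int bucket(std::uint64_t occupied) {
    return (std::popcount(occupied) - 1) / 4;  // selects both the PSQT value and the subnetwork
}
```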

A diagram of the network is available: https://user-images.githubusercontent.com/8037982/118656988-553a1700-b7eb-11eb-82ef-56a11cbebbf2.png
A more complete description: https://github.com/glinscott/nnue-pytorch/blob/master/docs/nnue.md

closes https://github.com/official-stockfish/Stockfish/pull/3474

Bench: 3806488
2021-05-18 18:06:23 +02:00
Tomasz Sobczyk
58054fd0fa Exporting the currently loaded network file
This PR adds the ability to export any currently loaded network.
The export_net command now takes an optional filename parameter.
If the loaded net is not the embedded net the filename parameter is required.

Two changes were required to support this:

* the "architecture" string, which is really just a some kind of description in the net, is now saved into netDescription on load and correctly saved on export.
* the AffineTransform scrambles weights for some architectures and sparsifies them, such that retrieving the index is hard. This is solved by having a temporary scrambled<->unscrambled index lookup table when loading the network, and the actual index is saved for each individual weight that makes it to canSaturate16. This increases the size of the canSaturate16 entries by 6 bytes.

closes https://github.com/official-stockfish/Stockfish/pull/3456

No functional change
2021-05-11 19:36:11 +02:00
Tomasz Sobczyk
fbbd4adc3c Unify naming convention of the NNUE code
matches the rest of the stockfish code base

closes https://github.com/official-stockfish/Stockfish/pull/3437

No functional change
2021-04-24 12:49:29 +02:00
Dieter Dobbelaere
7ffae17f85 Add Stockfish namespace.
fixes #3350 and is a small cleanup that might make it easier to use SF
in separate projects, like a NNUE trainer or similar.

closes https://github.com/official-stockfish/Stockfish/pull/3370

No functional change.
2021-03-07 14:26:54 +01:00
Joost VandeVondele
c4d67d77c9 Update copyright years
No functional change
2021-01-08 17:04:23 +01:00
mstembera
9b7983a452 Cleaned up MakeIndex()
The index order in kpp_board_index[][] is reversed to be more optimal for the access pattern

STC https://tests.stockfishchess.org/tests/view/5fbd74f967cbf42301d6b24f
LLR: 2.93 (-2.94,2.94) {-1.25,0.25}
Total: 27504 W: 2686 L: 2607 D: 22211
Ptnml(0-2): 84, 2001, 9526, 2034, 107

closes https://github.com/official-stockfish/Stockfish/pull/3233

No functional change
2020-11-29 16:36:49 +01:00
Tomasz Sobczyk
3f6451eff7 Manually align arrays on the stack
as a workaround for issues with overaligned alignas() on stack variables in gcc < 9.3 on Windows.

closes https://github.com/official-stockfish/Stockfish/pull/3217

fixes #3216

No functional change
2020-11-04 19:52:42 +01:00
Stéphane Nicolet
9a64e737cf Small cleanups 12
- Clean signature of functions in namespace NNUE
- Add comment for countermove based pruning
- Remove bestMoveCount variable
- Add const qualifier to kpp_board_index array
- Fix spaces in get_best_thread()
- Fix indentation in capture LMR code in search.cpp
- Rename TtmemDeleter to LargePageDeleter

Closes https://github.com/official-stockfish/Stockfish/pull/3063

No functional change
2020-09-21 10:41:10 +02:00
Sami Kiminki
485d517c68 Add large page support for NNUE weights and simplify TT mem management
Use TT memory functions to allocate memory for the NNUE weights. This
should provide a small speed-up on systems where large pages are not
automatically used, including Windows and some Linux distributions.

Further, since we now have a wrapper for std::aligned_alloc(), we can
simplify the TT memory management a bit:

- We no longer need to store separate pointers to the hash table and
  its underlying memory allocation.
- We also get to merge the Linux-specific and default implementations
  of aligned_ttmem_alloc().

Finally, we'll enable the VirtualAlloc code path with large page
support also for Win32.

STC: https://tests.stockfishchess.org/tests/view/5f66595823a84a47b9036fba
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 14896 W: 1854 L: 1686 D: 11356
Ptnml(0-2): 65, 1224, 4742, 1312, 105

closes https://github.com/official-stockfish/Stockfish/pull/3081

No functional change.
2020-09-21 08:43:48 +02:00
syzygy1
fc27d158c0 Bug fix in do_null_move() and NNUE simplification.
This fixes #3108 and removes some NNUE code that is currently not used.

At the moment, do_null_move() copies the accumulator from the previous
state into the new state, which is correct. It then clears the "computed_score"
flag because the side to move has changed, and with the other side to move
NNUE will return a completely different evaluation (normally with changed
sign but also with different NNUE-internal tempo bonus).

The problem is that do_null_move() clears the wrong flag. It clears the
computed_score flag of the old state, not of the new state. It turns out
that this almost never affects the search. For example, fixing it does not
change the current bench (but it does change the previous bench). This is
because the search code usually avoids calling evaluate() after a null move.

This PR corrects do_null_move() by removing the computed_score flag altogether.
The flag is not needed because nnue_evaluate() is never called twice on a position.

This PR also removes some unnecessary {}s and inserts a few blank lines
in the modified NNUE files in line with SF coding style.

Result of STC non-regression test:
LLR: 2.95 (-2.94,2.94) {-1.25,0.25}
Total: 26328 W: 3118 L: 3012 D: 20198
Ptnml(0-2): 126, 2208, 8397, 2300, 133
https://tests.stockfishchess.org/tests/view/5f553ccc2d02727c56b36db1

closes https://github.com/official-stockfish/Stockfish/pull/3109

bench: 4109324
2020-09-08 22:53:17 +02:00
Stéphane Nicolet
406979ea12 Embed default net, and simplify using non-default nets
covers the most important cases from the user perspective:

It embeds the default net in the binary, so a download of that binary will result
in a working engine with the default net. The engine will be functional in the default mode
without any additional user action.

It allows non-default nets to be used, which will be looked for in up to
three directories (working directory, location of the binary, and optionally a specific default directory).
This mechanism is also kept for those developers that use MSVC,
the one compiler that doesn't have an easy mechanism for embedding data.

It is possible to disable embedding, and instead specify a specific directory, e.g. linux distros might want to use
CXXFLAGS="-DNNUE_EMBEDDING_OFF -DDEFAULT_NNUE_DIRECTORY=/usr/share/games/stockfish/" make -j ARCH=x86-64 profile-build

passed STC non-regression:
https://tests.stockfishchess.org/tests/view/5f4a581c150f0aef5f8ae03a
LLR: 2.95 (-2.94,2.94) {-1.25,-0.25}
Total: 66928 W: 7202 L: 7147 D: 52579
Ptnml(0-2): 291, 5309, 22211, 5360, 293

closes https://github.com/official-stockfish/Stockfish/pull/3070

fixes https://github.com/official-stockfish/Stockfish/issues/3030

No functional change.
2020-08-29 21:56:00 +02:00
syzygy1
9b4967071e Remove EvalList
This patch removes the EvalList structure from the Position object and generally simplifies the interface between do_move() and the NNUE code.

The NNUE evaluation function first calculates the "accumulator". The accumulator consists of two halves: one for white's perspective, one for black's perspective.

If the "friendly king" has moved or the accumulator for the parent position is not available, the accumulator for this half has to be calculated from scratch. To do this, the NNUE node needs to know the positions and types of all non-king pieces and the position of the friendly king. This information can easily be obtained from the Position object.

If the "friendly king" has not moved, its half of the accumulator can be calculated by incrementally updating the accumulator for the previous position. For this, the NNUE code needs to know which pieces have been added to which squares and which pieces have been removed from which squares. In principle this information can be derived from the Position object and StateInfo struct (in the same way as undo_move() does this). However, it is probably a bit faster to prepare this information in do_move(), so I have kept the DirtyPiece struct. Since the DirtyPiece struct now stores the squares rather than "PieceSquare" indices, there are now at most three "dirty pieces" (previously two). A promotion move that captures a piece removes the capturing pawn and the captured piece from the board (to SQ_NONE) and moves the promoted piece to the promotion square (from SQ_NONE).

An STC test has confirmed a small speedup:

https://tests.stockfishchess.org/tests/view/5f43f06b5089a564a10d850a
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 87704 W: 9763 L: 9500 D: 68441
Ptnml(0-2): 426, 6950, 28845, 7197, 434

closes https://github.com/official-stockfish/Stockfish/pull/3068

No functional change
2020-08-26 07:11:26 +02:00
Stéphane Nicolet
81d716f5cc Reformat code in little-endian patch
Reformat code and rename the function to "read_little_endian()" in the recent
commit by Ronald de Man for support of big endian systems.

closes https://github.com/official-stockfish/Stockfish/pull/3016

No functional change
-----

Recommended net: https://tests.stockfishchess.org/api/nn/nn-82215d0fd0df.nnue
2020-08-17 12:15:57 +02:00
syzygy1
72dc7a5c54 Assume network file is in little-endian byte order
This patch fixes the byte order when reading 16- and 32-bit values from the network file on a big-endian machine.

Bytes are ordered in read_le() using unsigned arithmetic, which doesn't need tricks to determine the endianness of the machine. Unfortunately the compiler doesn't seem to be able to optimise the ordering operation, but reading in the weights is not a time-critical operation and the extra time it takes should not be noticeable.
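
A hedged sketch of such an ordering with unsigned arithmetic (the real read_little_endian() in the repository is templated and handles more widths):

```
#include <cstdint>
#include <istream>

std::uint32_t read_le_u32(std::istream& in) {
    unsigned char b[4];
    in.read(reinterpret_cast<char*>(b), 4);
    return std::uint32_t(b[0])         | (std::uint32_t(b[1]) << 8)
         | (std::uint32_t(b[2]) << 16) | (std::uint32_t(b[3]) << 24);
}
```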

Big endian systems are still untested with NNUE.

fixes #3007

closes https://github.com/official-stockfish/Stockfish/pull/3009

No functional change.
2020-08-16 21:10:26 +02:00
mstembera
6eb186c97e Try to match relative magnitude of NNUE eval to classical
The idea is that, since we are mixing NNUE and classical evals, matching their magnitudes more closely allows for better comparisons.

STC https://tests.stockfishchess.org/tests/view/5f35a65411a9b1a1dbf18e2b
LLR: 2.94 (-2.94,2.94) {-0.50,1.50}
Total: 9840 W: 1150 L: 1027 D: 7663
Ptnml(0-2): 49, 772, 3175, 855, 69

LTC https://tests.stockfishchess.org/tests/view/5f35bcbe11a9b1a1dbf18e47
LLR: 2.93 (-2.94,2.94) {0.25,1.75}
Total: 44424 W: 2492 L: 2294 D: 39638
Ptnml(0-2): 42, 2015, 17915, 2183, 57

Also corrects the location where the evaluation is clamped (non-functional on bench).

closes https://github.com/official-stockfish/Stockfish/pull/3003

bench: 3905447
2020-08-14 16:39:52 +02:00
nodchip
84f3e86790 Add NNUE evaluation
This patch ports the efficiently updatable neural network (NNUE) evaluation to Stockfish.

Both the NNUE and the classical evaluations are available, and can be used to
assign a value to a position that is later used in alpha-beta (PVS) search to find the
best move. The classical evaluation computes this value as a function of various chess
concepts, handcrafted by experts, tested and tuned using fishtest. The NNUE evaluation
computes this value with a neural network based on basic inputs. The network is optimized
and trained on the evaluations of millions of positions at moderate search depth.

The NNUE evaluation was first introduced in shogi, and ported to Stockfish afterward.
It can be evaluated efficiently on CPUs, and exploits the fact that only parts
of the neural network need to be updated after a typical chess move.
[The nodchip repository](https://github.com/nodchip/Stockfish) provides additional
tools to train and develop the NNUE networks.

This patch is the result of contributions of various authors, from various communities,
including: nodchip, ynasu87, yaneurao (initial port and NNUE authors), domschl, FireFather,
rqs, xXH4CKST3RXx, tttak, zz4032, joergoster, mstembera, nguyenpham, erbsenzaehler,
dorzechowski, and vondele.

This new evaluation needed various changes to fishtest and the corresponding infrastructure,
for which tomtor, ppigazzini, noobpwnftw, daylen, and vondele are gratefully acknowledged.

The first networks have been provided by gekkehenker and sergiovieri, with the latter
net (nn-97f742aaefcd.nnue) being the current default.

The evaluation function can be selected at run time with the `Use NNUE` (true/false) UCI option,
provided the `EvalFile` option points to the network file (depending on the GUI, with full path).

The performance of the NNUE evaluation relative to the classical evaluation depends somewhat on
the hardware, and is expected to improve quickly, but is currently more than 80 Elo on fishtest:

60000 @ 10+0.1 th 1
https://tests.stockfishchess.org/tests/view/5f28fe6ea5abc164f05e4c4c
ELO: 92.77 +-2.1 (95%) LOS: 100.0%
Total: 60000 W: 24193 L: 8543 D: 27264
Ptnml(0-2): 609, 3850, 9708, 10948, 4885

40000 @ 20+0.2 th 8
https://tests.stockfishchess.org/tests/view/5f290229a5abc164f05e4c58
ELO: 89.47 +-2.0 (95%) LOS: 100.0%
Total: 40000 W: 12756 L: 2677 D: 24567
Ptnml(0-2): 74, 1583, 8550, 7776, 2017

At the same time, the impact on the classical evaluation remains minimal, causing no significant
regression:

sprt @ 10+0.1 th 1
https://tests.stockfishchess.org/tests/view/5f2906a2a5abc164f05e4c5b
LLR: 2.94 (-2.94,2.94) {-6.00,-4.00}
Total: 34936 W: 6502 L: 6825 D: 21609
Ptnml(0-2): 571, 4082, 8434, 3861, 520

sprt @ 60+0.6 th 1
https://tests.stockfishchess.org/tests/view/5f2906cfa5abc164f05e4c5d
LLR: 2.93 (-2.94,2.94) {-6.00,-4.00}
Total: 10088 W: 1232 L: 1265 D: 7591
Ptnml(0-2): 49, 914, 3170, 843, 68

The needed networks can be found at https://tests.stockfishchess.org/nns
It is recommended to use the default one as indicated by the `EvalFile` UCI option.

Guidelines for testing new nets can be found at
https://github.com/glinscott/fishtest/wiki/Creating-my-first-test#nnue-net-tests

Integration has been discussed in various issues:
https://github.com/official-stockfish/Stockfish/issues/2823
https://github.com/official-stockfish/Stockfish/issues/2728

The integration branch will be closed after the merge:
https://github.com/official-stockfish/Stockfish/pull/2825
https://github.com/official-stockfish/Stockfish/tree/nnue-player-wip

closes https://github.com/official-stockfish/Stockfish/pull/2912

This will be an exciting time for computer chess; we look forward to seeing the evolution of
this approach.

Bench: 4746616
2020-08-06 16:37:45 +02:00