Current master implements a scaling of the raw NNUE output value with a formula
equivalent to 'eval = alpha * NNUE_output', where the scale factor alpha varies
between 1.8 (for early middle game) and 0.9 (for pure endgames). This feature
allows Stockfish to keep material on the board when she thinks she has the advantage,
and to seek exchanges and simplifications when she thinks she has to defend.
This patch slightly offsets the turning point between these two strategies by adding
a small "optimism" value to Stockfish's evaluation before actually doing the scaling.
The effect is that SF will play a little more riskily, trying to keep the tension a
little longer when she is defending, and keeping even more material on the board
when she has an advantage.
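A minimal sketch of the mechanism (names and constants here are illustrative, not the actual code in evaluate.cpp):

#include <algorithm>

// Illustrative only: the optimism term is added to the raw NNUE output *before*
// the material-dependent scaling, so it is amplified (alpha up to ~1.8) when a
// lot of material is left and damped (alpha down to ~0.9) in pure endgames.
// The 8000 material cap and the linear interpolation are placeholders.
int scaled_eval(int nnueOutput, int optimism, int nonPawnMaterial)
{
    double alpha = 0.9 + 0.9 * std::min(nonPawnMaterial, 8000) / 8000.0;
    return int(alpha * (nnueOutput + optimism));
}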
We note that this patch is similar in spirit to the old "Contempt" idea we used to have
in classical Stockfish, but this implementation differs in two key points:
a) it has been tested as an Elo-gainer against master;
b) the values output by the search are not changed on average by the implementation
(in other words, the optimism value changes the tension/exchange strategy, but a
displayed value of 1.0 pawn has the same meaning before and after the patch).
See the old comment https://github.com/official-stockfish/Stockfish/pull/1361#issuecomment-359165141
for some images illustrating the ideas.
-------
finished yellow at STC:
LLR: -2.94 (-2.94,2.94) <0.00,2.50>
Total: 165048 W: 41705 L: 41611 D: 81732
Ptnml(0-2): 565, 18959, 43245, 19327, 428
https://tests.stockfishchess.org/tests/view/61942a3dcd645dc8291c876b
passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 121656 W: 30762 L: 30287 D: 60607
Ptnml(0-2): 87, 12558, 35032, 13095, 56
https://tests.stockfishchess.org/tests/view/61962c58cd645dc8291c8877
-------
How to continue from there?
a) the shape (slope and amplitude) of the sigmoid used to compute the optimism value
could be tweaked to try to gain more Elo, so the parameters of the sigmoid function
in line 391 of search.cpp could be tuned with SPSA. Manual tweaking is also possible
using this Desmos page: https://www.desmos.com/calculator/jhh83sqq92
b) in a similar vein, with two recent patches affecting the scaling of the NNUE
evaluation in evaluate.cpp, now could be a good time to try a round of SPSA tuning
of the NNUE network;
c) this patch will tend to keep tension in the middlegame a little longer, so any
patch improving the defensive aspect of play via search extensions in risky,
tactical positions would be welcome.
-------
closes https://github.com/official-stockfish/Stockfish/pull/3797
Bench: 6184852
In case the evaluation at root is large, discourage the use of lazyEval.
This fixes https://github.com/official-stockfish/Stockfish/issues/3772
or at least improves it significantly. In this case, poor play in games with
large odds can be observed, in extreme cases leading to a loss despite a large
advantage:
r1bq1b1r/ppp3p1/3p1nkp/n3p3/2B1P2N/2NPB3/PPP2PPP/R3K2R b KQ - 5 9
With this patch the poor move is only considered up to depth 13, in master
up to depth 28.
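A hedged sketch of the kind of gate involved (identifiers are placeholders, not the actual names in the engine):

#include <cstdlib>

// Illustrative only: lazyEval is allowed when the position looks quiet, but this
// patch additionally requires the evaluation at the root not to be large, so that
// in clearly winning (odds-like) positions the full evaluation is always used.
bool allow_lazy_eval(int nodeMaterialBalance, int rootEval, int lazyThreshold)
{
    return std::abs(nodeMaterialBalance) < lazyThreshold
        && std::abs(rootEval)            < lazyThreshold;   // new condition, in spirit
}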
The patch did not pass LTC with Elo-gainer bounds, but nevertheless showed
slightly positive Elo (95% LOS).
STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 40368 W: 10318 L: 10041 D: 20009
Ptnml(0-2): 103, 4493, 10725, 4750, 113
https://tests.stockfishchess.org/tests/view/61800ad259e71df00dcc420d
LTC:
LLR: -2.94 (-2.94,2.94) <0.50,3.00>
Total: 212288 W: 52997 L: 52692 D: 106599
Ptnml(0-2): 112, 22038, 61549, 22323, 122
https://tests.stockfishchess.org/tests/view/618050d959e71df00dcc426d
closes https://github.com/official-stockfish/Stockfish/pull/3780
Bench: 7127040
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGod), built on top of previous developments by restarting from good nets.
Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:
The initial net nn-d8609abe8caf.nnue was trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then training, continuing from the master net, with lambda=0.2 and a sampling ratio of 1. Training started with LR=2e-3, dropping the LR by a factor of 0.5 until it reached LR=5e-4; in_scaling was set to 361. No other significant changes were made to the pytorch trainer.
Training data gen command (generates in chunks of 200k positions):
generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack
PyTorch trainer command (note that this only trains for 20 epochs; repeat training until convergence):
python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val
See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.
Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:
The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch
python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt
This run was thus started from Sergio Vieri's net nn-d8609abe8caf.nnue.
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack, so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.
model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5 calculated as follows: 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro-optimization, I had set the period to "5" in train.py. This changes the checkpoint output so that every 5th checkpoint file is created.
The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.
passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17
passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100
closes https://github.com/official-stockfish/Stockfish/pull/3626
Bench: 5169957
In the so-called "hybrid" method of evaluation of current master, we use the
classical eval (because of its speed) instead of the NNUE eval when the classical
material balance approximation hints that the position is "winning enough" to
rely on the classical eval.
This trade-off idea between speed and accuracy works well in general, but in
some fortress positions the classical eval is just bad. So in shuffling branches
of the search tree, we (slowly) increase the threshold so that eventually we
don't trust classical anymore and switch to NNUE evaluation.
This patch increases that threshold faster, so that we switch to NNUE sooner
in shuffling branches. The idea is to encourage Stockfish to spend less time in
fortress lines of the search tree, and more time searching the critical lines.
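A rough sketch of the mechanism (constants are placeholders; only the idea of a threshold growing with the shuffling counter is taken from the patch):

#include <cstdlib>

// Illustrative only: the material threshold above which we still trust the
// classical eval grows with the 50-move (shuffling) counter, so after enough
// shuffling we fall back to NNUE. This patch makes that growth faster.
bool use_classical_eval(int psqMaterialBalance, int rule50Count)
{
    int threshold = 800 + 20 * rule50Count;   // grows faster than in master
    return std::abs(psqMaterialBalance) > threshold;
}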
passed STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 47872 W: 3908 L: 3720 D: 40244
Ptnml(0-2): 122, 3053, 17419, 3199, 143
https://tests.stockfishchess.org/tests/view/60cef34b457376eb8bcab79d
passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 73616 W: 2326 L: 2143 D: 69147
Ptnml(0-2): 21, 1940, 32705, 2119, 23
https://tests.stockfishchess.org/tests/view/60cf6d842114332881e73528
Retested at LTC against latest master:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 18264 W: 642 L: 532 D: 17090
Ptnml(0-2): 6, 479, 8055, 583, 9
https://tests.stockfishchess.org/tests/view/60d18cd540925195e7a6c351
closes https://github.com/official-stockfish/Stockfish/pull/3578
Bench: 5139233
This patch removes the UCI option for setting Contempt in classical evaluation.
It is exactly equivalent to using Contempt=0 for the UCI contempt value and keeping
the dynamic part in the algorithm (renaming this dynamic part `trend` to better
describe what it does). We have tried quite hard to implement a working Contempt
feature for NNUE, but nothing really worked, so it is probably time to give up.
Interested chess fans wishing to keep playing with the UCI option for Contempt and
use it with the classical eval are urged to download the version tagged "SF_Classical"
of Stockfish (dated 31 July 2020), as it was the last version where our search
algorithm was tuned for the classical eval and is probably our strongest classical
player ever: https://github.com/official-stockfish/Stockfish/tags
Passed STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 72904 W: 6228 L: 6175 D: 60501
Ptnml(0-2): 221, 5006, 25971, 5007, 247
https://tests.stockfishchess.org/tests/view/60c98bf9457376eb8bcab18d
Passed LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 45168 W: 1601 L: 1547 D: 42020
Ptnml(0-2): 38, 1331, 19786, 1397, 32
https://tests.stockfishchess.org/tests/view/60c9c7fa457376eb8bcab1bb
closes https://github.com/official-stockfish/Stockfish/pull/3575
Bench: 4947716
This patch increases the weight of pawns and pieces from 28 to 32
in the scaling formula we apply to the output of the NNUE pure eval.
Increasing this gradient for pawns and pieces means that Stockfish
will try a little harder to keep material when she has the advantage,
and try a little bit harder to escape into an endgame when she is
under pressure.
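A hedged sketch of the scaling formula (the base constant and divisors are placeholders; only the 28 -> 32 factor is the value changed by this patch):

// Illustrative only: the output of the pure NNUE eval is scaled up with the
// amount of pawns and pieces on the board, and this patch raises that factor.
int scaled_nnue(int nnueOutput, int pawnCount, int nonPawnMaterial)
{
    int scale = 900 + 32 * pawnCount + 32 * nonPawnMaterial / 1024;   // was 28
    return nnueOutput * scale / 1024;
}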
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 53168 W: 4371 L: 4177 D: 44620
Ptnml(0-2): 160, 3389, 19283, 3601, 151
https://tests.stockfishchess.org/tests/view/60cefd1d457376eb8bcab7ab
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 10888 W: 386 L: 288 D: 10214
Ptnml(0-2): 3, 260, 4821, 356, 4
https://tests.stockfishchess.org/tests/view/60cf709d2114332881e7352b
closes https://github.com/official-stockfish/Stockfish/pull/3571
Bench: 4965430
This simplification patch implements two changes:
1. it simplifies away the so-called "lazy" path in the NNUE evaluation internals,
where we trusted the psqt head alone to avoid the costly "positional" head in
some cases;
2. it slightly raises NNUEThreshold1 in evaluate.cpp (from 682 to 800), which
raises the limit at which we switch from NNUE eval to classical eval.
Both effects increase the number of positional evaluations done by our new net
architecture, but the results of our tests below seem to indicate that the loss
of speed is compensated by the gain in eval quality.
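A hedged sketch of the second change (only the threshold name and the 682 -> 800 values come from the patch; the surrounding logic is simplified):

#include <cstdlib>

// Illustrative only: the classical eval is used when the simple material balance
// exceeds the threshold; raising it means NNUE is trusted in more positions.
bool use_classical(int psqMaterialBalance)
{
    const int NNUEThreshold1 = 800;   // was 682 before this patch
    return std::abs(psqMaterialBalance) > NNUEThreshold1;
}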
STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 26280 W: 2244 L: 2137 D: 21899
Ptnml(0-2): 72, 1755, 9405, 1810, 98
https://tests.stockfishchess.org/tests/view/60ae73f112066fd299795a51
LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 20592 W: 750 L: 677 D: 19165
Ptnml(0-2): 9, 614, 8980, 681, 12
https://tests.stockfishchess.org/tests/view/60ae88e812066fd299795a82
closes https://github.com/official-stockfish/Stockfish/pull/3503
Bench: 3817907
The definition of the lazy threshold is moved to evaluate.cpp, where all the other
thresholds are. The lazy threshold is now only used for real searches, not for the
"eval" call.
This preserves the purity of NNUE evaluation, which is useful to verify
consistency between the engine and the NNUE trainer.
closes https://github.com/official-stockfish/Stockfish/pull/3499
No functional change
Our new nets output two values for the side to move in the last layer.
We can interpret the first value as a material evaluation of the
position, and the second one as the dynamic, positional value of the
location of pieces.
This patch changes the balance for the (materialist, positional) parts
of the score from (128, 128) to (121, 135) when the piece material is
equal between the two players, but keeps the standard (128, 128) balance
when one player is at least an exchange up.
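A hedged sketch of the blend (the divisor and the material-equality test are simplifications; the 121/135 vs 128/128 weights come from the patch):

// Illustrative only: combine the two output heads of the net, shifting weight
// toward the positional head when piece material is equal between the players.
int blended_eval(int materialHead, int positionalHead, bool pieceMaterialEqual)
{
    int wMat = pieceMaterialEqual ? 121 : 128;
    int wPos = pieceMaterialEqual ? 135 : 128;
    return (materialHead * wMat + positionalHead * wPos) / 128;
}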
Passed STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 15936 W: 1421 L: 1266 D: 13249
Ptnml(0-2): 37, 1037, 5694, 1134, 66
https://tests.stockfishchess.org/tests/view/60a82df9ce8ea25a3ef0408f
Passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13904 W: 516 L: 410 D: 12978
Ptnml(0-2): 4, 374, 6088, 484, 2
https://tests.stockfishchess.org/tests/view/60a8bbf9ce8ea25a3ef04101
closes https://github.com/official-stockfish/Stockfish/pull/3492
Bench: 3856635
The Tempo variable was introduced 10 years ago in our search because the
classical evaluation function was antisymmetrical in White and Black by design
to gain speed:
Eval(White to play) = -Eval(Black to play)
Nowadays our neural networks know which side is to move when they evaluate a
position, and they are trained on real games, so the network output already
encodes the advantage of having the move. This patch shows that the Tempo
variable is not necessary anymore.
STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 33512 W: 2805 L: 2709 D: 27998
Ptnml(0-2): 80, 2209, 12095, 2279, 93
https://tests.stockfishchess.org/tests/view/60a44ceace8ea25a3ef03d30
LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 53920 W: 1807 L: 1760 D: 50353
Ptnml(0-2): 16, 1617, 23650, 1658, 19
https://tests.stockfishchess.org/tests/view/60a477f0ce8ea25a3ef03d49
We also tried a match (20000 games) at STC using purely classical evaluation; the result was neutral:
https://tests.stockfishchess.org/tests/view/60a4eebcce8ea25a3ef03db5
Note: there are two locations left in search.cpp where we assume antisymmetry
of evaluation (in relation to a speed optimization for null moves in lines
770 and 1439), but as the values are just used for heuristic pruning, this
approximation should not hurt too much, because the order of magnitude is still
correct most of the time.
closes https://github.com/official-stockfish/Stockfish/pull/3481
Bench: 4015864
- Comment for Countermove pruning -> Continuation history
- Fix comment in input_slice.h
- Shorter lines in Makefile
- Comment for scale factor
- Fix comment for pinners in see_ge()
- Change Thread.id() signature to size_t
- Trailing space in reprosearch.sh
- Add Douglas Matos Gomes to the AUTHORS file
- Introduce comment for undo_null_move()
- Use Stockfish coding style for export_net()
- Change date in AUTHORS file
closes https://github.com/official-stockfish/Stockfish/pull/3416
No functional change
This PR adds the ability to export any currently loaded network.
The export_net command now takes an optional filename parameter.
If the loaded net is not the embedded net the filename parameter is required.
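A short usage sketch from the UCI prompt (the filename below is just an example):

export_net                   (works when the embedded net is loaded)
export_net nn-backup.nnue    (writes the currently loaded net to the given file)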
Two changes were required to support this:
* the "architecture" string, which is really just a some kind of description in the net, is now saved into netDescription on load and correctly saved on export.
* the AffineTransform scrambles weights for some architectures and sparsifies them, such that retrieving the index is hard. This is solved by having a temporary scrambled<->unscrambled index lookup table when loading the network, and the actual index is saved for each individual weight that makes it to canSaturate16. This increases the size of the canSaturate16 entries by 6 bytes.
closes https://github.com/official-stockfish/Stockfish/pull/3456
No functional change
Introduce a variable tempo for NNUE depending on the logarithm of the estimated
strength, where strength is the product of time and the number of threads.
The original idea here was that NNUE plays best with a slightly different tempo
value from classical, since its style of play is slightly different. It turns out
that the best tempo for NNUE varies with strength of play, so a formula is used
which gives about 19 for STC and 24 for LTC under current fishtest settings.
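A hedged sketch of the shape of such a formula (the constants below are merely fit to the quoted 19/24 values, not the ones in the patch):

#include <cmath>

// Illustrative only: tempo grows with the logarithm of time * threads.
// -6.7 and 2.79 are chosen so this returns ~19 at 10s (STC) and ~24 at 60s (LTC)
// with one thread; the real constants in master may differ.
int nnue_tempo(double timeMs, int threads)
{
    return int(std::round(-6.7 + 2.79 * std::log(timeMs * threads)));
}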
STC 10+0.1:
LLR: 2.94 (-2.94,2.94) {-0.20,1.10}
Total: 120816 W: 11155 L: 10861 D: 98800
Ptnml(0-2): 406, 8728, 41933, 8848, 493
https://tests.stockfishchess.org/tests/view/60735b3a8141753378960534
LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) {0.20,0.90}
Total: 35688 W: 1392 L: 1234 D: 33062
Ptnml(0-2): 23, 1079, 15473, 1255, 14
https://tests.stockfishchess.org/tests/view/6073ffbc814175337896057f
Passed non-regression SMP test at LTC 20+0.2 (8 threads):
LLR: 2.95 (-2.94,2.94) {-0.70,0.20}
Total: 11008 W: 317 L: 267 D: 10424
Ptnml(0-2): 2, 245, 4962, 291, 4
https://tests.stockfishchess.org/tests/view/60749ea881417533789605a4
closes https://github.com/official-stockfish/Stockfish/pull/3426
Bench 4075325
NNUE evaluation is incapable of recognizing trivially drawn bishop endgames
(bishop of the wrong color for its rook pawn), which are in fact ubiquitous and
stock standard in chess analysis. Switching off NNUE evaluation in KBPs vs KPs
endgames is a measure that stops Stockfish from trading down to a drawn version
of these endings when we presumably have the advantage. The patch is able to edge
over master in endgame positions.
The patch was tested for Elo gain with the "endgame.epd" book, and verified for
non-regression with our usual book (see the pull request for details).
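A hedged sketch of the kind of trigger (this is not the exact condition in evaluate.cpp):

// Illustrative only: fall back to the classical eval when one side has just a
// bishop besides king and pawns and the other side has only king and pawns,
// i.e. a KBPs vs KPs ending where NNUE tends to misjudge wrong-bishop draws.
bool is_kbp_vs_kp(int strongNonPawnMaterial, int weakNonPawnMaterial, int bishopValue)
{
    return strongNonPawnMaterial == bishopValue && weakNonPawnMaterial == 0;
}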
STC:
LLR: 2.93 (-2.94,2.94) {-0.20,1.10}
Total: 33232 W: 6655 L: 6497 D: 20080
Ptnml(0-2): 4, 2342, 11769, 2494, 7
https://tests.stockfishchess.org/tests/view/6074a52981417533789605b8
LTC:
LLR: 2.93 (-2.94,2.94) {0.20,0.90}
Total: 159056 W: 29799 L: 29378 D: 99879
Ptnml(0-2): 7, 9004, 61085, 9425, 7
https://tests.stockfishchess.org/tests/view/6074c39a81417533789605ca
Closes https://github.com/official-stockfish/Stockfish/pull/3427
Bench: 4503918
This patch changes the pop_lsb() signature from Square pop_lsb(Bitboard*) to
Square pop_lsb(Bitboard&). This is more idiomatic for C++.
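A small self-contained sketch of the new call shape (Square reduced to an int, and a gcc/clang builtin used for brevity):

#include <cstdint>

using Bitboard = uint64_t;

// New reference-based form: callers now write pop_lsb(b) instead of pop_lsb(&b).
// Precondition: b must be non-empty.
inline int pop_lsb(Bitboard& b)
{
    int s = __builtin_ctzll(b);   // index of the least significant set bit
    b &= b - 1;                   // clear that bit
    return s;
}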
Passed a non-regression STC test:
LLR: 2.93 (-2.94,2.94) {-1.25,0.25}
Total: 21280 W: 1928 L: 1847 D: 17505
Ptnml(0-2): 71, 1427, 7558, 1518, 66
https://tests.stockfishchess.org/tests/view/6053a1e22433018de7a38e2f
We have verified that the generated binary is identical on gcc-10.
Closes https://github.com/official-stockfish/Stockfish/pull/3404
No functional change.
Our net is currently not trained on FRC games, and so doesn't know about the important pattern of a bishop that is cornered in FRC.
This patch introduces a term we have in the classical evaluation for this case, and adds it to the NNUE eval.
Since fishtest doesn't support FRC right now, the patch was tested locally at STC conditions,
starting from the book of FRC starting positions.
Score of master vs patch: 993 - 2226 - 6781 [0.438] 10000
This corresponds to approximately 40 Elo in favor of the patch.
The patch passes non-regression testing for traditional chess (where it adds one branch).
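A hedged sketch of the added term (names, values, and the exact trigger are placeholders; only the idea of porting the classical cornered-bishop penalty into the NNUE eval comes from the patch):

// Illustrative only: penalize a bishop stuck in its own corner behind a friendly
// pawn (e.g. Ba1 with a pawn on b2), a pattern that arises from FRC start positions.
int frc_cornered_bishop_correction(bool bishopOnA1BehindOwnB2Pawn,
                                   bool bishopOnH1BehindOwnG2Pawn)
{
    const int CorneredBishopPenalty = 50;   // placeholder value
    return -CorneredBishopPenalty * (bishopOnA1BehindOwnB2Pawn + bishopOnH1BehindOwnG2Pawn);
}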
passed STC:
https://tests.stockfishchess.org/tests/view/604fa2532433018de7a38b67
LLR: 2.95 (-2.94,2.94) {-1.25,0.25}
Total: 30560 W: 2701 L: 2636 D: 25223
Ptnml(0-2): 88, 2056, 10921, 2133, 82
passed STC also in an earlier version:
https://tests.stockfishchess.org/tests/view/604f61282433018de7a38b4d
closes https://github.com/official-stockfish/Stockfish/pull/3398
No functional change
This patch gives a small bonus to encourage the attacking side to keep more
pawns on the board.
A consequence of this bonus is that Stockfish will tend to play positions
slightly more closed on average than master, especially when it believes
that it has an advantage.
To lower the risk of blockades where Stockfish starts shuffling without
progress, we also implement a progressive decrease of the evaluation
value with the 50-move counter (along with the necessary aging of the
transposition table to reduce the impact of the Graph History Interaction
problem): since the evaluation decreases during shuffling phases, the
engine will tend to examine the consequences of pawn breaks sooner during
the search.
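A hedged sketch of the shuffling damping (the divisor is a placeholder):

// Illustrative only: the evaluation is pulled toward zero as the 50-move counter
// (counted in plies, 0..100) grows, so shuffling without progress gradually looks
// less attractive than trying a pawn break.
int shuffle_damped_eval(int eval, int rule50Ply)
{
    return eval * (100 - rule50Ply) / 100;
}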
Passed STC:
LLR: 2.96 (-2.94,2.94) {-0.25,1.25}
Total: 26184 W: 2406 L: 2252 D: 21526
Ptnml(0-2): 85, 1784, 9223, 1892, 108
https://tests.stockfishchess.org/tests/view/600cc08b735dd7f0f0352c06
Passed LTC:
LLR: 2.95 (-2.94,2.94) {0.25,1.25}
Total: 199768 W: 7695 L: 7191 D: 184882
Ptnml(0-2): 85, 6478, 86269, 6952, 100
https://tests.stockfishchess.org/tests/view/600ccd28735dd7f0f0352c10
Closes https://github.com/official-stockfish/Stockfish/pull/3321
Bench: 3988915
This patch makes the following changes:
- rename Bad Outpost to Uncontested Outpost
- scale Uncontested Outpost with the number of pawns (credits: Lolligerhans)
- decrease the Bishop PSQT values (credits: Fauzi)
- general tuning (credits: Fauzi)
Passed STC:
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 32040 W: 6593 L: 6281 D: 19166
Ptnml(0-2): 596, 3713, 7095, 4015, 601
https://tests.stockfishchess.org/tests/view/5ffa43026019e097de3ef0f2
Passed LTC:
LLR: 2.95 (-2.94,2.94) {0.25,1.25}
Total: 84376 W: 11395 L: 10950 D: 62031
Ptnml(0-2): 652, 7930, 24623, 8287, 696
https://tests.stockfishchess.org/tests/view/5ffa6e7b6019e097de3ef0fd
closes https://github.com/official-stockfish/Stockfish/pull/3296
Bench: 4287509
Imbalance tables tweaked to contain MiddleGame and Endgame values, instead of a single value.
The idea started with Fisherman, who requested my help to tune the values back in June/July,
so I tuned the values back then, and we were able to accomplish good results,
but not enough to pass both the STC and LTC tests.
So after the recent changes, I decided to give it another shot, and I am glad it was a successful attempt.
Special thanks also go to mstembera, who pointed out a simple way to make the patch perform a little better.
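A hedged sketch of the new table layout (values and phase range are placeholders; only the move from a single value to a middlegame/endgame pair comes from the patch):

// Illustrative only: each imbalance entry now holds an (mg, eg) pair which is
// interpolated by game phase, like the rest of the classical evaluation.
struct ImbalanceEntry { int mg, eg; };

int imbalance_value(ImbalanceEntry e, int phase)   // phase: 0 = endgame .. 128 = middlegame
{
    return (e.mg * phase + e.eg * (128 - phase)) / 128;
}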
Passed STC:
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 115976 W: 23124 L: 22695 D: 70157
Ptnml(0-2): 2074, 13652, 26285, 13725, 2252
https://tests.stockfishchess.org/tests/view/5fc92d2d42a050a89f02ccc8
Passed LTC:
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 156304 W: 20617 L: 20024 D: 115663
Ptnml(0-2): 1138, 14647, 46084, 15050, 1233
https://tests.stockfishchess.org/tests/view/5fc9fee142a050a89f02cd3e
closes https://github.com/official-stockfish/Stockfish/pull/3255
Bench: 4278746
Include a scaling change as suggested by Dietrich Kappe,
who trained the net for Komodo. According to him,
some nets may require different scaling in order to utilize their full strength.
STC:
LLR: 2.93 (-2.94,2.94) {-0.25,1.25}
Total: 99856 W: 9669 L: 9401 D: 80786
Ptnml(0-2): 374, 7468, 34037, 7614, 435
https://tests.stockfishchess.org/tests/view/5fc2697642a050a89f02c8ec
LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 29840 W: 1220 L: 1081 D: 27539
Ptnml(0-2): 10, 969, 12827, 1100, 14
https://tests.stockfishchess.org/tests/view/5fc2ea5142a050a89f02c957
Bench: 3561701
This patch removes the incrementally updated piece lists from the Position object.
This has been tried before but always failed. My reasons for trying again are:
* 32-bit systems (including phones) are now much less important than they were some years ago (and are absent from fishtest);
* NNUE may have made SF less finely tuned to the order in which moves were generated.
STC:
LLR: 2.94 (-2.94,2.94) {-1.25,0.25}
Total: 55272 W: 5260 L: 5216 D: 44796
Ptnml(0-2): 208, 4147, 18898, 4159, 224
https://tests.stockfishchess.org/tests/view/5fc2986a42a050a89f02c926
LTC:
LLR: 2.96 (-2.94,2.94) {-0.75,0.25}
Total: 16600 W: 673 L: 608 D: 15319
Ptnml(0-2): 14, 533, 7138, 604, 11
https://tests.stockfishchess.org/tests/view/5fc2f98342a050a89f02c95c
closes https://github.com/official-stockfish/Stockfish/pull/3247
Bench: 3940967
+-----------------+
| . . . . . . . . | All files are closed. Some files are
| . . . . . o o . | more valuable for rooks, because
| . . . . o . . o | they might open in the future.
| . . . o x . . x |
| o . o x . x x . |
| x o x . . . . . | x our pawns
| . x . . . . . . | o their pawns
| . . . . . . . . | ^ rooks are scored higher on these files
+-----------------+
            ^ ^
Files containing none of our own pawns are open or half-open (otherwise
they are closed). Rooks on (half-)open files receive a bonus for their
future potential to act along all ranks.
This commit refines the (relative) penalty of rooks on closed files.
Files that contain one of our blocked pawns are considered less likely
to open in the future; rooks on these files are now penalized more strongly.
This bonus does not generally correlate with mobility. If the condition
is sufficiently refined in the future, it may be beneficial to adjust or
override mobility scores in some cases.
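A hedged sketch of the refinement (values are placeholders, not the tuned constants):

// Illustrative only: rooks on closed files get a penalty, and a larger one when
// the file also contains one of our blocked pawns, since such a file is the
// least likely to open later.
int rook_file_term(bool halfOpenFile, bool ourBlockedPawnOnFile)
{
    if (halfOpenFile)
        return 25;                              // bonus: may act along all ranks
    return ourBlockedPawnOnFile ? -15 : -5;     // closed: worse with our blocked pawn
}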
LTC
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 494384 W: 71565 L: 70231 D: 352588
Ptnml(0-2): 3907, 48050, 142118, 49036, 4081
https://tests.stockfishchess.org/tests/view/5fb9312e67cbf42301d6afb9
LTC (non-regression w/ book noob_3moves.epd)
LLR: 2.95 (-2.94,2.94) {-0.75,0.25}
Total: 208520 W: 27044 L: 26937 D: 154539
Ptnml(0-2): 1557, 19850, 61391, 19853, 1609
https://tests.stockfishchess.org/tests/view/5fc01ced67cbf42301d6b3df
STC
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 98392 W: 20269 L: 19868 D: 58255
Ptnml(0-2): 1804, 11297, 22589, 11706, 1800
https://tests.stockfishchess.org/tests/view/5fb7f88a67cbf42301d6af10
closes https://github.com/official-stockfish/Stockfish/pull/3242
Bench: 3682630