Commit graph

144 commits

Muzhen Gaming
c90dd38903 Simplify away complexity in evaluation
Simplification STC: https://tests.stockfishchess.org/tests/view/64394bc0605991a801b4f6f0
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 72360 W: 19313 L: 19138 D: 33909
Ptnml(0-2): 206, 7883, 19800, 8112, 179

Simplification LTC: https://tests.stockfishchess.org/tests/view/6439e788c233ce943b6bdac1
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 224992 W: 60665 L: 60654 D: 103673
Ptnml(0-2): 96, 21875, 68526, 21920, 79

closes https://github.com/official-stockfish/Stockfish/pull/4530

Bench: 3709369
2023-04-22 10:43:29 +02:00
Linmiao Xu
37160c4b16 Update default net to nn-dabb1ed23026.nnue
Created by retraining the master net with these modifications:

* New filtering methods for existing data from T80 sep+oct2022, T79 apr2022, T78 jun+jul+aug+sep2022, T77 dec2021
* Adding new filtered data from T80 aug2022 and T78 apr+may2022
* Increasing early-fen-skipping from 28 to 30

```
python3 easy_train.py \
  --experiment-name leela96-dfrc99-T80novT79mayT60novdec-v2-T80augsepoctT79aprT78aprtosep-v6-T77dec-v3-sk30 \
  --training-dataset /data/leela96-dfrc99-T80novT79mayT60novdec-v2-T80augsepoctT79aprT78aprtosep-v6-T77dec-v3.binpack \
  --nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes \
  --start-from-engine-test-net True \
  --early-fen-skipping 30 \
  --max_epoch 900 \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --lr 4.375e-4 \
  --gamma 0.995 \
  --tui False \
  --gpus "0," \
  --seed $RANDOM
```

The v3 filtering used for data from T77dec 2021 differs from v2 filtering in that:

* To improve binpack compression, positions after ply 28 were skipped during training by setting position scores to VALUE_NONE (32002) instead of removing them entirely
* All early-game positions with ply <= 28 were removed to maximize binpack compression
* Only bestmove captures at d6pv2 search were skipped, not 2nd bestmove captures
* Binpack compression was repaired for the remaining positions by effectively replacing bestmoves with "played moves" to maintain contiguous sequences of positions in the training game data

After improving binpack compression, the T77 dec2021 data size was reduced from 95G to 19G.
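
Below is a minimal sketch of this v3 marking step, assuming a simple in-memory representation of training entries (the Entry type and apply_v3_filter helper are illustrative, not the actual tooling in the linked branch):

```
from dataclasses import dataclass

VALUE_NONE = 32002  # sentinel score, as noted above

@dataclass
class Entry:
    # Illustrative stand-in for one training position in a game sequence.
    ply: int
    score: int
    filtered_out: bool  # flagged by the d6pv2 capture filter

def apply_v3_filter(entries, max_early_ply=28):
    # Early-game positions are removed outright; later positions that were
    # filtered out keep their slot but get VALUE_NONE, so game sequences
    # stay contiguous and binpack compression is preserved.
    kept = []
    for e in entries:
        if e.ply <= max_early_ply:
            continue                  # removed entirely
        if e.filtered_out:
            e.score = VALUE_NONE      # skipped at training time, not removed
        kept.append(e)
    return kept
```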

The v6 filtering used for data from T80augsepoctT79aprT78aprtosep 2022 differs from v2 in that:

* All positions with only one legal move were removed
* Tighter score differences at d6pv2 search were used to remove more positions with only one good move than before
* d6pv2 search was not used to remove positions where the best 2 moves were captures

```
python3 interleave_binpacks.py \
  nn-547-dataset/leela96-eval-filt-v2.binpack \
  nn-547-dataset/dfrc99-eval-filt-v2.binpack \
  nn-547-dataset/test80-nov2022-12tb7p-eval-filt-v2-d6.binpack \
  nn-547-dataset/T79-may2022-12tb7p-eval-filt-v2.binpack \
  nn-547-dataset/T60-nov2021-12tb7p-eval-filt-v2.binpack \
  nn-547-dataset/T60-dec2021-12tb7p-eval-filt-v2.binpack \
  filt-v6/test80-aug2022-16tb7p-filter-v6.binpack \
  filt-v6/test80-sep2022-16tb7p-filter-v6.binpack \
  filt-v6/test80-oct2022-16tb7p-filter-v6.binpack \
  filt-v6/test79-apr2022-16tb7p-filter-v6.binpack \
  filt-v6/test78-aprmay2022-16tb7p-filter-v6.binpack \
  filt-v6/test78-junjulaug2022-16tb7p-filter-v6.binpack \
  filt-v6/test78-sep2022-16tb7p-filter-v6.binpack \
  filt-v3/test77-dec2021-16tb7p-filt-v3.binpack \
  /data/leela96-dfrc99-T80novT79mayT60novdec-v2-T80augsepoctT79aprT78aprtosep-v6-T77dec-v3.binpack
```

The code for the new data filtering methods is available at:
https://github.com/linrock/Stockfish/tree/nnue-data-v3/nnue-data

The code for giving hexword names to .nnue files is at:
https://github.com/linrock/nnue-namer

Links for downloading the training data components can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch779.nnue : 0.6 +/- 3.1

Passed STC:
https://tests.stockfishchess.org/tests/view/64212412db43ab2ba6f8efb0
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 82256 W: 22185 L: 21809 D: 38262
Ptnml(0-2): 286, 9065, 22067, 9407, 303

Passed LTC:
https://tests.stockfishchess.org/tests/view/64223726db43ab2ba6f91d6c
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 30840 W: 8437 L: 8149 D: 14254
Ptnml(0-2): 14, 2891, 9323, 3177, 15

closes https://github.com/official-stockfish/Stockfish/pull/4465

bench 5101970
2023-03-29 21:37:52 +02:00
disservin
af4b62a593 NNUE namespace cleanup
This patch moves the nnue namespace into the appropriate header that corresponds with the definition.
It also makes navigation a bit easier.

closes https://github.com/official-stockfish/Stockfish/pull/4445

No functional change
2023-03-19 11:27:15 +01:00
Linmiao Xu
876906965b Update default net to nn-52471d67216a.nnue
Created by retraining the master net with modifications to the previous best dataset:

* Improving T80 oct+nov 2022 endgame lambda accuracy by rescoring with 12-16tb of syzygy 7p tablebases
* Filtering T78 jun+jul+aug 2022 with d6pv2 search to remove positions with bestmove captures or one good move
* Adding T80 sep 2022 data, rescored with 16tb of 7p tablebases, unfiltered

Trained with max-epoch 900, end-lambda 0.7, and early-fen-skipping 28.

```
python3 easy_train.py \
  --experiment-name leela96-dfrc99-T80octnovT79aprmayT78junjulaugT60novdec-filt-v2-T78sep12tb7p-T77decT80sep16tb7p-lambda7-sk28 \
  --training-dataset /data/leela96-dfrc99-T80octnovT79aprmayT78junjulaugT60novdec-filt-v2-T78sep12tb7p-T77decT80sep16tb7p.binpack \
  --nnue-pytorch-branch linrock/nnue-pytorch/easy-train-early-fen-skipping \
  --early-fen-skipping 28 \
  --start-from-engine-test-net True \
  --gpus "0," \
  --max_epoch 900 \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --gamma 0.995 \
  --lr 4.375e-4 \
  --tui False \
  --seed $RANDOM
```

Training data was rescored and d6pv2 filtered in the same way as recent best datasets.
For preparing the merged training dataset:

```
python3 interleave_binpacks.py \
  leela96-eval-filt-v2.binpack \
  dfrc99-eval-filt-v2.binpack \
  test80-oct2022-16tb7p-eval-filt-v2-d6.binpack \
  test80-nov2022-12tb7p-eval-filt-v2-d6.binpack \
  T79-apr2022-12tb7p-eval-filt-v2.binpack \
  T79-may2022-12tb7p-eval-filt-v2.binpack \
  test78-junjulaug2022-16tb7p-eval-filt-v2-d6.binpack \
  T60-nov2021-12tb7p-eval-filt-v2.binpack \
  T60-dec2021-12tb7p-eval-filt-v2.binpack \
  T78-sep2022-12tb7p.binpack \
  test77-dec2021-16gb7p.binpack \
  test80-sep2022-16tb7p.binpack \
  /data/leela96-dfrc99-T80octnovT79aprmayT78junjulaugT60novdec-filt-v2-T78sep12tb7p-T77decT80sep16tb7p.binpack
```

Links for downloading the training data components can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch839.nnue : 0.6 +/- 1.4

Passed STC:
https://tests.stockfishchess.org/tests/view/63f9ab4be74a12625bcdf02e
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 84656 W: 22681 L: 22302 D: 39673
Ptnml(0-2): 271, 9343, 22734, 9696, 284

Passed LTC:
https://tests.stockfishchess.org/tests/view/63fa3833e74a12625bce0c0e
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 184664 W: 49933 L: 49344 D: 85387
Ptnml(0-2): 111, 17977, 55561, 18578, 105

closes https://github.com/official-stockfish/Stockfish/pull/4416

bench: 4814343
2023-02-27 22:07:52 +01:00
Joost VandeVondele
08385527dd Introduce a function to compute NNUE accumulator
This patch introduces `hint_common_parent_position()` to signal that potentially several child nodes will require an NNUE eval. By populating explicitly the accumulator, these subsequent evaluations can be performed more efficiently.

This was based on the observation that calculating the evaluation in an excluded move position yielded a significant Elo gain, even though the evaluation itself was already available (work by pb00067).
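
As a rough Python-flavored sketch of the idea (the method names here are illustrative; the real code is the C++ accumulator logic in the NNUE sources):

```
def evaluate_children(pos, moves, net):
    # Populate the shared parent accumulator once, up front...
    parent_acc = net.refresh_accumulator(pos)  # full rebuild, done once
    scores = {}
    for move in moves:
        # ...so each child evaluation only needs an incremental add/remove
        # of the features touched by the move, not another full rebuild.
        child_acc = net.update_accumulator(parent_acc, pos, move)
        scores[move] = net.evaluate(child_acc)
    return scores
```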

Sopel wrote the code to perform just the accumulator update. This PR is based on cleaned up code that

passed STC:
https://tests.stockfishchess.org/tests/view/63f62f9be74a12625bcd4aa0
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 110368 W: 29607 L: 29167 D: 51594
Ptnml(0-2): 41, 10551, 33572, 10967, 53

and in an earlier (equivalent) version

passed STC:
https://tests.stockfishchess.org/tests/view/63f3c3fee74a12625bcce2a6
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 47552 W: 12786 L: 12467 D: 22299
Ptnml(0-2): 120, 5107, 12997, 5438, 114

passed LTC:
https://tests.stockfishchess.org/tests/view/63f45cc2e74a12625bccfa63
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 110368 W: 29607 L: 29167 D: 51594
Ptnml(0-2): 41, 10551, 33572, 10967, 53

closes https://github.com/official-stockfish/Stockfish/pull/4402

Bench: 3726250
2023-02-23 13:25:35 +01:00
Linmiao Xu
05dea2ca46 Update default net to nn-1337b1adec5b.nnue
Created by retraining the master net on a dataset composed of:

* Most of the previous best dataset filtered to remove positions likely having only one good move
* Adding training data from Leela T77 dec2021 rescored with 16tb of 7-piece tablebases

Trained with end lambda 0.7 and max epoch 900. Positions with ply <= 28 were removed from most of the previous best dataset before training began. A new nnue-pytorch trainer param for skipping early plies was used to skip plies <= 24 in the unfiltered and additional Leela T77 parts of the dataset.
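
A one-line sketch of what the early-fen-skipping param amounts to, assuming sample objects that expose a ply counter (illustrative, not the actual nnue-pytorch data loader):

```
def skip_early_fens(samples, early_fen_skipping=24):
    # Positions at or below the ply threshold never reach a training batch.
    return (s for s in samples if s.ply > early_fen_skipping)
```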

```
python easy_train.py \
  --experiment-name leela96-dfrc99-T80octnovT79aprmayT60novdec-eval-filt-v2-T78augsep-12tb-T77dec-16tb-lambda7-sk24 \
  --training-dataset /data/leela96-dfrc99-T80octnovT79aprmayT60novdec-eval-filt-v2-T78augsep-12tb-T77dec-16tb.binpack \
  --nnue-pytorch-branch linrock/nnue-pytorch/easy-train-early-fen-skipping \
  --early-fen-skipping 24 \
  --gpus "0," \
  --start-from-engine-test-net True \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --gamma 0.995 \
  --lr 4.375e-4 \
  --tui False \
  --seed $RANDOM \
  --max_epoch 900
```

The depth6 multipv2 search filtering method is the same as the one used for filtering recent best datasets, with a lower eval difference threshold to remove slightly more positions than before. These parts of the dataset were filtered:

* 96% of T60T70wIsRightFarseerT60T74T75T76.binpack
* 99% of dfrc_n5000.binpack
* T80 oct + nov 2022 data, no positions with castling flags, rescored with ~600gb 7p tablebases
* T79 apr + may 2022 data, rescored with 12tb 7p tablebases
* T60 nov + dec 2021 data, rescored with 12tb 7p tablebases

These parts of the dataset were not filtered. Positions with ply <= 24 were skipped during training:

* T78 aug + sep 2022 data, rescored with 12tb 7p tablebases
* 84% of T77 dec 2021 data, rescored with 16tb 7p tablebases

The code and exact evaluation thresholds used for data filtering can be found at:
https://github.com/linrock/Stockfish/tree/tools-filter-multipv2-eval-diff-t2/src/filter

The exact training data used can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch859.nnue : 3.5 +/- 1.2

Passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
https://tests.stockfishchess.org/tests/view/63dfeefc73223e7f52ad769f
Total: 219744 W: 58572 L: 58002 D: 103170
Ptnml(0-2): 609, 24446, 59284, 24832, 701

Passed LTC:
https://tests.stockfishchess.org/tests/view/63e268fc73223e7f52ade7b6
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 91256 W: 24528 L: 24121 D: 42607
Ptnml(0-2): 48, 8863, 27390, 9288, 39

closes https://github.com/official-stockfish/Stockfish/pull/4387

bench 3841998
2023-02-09 07:50:27 +01:00
Linmiao Xu
596a528c6a Update default net to nn-bc24c101ada0.nnue
Created by retraining the master net with Leela T78 data from Aug+Sep 2022 added to the previous best dataset. Trained with end lambda 0.7 and started with max epoch 800. All positions with ply <= 28 were skipped:

```
python easy_train.py \
  --experiment-name leela95-dfrc96-filt-only-T80octnov-T60novdecT78augsepT79aprmay-12tb7p-sk28-lambda7 \
  --training-dataset /data/leela95-dfrc96-filt-only-T80octnov-T60novdecT78augsepT79aprmay-12tb7p.binpack \
  --nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-skip-ply-lteq-28 \
  --start-from-engine-test-net True \
  --gpus "0," \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --gamma 0.995 \
  --lr 4.375e-4 \
  --tui False \
  --seed $RANDOM \
  --max_epoch 800
```

Around epoch 750, training was manually paused and max epoch increased to 950 before resuming. The additional Leela training data from T78 was prepared in the same way as the previous best dataset.

The exact training data used can be found at:
https://robotmoon.com/nnue-training-data/

While the local elo ratings during this experiment were much lower than in recent master nets, several later epochs had a consistent elo above zero, and this was hypothesized to represent potential strength at slower time controls.

Local elo at 25k nodes per move
leela95-dfrc96-filt-only-T80octnov-T60novdecT78augsepT79aprmay-12tb7p-sk28-lambda7
nn-epoch819.nnue : 0.4 +/- 1.1 (nn-bc24c101ada0.nnue)
nn-epoch799.nnue : 0.3 +/- 1.2
nn-epoch759.nnue : 0.3 +/- 1.1
nn-epoch839.nnue : 0.2 +/- 1.4

Passed STC
https://tests.stockfishchess.org/tests/view/63cabf6f0eefe8694a0c6013
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 41608 W: 11161 L: 10848 D: 19599
Ptnml(0-2): 116, 4496, 11281, 4781, 130

Passed LTC
https://tests.stockfishchess.org/tests/view/63cb1856344bb01c191af263
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 76760 W: 20517 L: 20137 D: 36106
Ptnml(0-2): 34, 7435, 23070, 7799, 42

closes https://github.com/official-stockfish/Stockfish/pull/4351

bench 3941848
2023-01-23 07:01:32 +01:00
Linmiao Xu
3d2381d76d Update default net to nn-1e7ca356472e.nnue
Created by retraining the master net on a dataset composed of:

* The Leela-dfrc_n5000.binpack dataset filtered with depth6 multipv2 search to remove positions with only one good move, in addition to removing positions where either of the two best moves are captures
* The same Leela T80 oct+nov 2022 training data used in recent best datasets
* Additional Leela training data from T60 nov+dec 2021 and T79 apr+may 2022

Trained with end lambda 0.7 and started with max epoch 800. All positions with ply <= 28 were skipped:

```
python easy_train.py \
  --experiment-name leela95-dfrc96-mpv-eval-fonly-T80octnov-T79aprmayT60novdec-12tb7p-sk28-lambda7 \
  --training-dataset /data/leela95-dfrc96-mpv-eval-fonly-T80octnov-T79aprmayT60novdec-12tb7p.binpack \
  --nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-skip-ply-lteq-28 \
  --start-from-engine-test-net True \
  --gpus "0," \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --gamma 0.995 \
  --lr 4.375e-4 \
  --tui False \
  --seed $RANDOM \
  --max_epoch 800
```

Around epoch 780, training was manually paused and max epoch increased to 920 before resuming.

During depth6 multipv2 data filtering, positions were considered to have only one good move if the score of the best move was significantly better than the 2nd best move in a way that changes the outcome of the game:

* the best move leads to a significant advantage while the 2nd best move equalizes or loses
* the best move is about equal while the 2nd best move loses

The modified stockfish branch and exact score thresholds used for filtering are at:
https://github.com/linrock/Stockfish/tree/tools-filter-multipv2-eval-diff/src/filter
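
A hedged sketch of the criteria above (the win_cp and balanced_cp thresholds are placeholders; the exact values are only in the linked branch):

```
def has_one_good_move(best_score, second_score, win_cp=200, balanced_cp=50):
    # Scores come from a depth6 multipv2 search, from the side to move.
    best_wins = best_score >= win_cp
    second_equalizes_or_loses = second_score <= balanced_cp
    second_loses = second_score <= -win_cp
    best_about_equal = abs(best_score) <= balanced_cp
    return (best_wins and second_equalizes_or_loses) or \
           (best_about_equal and second_loses)
```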

About 95% of the Leela portion and 96% of the DFRC portion of the Leela-dfrc_n5000.binpack dataset was filtered. Unfiltered parts of the dataset were left out.

The additional Leela training data from T60 nov+dec 2021 and T79 apr+may 2022 was WDL-rescored with about 12TB of syzygy 7-piece tablebases where the material difference is less than around 6 pawns. Best moves were exported to .plain data files during data conversion with the lc0 rescorer.

The exact training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move
experiment_leela95-dfrc96-mpv-eval-fonly-T80octnov-T79aprmayT60novdec-12tb7p-sk28-lambda7
run_0/nn-epoch899.nnue : 3.8 +/- 1.6

Passed STC
https://tests.stockfishchess.org/tests/view/63bed1f540aa064159b9c89b
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 103344 W: 27392 L: 26991 D: 48961
Ptnml(0-2): 333, 11223, 28099, 11744, 273

Passed LTC
https://tests.stockfishchess.org/tests/view/63c010415705810de2deb3ec
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 21712 W: 5891 L: 5619 D: 10202
Ptnml(0-2): 12, 2022, 6511, 2304, 7

closes https://github.com/official-stockfish/Stockfish/pull/4338

bench 4106793
2023-01-14 08:12:11 +01:00
Linmiao Xu
a6fa683418 Update default net to nn-a3dc078bafc7.nnue
This is a later epoch (epoch 859) from the same experiment run that trained yesterday's master net nn-60fa44e376d9.nnue (epoch 779). The experiment was manually paused around epoch 790 and unpaused with max epoch increased to 900 mainly to get more local elo data without letting the GPU idle.

nn-60fa44e376d9.nnue is from #4314
nn-335a9b2d8a80.nnue is from #4295

Local elo vs. nn-335a9b2d8a80.nnue at 25k nodes per move:
experiment_leela93-dfrc99-filt-only-T80-oct-nov-skip28
run_0/nn-epoch779.nnue (nn-60fa44e376d9.nnue) : 5.0 +/- 1.2
run_0/nn-epoch859.nnue (nn-a3dc078bafc7.nnue) : 5.6 +/- 1.6

Passed STC vs. nn-335a9b2d8a80.nnue
https://tests.stockfishchess.org/tests/view/63ae10495bd1e5f27f13d94f
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 37536 W: 10088 L: 9781 D: 17667
Ptnml(0-2): 110, 4006, 10223, 4325, 104

An LTC test vs. nn-335a9b2d8a80.nnue was paused due to nn-60fa44e376d9.nnue passing LTC first:
https://tests.stockfishchess.org/tests/view/63ae5d34331d5fca5113703b

Passed LTC vs. nn-60fa44e376d9.nnue
https://tests.stockfishchess.org/tests/view/63af1e41465d2b022dbce4e7
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 148704 W: 39672 L: 39155 D: 69877
Ptnml(0-2): 59, 14443, 44843, 14936, 71

closes https://github.com/official-stockfish/Stockfish/pull/4319

bench 3984365
2023-01-02 19:10:14 +01:00
Sebastian Buchwald
b60f9cc451 Update copyright years
Happy New Year!

closes https://github.com/official-stockfish/Stockfish/pull/4315

No functional change
2023-01-02 19:07:38 +01:00
Linmiao Xu
be9bc420af Update default net to nn-60fa44e376d9.nnue
Created by retraining the master net on the previous best dataset with additional filtering. No new data was added.

More of the Leela-dfrc_n5000.binpack part of the dataset was pre-filtered with depth6 multipv2 search to remove bestmove captures. About 93% of the previous Leela/SF data and 99% of the SF dfrc data was filtered. Unfiltered parts of the dataset were left out. The new Leela T80 oct+nov data is the same as before. All early game positions with ply count <= 28 were skipped during training by modifying the training data loader in nnue-pytorch.

Trained in a similar way as recent master nets, with a different nnue-pytorch branch for early ply skipping:

```
python3 easy_train.py \
  --experiment-name=leela93-dfrc99-filt-only-T80-oct-nov-skip28 \
  --training-dataset=/data/leela93-dfrc99-filt-only-T80-oct-nov.binpack \
  --start-from-engine-test-net True \
  --nnue-pytorch-branch=linrock/nnue-pytorch/misc-fixes-skip-ply-lteq-28 \
  --gpus="0," \
  --start-lambda=1.0 \
  --end-lambda=0.75 \
  --gamma=0.995 \
  --lr=4.375e-4 \
  --tui=False \
  --seed=$RANDOM \
  --max_epoch=800 \
  --network-testing-threads 20 \
  --num-workers 6
```

For the exact training data used: https://robotmoon.com/nnue-training-data/
Details about the previous best dataset: #4295

Local testing at a fixed 25k nodes:
experiment_leela93-dfrc99-filt-only-T80-oct-nov-skip28
Local Elo: run_0/nn-epoch779.nnue : 5.1 +/- 1.5

Passed STC
https://tests.stockfishchess.org/tests/view/63adb3acae97a464904fd4e8
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 36504 W: 9847 L: 9538 D: 17119
Ptnml(0-2): 108, 3981, 9784, 4252, 127

Passed LTC
https://tests.stockfishchess.org/tests/view/63ae0ae25bd1e5f27f13d884
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 36592 W: 10017 L: 9717 D: 16858
Ptnml(0-2): 17, 3461, 11037, 3767, 14

closes https://github.com/official-stockfish/Stockfish/pull/4314

bench 4015511
2023-01-01 12:28:51 +01:00
Linmiao Xu
c620886181 Update default net to nn-335a9b2d8a80.nnue
Created by retraining the master net with a combination of:

    the previous best dataset (Leela-dfrc_n5000.binpack), with about half the dataset filtered using depth6 multipv2 search to throw away positions where either of the 2 best moves are captures
    Leela T80 Oct and Nov training data rescored with best moves, adding ~9.5 billion positions

Trained effectively the same way as the previous master net:

```
python3 easy_train.py \
  --experiment-name=leela-dfrc-filtered-T80-oct-nov \
  --training-dataset=/data/leela-dfrc-filtered-T80-oct-nov.binpack \
  --start-from-engine-test-net True \
  --gpus="0," \
  --start-lambda=1.0 \
  --end-lambda=0.75 \
  --gamma=0.995 \
  --lr=4.375e-4 \
  --tui=False \
  --seed=$RANDOM \
  --max_epoch=800 \
  --auto-exit-timeout-on-training-finished=900 \
  --network-testing-threads 20 \
  --num-workers 6
```

Local testing at a fixed 25k nodes:
experiments/experiment_leela-dfrc-filtered-T80-oct-nov/training/run_0/nn-epoch779.nnue
localElo: run_0/nn-epoch779.nnue : 4.7 +/- 3.1

The new Leela T80 part of the dataset was prepared by downloading test80 training data from all of Oct 2022 and Nov 2022, rescoring with syzygy 6-piece tablebases and ~600 GB of 7-piece tablebases, saving best moves to exported .plain files, removing all positions with castling flags, then converting to binpacks and using interleave_binpacks.py to merge them together. Scripts used in this data conversion process are available at:
https://github.com/linrock/lc0-data-converter
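
As one example of these steps, a small sketch of the castling-flag removal using the python-chess library (illustrative; the actual conversion is done by the linked scripts):

```
import chess

def drop_castling_positions(fens):
    # Keep only positions in which neither side retains castling rights.
    for fen in fens:
        board = chess.Board(fen)
        if not board.has_castling_rights(chess.WHITE) and \
           not board.has_castling_rights(chess.BLACK):
            yield fen
```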

Filtering binpack data using depth6 multipv2 search was done by modifying transform.cpp in the tools branch:
https://github.com/linrock/Stockfish/tree/tools-filter-multipv2-no-rescore

Links for downloading the training data (total size: 338 GB) are available at:
https://robotmoon.com/nnue-training-data/

Passed STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 30544 W: 8244 L: 7947 D: 14353
Ptnml(0-2): 93, 3243, 8302, 3542, 92
https://tests.stockfishchess.org/tests/view/63a0d377264a0cf18f86f82b

Passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 32464 W: 8866 L: 8573 D: 15025
Ptnml(0-2): 19, 3054, 9794, 3345, 20
https://tests.stockfishchess.org/tests/view/63a10bc9fb452d3c44b1e016

closes https://github.com/official-stockfish/Stockfish/pull/4295

Bench 3554904
2022-12-21 07:14:58 +01:00
Joost VandeVondele
4b4b7d1209 Update default net to nn-ad9b42354671.nnue
using trainer branch https://github.com/glinscott/nnue-pytorch/pull/208 with a slightly
tweaked loss function (power 2.5 instead of 2.6), otherwise same training as in
the previous net update https://github.com/official-stockfish/Stockfish/pull/4100

passed STC:
LLR: 2.97 (-2.94,2.94) <0.00,2.50>
Total: 367536 W: 99465 L: 98573 D: 169498
Ptnml(0-2): 1820, 40994, 97117, 42148, 1689
https://tests.stockfishchess.org/tests/view/62cc43fe50dcbecf5fc1c5b8

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 25032 W: 6802 L: 6553 D: 11677
Ptnml(0-2): 40, 2424, 7341, 2669, 42
https://tests.stockfishchess.org/tests/view/62ce5f421dacb46e4d5fd277

closes https://github.com/official-stockfish/Stockfish/pull/4107

Bench: 5905619
2022-07-13 18:01:20 +02:00
Joost VandeVondele
85f8ee6199 Update default net to nn-3c0054ea9860.nnue
First things first...

this PR is being made from court. Today, Tord and Stéphane, with broad support
of the developer community, are defending their complaint, filed in Munich, against ChessBase.
With their products Houdini 6 and Fat Fritz 2, both Stockfish derivatives,
ChessBase repeatedly violated the Stockfish GPLv3 license. Tord and Stéphane have terminated
their license with ChessBase permanently. Today we have the opportunity to present
our evidence to the judge and enforce that termination. To read up, have a look at our blog post
https://stockfishchess.org/blog/2022/public-court-hearing-soon/ and
https://stockfishchess.org/blog/2021/our-lawsuit-against-chessbase/

This PR introduces a net trained with an enhanced data set and a modified loss function in the trainer.
A slight adjustment for the scaling was needed to get a pass on standard chess.

passed STC:
https://tests.stockfishchess.org/tests/view/62c0527a49b62510394bd610
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 135008 W: 36614 L: 36152 D: 62242
Ptnml(0-2): 640, 15184, 35407, 15620, 653

passed LTC:
https://tests.stockfishchess.org/tests/view/62c17e459e7d9997a12d458e
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 28864 W: 8007 L: 7749 D: 13108
Ptnml(0-2): 47, 2810, 8466, 3056, 53

Local testing at a fixed 25k nodes resulted in
Test run1026/easy_train_data/experiments/experiment_2/training/run_0/nn-epoch799.nnue
localElo: 4.2 +- 1.6

The real strength of the net is in FRC and DFRC chess where it gains significantly.

Tested at STC with slightly different scaling:
FRC:
https://tests.stockfishchess.org/tests/view/62c13a4002ba5d0a774d20d4
Elo: 29.78 +-3.4 (95%) LOS: 100.0%
Total: 10000 W: 2007 L: 1152 D: 6841
Ptnml(0-2): 31, 686, 2804, 1355, 124
nElo: 59.24 +-6.9 (95%) PairsRatio: 2.06

DFRC:
https://tests.stockfishchess.org/tests/view/62c13a5702ba5d0a774d20d9
Elo: 55.25 +-3.9 (95%) LOS: 100.0%
Total: 10000 W: 2984 L: 1407 D: 5609
Ptnml(0-2): 51, 636, 2266, 1779, 268
nElo: 96.95 +-7.2 (95%) PairsRatio: 2.98

Tested at LTC with identical scaling:
FRC:
https://tests.stockfishchess.org/tests/view/62c26a3c9e7d9997a12d6caf
Elo: 16.20 +-2.5 (95%) LOS: 100.0%
Total: 10000 W: 1192 L: 726 D: 8082
Ptnml(0-2): 10, 403, 3727, 831, 29
nElo: 44.12 +-6.7 (95%) PairsRatio: 2.08

DFRC:
https://tests.stockfishchess.org/tests/view/62c26a539e7d9997a12d6cb2
Elo: 40.94 +-3.0 (95%) LOS: 100.0%
Total: 10000 W: 2215 L: 1042 D: 6743
Ptnml(0-2): 10, 410, 3053, 1451, 76
nElo: 92.77 +-6.9 (95%) PairsRatio: 3.64

This is due to mixing in a significant fraction of DFRC training data in the final training round. The net is
trained using the easy_train.py script in the following way:

```
python easy_train.py \
     --training-dataset=../Leela-dfrc_n5000.binpack \
     --experiment-name=2 \
     --nnue-pytorch-branch=vondele/nnue-pytorch/lossScan4 \
     --additional-training-arg=--param-index=2 \
     --start-lambda=1.0 \
     --end-lambda=0.75 \
     --gamma=0.995 \
     --lr=4.375e-4 \
     --start-from-engine-test-net True \
     --tui=False \
     --seed=$RANDOM \
     --max_epoch=800 \
     --auto-exit-timeout-on-training-finished=900 \
     --network-testing-threads 8  \
     --num-workers 12
```

where the data set used (Leela-dfrc_n5000.binpack) is a combination of our previous best data set (mix of Leela and some SF data) and DFRC data, interleaved into a single binpack.
The data is available at https://drive.google.com/drive/folders/1S9-ZiQa_3ApmjBtl2e8SyHxj4zG4V8gG?usp=sharing
Leela mix: https://drive.google.com/file/d/1JUkMhHSfgIYCjfDNKZUMYZt6L5I7Ra6G/view?usp=sharing
DFRC: https://drive.google.com/file/d/17vDaff9LAsVo_1OfsgWAIYqJtqR8aHlm/view?usp=sharing

The training branch used is
https://github.com/vondele/nnue-pytorch/commits/lossScan4
A PR to the main trainer repo will be made later. This contains a revised loss function, now computing the loss from the score based on the win rate model, which is a more accurate representation than what we had before. Scaling constants are tweaked there as well.
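
A hedged sketch of what a win-rate-space loss of this kind can look like (the scaling constant, lambda blending, and exponent below are illustrative; the real definition is in the linked branch):

```
import torch

SCALE = 361.0  # illustrative score -> win-rate scaling constant

def win_rate(score):
    # Logistic win-rate model mapping an internal score to [0, 1].
    return torch.sigmoid(score / SCALE)

def wdl_loss(pred_score, teacher_score, game_result, lambda_=1.0, power=2.6):
    # Blend the teacher eval with the game outcome in win-rate space, then
    # penalize the deviation of the predicted win rate with exponent
    # `power` (tweaked from 2.6 to 2.5 in the later nn-ad9b42354671 update above).
    p = win_rate(pred_score)
    t = lambda_ * win_rate(teacher_score) + (1.0 - lambda_) * game_result
    return (p - t).abs().pow(power).mean()
```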

closes https://github.com/official-stockfish/Stockfish/pull/4100

Bench: 5186781
2022-07-04 15:42:34 +02:00
Dubslow
442c40b43d Use NNUE complexity in search, retune related parameters
This builds on ideas of xoto10 and mstembera to use more output from NNUE in the search algorithm.

passed STC:
https://tests.stockfishchess.org/tests/view/62ae454fe7ee5525ef88a957
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 89208 W: 24127 L: 23753 D: 41328
Ptnml(0-2): 400, 9886, 23642, 10292, 384

passed LTC:
https://tests.stockfishchess.org/tests/view/62acc6ddd89eb6cf1e0750a1
LLR: 2.93 (-2.94,2.94) <0.50,3.00>
Total: 56352 W: 15430 L: 15115 D: 25807
Ptnml(0-2): 44, 5501, 16782, 5794, 55

closes https://github.com/official-stockfish/Stockfish/pull/4088

bench 5332964
2022-06-20 08:30:57 +02:00
xoto10
7f1333ccf8 Blend nnue complexity with classical.
Following mstembera's test of the complexity value derived from nnue values,
this change blends that idea with the old complexity calculation.

STC 10+0.1:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 42320 W: 11436 L: 11148 D: 19736
Ptnml(0-2): 209, 4585, 11263, 4915, 188
https://tests.stockfishchess.org/tests/live_elo/6295c9239c8c2fcb2bad7fd9

LTC 60+0.6:
LLR: 2.98 (-2.94,2.94) <0.50,3.00>
Total: 34600 W: 9393 L: 9125 D: 16082
Ptnml(0-2): 32, 3323, 10319, 3597, 29
https://tests.stockfishchess.org/tests/view/6295fd5d9c8c2fcb2bad88cf

closes https://github.com/official-stockfish/Stockfish/pull/4046

Bench 6078140
2022-06-02 07:47:23 +02:00
Tomasz Sobczyk
c079acc26f Update NNUE architecture to SFNNv5. Update network to nn-3c0aa92af1da.nnue.
Architecture changes:

    Duplicated activation after the 1024->15 layer with squared crelu (so 15->15*2). As proposed by vondele.

Trainer changes:

    Added bias to L1 factorization, which was previously missing (no measurable improvement but at least neutral in principle)
    For retraining, linearly reduced the lambda parameter from 1.0 at epoch 0 to 0.75 at epoch 800.
    Reduced max_skipping_rate from 15 to 10 (compared to vondele's outstanding PR)

Note: This network was trained with a ~0.8% error in quantization regarding the newly added activation function.
      This will be fixed in the released trainer version. Expect a trainer PR tomorrow.

Note: The inference implementation cuts a corner to merge results from two activation functions.
      This could possibly be resolved more cleanly in the future. An AVX2 implementation is likely not necessary, but NEON is missing.
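
A minimal sketch of the duplicated squared-crelu activation (shapes per the description above; purely illustrative):

```
import torch

def duplicated_sq_crelu(x):
    # Concatenate the clipped ReLU output with its square, turning the
    # 15 outputs of the 1024->15 layer into 15*2 = 30 features.
    c = torch.clamp(x, 0.0, 1.0)
    return torch.cat([c, c * c], dim=1)
```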

First training session invocation:

```
python3 train.py \
    ../nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    ../nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    --gpus "$3," \
    --threads 4 \
    --num-workers 8 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=400 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```

Second training session invocation:

```
python3 train.py \
    ../nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
    ../nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
    --gpus "$3," \
    --threads 4 \
    --num-workers 8 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --start-lambda=1.0 \
    --end-lambda=0.75 \
    --gamma=0.995 \
    --lr=4.375e-4 \
    --max_epochs=800 \
    --resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp367/nn-exp367-run3-epoch399.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```

Passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 27288 W: 7445 L: 7178 D: 12665
Ptnml(0-2): 159, 3002, 7054, 3271, 158
https://tests.stockfishchess.org/tests/view/627e8c001919125939623644

Passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 21792 W: 5969 L: 5727 D: 10096
Ptnml(0-2): 25, 2152, 6294, 2406, 19
https://tests.stockfishchess.org/tests/view/627f2a855734b18b2e2ece47

closes https://github.com/official-stockfish/Stockfish/pull/4020

Bench: 6481017
2022-05-14 12:47:22 +02:00
Joost VandeVondele
6e0680efa0 Update default net to nn-d0b74ce1e5eb.nnue
Trained a net using training data with a
heavier weight on positions having 16 pieces on the board. More specifically,
with a relative weight of `i * (32-i)/(16 * 16)+1` (where i is the number of pieces on the board).

This is done with the trainer branch https://github.com/glinscott/nnue-pytorch/pull/173
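
For reference, the relative weight is a simple quadratic in the piece count (a direct transcription of the formula above):

```
def relative_weight(i: int) -> float:
    # Weight for a position with i pieces on the board: 2.0 at i == 16,
    # falling toward 1.0 at the extremes (i == 2 or i == 32).
    return i * (32 - i) / (16 * 16) + 1
```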

The command used is:
```
python train.py $datafile $datafile $restarttype $restartfile --gpus 1 --threads 4 --num-workers 12 --random-fen-skipping=3 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --features=HalfKAv2_hm^   --lambda=1.00  --max_epochs=$epochs --seed $RANDOM --default_root_dir exp/run_$i
```
The datafile is T60T70wIsRightFarseerT60T74T75T76.binpack, the restart is from the master net.

passed STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 22728 W: 6197 L: 5945 D: 10586
Ptnml(0-2): 105, 2453, 6001, 2695, 110
https://tests.stockfishchess.org/tests/view/625cf944ff677a888877cd90

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 35664 W: 9535 L: 9264 D: 16865
Ptnml(0-2): 30, 3524, 10455, 3791, 32
https://tests.stockfishchess.org/tests/view/625d3c32ff677a888877d7ca

closes https://github.com/official-stockfish/Stockfish/pull/3989

Bench: 7269563
2022-04-19 19:59:04 +02:00
Tomasz Sobczyk
cb9c2594fc Update architecture to "SFNNv4". Update network to nn-6877cd24400e.nnue.
Architecture:

The diagram of the "SFNNv4" architecture:
https://user-images.githubusercontent.com/8037982/153455685-cbe3a038-e158-4481-844d-9d5fccf5c33a.png

The most important architectural changes are the following:

* 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details, see diagram), which introduces a non-linearity that exhibits similar benefits to previously tested sigmoid activation (quantmoid4), while being slightly faster.
* The following layer therefore has 2x fewer inputs, which we compensate for by having 2x more outputs. It is possible that reducing the number of outputs might be beneficial (as we had it as low as 8 before). The layer is now 1024->16.
* The 16 outputs are split into 15 and 1. The 1-wide output is added to the network output (after some necessary scaling due to quantization differences). The 15-wide is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, but has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and allows possibly an additional stage of lazy evaluation to be introduced in the future.

Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result into two, which was impossible to do with the old implementation without ugly hacks. This is hopefully overall for the better.
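
A hedged PyTorch sketch of the head described above (the sizes of the later layers are assumptions here; the linked diagram has the real architecture):

```
import torch
from torch import nn

class SFNNv4HeadSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1024, 16)  # the 1024->16 layer from the text
        self.l2 = nn.Linear(15, 32)    # assumed hidden size
        self.out = nn.Linear(32, 1)

    @staticmethod
    def mul_pairwise(v):
        # Clip, then multiply the two halves elementwise: 1024 -> 512.
        v = torch.clamp(v, 0.0, 1.0)
        return v[:, :512] * v[:, 512:]

    def forward(self, us, them):
        # us/them: (batch, 1024) feature-transformer outputs per perspective.
        x = torch.cat([self.mul_pairwise(us), self.mul_pairwise(them)], dim=1)
        y = self.l1(x)                           # (batch, 16)
        skip = y[:, :1]                          # 1-wide direct output
        rest = torch.clamp(y[:, 1:], 0.0, 1.0)   # 15-wide activated path
        hidden = torch.clamp(self.l2(rest), 0.0, 1.0)
        return self.out(hidden) + skip           # skip joins the final output
```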

First session:

The first session was training a network from scratch (random initialization). The exact trainer used was slightly different from (older than) the one used in the second session, but it should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session. Small deviations in strength do not harm the learnability in the second session.

The training was done using the following command:

```
python3 train.py \
    /home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    /home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    --gpus "$3," \
    --threads 4 \
    --num-workers 4 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --gamma=0.992 \
    --lr=8.75e-4 \
    --max_epochs=400 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```

Every 20th net was saved and its playing strength measured against some baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it's consistent. The purpose is to sift good candidates from bad ones.

The dataset can be found at https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view

Second session:

The second training session was done starting from the best network (as determined by strength testing) from the first session. It is important that it's resumed from a .pt model and NOT a .ckpt model. The conversion can be performed directly using serialize.py

The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training. The training was then running for 800 epochs instead of 400 (though it's possibly mostly noise after around epoch 600).

The training was done using the following command:

```
python3 train.py \
        /data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
        /data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
        --gpus "$3," \
        --threads 4 \
        --num-workers 4 \
        --batch-size 16384 \
        --progress_bar_refresh_rate 20 \
        --random-fen-skipping 3 \
        --features=HalfKAv2_hm^ \
        --lambda=1.0 \
        --gamma=0.995 \
        --lr=4.375e-4 \
        --max_epochs=800 \
        --resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp295/nn-epoch399.pt \
        --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$run_id
```

In particular note that we now use lambda=1.0 instead of lambda=0.8 (previous nets), because tests show that WDL-skipping introduced by vondele performs better with lambda=1.0. Nets were being saved every 20th epoch. In total 16 runs were made with these settings and the best nets chosen according to playing strength at 25k nodes per move with pure NNUE evaluation - these are the 4 nets that have been put on fishtest.

The dataset can be found either at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack in its entirety (download might be painfully slow because it is hosted in China) or can be assembled in the following way:

1. Get the 5640ad48ae/script/interleave_binpacks.py script.
2. Download T60T70wIsRightFarseer.binpack https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
3. Download farseerT74.binpack http://trainingdata.farseer.org/T74-May13-End.7z
4. Download farseerT75.binpack http://trainingdata.farseer.org/T75-June3rd-End.7z
5. Download farseerT76.binpack http://trainingdata.farseer.org/T76-Nov10th-End.7z
6. Run python3 interleave_binpacks.py T60T70wIsRightFarseer.binpack farseerT74.binpack farseerT75.binpack farseerT76.binpack T60T70wIsRightFarseerT60T74T75T76.binpack

Tests:

STC: https://tests.stockfishchess.org/tests/view/6203fb85d71106ed12a407b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16952 W: 4775 L: 4521 D: 7656
Ptnml(0-2): 133, 1818, 4318, 2076, 131

LTC: https://tests.stockfishchess.org/tests/view/62041e68d71106ed12a40e85
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 14944 W: 4138 L: 3907 D: 6899
Ptnml(0-2): 21, 1499, 4202, 1728, 22

closes https://github.com/official-stockfish/Stockfish/pull/3927

Bench: 4919707
2022-02-10 19:54:31 +01:00
Brad Knox
ad926d34c0 Update copyright years
Happy New Year!

closes https://github.com/official-stockfish/Stockfish/pull/3881

No functional change
2022-01-06 15:45:45 +01:00
Joost VandeVondele
7d82f0d1f4 Update default net to nn-ac07bd334b62.nnue
Trained with essentially the same data as provided and used by Farseer (mbabigian)
for the previous master net.

T60T70wIsRightFarseerT60T74T75T76.binpack (99GB):
['T60T70wIsRightFarseer.binpack', 'farseerT74.binpack', 'farseerT75.binpack', 'farseerT76.binpack']
using the trainer branch tweakLR1PR (https://github.com/glinscott/nnue-pytorch/pull/158) and
`--gpus 1 --threads 4 --num-workers 4 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 12 --features=HalfKAv2_hm^   --lambda=1.00` options

passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 108280 W: 28042 L: 27636 D: 52602
Ptnml(0-2): 328, 12382, 28401, 12614, 415
https://tests.stockfishchess.org/tests/view/61bcd8c257a0d0f327c34fbd

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 259296 W: 66974 L: 66175 D: 126147
Ptnml(0-2): 146, 27096, 74452, 27721, 233
https://tests.stockfishchess.org/tests/view/61bda70957a0d0f327c37817

closes https://github.com/official-stockfish/Stockfish/pull/3870

Bench: 4633875
2021-12-22 11:02:34 +01:00
farseer
3bea736a2a Update default net to nn-4401e826ebcc.nnue
Using data T60 12/1/20 to 11/2/2021, T74 4/22/21 to 7/27/21, T75 6/3/21 to 10/16/21, T76
(half of the randomly interleaved dataset due to a mistake merging) 11/10/21 to 11/21/21,
wrongIsRight_nodes5000pv2.binpack, and WrongIsRight-Reloaded.binpack combined and shuffled
position by position.

Trained with LR=4.375e-4 and WDL filtering enabled:

```
python train.py --smart-fen-skipping --random-fen-skipping 0 --features=HalfKAv2_hm^
--lambda=1.0 --max_epochs=800 --seed 910688689 --batch-size 16384
--progress_bar_refresh_rate 30 --threads 4 --num-workers 4 --gpus 1
--resume-from-model C:\msys64\home\Mike\nnue-pytorch\9b3d.pt
E:\trainingdata\T60-T74-T75-T76-WiR-WiRR-PbyP.binpack
E:\trainingdata\T60-T74-T75-T76-WiR-WiRR-PbyP.binpack
```

Passed STC
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 41848 W: 10962 L: 10676 D: 20210 Elo +2.16
Ptnml(0-2): 142, 4699, 11016, 4865, 202
https://tests.stockfishchess.org/tests/view/61ba886857a0d0f327c2cfd6

Passed LTC
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 27776 W: 7208 L: 6953 D: 13615 Elo +3.00
Ptnml(0-2): 14, 2808, 8007, 3027, 32
https://tests.stockfishchess.org/tests/view/61baae4d57a0d0f327c2d96f

closes https://github.com/official-stockfish/Stockfish/pull/3856

Bench: 4667591
2021-12-17 18:12:47 +01:00
Joost VandeVondele
ea1ddb6aef Update default net to nn-d93927199b3d.nnue
Using the same dataset as before but slightly reduced initial LR as in
https://github.com/vondele/nnue-pytorch/tree/tweakLR1

passed STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 51368 W: 13492 L: 13191 D: 24685
Ptnml(0-2): 168, 5767, 13526, 6042, 181
https://tests.stockfishchess.org/tests/view/61b61f43dffbe89a3580b529

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 45128 W: 11763 L: 11469 D: 21896
Ptnml(0-2): 24, 4583, 13063, 4863, 31
https://tests.stockfishchess.org/tests/view/61b6612edffbe89a3580c447

closes https://github.com/official-stockfish/Stockfish/pull/3848

Bench: 5121336
2021-12-13 07:17:25 +01:00
Joost VandeVondele
b82d93ece4 Update default net to nn-63376713ba63.nnue.
same data set as previous trained nets, tuned the wdl model slightly for training.
https://github.com/vondele/nnue-pytorch/tree/wdlTweak1

passed STC:
https://tests.stockfishchess.org/tests/view/61abe9e456fcf33bce7d2834
LLR: 2.93 (-2.94,2.94) <0.00,2.50>
Total: 31720 W: 8385 L: 8119 D: 15216
Ptnml(0-2): 117, 3534, 8273, 3838, 98

passed LTC:
https://tests.stockfishchess.org/tests/view/61ac293756fcf33bce7d36cf
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 136136 W: 35255 L: 34741 D: 66140
Ptnml(0-2): 114, 14217, 38894, 14727, 116

closes https://github.com/official-stockfish/Stockfish/pull/3836

Bench: 4667742
2021-12-07 12:40:48 +01:00
Joost VandeVondele
327060232a Update default net to nn-cdf1785602d6.nnue
Same process as in e4a0c6c759
with the training started from the current master net.

passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 38224 W: 10023 L: 9742 D: 18459
Ptnml(0-2): 133, 4328, 9940, 4547, 164
https://tests.stockfishchess.org/tests/view/61a8611e4ed77d629d4e836e

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 115176 W: 29783 L: 29321 D: 56072
Ptnml(0-2): 68, 12039, 32936, 12453, 92
https://tests.stockfishchess.org/tests/view/61a8963e4ed77d629d4e8d9b

closes https://github.com/official-stockfish/Stockfish/pull/3830

Bench: 4829419
2021-12-04 10:31:22 +01:00
Joost VandeVondele
e4a0c6c759 Update default net to nn-4f56ecfca5b7.nnue
New net trained with nnue-pytorch, started from a master net on a data set of Leela
(T60.binpack+T74.binpack), Stockfish data (wrongIsRight_nodes5000pv2.binpack), and
Michael Babigian's conversion of T60 Leela data (including TB7 rescoring) (farseer.binpack)
available as a single interleaved binpack:

https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view?usp=sharing

The nnue-pytorch branch used is https://github.com/vondele/nnue-pytorch/tree/wdl

passed STC:
https://tests.stockfishchess.org/tests/view/61a3cc729f0c43dae1c71f1b
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 49152 W: 12842 L: 12544 D: 23766
Ptnml(0-2): 154, 5542, 12904, 5804, 172

passed LTC:
https://tests.stockfishchess.org/tests/view/61a43c6260afd064f2d724f1
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 25528 W: 6676 L: 6425 D: 12427
Ptnml(0-2): 9, 2593, 7315, 2832, 15

closes https://github.com/official-stockfish/Stockfish/pull/3816

Bench: 6885242
2021-11-29 12:56:01 +01:00
Joost VandeVondele
9ee58dc7a7 Update default net to nn-3678835b1d3d.nnue
New net trained with nnue-pytorch, started from the master net on a data set of Leela
(T60.binpack+T74.binpack) and Stockfish data (wrongIsRight_nodes5000pv2.binpack),
available as a single interleaved binpack:

https://drive.google.com/file/d/12uWZIA3F2cNbraAzQNb1jgf3tq_6HkTr/view?usp=sharing

The nnue-pytorch branch used is https://github.com/vondele/nnue-pytorch/tree/wdl, which
has the new feature to filter positions based on the likelihood of the current evaluation
leading to the game outcome. It should make it less likely to try to learn from
misevaluated positions. Standard options have been used, starting from the master net:

   --gpus 1 --threads 4 --num-workers 4 --batch-size 16384 --progress_bar_refresh_rate 300
   --smart-fen-skipping --random-fen-skipping 12 --features=HalfKAv2_hm^   --lambda=1.0
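
One way to picture the WDL filtering (purely illustrative; the actual logic and constants are in the linked wdl branch):

```
import math
import random

def keep_position(eval_cp, outcome, scale=361.0):
    # Estimate the win probability implied by the current eval and keep the
    # position with a probability that grows with how consistent that
    # estimate is with the game outcome (0, 0.5 or 1 from White's side).
    p_win = 1.0 / (1.0 + math.exp(-eval_cp / scale))
    consistency = 1.0 - abs(p_win - outcome)
    return random.random() < consistency
```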

Testing with games shows neutral Elo at STC, and good performance at LTC:

STC:
https://tests.stockfishchess.org/tests/view/619eb597c0a4ea18ba95a4dc
ELO: -0.44 +-1.8 (95%) LOS: 31.2%
Total: 40000 W: 10447 L: 10498 D: 19055
Ptnml(0-2): 254, 4576, 10260, 4787, 123

LTC:
https://tests.stockfishchess.org/tests/view/619f6e87c0a4ea18ba95a53f
ELO: 3.30 +-1.8 (95%) LOS: 100.0%
Total: 33062 W: 8560 L: 8246 D: 16256
Ptnml(0-2): 54, 3358, 9352, 3754, 13

passed LTC SPRT:
https://tests.stockfishchess.org/tests/view/61a0864e8967bbf894416e65
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 29376 W: 7663 L: 7396 D: 14317
Ptnml(0-2): 67, 3017, 8205, 3380, 19

closes https://github.com/official-stockfish/Stockfish/pull/3808

Bench: 7011501
2021-11-26 18:16:04 +01:00
xoto10
f21a66f70d Small clean-up, Sept 2021
Closes https://github.com/official-stockfish/Stockfish/pull/3485

No functional change
2021-10-07 09:41:57 +02:00
SFisGOD
723f48dec0 Update default net to nn-13406b1dcbe0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6134abc425b9b35584838572
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-6762d36ad265.nnue
New net: nn-c9fdeea14cb2.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/61355b7e25b9b3558483860e
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-c9fdeea14cb2.nnue
New net: nn-0ddc28184f4c.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/613737be0cd98ab40c0c9e4e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-0ddc28184f4c.nnue
New net: nn-2419828bb394.nnue

SPSA 4: https://tests.stockfishchess.org/tests/view/613966ff689039fce12e0fe7
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-2419828bb394.nnue
New net: nn-05d9b1ee3037.nnue

SPSA 5: https://tests.stockfishchess.org/tests/view/613b4a38689039fce12e1209
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-05d9b1ee3037.nnue
New net: nn-98c6ce0fc15f.nnue

SPSA 6: https://tests.stockfishchess.org/tests/view/613e331515591e7c9ebc3fe9
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-98c6ce0fc15f.nnue
New net: nn-13406b1dcbe0.nnue

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 82008 W: 21044 L: 20752 D: 40212
Ptnml(0-2): 264, 9341, 21525, 9587, 287
https://tests.stockfishchess.org/tests/view/613f7c6cf29dda16fcca870c

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 182928 W: 46258 L: 45602 D: 91068
Ptnml(0-2): 107, 19448, 51712, 20076, 121
https://tests.stockfishchess.org/tests/view/613fccb97315e7c73204a48c

Closes #3703

Bench: 6658747
2021-09-15 17:50:20 +02:00
SFisGOD
be63ce1bb5 Update default net to nn-6762d36ad265.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/612cdb1fbb4956d8b78eb5ab
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-fe433fd8c7f6.nnue
New net: nn-5f134823db04.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/612fcde645091e810014af19
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-5f134823db04.nnue
New net: nn-8eca5dd4e3f7.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/6130822345091e810014af61
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-8eca5dd4e3f7.nnue
New net: nn-4556108e4f00.nnue

SPSA 4: https://tests.stockfishchess.org/tests/view/613287652ffb3c36aceb923c
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-4556108e4f00.nnue
New net: nn-6762d36ad265.nnue

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 162776 W: 41220 L: 40807 D: 80749
Ptnml(0-2): 517, 18800, 42359, 19177, 535
https://tests.stockfishchess.org/tests/view/6134107125b9b35584838559

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 41056 W: 10428 L: 10156 D: 20472
Ptnml(0-2): 30, 4288, 11618, 4564, 28
https://tests.stockfishchess.org/tests/view/6134ad6525b9b3558483857a

closes https://github.com/official-stockfish/Stockfish/pull/3691

Bench: 5812158
2021-09-06 14:08:22 +02:00
SFisGOD
2807dcfab6 Update default net to nn-735bba95dec0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/61286d8b62d20cf82b5ad1bd
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-33495fe25081.nnue
New net: nn-83e3cf2af92b.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/6129cf2162d20cf82b5ad25f
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-83e3cf2af92b.nnue
New net: nn-69a528eaef35.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/612a0dcb62d20cf82b5ad2a0
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-69a528eaef35.nnue
New net: nn-735bba95dec0.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 95144 W: 24310 L: 23999 D: 46835
Ptnml(0-2): 232, 11059, 24748, 11232, 301
https://tests.stockfishchess.org/tests/view/612bb3be0fdf40644b4b9996

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 33632 W: 8522 L: 8271 D: 16839
Ptnml(0-2): 18, 3511, 9516, 3744, 27
https://tests.stockfishchess.org/tests/view/612ce5b9bb4956d8b78eb5b3

Closes https://github.com/official-stockfish/Stockfish/pull/3685

Bench: 5600615
2021-08-31 12:56:19 +02:00
SFisGOD
69eede7d08 Update default net to nn-33495fe25081.nnue
STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 37368 W: 9621 L: 9391 D: 18356
Ptnml(0-2): 117, 4287, 9664, 4481, 135
https://tests.stockfishchess.org/tests/view/612768165318138ee1204977

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13328 W: 3446 L: 3246 D: 6636
Ptnml(0-2): 11, 1383, 3682, 1571, 17
https://tests.stockfishchess.org/tests/view/6127dc8d62d20cf82b5ad196

Closes https://github.com/official-stockfish/Stockfish/pull/3679

Bench: 5179347
2021-08-27 07:51:26 +02:00
SFisGOD
590447d7a1 Update default net to nn-517c4f68b5df.nnue
SPSA: https://tests.stockfishchess.org/tests/view/611cf0da4977aa1525c9ca03
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-ac5605a608d6.nnue
New net: nn-517c4f68b5df.nnue

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11600 W: 998 L: 851 D: 9751
Ptnml(0-2): 30, 705, 4186, 846, 33
https://tests.stockfishchess.org/tests/view/611f84524977aa1525c9cb5b

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 9360 W: 338 L: 243 D: 8779
Ptnml(0-2): 0, 220, 4151, 303, 6
https://tests.stockfishchess.org/tests/view/611f8c5b4977aa1525c9cb64

closes https://github.com/official-stockfish/Stockfish/pull/3667

Bench: 4844618
2021-08-22 09:09:58 +02:00
Torsten Hellwig
1946a67567 Update default net to nn-ac5605a608d6.nnue
This net was created with the nnue-pytorch trainer, it used the previous master net as a starting point.

The training data includes all T60 data (https://drive.google.com/drive/folders/1rzZkgIgw7G5vQMLr2hZNiUXOp7z80613), all T74 data (https://drive.google.com/drive/folders/1aFUv3Ih3-A8Vxw9064Kw_FU4sNhMHZU-) and the wrongNNUE_02_d9.binpack (https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq). The Leela data were randomly named and then concatenated. All data was merged into one binpack using interleave_binpacks.py.

```
python3 train.py \
    ../data/t60_t74_wrong.binpack \
    ../data/t60_t74_wrong.binpack \
    --resume-from-model ../data/nn-e8321e467bf6.pt \
    --gpus 1 \
    --threads 4 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 300 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --seed $RANDOM \
    --default_root_dir ../output/exp_24
```

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 15320 W: 1415 L: 1257 D: 12648
Ptnml(0-2): 50, 1002, 5402, 1152, 54
https://tests.stockfishchess.org/tests/view/611c404a4977aa1525c9c97f

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 9440 W: 345 L: 248 D: 8847
Ptnml(0-2): 3, 222, 4175, 315, 5
https://tests.stockfishchess.org/tests/view/611c6c7d4977aa1525c9c996

LTC with UHO_XXL_+0.90_+1.19.epd:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 6232 W: 1638 L: 1459 D: 3135
Ptnml(0-2): 5, 592, 1744, 769, 6
https://tests.stockfishchess.org/tests/view/611c9b214977aa1525c9c9cb

closes https://github.com/official-stockfish/Stockfish/pull/3664

Bench: 5375286
2021-08-18 09:17:22 +02:00
Tomasz Sobczyk
d61d38586e New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters

The summary of the changes:

* Position for each perspective mirrored such that the king is on e..h files. Cuts the feature transformer size in half, while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is possibly mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8, to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq
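
A small sketch of the mirroring trick from the first bullet (squares 0..63, file = sq % 8; illustrative only):

```
def orient(sq: int, king_sq: int) -> int:
    # If our king sits on files a..d, mirror the board horizontally so it
    # lands on e..h; mirror-image positions then share the same features,
    # halving the feature transformer size.
    return sq ^ 7 if king_sq % 8 < 4 else sq
```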

The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Also, additional 0 padding is added to the output for some archs that cannot process inputs by <=8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in the registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (and perhaps why the current way the affine transform is done even passed SPRT) (see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details) and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.

The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143

The training utilized 2 datasets.

    dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
    dataset B - as described in ba01f4b954

The training process was as following:

    train on dataset A for 350 epochs, take the best net in terms of elo at 20k nodes per move (it's fine to take anything from later stages of training).
    convert the .ckpt to .pt
    --resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.

The first training command:

```
python3 train.py \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```

The second training command:

```
python3 serialize.py \
    --features=HalfKAv2_hm^ \
    ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
    ../nnue-pytorch-training/experiment_$1/base/base.pt

python3 train.py \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=0.8 \
    --max_epochs=600 \
    --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4

LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea

LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
2021-08-15 12:05:43 +02:00
SFisGOD
73ef5b8c4a Update default net to nn-46832cfbead3.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6100e7f096b86d98abf6a832
Parameters: A total of 256 net weights and 8 net biases were tuned (output layer)
Base net: nn-56a5f1c4173a.nnue
New net: nn-ec3c8e029926.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/610733caafad2da4f4ae3da7
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-ec3c8e029926.nnue
New net: nn-46832cfbead3.nnue

STC:
LLR: 2.98 (-2.94,2.94) <-0.50,2.50>
Total: 50520 W: 3953 L: 3765 D: 42802
Ptnml(0-2): 138, 3063, 18678, 3235, 146
https://tests.stockfishchess.org/tests/view/610a79692a8a49ac5be793f4

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 57256 W: 1723 L: 1566 D: 53967
Ptnml(0-2): 12, 1442, 25568, 1589, 17
https://tests.stockfishchess.org/tests/view/610ac5bb2a8a49ac5be79434

Closes https://github.com/official-stockfish/Stockfish/pull/3642

Bench: 5359314
2021-08-05 08:52:07 +02:00
SFisGOD
e973eee919 Update default net to nn-56a5f1c4173a.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/60fd24efd8a6b65b2f3a796e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
New best values: Half of the changes from the tuning run
New net: nn-5992d3ba79f3.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/60fec7d6d8a6b65b2f3a7aa2
Parameters: A total of 128 net biases were tuned (hidden layer 1)
New best values: Half of the changes from the tuning run
New net: nn-56a5f1c4173a.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140392 W: 10863 L: 10578 D: 118951
Ptnml(0-2): 347, 8754, 51718, 9021, 356
https://tests.stockfishchess.org/tests/view/610037e396b86d98abf6a79e

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 14216 W: 454 L: 355 D: 13407
Ptnml(0-2): 4, 323, 6356, 420, 5
https://tests.stockfishchess.org/tests/view/61019995afad2da4f4ae3a3c

Closes #3633

Bench: 4801359
2021-07-29 07:35:13 +02:00
SFisGOD
237ed1ef8f Update default net to nn-26abeed38351.nnue
SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891

New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (the approximate real TC is 2.5 seconds; see the check below).
The rest is the same as described in #3593.

The change from nodestime=600 to 300 was suggested by gekkehenker to prevent time losses for some slow workers
SFisGOD@94cd757#commitcomment-53324840
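
As a back-of-the-envelope check of the quoted real TC (this assumes nodestime is interpreted as nodes per millisecond and a worker speed of roughly 1.2 Mnps, which is an assumption here, not part of the original message):

```
nodes_budget = 300 * 10_000          # 300 nodes/ms * 10 s base time = 3,000,000 nodes
real_seconds = nodes_budget / 1.2e6  # ~2.5 s on a ~1.2 Mnps worker
```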

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9

Closes https://github.com/official-stockfish/Stockfish/pull/3630

Bench: 5124774
2021-07-26 07:52:59 +02:00
MichaelB7
b939c80513 Update the default net to nn-76a8a7ffb820.nnue.
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), built on top of previous developments by restarting from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:

The initial net nn-d8609abe8caf.nnue was trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then training, continuing from the master net, with lambda=0.2 and a sampling ratio of 1. Training started with LR=2e-3 and dropped the LR by a factor of 0.5 until it reached LR=5e-4. in_scaling is set to 361. No other significant changes were made to the pytorch trainer.

Training data gen command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (note that this only trains for 20 epochs; repeat the training until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.

Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:

The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run is thus started from Sergio Vieri's net nn-d8609abe8caf.nnue

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved together (conceptual sketch below). The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.
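
The concatenate-then-interleave step can be pictured with the following conceptual sketch (hypothetical names; "blocks" must be whole self-contained units such as binpack blocks or chunk files, since cutting a binpack at arbitrary byte offsets would corrupt it):

```
from itertools import chain, zip_longest

def interleave(blocks_a, blocks_b):
    """Alternate blocks from the two sources so a reader sees both
    data distributions evenly mixed."""
    pairs = zip_longest(blocks_a, blocks_b)
    return [blk for blk in chain.from_iterable(pairs) if blk is not None]
```

Concatenating each side first means the alternation mixes the two sources roughly 50/50 by volume, which is the intended weighting.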

model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5, calculated as 1.5e-3*(.992^360); the idea here was to take a highly trained net and just use all.binpack as a finishing micro-refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro optimization, I set the period to "5" in train.py; this changes the checkpoint output so that a checkpoint file is created every 5th epoch.
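
As a quick sanity check of that LR arithmetic (not part of the original message):

```
lr = 1.5e-3 * 0.992 ** 360   # ~8.3e-5, quoted above (rounded) as 8.0e-5
```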

The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.

passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17

passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100

closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
2021-07-24 18:04:59 +02:00
SFisGOD
8fc297c506 Update default net to nn-9e3c6298299a.nnue
Optimization of nn-956480d8378f.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60da2bf63beab81350ac9fe7

Same method as described in PR #3593

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17792 W: 1525 L: 1372 D: 14895
Ptnml(0-2): 28, 1156, 6401, 1257, 54
https://tests.stockfishchess.org/tests/view/60deffc59ea99d7c2d693c19

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36544 W: 1245 L: 1109 D: 34190
Ptnml(0-2): 12, 988, 16139, 1118, 15
https://tests.stockfishchess.org/tests/view/60df11339ea99d7c2d693c22

closes https://github.com/official-stockfish/Stockfish/pull/3601

Bench: 4687476
2021-07-03 10:03:32 +02:00
SFisGOD
49283d3a66 Update default net to nn-3475407dc199.nnue
Optimization of eight subnetwork output layers of Michael's nn-190f102a22c3.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60d5510642a522cc50282ef3

Parameters: A total of 256 net weights and 8 net biases were tuned
New best values: The raw values at the end of the tuning run were used (800k games, 5 seconds TC)
Settings: default ck value and SPSA A is 30,000 (3.75% of the total number of games)
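
For readers unfamiliar with such tuning runs, a generic SPSA iteration looks roughly like the sketch below. This is illustrative only (the fishtest tuner differs in details such as per-parameter scaling); `measure` is a hypothetical callback that plays games with the given values and returns a score to maximize, and only `A = 30000` is taken from the settings above.

```
import random

def spsa_step(theta, k, measure, a=1.0, A=30000, c=4.0, alpha=0.602, gamma=0.101):
    a_k = a / (k + 1 + A) ** alpha      # step size decays over the run
    c_k = c / (k + 1) ** gamma          # perturbation size decays too
    delta = [random.choice((-1, 1)) for _ in theta]
    plus  = [t + c_k * d for t, d in zip(theta, delta)]
    minus = [t - c_k * d for t, d in zip(theta, delta)]
    ghat = (measure(plus) - measure(minus)) / (2.0 * c_k)
    # For +/-1 perturbations 1/delta_i == delta_i, so the update reduces to:
    return [t + a_k * ghat * d for t, d in zip(theta, delta)]
```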

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 29064 W: 2435 L: 2269 D: 24360
Ptnml(0-2): 72, 1857, 10505, 2029, 69
https://tests.stockfishchess.org/tests/view/60d8ea123beab81350ac9eb6

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 61848 W: 2055 L: 1884 D: 57909
Ptnml(0-2): 18, 1708, 27310, 1861, 27
https://tests.stockfishchess.org/tests/view/60d8f0393beab81350ac9ec6

closes https://github.com/official-stockfish/Stockfish/pull/3593

Bench: 4770936
2021-06-28 21:31:58 +02:00
MichaelB7
b94a651878 Make net nn-956480d8378f.nnue the default
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_18 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from a previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

passed STC:
https://tests.stockfishchess.org/tests/view/60d0c0a7a8ec07dc34c072b2
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 18440 W: 1693 L: 1531 D: 15216
Ptnml(0-2): 67, 1225, 6464, 1407, 57

passed LTC:
https://tests.stockfishchess.org/tests/view/60d762793beab81350ac9d72
LLR: 2.98 (-2.94,2.94) <0.50,3.50>
Total: 93120 W: 3152 L: 2933 D: 87035
Ptnml(0-2): 48, 2581, 41076, 2814, 41

passed LTC (rebased branch to current master):
https://tests.stockfishchess.org/tests/view/60d85eeb3beab81350ac9e2b
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 42688 W: 1347 L: 1206 D: 40135
Ptnml(0-2): 14, 1097, 18981, 1238, 14

closes https://github.com/official-stockfish/Stockfish/pull/3592

Bench: 4906727
2021-06-28 21:20:05 +02:00
MichaelB7
9b82414b67 Make net nn-190f102a22c3.nnue the default net.
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_17 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from the previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

passed LTC
https://tests.stockfishchess.org/tests/view/60d09f52b4c17000d679517f
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 32184 W: 1100 L: 970 D: 30114
Ptnml(0-2): 10, 878, 14193, 994, 17

passed STC
https://tests.stockfishchess.org/tests/view/60d086c02114332881e7368e
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11360 W: 1056 L: 906 D: 9398
Ptnml(0-2): 25, 735, 4026, 853, 41

closes https://github.com/official-stockfish/Stockfish/pull/3576

Bench: 4631244
2021-06-21 23:16:55 +02:00
MichaelB7
ba01f4b954 Make net nn-75980ca503c6.nnue the default.
trained with the Python command

c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_10 --resume-from-model ./pt/nn-3b20abec10c1.pt
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

Net nn-3b20abec10c1.nnue was chosen as the --resume-from-model with the idea that, through learning, the manually hex-edited values would be learned and would not need to be manually adjusted going forward. They would also be fine-tuned by the learning process.

passed STC:
https://tests.stockfishchess.org/tests/view/60cdf91e457376eb8bcab66f
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18256 W: 1639 L: 1479 D: 15138
Ptnml(0-2): 59, 1179, 6505, 1313, 72

passed LTC:
https://tests.stockfishchess.org/tests/view/60ce2166457376eb8bcab6e1
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 18792 W: 654 L: 542 D: 17596
Ptnml(0-2): 9, 490, 8291, 592, 14

closes https://github.com/official-stockfish/Stockfish/pull/3570

Bench: 5020972
2021-06-19 23:24:35 +02:00
Tomasz Sobczyk
2e745956c0 Change trace with NNUE eval support
This patch adds some more output to the `eval` command. It adds a board display
with estimated piece values (the method is remove piece, evaluate, put piece; a
sketch follows below), and splits the NNUE evaluation into (psqt, layers)
contributions for each bucket of the NNUE net.
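
The remove-piece/evaluate/put-piece method can be sketched as follows (Python pseudocode with a hypothetical position API; the engine does this in C++ with its own evaluator):

```
def estimated_piece_value(pos, sq, evaluate):
    """Estimate a piece's worth as the eval swing caused by removing it."""
    with_piece = evaluate(pos)      # evaluation with the piece on the board
    piece = pos.remove_piece(sq)    # hypothetical API: take the piece off
    without_piece = evaluate(pos)   # evaluation without it
    pos.put_piece(piece, sq)        # restore the original position
    return with_piece - without_piece
```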

Example:

```

./stockfish
position fen 3Qb1k1/1r2ppb1/pN1n2q1/Pp1Pp1Pr/4P2p/4BP2/4B1R1/1R5K b - - 11 40
eval

 Contributing terms for the classical eval:
+------------+-------------+-------------+-------------+
|    Term    |    White    |    Black    |    Total    |
|            |   MG    EG  |   MG    EG  |   MG    EG  |
+------------+-------------+-------------+-------------+
|   Material |  ----  ---- |  ----  ---- | -0.73 -1.55 |
|  Imbalance |  ----  ---- |  ----  ---- | -0.21 -0.17 |
|      Pawns |  0.35 -0.00 |  0.19 -0.26 |  0.16  0.25 |
|    Knights |  0.04 -0.08 |  0.12 -0.01 | -0.08 -0.07 |
|    Bishops | -0.34 -0.87 | -0.17 -0.61 | -0.17 -0.26 |
|      Rooks |  0.12  0.00 |  0.08  0.00 |  0.04  0.00 |
|     Queens |  0.00  0.00 | -0.27 -0.07 |  0.27  0.07 |
|   Mobility |  0.84  1.76 |  0.01  0.66 |  0.83  1.10 |
|King safety | -0.99 -0.17 | -0.72 -0.10 | -0.27 -0.07 |
|    Threats |  0.27  0.27 |  0.73  0.86 | -0.46 -0.59 |
|     Passed |  0.00  0.00 |  0.79  0.82 | -0.79 -0.82 |
|      Space |  0.61  0.00 |  0.24  0.00 |  0.37  0.00 |
|   Winnable |  ----  ---- |  ----  ---- |  0.00 -0.03 |
+------------+-------------+-------------+-------------+
|      Total |  ----  ---- |  ----  ---- | -1.03 -2.14 |
+------------+-------------+-------------+-------------+

 NNUE derived piece values:
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |   Q   |   b   |       |   k   |       |
|       |       |       | +12.4 | -1.62 |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   r   |       |       |   p   |   p   |   b   |       |
|       | -3.89 |       |       | -0.84 | -1.19 | -3.32 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   p   |   N   |       |   n   |       |       |   q   |       |
| -1.81 | +3.71 |       | -4.82 |       |       | -5.04 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   P   |   p   |       |   P   |   p   |       |   P   |   r   |
| +1.16 | -0.91 |       | +0.55 | +0.12 |       | +0.50 | -4.02 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   P   |       |       |   p   |
|       |       |       |       | +2.33 |       |       | +1.17 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |   P   |       |       |
|       |       |       |       | +4.79 | +1.54 |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |       |   R   |       |
|       |       |       |       | +4.54 |       | +6.03 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   R   |       |       |       |       |       |   K   |
|       | +4.81 |       |       |       |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+

 NNUE network contributions (Black to move)
+------------+------------+------------+------------+
|   Bucket   |  Material  | Positional |   Total    |
|            |   (PSQT)   |  (Layers)  |            |
+------------+------------+------------+------------+
|  0         |  +  0.32   |  -  1.46   |  -  1.13   |
|  1         |  +  0.25   |  -  0.68   |  -  0.43   |
|  2         |  +  0.46   |  -  1.72   |  -  1.25   |
|  3         |  +  0.55   |  -  1.80   |  -  1.25   |
|  4         |  +  0.48   |  -  1.77   |  -  1.29   |
|  5         |  +  0.40   |  -  2.00   |  -  1.60   |
|  6         |  +  0.57   |  -  2.12   |  -  1.54   | <-- this bucket is used
|  7         |  +  3.38   |  -  2.00   |  +  1.37   |
+------------+------------+------------+------------+

Classical evaluation   -1.00 (white side)
NNUE evaluation        +1.54 (white side)
Final evaluation       +2.38 (white side) [with scaled NNUE, hybrid, ...]

```

Also renames the export_net() function to save_eval() while there.

closes https://github.com/official-stockfish/Stockfish/pull/3562

No functional change
2021-06-19 11:57:01 +02:00
Joost VandeVondele
adfb23c029 Make net nn-50144f835024.nnue the default
trained with the Python command

c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_8 --resume-from-model ./pt/nn-6ad41a9207d0.pt
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack of approximately equal size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

nn-6ad41a9207d0.pt was derived from a net vondele ran which passed STC quickly,
but faltered in LTC. https://tests.stockfishchess.org/tests/view/60cba666457376eb8bcab443

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18792 W: 2068 L: 1889 D: 14835
Ptnml(0-2): 82, 1480, 6117, 1611, 106
https://tests.stockfishchess.org/tests/view/60ccda8b457376eb8bcab568

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 11376 W: 574 L: 454 D: 10348
Ptnml(0-2): 4, 412, 4747, 510, 15
https://tests.stockfishchess.org/tests/view/60ccf952457376eb8bcab58d

closes https://github.com/official-stockfish/Stockfish/pull/3568

Bench: 4900906
2021-06-18 23:50:26 +02:00
SFisGOD
86afb6a7cf Update default net to nn-aa9d7eeb397e.nnue
Optimization of vondele's nn-33c9d39e5eb6.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60ca68be457376eb8bcab28b
Setting: ck values are default based on how large the parameters are
The new values for this net are the raw values at the end of the tuning (80k games)

The significant changes are in buckets 1 and 2 (5-12 pieces), so compared to nn-33c9 the main difference is in endgame play. There is also a change in bucket 7 (29-32 pieces), but it is not as substantial as the changes in buckets 1 and 2. If we interpret the changes based on an experiment from a few months ago, this new net plays more optimistically during endgames and less optimistically during openings.

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 49504 W: 4246 L: 4053 D: 41205
Ptnml(0-2): 140, 3282, 17749, 3407, 174
https://tests.stockfishchess.org/tests/view/60cbd752457376eb8bcab478

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 88720 W: 4926 L: 4651 D: 79143
Ptnml(0-2): 105, 4048, 35793, 4295, 119
https://tests.stockfishchess.org/tests/view/60cc7828457376eb8bcab4fa

closes https://github.com/official-stockfish/Stockfish/pull/3566

Bench: 4758885
2021-06-18 21:29:14 +02:00
ap
14b673d90f New default net nn-3b20abec10c1.nnue
This net was created by @pleomati, who used a hex editor to manually edit
10 randomly chosen values in the LCSFNet10 net (nn-6ad41a9207d0.nnue) to
create this one. The LCSFNet10 net was trained by Joost VandeVondele from
a dataset combining Stockfish games and Leela games (16x10^9 positions from
SF self-play at depth 9, and 6.3x10^9 positions from Leela games, so overall
72% of Stockfish positions and 28% of Leela positions).

passed STC 10+0.1:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 50888 W: 5881 L: 5654 D: 39353
Ptnml(0-2): 281, 4290, 16085, 4497, 291
https://tests.stockfishchess.org/tests/view/60cbfa68457376eb8bcab49a

passed LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 25480 W: 1498 L: 1338 D: 22644
Ptnml(0-2): 36, 1155, 10193, 1325, 31
https://tests.stockfishchess.org/tests/view/60cc4af8457376eb8bcab4d4

closes https://github.com/official-stockfish/Stockfish/pull/3564

Bench: 4904930
2021-06-18 20:00:13 +02:00
Joost VandeVondele
8ec9e10866 New default net nn-33c9d39e5eb6.nnue
Like the previous net, this net is trained on Leela games as provided by borg.
See also https://lczero.org/blog/2021/06/the-importance-of-open-data/

The particular data set, which is a mix of T60 and T74 data, is now available as a single binpack:
https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing

The training command was:
python train.py ../../training_data_pylon.binpack ../../training_data_pylon.binpack --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 10 --features=HalfKAv2^   --lambda=1.0  --max_epochs=440 --seed $RANDOM --default_root_dir exp/run_2

passed STC:
https://tests.stockfishchess.org/tests/view/60c887cb457376eb8bcab054
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 12792 W: 1483 L: 1311 D: 9998
Ptnml(0-2): 62, 989, 4131, 1143, 71

passed LTC:
https://tests.stockfishchess.org/tests/view/60c8e5c4457376eb8bcab0f0
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 11272 W: 601 L: 477 D: 10194
Ptnml(0-2): 9, 421, 4657, 535, 14

also had strong LTC performance against another strong net of the series:
https://tests.stockfishchess.org/tests/view/60c8c40d457376eb8bcab0c6

closes https://github.com/official-stockfish/Stockfish/pull/3557

Bench: 5032320
2021-06-15 22:08:40 +02:00
JWmer
f8c779dbe5 Update default net to nn-8e47cf062333.nnue
This net is the result of training on data used by the Leela project. More precisely,
we shuffled T60 and T74 data kindly provided by borg (for different Tnn, the data is
a result of Leela selfplay with differently sized Leela nets).

The data is available at vondele's google drive:
https://drive.google.com/drive/folders/1mftuzYdl9o6tBaceR3d_VBQIrgKJsFpl.

The Leela data comes in small chunks of .binpack files. To shuffle them, we simply
used a small python script to randomly rename the files, and then concatenated them
using `cat` (a sketch follows below). As validation data we picked a file of T60
data. We will further investigate T74 data.
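
That shuffle step might look like this minimal sketch (hypothetical paths; shuffling the file order before concatenation has the same effect as the random renaming):

```
import os, random, shutil

chunks = os.listdir("leela_chunks")               # the small .binpack files
random.shuffle(chunks)                            # same effect as random renames
with open("training_data.binpack", "wb") as out:  # then `cat` them together
    for name in chunks:
        with open(os.path.join("leela_chunks", name), "rb") as f:
            shutil.copyfileobj(f, out)
```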

The training for the NNUE architecture used 200 epochs with the Python trainer from
the Stockfish project. Unlike the previous run we tried with this data, this run does
not have adjusted scaling — not because we didn't want to, but because we forgot.
However, this training randomly skips 40% more positions than previous run. The loss
was very spiky and decreased slower than it does usually.

Training loss: https://github.com/official-stockfish/images/blob/main/training-loss-8e47cf062333.png
Validation loss: https://github.com/official-stockfish/images/blob/main/validation-loss-8e47cf062333.png

This is the exact training command:
python train.py --smart-fen-skipping --random-fen-skipping 14 --batch-size 16384 --threads 4 --num-workers 4 --gpus 1 trainingdata\training_data.binpack validationdata\val.binpack

---

10k STC result:
ELO: 3.61 +-3.3 (95%) LOS: 98.4%
Total: 10000 W: 1241 L: 1137 D: 7622
Ptnml(0-2): 68, 841, 3086, 929, 76
https://tests.stockfishchess.org/tests/view/60c67e50457376eb8bcaae70

10k LTC result:
ELO: 2.71 +-2.4 (95%) LOS: 98.8%
Total: 10000 W: 659 L: 581 D: 8760
Ptnml(0-2): 22, 485, 3900, 579, 14
https://tests.stockfishchess.org/tests/view/60c69deb457376eb8bcaae98

Passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9648 W: 685 L: 545 D: 8418
Ptnml(0-2): 22, 448, 3740, 596, 18
https://tests.stockfishchess.org/tests/view/60c6d41c457376eb8bcaaecf

---

closes https://github.com/official-stockfish/Stockfish/pull/3550

Bench: 4877339
2021-06-14 09:24:07 +02:00