Created by retraining the master net with these modifications:
* New filtering methods for existing data from T80 sep+oct2022, T79 apr2022, T78 jun+jul+aug+sep2022, T77 dec2021
* Adding new filtered data from T80 aug2022 and T78 apr+may2022
* Increasing early-fen-skipping from 28 to 30
```
python3 easy_train.py \
--experiment-name leela96-dfrc99-T80novT79mayT60novdec-v2-T80augsepoctT79aprT78aprtosep-v6-T77dec-v3-sk30 \
--training-dataset /data/leela96-dfrc99-T80novT79mayT60novdec-v2-T80augsepoctT79aprT78aprtosep-v6-T77dec-v3.binpack \
--nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes \
--start-from-engine-test-net True \
--early-fen-skipping 30 \
--max_epoch 900 \
--start-lambda 1.0 \
--end-lambda 0.7 \
--lr 4.375e-4 \
--gamma 0.995 \
--tui False \
--gpus "0," \
--seed $RANDOM
```
The v3 filtering used for data from T77dec 2021 differs from v2 filtering in that:
* To improve binpack compression, filtered positions after ply 28 were kept but skipped during training by setting their scores to VALUE_NONE (32002) instead of removing them entirely
* All early-game positions with ply <= 28 were removed to maximize binpack compression
* Only bestmove captures at d6pv2 search were skipped, not 2nd bestmove captures
* Binpack compression was repaired for the remaining positions by effectively replacing bestmoves with "played moves" to maintain contiguous sequences of positions in the training game data
After improving binpack compression, the T77 dec2021 data size was reduced from 95G to 19G.
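The VALUE_NONE trick can be sketched as follows; the record layout and helper names here are illustrative, not the actual binpack tooling:
```
VALUE_NONE = 32002  # sentinel score; the trainer skips positions scored like this

def mark_for_compression(positions, is_filtered, max_early_ply=28):
    # `positions` and `is_filtered` are illustrative stand-ins for the real tooling
    kept = []
    for pos in positions:
        if pos.ply <= max_early_ply:
            continue                  # early-game positions are removed entirely
        if is_filtered(pos):
            pos.score = VALUE_NONE    # kept in the binpack, skipped in training
        kept.append(pos)              # contiguous game sequences compress well
    return kept
```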
The v6 filtering used for data from T80augsepoctT79aprT78aprtosep 2022 differs from v2 in that:
* All positions with only one legal move were removed
* Tighter score differences at d6pv2 search were used to remove more positions with only one good move than before
* d6pv2 search was not used to remove positions where the best 2 moves were captures
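Roughly, the v6 criteria amount to a check like the following sketch; the centipawn gap is a placeholder, and the exact thresholds are in the filtering code linked further below:
```
import chess  # python-chess, used here only to count legal moves

ONE_GOOD_MOVE_GAP = 200  # placeholder centipawn gap, not the real threshold

def keep_position_v6(board: chess.Board, best_score, second_best_score):
    # best_score/second_best_score assumed to come from a depth-6 multipv-2 search
    if board.legal_moves.count() <= 1:
        return False                  # positions with only one legal move are removed
    if best_score - second_best_score > ONE_GOOD_MOVE_GAP:
        return False                  # only one good move
    # note: no removal here when the best 2 moves are captures (unlike v2)
    return True
```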
```
python3 interleave_binpacks.py \
nn-547-dataset/leela96-eval-filt-v2.binpack \
nn-547-dataset/dfrc99-eval-filt-v2.binpack \
nn-547-dataset/test80-nov2022-12tb7p-eval-filt-v2-d6.binpack \
nn-547-dataset/T79-may2022-12tb7p-eval-filt-v2.binpack \
nn-547-dataset/T60-nov2021-12tb7p-eval-filt-v2.binpack \
nn-547-dataset/T60-dec2021-12tb7p-eval-filt-v2.binpack \
filt-v6/test80-aug2022-16tb7p-filter-v6.binpack \
filt-v6/test80-sep2022-16tb7p-filter-v6.binpack \
filt-v6/test80-oct2022-16tb7p-filter-v6.binpack \
filt-v6/test79-apr2022-16tb7p-filter-v6.binpack \
filt-v6/test78-aprmay2022-16tb7p-filter-v6.binpack \
filt-v6/test78-junjulaug2022-16tb7p-filter-v6.binpack \
filt-v6/test78-sep2022-16tb7p-filter-v6.binpack \
filt-v3/test77-dec2021-16tb7p-filt-v3.binpack \
/data/leela96-dfrc99-T80novT79mayT60novdec-v2-T80augsepoctT79aprT78aprtosep-v6-T77dec-v3.binpack
```
The code for the new data filtering methods is available at:
https://github.com/linrock/Stockfish/tree/nnue-data-v3/nnue-data
The code for giving hexword names to .nnue files is at:
https://github.com/linrock/nnue-namer
Links for downloading the training data components can be found at:
https://robotmoon.com/nnue-training-data/
Local elo at 25k nodes per move:
nn-epoch779.nnue : 0.6 +/- 3.1
Passed STC:
https://tests.stockfishchess.org/tests/view/64212412db43ab2ba6f8efb0
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 82256 W: 22185 L: 21809 D: 38262
Ptnml(0-2): 286, 9065, 22067, 9407, 303
Passed LTC:
https://tests.stockfishchess.org/tests/view/64223726db43ab2ba6f91d6c
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 30840 W: 8437 L: 8149 D: 14254
Ptnml(0-2): 14, 2891, 9323, 3177, 15
closes https://github.com/official-stockfish/Stockfish/pull/4465
bench 5101970
This patch introduces `hint_common_parent_position()` to signal that potentially several child nodes will require an NNUE eval. By populating explicitly the accumulator, these subsequent evaluations can be performed more efficiently.
This was based on the observation that calculating the evaluation in an excluded move position yielded a significant Elo gain, even though the evaluation itself was already available (work by pb00067).
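The idea can be illustrated with a toy accumulator in Python (the real change is in Stockfish's C++ evaluation code; all names and shapes here are illustrative):
```
import numpy as np

W = np.random.randn(64, 8)            # toy "feature transformer" weights

def full_refresh(squares):            # expensive: rebuild from all features
    acc = np.zeros(8)
    for sq in squares:
        acc += W[sq]
    return acc

# the hint: pay the full refresh once at the parent position...
parent_acc = full_refresh({12, 28, 60})
# ...so each child evaluation only applies a cheap incremental delta
# (toy "move": a piece leaves square 12 and arrives on square 20)
child_acc = parent_acc - W[12] + W[20]
```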
Sopel wrote the code to perform just the accumulator update. This PR is based on cleaned-up code that
passed STC:
https://tests.stockfishchess.org/tests/view/63f62f9be74a12625bcd4aa0
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 110368 W: 29607 L: 29167 D: 51594
Ptnml(0-2): 41, 10551, 33572, 10967, 53
and in an earlier (equivalent) version
passed STC:
https://tests.stockfishchess.org/tests/view/63f3c3fee74a12625bcce2a6
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 47552 W: 12786 L: 12467 D: 22299
Ptnml(0-2): 120, 5107, 12997, 5438, 114
passed LTC:
https://tests.stockfishchess.org/tests/view/63f45cc2e74a12625bccfa63
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 110368 W: 29607 L: 29167 D: 51594
Ptnml(0-2): 41, 10551, 33572, 10967, 53
closes https://github.com/official-stockfish/Stockfish/pull/4402
Bench: 3726250
Created by retraining the master net on a dataset composed of:
* Most of the previous best dataset filtered to remove positions likely having only one good move
* Adding training data from Leela T77 dec2021 rescored with 16tb of 7-piece tablebases
Trained with end lambda 0.7 and max epoch 900. Positions with ply <= 28 were removed from most of the previous best dataset before training began. A new nnue-pytorch trainer param for skipping early plies was used to skip plies <= 24 in the unfiltered and additional Leela T77 parts of the dataset.
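Conceptually, the trainer-side skip is just a ply predicate in the data loader; a minimal sketch, not the actual nnue-pytorch code:
```
def want_fen(entry, early_fen_skipping=24):
    # `entry` stands in for a decoded training record; `.ply` is assumed here
    return entry.ply > early_fen_skipping
```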
```
python easy_train.py \
--experiment-name leela96-dfrc99-T80octnovT79aprmayT60novdec-eval-filt-v2-T78augsep-12tb-T77dec-16tb-lambda7-sk24 \
--training-dataset /data/leela96-dfrc99-T80octnovT79aprmayT60novdec-eval-filt-v2-T78augsep-12tb-T77dec-16tb.binpack \
--nnue-pytorch-branch linrock/nnue-pytorch/easy-train-early-fen-skipping \
--early-fen-skipping 24 \
--gpus "0," \
--start-from-engine-test-net True \
--start-lambda 1.0 \
--end-lambda 0.7 \
--gamma 0.995 \
--lr 4.375e-4 \
--tui False \
--seed $RANDOM \
--max_epoch 900
```
The depth6 multipv2 search filtering method is the same as the one used for filtering recent best datasets, with a lower eval difference threshold to remove slightly more positions than before. These parts of the dataset were filtered:
* 96% of T60T70wIsRightFarseerT60T74T75T76.binpack
* 99% of dfrc_n5000.binpack
* T80 oct + nov 2022 data, no positions with castling flags, rescored with ~600GB of 7p tablebases
* T79 apr + may 2022 data, rescored with 12tb 7p tablebases
* T60 nov + dec 2021 data, rescored with 12tb 7p tablebases
These parts of the dataset were not filtered. Positions with ply <= 24 were skipped during training:
* T78 aug + sep 2022 data, rescored with 12tb 7p tablebases
* 84% of T77 dec 2021 data, rescored with 16tb 7p tablebases
The code and exact evaluation thresholds used for data filtering can be found at:
https://github.com/linrock/Stockfish/tree/tools-filter-multipv2-eval-diff-t2/src/filter
The exact training data used can be found at:
https://robotmoon.com/nnue-training-data/
Local elo at 25k nodes per move:
nn-epoch859.nnue : 3.5 +/- 1.2
Passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
https://tests.stockfishchess.org/tests/view/63dfeefc73223e7f52ad769f
Total: 219744 W: 58572 L: 58002 D: 103170
Ptnml(0-2): 609, 24446, 59284, 24832, 701
Passed LTC:
https://tests.stockfishchess.org/tests/view/63e268fc73223e7f52ade7b6
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 91256 W: 24528 L: 24121 D: 42607
Ptnml(0-2): 48, 8863, 27390, 9288, 39
closes https://github.com/official-stockfish/Stockfish/pull/4387
bench 3841998
Created by retraining the master net with Leela T78 data from Aug+Sep 2022 added to the previous best dataset. Trained with end lambda 0.7 and started with max epoch 800. All positions with ply <= 28 were skipped:
```
python easy_train.py \
--experiment-name leela95-dfrc96-filt-only-T80octnov-T60novdecT78augsepT79aprmay-12tb7p-sk28-lambda7 \
--training-dataset /data/leela95-dfrc96-filt-only-T80octnov-T60novdecT78augsepT79aprmay-12tb7p.binpack \
--nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-skip-ply-lteq-28 \
--start-from-engine-test-net True \
--gpus "0," \
--start-lambda 1.0 \
--end-lambda 0.7 \
--gamma 0.995 \
--lr 4.375e-4 \
--tui False \
--seed $RANDOM \
--max_epoch 800
```
Around epoch 750, training was manually paused and max epoch increased to 950 before resuming. The additional Leela training data from T78 was prepared in the same way as the previous best dataset.
The exact training data used can be found at:
https://robotmoon.com/nnue-training-data/
While the local elo ratings during this experiment were much lower than in recent master nets, several later epochs had a consistent elo above zero, and this was hypothesized to represent potential strength at slower time controls.
Local elo at 25k nodes per move
leela95-dfrc96-filt-only-T80octnov-T60novdecT78augsepT79aprmay-12tb7p-sk28-lambda7
nn-epoch819.nnue : 0.4 +/- 1.1 (nn-bc24c101ada0.nnue)
nn-epoch799.nnue : 0.3 +/- 1.2
nn-epoch759.nnue : 0.3 +/- 1.1
nn-epoch839.nnue : 0.2 +/- 1.4
Passed STC
https://tests.stockfishchess.org/tests/view/63cabf6f0eefe8694a0c6013
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 41608 W: 11161 L: 10848 D: 19599
Ptnml(0-2): 116, 4496, 11281, 4781, 130
Passed LTC
https://tests.stockfishchess.org/tests/view/63cb1856344bb01c191af263
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 76760 W: 20517 L: 20137 D: 36106
Ptnml(0-2): 34, 7435, 23070, 7799, 42
closes https://github.com/official-stockfish/Stockfish/pull/4351
bench 3941848
Created by retraining the master net on a dataset composed of:
* The Leela-dfrc_n5000.binpack dataset filtered with depth6 multipv2 search to remove positions with only one good move, in addition to removing positions where either of the two best moves are captures
* The same Leela T80 oct+nov 2022 training data used in recent best datasets
* Additional Leela training data from T60 nov+dec 2021 and T79 apr+may 2022
Trained with end lambda 0.7 and started with max epoch 800. All positions with ply <= 28 were skipped:
```
python easy_train.py \
--experiment-name leela95-dfrc96-mpv-eval-fonly-T80octnov-T79aprmayT60novdec-12tb7p-sk28-lambda7 \
--training-dataset /data/leela95-dfrc96-mpv-eval-fonly-T80octnov-T79aprmayT60novdec-12tb7p.binpack \
--nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-skip-ply-lteq-28 \
--start-from-engine-test-net True \
--gpus "0," \
--start-lambda 1.0 \
--end-lambda 0.7 \
--gamma 0.995 \
--lr 4.375e-4 \
--tui False \
--seed $RANDOM \
--max_epoch 800
```
Around epoch 780, training was manually paused and max epoch increased to 920 before resuming.
During depth6 multipv2 data filtering, positions were considered to have only one good move if the score of the best move was significantly better than the 2nd best move in a way that changes the outcome of the game:
* the best move leads to a significant advantage while the 2nd best move equalizes or loses
* the best move is about equal while the 2nd best move loses
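As a rough sketch of those two conditions (the centipawn bounds here are placeholders, not the exact thresholds):
```
WIN, BALANCED = 200, 50  # placeholder bounds for "winning" and "about equal"

def only_one_good_move(best, second):
    # best move wins while the 2nd best merely equalizes or loses
    if best > WIN and second < BALANCED:
        return True
    # best move keeps the balance while the 2nd best loses
    if abs(best) <= BALANCED and second < -WIN:
        return True
    return False
```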
The modified stockfish branch and exact score thresholds used for filtering are at:
https://github.com/linrock/Stockfish/tree/tools-filter-multipv2-eval-diff/src/filter
About 95% of the Leela portion and 96% of the DFRC portion of the Leela-dfrc_n5000.binpack dataset were filtered. Unfiltered parts of the dataset were left out.
The additional Leela training data from T60 nov+dec 2021 and T79 apr+may 2022 was WDL-rescored with about 12TB of syzygy 7-piece tablebases where the material difference is less than around 6 pawns. Best moves were exported to .plain data files during data conversion with the lc0 rescorer.
The exact training data can be found at:
https://robotmoon.com/nnue-training-data/
Local elo at 25k nodes per move
experiment_leela95-dfrc96-mpv-eval-fonly-T80octnov-T79aprmayT60novdec-12tb7p-sk28-lambda7
run_0/nn-epoch899.nnue : 3.8 +/- 1.6
Passed STC
https://tests.stockfishchess.org/tests/view/63bed1f540aa064159b9c89b
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 103344 W: 27392 L: 26991 D: 48961
Ptnml(0-2): 333, 11223, 28099, 11744, 273
Passed LTC
https://tests.stockfishchess.org/tests/view/63c010415705810de2deb3ec
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 21712 W: 5891 L: 5619 D: 10202
Ptnml(0-2): 12, 2022, 6511, 2304, 7
closes https://github.com/official-stockfish/Stockfish/pull/4338
bench 4106793
This is a later epoch (epoch 859) from the same experiment run that trained yesterday's master net nn-60fa44e376d9.nnue (epoch 779). The experiment was manually paused around epoch 790 and unpaused with max epoch increased to 900 mainly to get more local elo data without letting the GPU idle.
nn-60fa44e376d9.nnue is from #4314
nn-335a9b2d8a80.nnue is from #4295
Local elo vs. nn-335a9b2d8a80.nnue at 25k nodes per move:
experiment_leela93-dfrc99-filt-only-T80-oct-nov-skip28
run_0/nn-epoch779.nnue (nn-60fa44e376d9.nnue) : 5.0 +/- 1.2
run_0/nn-epoch859.nnue (nn-a3dc078bafc7.nnue) : 5.6 +/- 1.6
Passed STC vs. nn-335a9b2d8a80.nnue
https://tests.stockfishchess.org/tests/view/63ae10495bd1e5f27f13d94f
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 37536 W: 10088 L: 9781 D: 17667
Ptnml(0-2): 110, 4006, 10223, 4325, 104
An LTC test vs. nn-335a9b2d8a80.nnue was paused due to nn-60fa44e376d9.nnue passing LTC first:
https://tests.stockfishchess.org/tests/view/63ae5d34331d5fca5113703b
Passed LTC vs. nn-60fa44e376d9.nnue
https://tests.stockfishchess.org/tests/view/63af1e41465d2b022dbce4e7
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 148704 W: 39672 L: 39155 D: 69877
Ptnml(0-2): 59, 14443, 44843, 14936, 71
closes https://github.com/official-stockfish/Stockfish/pull/4319
bench 3984365
Created by retraining the master net on the previous best dataset with additional filtering. No new data was added.
More of the Leela-dfrc_n5000.binpack part of the dataset was pre-filtered with depth6 multipv2 search to remove bestmove captures. About 93% of the previous Leela/SF data and 99% of the SF dfrc data was filtered. Unfiltered parts of the dataset were left out. The new Leela T80 oct+nov data is the same as before. All early game positions with ply count <= 28 were skipped during training by modifying the training data loader in nnue-pytorch.
Trained in a similar way to recent master nets, with a different nnue-pytorch branch for early ply skipping:
```
python3 easy_train.py \
--experiment-name=leela93-dfrc99-filt-only-T80-oct-nov-skip28 \
--training-dataset=/data/leela93-dfrc99-filt-only-T80-oct-nov.binpack \
--start-from-engine-test-net True \
--nnue-pytorch-branch=linrock/nnue-pytorch/misc-fixes-skip-ply-lteq-28 \
--gpus="0," \
--start-lambda=1.0 \
--end-lambda=0.75 \
--gamma=0.995 \
--lr=4.375e-4 \
--tui=False \
--seed=$RANDOM \
--max_epoch=800 \
--network-testing-threads 20 \
--num-workers 6
```
For the exact training data used: https://robotmoon.com/nnue-training-data/
Details about the previous best dataset: #4295
Local testing at a fixed 25k nodes:
experiment_leela93-dfrc99-filt-only-T80-oct-nov-skip28
Local Elo: run_0/nn-epoch779.nnue : 5.1 +/- 1.5
Passed STC
https://tests.stockfishchess.org/tests/view/63adb3acae97a464904fd4e8
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 36504 W: 9847 L: 9538 D: 17119
Ptnml(0-2): 108, 3981, 9784, 4252, 127
Passed LTC
https://tests.stockfishchess.org/tests/view/63ae0ae25bd1e5f27f13d884
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 36592 W: 10017 L: 9717 D: 16858
Ptnml(0-2): 17, 3461, 11037, 3767, 14
closes https://github.com/official-stockfish/Stockfish/pull/4314
bench 4015511
Created by retraining the master net with a combination of:
* the previous best dataset (Leela-dfrc_n5000.binpack), with about half the dataset filtered using depth6 multipv2 search to throw away positions where either of the 2 best moves are captures
* Leela T80 Oct and Nov training data rescored with best moves, adding ~9.5 billion positions
Trained effectively the same way as the previous master net:
```
python3 easy_train.py \
--experiment-name=leela-dfrc-filtered-T80-oct-nov \
--training-dataset=/data/leela-dfrc-filtered-T80-oct-nov.binpack \
--start-from-engine-test-net True \
--gpus="0," \
--start-lambda=1.0 \
--end-lambda=0.75 \
--gamma=0.995 \
--lr=4.375e-4 \
--tui=False \
--seed=$RANDOM \
--max_epoch=800 \
--auto-exit-timeout-on-training-finished=900 \
--network-testing-threads 20 \
--num-workers 6
```
Local testing at a fixed 25k nodes:
experiments/experiment_leela-dfrc-filtered-T80-oct-nov/training/run_0/nn-epoch779.nnue
localElo: run_0/nn-epoch779.nnue : 4.7 +/- 3.1
The new Leela T80 part of the dataset was prepared by downloading test80 training data from all of Oct 2022 and Nov 2022, rescoring with syzygy 6-piece tablebases and ~600 GB of 7-piece tablebases, saving best moves to exported .plain files, removing all positions with castling flags, then converting to binpacks and using interleave_binpacks.py to merge them together. Scripts used in this data conversion process are available at:
https://github.com/linrock/lc0-data-converter
Filtering binpack data using depth6 multipv2 search was done by modifying transform.cpp in the tools branch:
https://github.com/linrock/Stockfish/tree/tools-filter-multipv2-no-rescore
Links for downloading the training data (total size: 338 GB) are available at:
https://robotmoon.com/nnue-training-data/
Passed STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 30544 W: 8244 L: 7947 D: 14353
Ptnml(0-2): 93, 3243, 8302, 3542, 92
https://tests.stockfishchess.org/tests/view/63a0d377264a0cf18f86f82b
Passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 32464 W: 8866 L: 8573 D: 15025
Ptnml(0-2): 19, 3054, 9794, 3345, 20
https://tests.stockfishchess.org/tests/view/63a10bc9fb452d3c44b1e016
closes https://github.com/official-stockfish/Stockfish/pull/4295
Bench 3554904
First things first...
this PR is being made from court. Today, Tord and Stéphane, with broad support
of the developer community, are defending their complaint, filed in Munich, against ChessBase.
With their products Houdini 6 and Fat Fritz 2, both Stockfish derivatives,
ChessBase repeatedly violated the Stockfish GPLv3 license. Tord and Stéphane have terminated
their license with ChessBase permanently. Today we have the opportunity to present
our evidence to the judge and enforce that termination. To read up, have a look at our blog post
https://stockfishchess.org/blog/2022/public-court-hearing-soon/ and
https://stockfishchess.org/blog/2021/our-lawsuit-against-chessbase/
This PR introduces a net trained with an enhanced data set and a modified loss function in the trainer.
A slight adjustment for the scaling was needed to get a pass on standard chess.
passed STC:
https://tests.stockfishchess.org/tests/view/62c0527a49b62510394bd610
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 135008 W: 36614 L: 36152 D: 62242
Ptnml(0-2): 640, 15184, 35407, 15620, 653
passed LTC:
https://tests.stockfishchess.org/tests/view/62c17e459e7d9997a12d458e
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 28864 W: 8007 L: 7749 D: 13108
Ptnml(0-2): 47, 2810, 8466, 3056, 53
Local testing at a fixed 25k nodes resulted in
Test run1026/easy_train_data/experiments/experiment_2/training/run_0/nn-epoch799.nnue
localElo: 4.2 +- 1.6
The real strength of the net is in FRC and DFRC chess where it gains significantly.
Tested at STC with slightly different scaling:
FRC:
https://tests.stockfishchess.org/tests/view/62c13a4002ba5d0a774d20d4
Elo: 29.78 +-3.4 (95%) LOS: 100.0%
Total: 10000 W: 2007 L: 1152 D: 6841
Ptnml(0-2): 31, 686, 2804, 1355, 124
nElo: 59.24 +-6.9 (95%) PairsRatio: 2.06
DFRC:
https://tests.stockfishchess.org/tests/view/62c13a5702ba5d0a774d20d9
Elo: 55.25 +-3.9 (95%) LOS: 100.0%
Total: 10000 W: 2984 L: 1407 D: 5609
Ptnml(0-2): 51, 636, 2266, 1779, 268
nElo: 96.95 +-7.2 (95%) PairsRatio: 2.98
Tested at LTC with identical scaling:
FRC:
https://tests.stockfishchess.org/tests/view/62c26a3c9e7d9997a12d6caf
Elo: 16.20 +-2.5 (95%) LOS: 100.0%
Total: 10000 W: 1192 L: 726 D: 8082
Ptnml(0-2): 10, 403, 3727, 831, 29
nElo: 44.12 +-6.7 (95%) PairsRatio: 2.08
DFRC:
https://tests.stockfishchess.org/tests/view/62c26a539e7d9997a12d6cb2
Elo: 40.94 +-3.0 (95%) LOS: 100.0%
Total: 10000 W: 2215 L: 1042 D: 6743
Ptnml(0-2): 10, 410, 3053, 1451, 76
nElo: 92.77 +-6.9 (95%) PairsRatio: 3.64
This is due to the mixing in of a significant fraction of DFRC training data in the final training round. The net is
trained using the easy_train.py script in the following way:
```
python easy_train.py \
--training-dataset=../Leela-dfrc_n5000.binpack \
--experiment-name=2 \
--nnue-pytorch-branch=vondele/nnue-pytorch/lossScan4 \
--additional-training-arg=--param-index=2 \
--start-lambda=1.0 \
--end-lambda=0.75 \
--gamma=0.995 \
--lr=4.375e-4 \
--start-from-engine-test-net True \
--tui=False \
--seed=$RANDOM \
--max_epoch=800 \
--auto-exit-timeout-on-training-finished=900 \
--network-testing-threads 8 \
--num-workers 12
```
where the data set used (Leela-dfrc_n5000.binpack) is a combination of our previous best data set (mix of Leela and some SF data) and DFRC data, interleaved to form the final training set.
The data is available at https://drive.google.com/drive/folders/1S9-ZiQa_3ApmjBtl2e8SyHxj4zG4V8gG?usp=sharing
Leela mix: https://drive.google.com/file/d/1JUkMhHSfgIYCjfDNKZUMYZt6L5I7Ra6G/view?usp=sharing
DFRC: https://drive.google.com/file/d/17vDaff9LAsVo_1OfsgWAIYqJtqR8aHlm/view?usp=sharing
The training branch used is
https://github.com/vondele/nnue-pytorch/commits/lossScan4
A PR to the main trainer repo will be made later. This contains a revised loss function, now computing the loss from the score based on the win rate model, which is a more accurate representation than what we had before. Scaling constants are tweaked there as well.
closes https://github.com/official-stockfish/Stockfish/pull/4100
Bench: 5186781
Train a net using training data with a
heavier weight on positions having 16 pieces on the board. More specifically,
with a relative weight of `i * (32-i)/(16 * 16)+1` (where i is the number of pieces on the board).
This is done with the trainer branch https://github.com/glinscott/nnue-pytorch/pull/173
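For reference, the weight is 1.0 for a full board, peaks at 2.0 at exactly 16 pieces, and tapers off toward both extremes:
```
def relative_weight(i):  # i = number of pieces on the board
    return i * (32 - i) / (16 * 16) + 1

print(relative_weight(2), relative_weight(16), relative_weight(32))
# 1.234375 2.0 1.0
```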
The command used is:
```
python train.py $datafile $datafile $restarttype $restartfile --gpus 1 --threads 4 --num-workers 12 --random-fen-skipping=3 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --features=HalfKAv2_hm^ --lambda=1.00 --max_epochs=$epochs --seed $RANDOM --default_root_dir exp/run_$i
```
The datafile is T60T70wIsRightFarseerT60T74T75T76.binpack, and the restart is from the master net.
passed STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 22728 W: 6197 L: 5945 D: 10586
Ptnml(0-2): 105, 2453, 6001, 2695, 110
https://tests.stockfishchess.org/tests/view/625cf944ff677a888877cd90
passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 35664 W: 9535 L: 9264 D: 16865
Ptnml(0-2): 30, 3524, 10455, 3791, 32
https://tests.stockfishchess.org/tests/view/625d3c32ff677a888877d7ca
closes https://github.com/official-stockfish/Stockfish/pull/3989
Bench: 7269563
Architecture:
The diagram of the "SFNNv4" architecture:
https://user-images.githubusercontent.com/8037982/153455685-cbe3a038-e158-4481-844d-9d5fccf5c33a.png
The most important architectural changes are the following:
* 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details, see diagram), which introduces a non-linearity that exhibits similar benefits to previously tested sigmoid activation (quantmoid4), while being slightly faster.
* The following layer therefore has half as many inputs, which we compensate for by having twice as many outputs. It is possible that reducing the number of outputs might be beneficial (as we had it as low as 8 before). The layer is now 1024->16.
* The 16 outputs are split into 15 and 1. The 1-wide output is added to the network output (after some necessary scaling due to quantization differences). The 15-wide part is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, but has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and may allow an additional stage of lazy evaluation to be introduced in the future.
Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result into two, which was impossible to do with the old implementation without ugly hacks. This is hopefully overall for the better.
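Shape-wise, the new layer stack can be sketched in a few lines of numpy; quantization, scaling, and the exact pairing order from the diagram are omitted, and the weights are random placeholders:
```
import numpy as np

def crelu(v):                          # clipped ReLU used throughout NNUE
    return np.clip(v, 0.0, 1.0)

acc_us, acc_them = np.random.randn(1024), np.random.randn(1024)  # accumulator halves
pairs = []
for acc in (acc_us, acc_them):
    a = crelu(acc)
    pairs.append(a[:512] * a[512:])    # elementwise multiply paired neurons: 1024 -> 512
x = np.concatenate(pairs)              # 2x1024 activations become 1024 inputs
W, b = np.random.randn(16, 1024), np.random.randn(16)
y = W @ x + b                          # the 1024->16 layer
skip_out, hidden = y[0], crelu(y[1:])  # 1-wide output added to the net output; 15-wide continues
```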
First session:
The first session was training a network from scratch (random initialization). The exact trainer used was slightly different (older) from the one used in the second session, but it should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session. Small deviations in strength do not harm the learnability in the second session.
The training was done using the following command:
```
python3 train.py \
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 4 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--gamma=0.992 \
--lr=8.75e-4 \
--max_epochs=400 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```
Every 20th net was saved and its playing strength measured against some baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it's consistent. The purpose is to sift good candidates from bad ones.
The dataset can be found at https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view
Second session:
The second training session was done starting from the best network (as determined by strength testing) from the first session. It is important that it's resumed from a .pt model and NOT a .ckpt model. The conversion can be performed directly using serialize.py
The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training. The training was then running for 800 epochs instead of 400 (though it's possibly mostly noise after around epoch 600).
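Since the schedule is a plain exponential decay, the change is easy to state (a sketch):
```
def lr_at(epoch, lr0, gamma):
    return lr0 * gamma ** epoch

# first session:  lr_at(e, 8.75e-4, 0.992)
# second session: lr_at(e, 4.375e-4, 0.995)   # flatter curve, longer useful training
# e.g. at epoch 400 the second schedule still retains ~5.9e-5
```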
The training was done using the following command:
```
python3 train.py \
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 4 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--gamma=0.995 \
--lr=4.375e-4 \
--max_epochs=800 \
--resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp295/nn-epoch399.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$run_id
```
In particular, note that we now use lambda=1.0 instead of lambda=0.8 (previous nets), because tests show that the WDL-skipping introduced by vondele performs better with lambda=1.0. Nets were saved every 20th epoch. In total, 16 runs were made with these settings, and the best nets were chosen according to playing strength at 25k nodes per move with pure NNUE evaluation - these are the 4 nets that were put on fishtest.
The dataset can be found either at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack in its entirety (the download might be painfully slow because it is hosted in China) or can be assembled in the following way:
Get the 5640ad48ae/script/interleave_binpacks.py script.
Download T60T70wIsRightFarseer.binpack https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
Download farseerT74.binpack http://trainingdata.farseer.org/T74-May13-End.7z
Download farseerT75.binpack http://trainingdata.farseer.org/T75-June3rd-End.7z
Download farseerT76.binpack http://trainingdata.farseer.org/T76-Nov10th-End.7z
Run python3 interleave_binpacks.py T60T70wIsRightFarseer.binpack farseerT74.binpack farseerT75.binpack farseerT76.binpack T60T70wIsRightFarseerT60T74T75T76.binpack
Tests:
STC: https://tests.stockfishchess.org/tests/view/6203fb85d71106ed12a407b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16952 W: 4775 L: 4521 D: 7656
Ptnml(0-2): 133, 1818, 4318, 2076, 131
LTC: https://tests.stockfishchess.org/tests/view/62041e68d71106ed12a40e85
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 14944 W: 4138 L: 3907 D: 6899
Ptnml(0-2): 21, 1499, 4202, 1728, 22
closes https://github.com/official-stockfish/Stockfish/pull/3927
Bench: 4919707
Introduces a new NNUE network architecture and associated network parameters
The summary of the changes:
* The position for each perspective is mirrored such that the king is on the e..h files. This cuts the feature transformer size in half while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is possible mostly thanks to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8 to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq
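The mirroring in the first bullet can be sketched as a simple square transform (a simplification of the actual HalfKAv2_hm feature indexing):
```
def orient(sq, king_sq):
    # squares 0..63; file = sq % 8; XOR with 7 mirrors the file (a<->h, b<->g, ...)
    if king_sq % 8 < 4:   # king on files a..d: mirror the position so it lands on e..h
        return sq ^ 7
    return sq
```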
The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Also, additional 0 padding is added to the output for some archs that cannot process inputs in chunks of <=8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in the registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (which is perhaps why the current way the affine transform is done even passed SPRT), and more work will be done on this in the following days (see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details). I expect the current VNNI implementation to be improved and extended to other architectures.
The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143
The training utilized 2 datasets.
dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
dataset B - as described in ba01f4b954
The training process was as follows:
train on dataset A for 350 epochs, take the best net in terms of elo at 20k nodes per move (it's fine to take anything from later stages of training).
convert the .ckpt to .pt
--resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.
The first training command:
```
python3 train.py \
../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
--gpus "$3," \
--threads 1 \
--num-workers 1 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--smart-fen-skipping \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--max_epochs=600 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```
The second training command:
```
python3 serialize.py \
--features=HalfKAv2_hm^ \
../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
../nnue-pytorch-training/experiment_$1/base/base.pt

python3 train.py \
../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
--gpus "$3," \
--threads 1 \
--num-workers 1 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--smart-fen-skipping \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=0.8 \
--max_epochs=600 \
--resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
```
STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4
LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128
LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6
closes https://github.com/official-stockfish/Stockfish/pull/3646
bench: 5189338
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGod), based on top of previous developments, by restarts from good nets.
Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:
The initial net nn-d8609abe8caf.nnue was created by generating around 16B positions of training data with the last master net nn-9e3c6298299a.nnue, then training, continuing from the master net, with lambda=0.2 and a sampling ratio of 1: starting with LR=2e-3, the LR was dropped by a factor of 0.5 until it reached 5e-4. in_scaling was set to 361. No other significant changes were made to the pytorch trainer.
Training data gen command (generates in chunks of 200k positions):
```
generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack
```
PyTorch trainer command (note that this only trains for 20 epochs; repeat training until convergence):
```
python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val
```
See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.
Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:
The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch
```
python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt
```
This run was thus started from Sergio Vieri's net nn-d8609abe8caf.nnue
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.
model.py modifications:
```
loss = torch.pow(torch.abs(p - q), 2.6).mean()
```
LR = 8.0e-5, calculated as 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro-refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
```
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
```
For this micro-optimization, I had set the period to "5" in train.py, which changes the checkpoint output so that every 5th checkpoint file is created.
The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.
passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17
passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100
closes https://github.com/official-stockfish/Stockfish/pull/3626
Bench: 5169957
Trained with the Python command:
```
c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_10 --resume-from-model ./pt/nn-3b20abec10c1.pt
```
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.
Net nn-3b20abec10c1.nnue was chosen as the --resume-from-model with the idea that, through training, the manually hex-edited values would be absorbed and would not need to be manually adjusted going forward. They would also be fine-tuned by the learning process.
passed STC:
https://tests.stockfishchess.org/tests/view/60cdf91e457376eb8bcab66f
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18256 W: 1639 L: 1479 D: 15138
Ptnml(0-2): 59, 1179, 6505, 1313, 72
passed LTC:
https://tests.stockfishchess.org/tests/view/60ce2166457376eb8bcab6e1
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 18792 W: 654 L: 542 D: 17596
Ptnml(0-2): 9, 490, 8291, 592, 14
closes https://github.com/official-stockfish/Stockfish/pull/3570
Bench: 5020972
Optimization of vondele's nn-33c9d39e5eb6.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60ca68be457376eb8bcab28b
Settings: ck values are the defaults, based on how large the parameters are.
The new values for this net are the raw values at the end of the tuning (80k games)
The significant changes are in buckets 1 and 2 (5-12 pieces), so the main difference is in endgame play if we compare it to nn-33c9. There is also a change in bucket 7 (29-32 pieces), but it is not as substantial as the changes in buckets 1 and 2. If we interpret the changes based on an experiment from a few months ago, this new net plays more optimistically during endgames and less optimistically during openings.
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 49504 W: 4246 L: 4053 D: 41205
Ptnml(0-2): 140, 3282, 17749, 3407, 174
https://tests.stockfishchess.org/tests/view/60cbd752457376eb8bcab478
LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 88720 W: 4926 L: 4651 D: 79143
Ptnml(0-2): 105, 4048, 35793, 4295, 119
https://tests.stockfishchess.org/tests/view/60cc7828457376eb8bcab4fa
closes https://github.com/official-stockfish/Stockfish/pull/3566
Bench: 4758885
This net was created by @pleomati, who used a hex editor to manually edit
10 randomly chosen values in the LCSFNet10 net (nn-6ad41a9207d0.nnue).
The LCSFNet10 net was trained by Joost VandeVondele from
a dataset combining Stockfish games and Leela games (16x10^9 positions from
SF self-play at depth 9, and 6.3x10^9 positions from Leela games, so overall
72% of Stockfish positions and 28% of Leela positions).
passed STC 10+0.1:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 50888 W: 5881 L: 5654 D: 39353
Ptnml(0-2): 281, 4290, 16085, 4497, 291
https://tests.stockfishchess.org/tests/view/60cbfa68457376eb8bcab49a
passed LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 25480 W: 1498 L: 1338 D: 22644
Ptnml(0-2): 36, 1155, 10193, 1325, 31
https://tests.stockfishchess.org/tests/view/60cc4af8457376eb8bcab4d4
closes https://github.com/official-stockfish/Stockfish/pull/3564
Bench: 4904930
This net is the result of training on data used by the Leela project. More precisely,
we shuffled T60 and T74 data kindly provided by borg (for different Tnn, the data is
a result of Leela selfplay with differently sized Leela nets).
The data is available at vondele's google drive:
https://drive.google.com/drive/folders/1mftuzYdl9o6tBaceR3d_VBQIrgKJsFpl
The Leela data comes in small chunks of .binpack files. To shuffle them, we simply
used a small python script to randomly rename the files, and then concatenated them
using `cat`. As validation data we picked a file of T60 data. We will further investigate
T74 data.
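The shuffling step was as simple as it sounds; something along these lines (a sketch, not the exact script used):
```
import glob, os, uuid

# random names => random lexicographic order for the later `cat`
for f in glob.glob("leela-chunks/*.binpack"):
    os.rename(f, os.path.join("leela-chunks", uuid.uuid4().hex + ".binpack"))
# then: cat leela-chunks/*.binpack > training_data.binpack
```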
The training for the NNUE architecture used 200 epochs with the Python trainer from
the Stockfish project. Unlike the previous run we tried with this data, this run does
not have adjusted scaling — not because we didn't want to, but because we forgot.
However, this training run randomly skips 40% more positions than the previous run. The loss
was very spiky and decreased more slowly than it usually does.
Training loss: https://github.com/official-stockfish/images/blob/main/training-loss-8e47cf062333.png
Validation loss: https://github.com/official-stockfish/images/blob/main/validation-loss-8e47cf062333.png
This is the exact training command:
```
python train.py --smart-fen-skipping --random-fen-skipping 14 --batch-size 16384 --threads 4 --num-workers 4 --gpus 1 trainingdata\training_data.binpack validationdata\val.binpack
```
---
10k STC result:
ELO: 3.61 +-3.3 (95%) LOS: 98.4%
Total: 10000 W: 1241 L: 1137 D: 7622
Ptnml(0-2): 68, 841, 3086, 929, 76
https://tests.stockfishchess.org/tests/view/60c67e50457376eb8bcaae70
10k LTC result:
ELO: 2.71 +-2.4 (95%) LOS: 98.8%
Total: 10000 W: 659 L: 581 D: 8760
Ptnml(0-2): 22, 485, 3900, 579, 14
https://tests.stockfishchess.org/tests/view/60c69deb457376eb8bcaae98
Passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9648 W: 685 L: 545 D: 8418
Ptnml(0-2): 22, 448, 3740, 596, 18
https://tests.stockfishchess.org/tests/view/60c6d41c457376eb8bcaaecf
---
closes https://github.com/official-stockfish/Stockfish/pull/3550
Bench: 4877339