mirror of https://github.com/sockspls/badfish synced 2025-05-02 01:29:36 +00:00
Commit graph

173 commits

Author SHA1 Message Date
farseer
3bea736a2a Update default net to nn-4401e826ebcc.nnue
Using data T60 12/1/20 to 11/2/2021, T74 4/22/21 to 7/27/21, T75 6/3/21 to 10/16/21, T76
(half of the randomly interleaved dataset due to a mistake while merging) 11/10/21 to 11/21/21,
wrongIsRight_nodes5000pv2.binpack, and WrongIsRight-Reloaded.binpack combined and shuffled
position by position.

Trained with LR=4.375e-4 and WDL filtering enabled:

python train.py --smart-fen-skipping --random-fen-skipping 0 --features=HalfKAv2_hm^
--lambda=1.0 --max_epochs=800 --seed 910688689 --batch-size 16384
--progress_bar_refresh_rate 30 --threads 4 --num-workers 4 --gpus 1
--resume-from-model C:\msys64\home\Mike\nnue-pytorch\9b3d.pt
E:\trainingdata\T60-T74-T75-T76-WiR-WiRR-PbyP.binpack
E:\trainingdata\T60-T74-T75-T76-WiR-WiRR-PbyP.binpack

Passed STC
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 41848 W: 10962 L: 10676 D: 20210 Elo +2.16
Ptnml(0-2): 142, 4699, 11016, 4865, 202
https://tests.stockfishchess.org/tests/view/61ba886857a0d0f327c2cfd6

Passed LTC
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 27776 W: 7208 L: 6953 D: 13615 Elo +3.00
Ptnml(0-2): 14, 2808, 8007, 3027, 32
https://tests.stockfishchess.org/tests/view/61baae4d57a0d0f327c2d96f

closes https://github.com/official-stockfish/Stockfish/pull/3856

Bench: 4667591
2021-12-17 18:12:47 +01:00
Joost VandeVondele
ea1ddb6aef Update default net to nn-d93927199b3d.nnue
Using the same dataset as before but with a slightly reduced initial LR, as in
https://github.com/vondele/nnue-pytorch/tree/tweakLR1

passed STC:
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 51368 W: 13492 L: 13191 D: 24685
Ptnml(0-2): 168, 5767, 13526, 6042, 181
https://tests.stockfishchess.org/tests/view/61b61f43dffbe89a3580b529

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 45128 W: 11763 L: 11469 D: 21896
Ptnml(0-2): 24, 4583, 13063, 4863, 31
https://tests.stockfishchess.org/tests/view/61b6612edffbe89a3580c447

closes https://github.com/official-stockfish/Stockfish/pull/3848

Bench: 5121336
2021-12-13 07:17:25 +01:00
Joost VandeVondele
b82d93ece4 Update default net to nn-63376713ba63.nnue.
Same data set as previously trained nets; the wdl model was tuned slightly for training.
https://github.com/vondele/nnue-pytorch/tree/wdlTweak1

passed STC:
https://tests.stockfishchess.org/tests/view/61abe9e456fcf33bce7d2834
LLR: 2.93 (-2.94,2.94) <0.00,2.50>
Total: 31720 W: 8385 L: 8119 D: 15216
Ptnml(0-2): 117, 3534, 8273, 3838, 98

passed LTC:
https://tests.stockfishchess.org/tests/view/61ac293756fcf33bce7d36cf
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 136136 W: 35255 L: 34741 D: 66140
Ptnml(0-2): 114, 14217, 38894, 14727, 116

closes https://github.com/official-stockfish/Stockfish/pull/3836

Bench: 4667742
2021-12-07 12:40:48 +01:00
Joost VandeVondele
327060232a Update default net to nn-cdf1785602d6.nnue
Same process as in e4a0c6c759
with the training started from the current master net.

passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 38224 W: 10023 L: 9742 D: 18459
Ptnml(0-2): 133, 4328, 9940, 4547, 164
https://tests.stockfishchess.org/tests/view/61a8611e4ed77d629d4e836e

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 115176 W: 29783 L: 29321 D: 56072
Ptnml(0-2): 68, 12039, 32936, 12453, 92
https://tests.stockfishchess.org/tests/view/61a8963e4ed77d629d4e8d9b

closes https://github.com/official-stockfish/Stockfish/pull/3830

Bench: 4829419
2021-12-04 10:31:22 +01:00
Joost VandeVondele
e4a0c6c759 Update default net to nn-4f56ecfca5b7.nnue
New net trained with nnue-pytorch, started from a master net on a data set of Leela data
(T60.binpack+T74.binpack), Stockfish data (wrongIsRight_nodes5000pv2.binpack), and
Michael Babigian's conversion of T60 Leela data (including TB7 rescoring) (farseer.binpack)
available as a single interleaved binpack:

https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view?usp=sharing

The nnue-pytorch branch used is https://github.com/vondele/nnue-pytorch/tree/wdl

passed STC:
https://tests.stockfishchess.org/tests/view/61a3cc729f0c43dae1c71f1b
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 49152 W: 12842 L: 12544 D: 23766
Ptnml(0-2): 154, 5542, 12904, 5804, 172

passed LTC:
https://tests.stockfishchess.org/tests/view/61a43c6260afd064f2d724f1
LLR: 2.96 (-2.94,2.94) <0.50,3.00>
Total: 25528 W: 6676 L: 6425 D: 12427
Ptnml(0-2): 9, 2593, 7315, 2832, 15

closes https://github.com/official-stockfish/Stockfish/pull/3816

Bench: 6885242
2021-11-29 12:56:01 +01:00
Joost VandeVondele
9ee58dc7a7 Update default net to nn-3678835b1d3d.nnue
New net trained with nnue-pytorch, started from the master net on a data set of Leela
(T60.binpack+T74.binpack) and Stockfish data (wrongIsRight_nodes5000pv2.binpack),
available as a single interleaved binpack:

https://drive.google.com/file/d/12uWZIA3F2cNbraAzQNb1jgf3tq_6HkTr/view?usp=sharing

The nnue-pytorch branch used is https://github.com/vondele/nnue-pytorch/tree/wdl, which
has the new feature to filter positions based on the likelihood of the current evaluation
leading to the game outcome. It should make it less likely to try to learn from
misevaluated positions (an illustrative sketch follows the options below). Standard options
have been used, starting from the master net:

   --gpus 1 --threads 4 --num-workers 4 --batch-size 16384 --progress_bar_refresh_rate 300
   --smart-fen-skipping --random-fen-skipping 12 --features=HalfKAv2_hm^   --lambda=1.0
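
For intuition only, here is a rough Python sketch of what such outcome-likelihood filtering
could look like. The logistic scale, the crude WDL model and the minimum keep probability are
illustrative assumptions, not the actual implementation in the wdl branch:

    # Purely illustrative: skip a training position when the recorded game outcome
    # looks unlikely given the search eval of that position.
    import math
    import random

    def expected_score(eval_cp, scale=410.0):
        # logistic mapping from a centipawn eval to an expected score in [0, 1];
        # the scale constant is an assumption chosen for illustration only
        return 1.0 / (1.0 + math.exp(-eval_cp / scale))

    def keep_position(eval_cp, outcome):
        # outcome: 1.0 win, 0.5 draw, 0.0 loss, from the side to move's point of view
        p_win = expected_score(eval_cp)
        # probability assigned to the observed outcome under a crude WDL model
        p_outcome = {1.0: p_win, 0.0: 1.0 - p_win, 0.5: 1.0 - abs(2.0 * p_win - 1.0)}[outcome]
        # keep likely positions, drop most of the unlikely ones at random
        return random.random() < max(p_outcome, 0.05)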

Testing with games shows neutral Elo at STC, and good performance at LTC:

STC:
https://tests.stockfishchess.org/tests/view/619eb597c0a4ea18ba95a4dc
ELO: -0.44 +-1.8 (95%) LOS: 31.2%
Total: 40000 W: 10447 L: 10498 D: 19055
Ptnml(0-2): 254, 4576, 10260, 4787, 123

LTC:
https://tests.stockfishchess.org/tests/view/619f6e87c0a4ea18ba95a53f
ELO: 3.30 +-1.8 (95%) LOS: 100.0%
Total: 33062 W: 8560 L: 8246 D: 16256
Ptnml(0-2): 54, 3358, 9352, 3754, 13

passed LTC SPRT:
https://tests.stockfishchess.org/tests/view/61a0864e8967bbf894416e65
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 29376 W: 7663 L: 7396 D: 14317
Ptnml(0-2): 67, 3017, 8205, 3380, 19

closes https://github.com/official-stockfish/Stockfish/pull/3808

Bench: 7011501
2021-11-26 18:16:04 +01:00
xoto10
f21a66f70d Small clean-up, Sept 2021
Closes https://github.com/official-stockfish/Stockfish/pull/3485

No functional change
2021-10-07 09:41:57 +02:00
SFisGOD
723f48dec0 Update default net to nn-13406b1dcbe0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6134abc425b9b35584838572
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-6762d36ad265.nnue
New net: nn-c9fdeea14cb2.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/61355b7e25b9b3558483860e
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-c9fdeea14cb2.nnue
New net: nn-0ddc28184f4c.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/613737be0cd98ab40c0c9e4e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-0ddc28184f4c.nnue
New net: nn-2419828bb394.nnue

SPSA 4: https://tests.stockfishchess.org/tests/view/613966ff689039fce12e0fe7
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-2419828bb394.nnue
New net: nn-05d9b1ee3037.nnue

SPSA 5: https://tests.stockfishchess.org/tests/view/613b4a38689039fce12e1209
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-05d9b1ee3037.nnue
New net: nn-98c6ce0fc15f.nnue

SPSA 6: https://tests.stockfishchess.org/tests/view/613e331515591e7c9ebc3fe9
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-98c6ce0fc15f.nnue
New net: nn-13406b1dcbe0.nnue

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 82008 W: 21044 L: 20752 D: 40212
Ptnml(0-2): 264, 9341, 21525, 9587, 287
https://tests.stockfishchess.org/tests/view/613f7c6cf29dda16fcca870c

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 182928 W: 46258 L: 45602 D: 91068
Ptnml(0-2): 107, 19448, 51712, 20076, 121
https://tests.stockfishchess.org/tests/view/613fccb97315e7c73204a48c

Closes #3703

Bench: 6658747
2021-09-15 17:50:20 +02:00
SFisGOD
be63ce1bb5 Update default net to nn-6762d36ad265.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/612cdb1fbb4956d8b78eb5ab
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-fe433fd8c7f6.nnue
New net: nn-5f134823db04.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/612fcde645091e810014af19
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-5f134823db04.nnue
New net: nn-8eca5dd4e3f7.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/6130822345091e810014af61
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-8eca5dd4e3f7.nnue
New net: nn-4556108e4f00.nnue

SPSA 4: https://tests.stockfishchess.org/tests/view/613287652ffb3c36aceb923c
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-4556108e4f00.nnue
New net: nn-6762d36ad265.nnue

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 162776 W: 41220 L: 40807 D: 80749
Ptnml(0-2): 517, 18800, 42359, 19177, 535
https://tests.stockfishchess.org/tests/view/6134107125b9b35584838559

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 41056 W: 10428 L: 10156 D: 20472
Ptnml(0-2): 30, 4288, 11618, 4564, 28
https://tests.stockfishchess.org/tests/view/6134ad6525b9b3558483857a

closes https://github.com/official-stockfish/Stockfish/pull/3691

Bench: 5812158
2021-09-06 14:08:22 +02:00
SFisGOD
2807dcfab6 Update default net to nn-735bba95dec0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/61286d8b62d20cf82b5ad1bd
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-33495fe25081.nnue
New net: nn-83e3cf2af92b.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/6129cf2162d20cf82b5ad25f
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-83e3cf2af92b.nnue
New net: nn-69a528eaef35.nnue

SPSA 3: https://tests.stockfishchess.org/tests/view/612a0dcb62d20cf82b5ad2a0
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-69a528eaef35.nnue
New net: nn-735bba95dec0.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 95144 W: 24310 L: 23999 D: 46835
Ptnml(0-2): 232, 11059, 24748, 11232, 301
https://tests.stockfishchess.org/tests/view/612bb3be0fdf40644b4b9996

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 33632 W: 8522 L: 8271 D: 16839
Ptnml(0-2): 18, 3511, 9516, 3744, 27
https://tests.stockfishchess.org/tests/view/612ce5b9bb4956d8b78eb5b3

Closes https://github.com/official-stockfish/Stockfish/pull/3685

Bench: 5600615
2021-08-31 12:56:19 +02:00
SFisGOD
69eede7d08 Update default net to nn-33495fe25081.nnue
STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 37368 W: 9621 L: 9391 D: 18356
Ptnml(0-2): 117, 4287, 9664, 4481, 135
https://tests.stockfishchess.org/tests/view/612768165318138ee1204977

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13328 W: 3446 L: 3246 D: 6636
Ptnml(0-2): 11, 1383, 3682, 1571, 17
https://tests.stockfishchess.org/tests/view/6127dc8d62d20cf82b5ad196

Closes https://github.com/official-stockfish/Stockfish/pull/3679

Bench: 5179347
2021-08-27 07:51:26 +02:00
SFisGOD
590447d7a1 Update default net to nn-517c4f68b5df.nnue
SPSA: https://tests.stockfishchess.org/tests/view/611cf0da4977aa1525c9ca03
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-ac5605a608d6.nnue
New net: nn-517c4f68b5df.nnue

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11600 W: 998 L: 851 D: 9751
Ptnml(0-2): 30, 705, 4186, 846, 33
https://tests.stockfishchess.org/tests/view/611f84524977aa1525c9cb5b

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 9360 W: 338 L: 243 D: 8779
Ptnml(0-2): 0, 220, 4151, 303, 6
https://tests.stockfishchess.org/tests/view/611f8c5b4977aa1525c9cb64

closes https://github.com/official-stockfish/Stockfish/pull/3667

Bench: 4844618
2021-08-22 09:09:58 +02:00
Torsten Hellwig
1946a67567 Update default net to nn-ac5605a608d6.nnue
This net was created with the nnue-pytorch trainer; it used the previous master net as a starting point.

The training data includes all T60 data (https://drive.google.com/drive/folders/1rzZkgIgw7G5vQMLr2hZNiUXOp7z80613), all T74 data (https://drive.google.com/drive/folders/1aFUv3Ih3-A8Vxw9064Kw_FU4sNhMHZU-), and the wrongNNUE_02_d9.binpack (https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq). The Leela data were randomly renamed and then concatenated. All data was merged into one binpack using interleave_binpacks.py.

python3 train.py \
    ../data/t60_t74_wrong.binpack \
    ../data/t60_t74_wrong.binpack \
    --resume-from-model ../data/nn-e8321e467bf6.pt \
    --gpus 1 \
    --threads 4 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 300 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --seed $RANDOM \
    --default_root_dir ../output/exp_24

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 15320 W: 1415 L: 1257 D: 12648
Ptnml(0-2): 50, 1002, 5402, 1152, 54
https://tests.stockfishchess.org/tests/view/611c404a4977aa1525c9c97f

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 9440 W: 345 L: 248 D: 8847
Ptnml(0-2): 3, 222, 4175, 315, 5
https://tests.stockfishchess.org/tests/view/611c6c7d4977aa1525c9c996

LTC with UHO_XXL_+0.90_+1.19.epd:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 6232 W: 1638 L: 1459 D: 3135
Ptnml(0-2): 5, 592, 1744, 769, 6
https://tests.stockfishchess.org/tests/view/611c9b214977aa1525c9c9cb

closes https://github.com/official-stockfish/Stockfish/pull/3664

Bench: 5375286
2021-08-18 09:17:22 +02:00
Tomasz Sobczyk
d61d38586e New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters

The summary of the changes:

* Position for each perspective mirrored such that the king is on e..h files. Cuts the feature transformer size in half, while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is possibly mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8, to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq
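
As a rough illustration of the resulting layer sizes, the sketch below lays out the
1024x2 -> 8 -> 32 -> 1 stack in PyTorch. It assumes clipped-ReLU activations and omits the
PSQT head, the output buckets and all quantization, so it is not the actual trainer model:

    # Illustrative sketch of the new layer sizes, not the real nnue-pytorch model.
    import torch
    import torch.nn as nn

    class TinyNNUE(nn.Module):
        def __init__(self, num_features):
            super().__init__()
            self.ft = nn.Linear(num_features, 1024)   # feature transformer, one perspective
            self.l1 = nn.Linear(2 * 1024, 8)          # both perspectives concatenated
            self.l2 = nn.Linear(8, 32)
            self.out = nn.Linear(32, 1)

        def forward(self, us_features, them_features):
            clip = lambda x: torch.clamp(x, 0.0, 1.0)  # clipped ReLU as used by the trainer
            acc = torch.cat([clip(self.ft(us_features)), clip(self.ft(them_features))], dim=1)
            return self.out(clip(self.l2(clip(self.l1(acc)))))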

The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Also additional 0 padding is added to the output for some archs that cannot process inputs by <=8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in the registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (which is perhaps why the current way the affine transform is done even passed SPRT) (see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details) and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.

The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143

The training utilized 2 datasets.

    dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
    dataset B - as described in ba01f4b954

The training process was as follows:

    train on dataset A for 350 epochs, take the best net in terms of elo at 20k nodes per move (it's fine to take anything from later stages of training).
    convert the .ckpt to .pt
    --resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.

The first training command:

python3 train.py \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

The second training command:

python3 serialize.py \
    --features=HalfKAv2_hm^ \
    ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
    ../nnue-pytorch-training/experiment_$1/base/base.pt

python3 train.py \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=0.8 \
    --max_epochs=600 \
    --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4

LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea

LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
2021-08-15 12:05:43 +02:00
SFisGOD
73ef5b8c4a Update default net to nn-46832cfbead3.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6100e7f096b86d98abf6a832
Parameters: A total of 256 net weights and 8 net biases were tuned (output layer)
Base net: nn-56a5f1c4173a.nnue
New net: nn-ec3c8e029926.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/610733caafad2da4f4ae3da7
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-ec3c8e029926.nnue
New net: nn-46832cfbead3.nnue

STC:
LLR: 2.98 (-2.94,2.94) <-0.50,2.50>
Total: 50520 W: 3953 L: 3765 D: 42802
Ptnml(0-2): 138, 3063, 18678, 3235, 146
https://tests.stockfishchess.org/tests/view/610a79692a8a49ac5be793f4

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 57256 W: 1723 L: 1566 D: 53967
Ptnml(0-2): 12, 1442, 25568, 1589, 17
https://tests.stockfishchess.org/tests/view/610ac5bb2a8a49ac5be79434

Closes https://github.com/official-stockfish/Stockfish/pull/3642

Bench: 5359314
2021-08-05 08:52:07 +02:00
SFisGOD
e973eee919 Update default net to nn-56a5f1c4173a.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/60fd24efd8a6b65b2f3a796e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
New best values: Half of the changes from the tuning run
New net: nn-5992d3ba79f3.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/60fec7d6d8a6b65b2f3a7aa2
Parameters: A total of 128 net biases were tuned (hidden layer 1)
New best values: Half of the changes from the tuning run
New net: nn-56a5f1c4173a.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140392 W: 10863 L: 10578 D: 118951
Ptnml(0-2): 347, 8754, 51718, 9021, 356
https://tests.stockfishchess.org/tests/view/610037e396b86d98abf6a79e

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 14216 W: 454 L: 355 D: 13407
Ptnml(0-2): 4, 323, 6356, 420, 5
https://tests.stockfishchess.org/tests/view/61019995afad2da4f4ae3a3c

Closes #3633

Bench: 4801359
2021-07-29 07:35:13 +02:00
SFisGOD
237ed1ef8f Update default net to nn-26abeed38351.nnue
SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891

New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (approximate real TC is 2.5 seconds)
The rest is the same as described in #3593

The change from nodestime=600 to 300 was suggested by gekkehenker to prevent time losses for some slow workers
SFisGOD@94cd757#commitcomment-53324840

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9

Closes https://github.com/official-stockfish/Stockfish/pull/3630

Bench:  5124774
2021-07-26 07:52:59 +02:00
MichaelB7
b939c80513 Update the default net to nn-76a8a7ffb820.nnue.
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), built on top of previous developments by restarting from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:

The initial net nn-d8609abe8caf.nnue was trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then training, continuing from the master net, with lambda=0.2 and a sampling ratio of 1. Training started with LR=2e-3, dropping the LR by a factor of 0.5 until it reached LR=5e-4. in_scaling is set to 361. No other significant changes were made to the pytorch trainer.

Training data gen command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (note that this only trains for 20 epochs; repeat training until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.

Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:

The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run is thus started from Sergio Vieri's net nn-d8609abe8caf.nnue

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE 2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack
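
For illustration, interleaving the two concatenated sets position by position could look like
the sketch below; the reading and writing of binpack positions is abstracted behind hypothetical
iterables and a writer callback, so this is not the real interleaving tool:

    # Illustrative only: interleave two datasets so each contributes roughly equal
    # weight. stream_a/stream_b are hypothetical position iterators and writer is a
    # hypothetical output callback, not the real binpack tooling.
    from itertools import zip_longest

    def interleave(stream_a, stream_b, writer):
        for pos_a, pos_b in zip_longest(stream_a, stream_b):
            if pos_a is not None:
                writer(pos_a)
            if pos_b is not None:
                writer(pos_b)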

model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5 calculated as follows: 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro optimization, I had set the period to "5" in train.py. This changes the checkpoint output so that every 5th checkpoint file is created.
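
As a quick check of the LR arithmetic above (assuming gamma=0.992 applied once per epoch for 360 epochs):

    # Effective LR after 360 decay steps of gamma=0.992, starting from 1.5e-3
    lr = 1.5e-3 * 0.992 ** 360
    print(f"{lr:.2e}")   # roughly 8e-5, consistent with the LR = 8.0e-5 used above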

The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.

passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17

passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100.

closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
2021-07-24 18:04:59 +02:00
SFisGOD
8fc297c506 Update default net to nn-9e3c6298299a.nnue
Optimization of nn-956480d8378f.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60da2bf63beab81350ac9fe7

Same method as described in PR #3593

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17792 W: 1525 L: 1372 D: 14895
Ptnml(0-2): 28, 1156, 6401, 1257, 54
https://tests.stockfishchess.org/tests/view/60deffc59ea99d7c2d693c19

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36544 W: 1245 L: 1109 D: 34190
Ptnml(0-2): 12, 988, 16139, 1118, 15
https://tests.stockfishchess.org/tests/view/60df11339ea99d7c2d693c22

closes https://github.com/official-stockfish/Stockfish/pull/3601

Bench: 4687476
2021-07-03 10:03:32 +02:00
SFisGOD
49283d3a66 Update default net to nn-3475407dc199.nnue
Optimization of eight subnetwork output layers of Michael's nn-190f102a22c3.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60d5510642a522cc50282ef3

Parameters: A total of 256 net weights and 8 net biases were tuned
New best values: The raw values at the end of the tuning run were used (800k games, 5 seconds TC)
Settings: default ck value and SPSA A is 30,000 (3.75% of the total number of games)
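
For readers unfamiliar with SPSA, the sketch below shows one generic tuning iteration.
play_match is a hypothetical stand-in for the fishtest game-pair result, and the gain and
perturbation schedules use textbook defaults rather than fishtest's exact settings; the A
constant plays the stabilising role mentioned above:

    # Generic SPSA iteration (illustrative; not fishtest's exact implementation).
    import random

    def spsa_step(theta, k, play_match, a=0.1, c=1.0, A=30000, alpha=0.602, gamma=0.101):
        a_k = a / (A + k + 1) ** alpha          # gain (step size) schedule
        c_k = c / (k + 1) ** gamma              # perturbation size schedule
        delta = [random.choice((-1, 1)) for _ in theta]
        plus  = [t + c_k * d for t, d in zip(theta, delta)]
        minus = [t - c_k * d for t, d in zip(theta, delta)]
        # play_match(plus, minus) is a hypothetical helper returning a score in [-1, 1],
        # positive if the "plus" parameter set wins the game pair
        result = play_match(plus, minus)
        # move every parameter in the direction that won, scaled by the schedules
        return [t + a_k * result / (2 * c_k * d) for t, d in zip(theta, delta)]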

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 29064 W: 2435 L: 2269 D: 24360
Ptnml(0-2): 72, 1857, 10505, 2029, 69
https://tests.stockfishchess.org/tests/view/60d8ea123beab81350ac9eb6

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 61848 W: 2055 L: 1884 D: 57909
Ptnml(0-2): 18, 1708, 27310, 1861, 27
https://tests.stockfishchess.org/tests/view/60d8f0393beab81350ac9ec6

closes https://github.com/official-stockfish/Stockfish/pull/3593

Bench: 4770936
2021-06-28 21:31:58 +02:00
MichaelB7
b94a651878 Make net nn-956480d8378f.nnue the default
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_18 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from a previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE 2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack

passed STC:
https://tests.stockfishchess.org/tests/view/60d0c0a7a8ec07dc34c072b2
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 18440 W: 1693 L: 1531 D: 15216
Ptnml(0-2): 67, 1225, 6464, 1407, 57

passed LTC:
https://tests.stockfishchess.org/tests/view/60d762793beab81350ac9d72
LLR: 2.98 (-2.94,2.94) <0.50,3.50>
Total: 93120 W: 3152 L: 2933 D: 87035
Ptnml(0-2): 48, 2581, 41076, 2814, 41

passed LTC (rebased branch to current master):
https://tests.stockfishchess.org/tests/view/60d85eeb3beab81350ac9e2b
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 42688 W: 1347 L: 1206 D: 40135
Ptnml(0-2): 14, 1097, 18981, 1238, 14.

closes https://github.com/official-stockfish/Stockfish/pull/3592

Bench: 4906727
2021-06-28 21:20:05 +02:00
MichaelB7
9b82414b67 Make net nn-190f102a22c3.nnue the default net.
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_17 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from the previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE 2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack

passed LTC
https://tests.stockfishchess.org/tests/view/60d09f52b4c17000d679517f
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 32184 W: 1100 L: 970 D: 30114
Ptnml(0-2): 10, 878, 14193, 994, 17

passed STC
https://tests.stockfishchess.org/tests/view/60d086c02114332881e7368e
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11360 W: 1056 L: 906 D: 9398
Ptnml(0-2): 25, 735, 4026, 853, 41

closes https://github.com/official-stockfish/Stockfish/pull/3576

Bench: 4631244
2021-06-21 23:16:55 +02:00
MichaelB7
ba01f4b954 Make net nn-75980ca503c6.nnue the default.
trained with the Python command

c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_10 --resume-from-model ./pt/nn-3b20abec10c1.pt
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE 2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack.

Net nn-3b20abec10c1.nnue was chosen as the --resume-from-model with the idea that, through learning, the manually hex-edited values will be learned and will not need to be manually adjusted going forward. They would also be fine-tuned by the learning process.

passed STC:
https://tests.stockfishchess.org/tests/view/60cdf91e457376eb8bcab66f
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18256 W: 1639 L: 1479 D: 15138
Ptnml(0-2): 59, 1179, 6505, 1313, 72

passed LTC:
https://tests.stockfishchess.org/tests/view/60ce2166457376eb8bcab6e1
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 18792 W: 654 L: 542 D: 17596
Ptnml(0-2): 9, 490, 8291, 592, 14

closes https://github.com/official-stockfish/Stockfish/pull/3570

Bench: 5020972
2021-06-19 23:24:35 +02:00
Tomasz Sobczyk
2e745956c0 Change trace with NNUE eval support
This patch adds some more output to the `eval` command. It adds a board display
with estimated piece values (method is remove-piece, evaluate, put-piece), and
splits the NNUE evaluation with (psqt,layers) for each bucket for the NNUE net.
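
The remove-piece / evaluate / put-piece idea can be sketched as follows, using python-chess
for the board handling; evaluate() is a hypothetical stand-in for the engine's evaluation call:

    # Illustrative sketch of "remove piece, evaluate, put piece back".
    # evaluate() is a hypothetical evaluation function, not part of python-chess.
    import chess

    def piece_values(board, evaluate):
        base = evaluate(board)
        values = {}
        for square, piece in board.piece_map().items():
            if piece.piece_type == chess.KING:
                continue                              # kings cannot be removed
            board.remove_piece_at(square)
            values[square] = base - evaluate(board)   # drop in eval = the piece's worth
            board.set_piece_at(square, piece)         # restore the position
        return values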

Example:

```

./stockfish
position fen 3Qb1k1/1r2ppb1/pN1n2q1/Pp1Pp1Pr/4P2p/4BP2/4B1R1/1R5K b - - 11 40
eval

 Contributing terms for the classical eval:
+------------+-------------+-------------+-------------+
|    Term    |    White    |    Black    |    Total    |
|            |   MG    EG  |   MG    EG  |   MG    EG  |
+------------+-------------+-------------+-------------+
|   Material |  ----  ---- |  ----  ---- | -0.73 -1.55 |
|  Imbalance |  ----  ---- |  ----  ---- | -0.21 -0.17 |
|      Pawns |  0.35 -0.00 |  0.19 -0.26 |  0.16  0.25 |
|    Knights |  0.04 -0.08 |  0.12 -0.01 | -0.08 -0.07 |
|    Bishops | -0.34 -0.87 | -0.17 -0.61 | -0.17 -0.26 |
|      Rooks |  0.12  0.00 |  0.08  0.00 |  0.04  0.00 |
|     Queens |  0.00  0.00 | -0.27 -0.07 |  0.27  0.07 |
|   Mobility |  0.84  1.76 |  0.01  0.66 |  0.83  1.10 |
|King safety | -0.99 -0.17 | -0.72 -0.10 | -0.27 -0.07 |
|    Threats |  0.27  0.27 |  0.73  0.86 | -0.46 -0.59 |
|     Passed |  0.00  0.00 |  0.79  0.82 | -0.79 -0.82 |
|      Space |  0.61  0.00 |  0.24  0.00 |  0.37  0.00 |
|   Winnable |  ----  ---- |  ----  ---- |  0.00 -0.03 |
+------------+-------------+-------------+-------------+
|      Total |  ----  ---- |  ----  ---- | -1.03 -2.14 |
+------------+-------------+-------------+-------------+

 NNUE derived piece values:
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |   Q   |   b   |       |   k   |       |
|       |       |       | +12.4 | -1.62 |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   r   |       |       |   p   |   p   |   b   |       |
|       | -3.89 |       |       | -0.84 | -1.19 | -3.32 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   p   |   N   |       |   n   |       |       |   q   |       |
| -1.81 | +3.71 |       | -4.82 |       |       | -5.04 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   P   |   p   |       |   P   |   p   |       |   P   |   r   |
| +1.16 | -0.91 |       | +0.55 | +0.12 |       | +0.50 | -4.02 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   P   |       |       |   p   |
|       |       |       |       | +2.33 |       |       | +1.17 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |   P   |       |       |
|       |       |       |       | +4.79 | +1.54 |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |       |   R   |       |
|       |       |       |       | +4.54 |       | +6.03 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   R   |       |       |       |       |       |   K   |
|       | +4.81 |       |       |       |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+

 NNUE network contributions (Black to move)
+------------+------------+------------+------------+
|   Bucket   |  Material  | Positional |   Total    |
|            |   (PSQT)   |  (Layers)  |            |
+------------+------------+------------+------------+
|  0         |  +  0.32   |  -  1.46   |  -  1.13   |
|  1         |  +  0.25   |  -  0.68   |  -  0.43   |
|  2         |  +  0.46   |  -  1.72   |  -  1.25   |
|  3         |  +  0.55   |  -  1.80   |  -  1.25   |
|  4         |  +  0.48   |  -  1.77   |  -  1.29   |
|  5         |  +  0.40   |  -  2.00   |  -  1.60   |
|  6         |  +  0.57   |  -  2.12   |  -  1.54   | <-- this bucket is used
|  7         |  +  3.38   |  -  2.00   |  +  1.37   |
+------------+------------+------------+------------+

Classical evaluation   -1.00 (white side)
NNUE evaluation        +1.54 (white side)
Final evaluation       +2.38 (white side) [with scaled NNUE, hybrid, ...]

```

Also renames the export_net() function to save_eval() while there.

closes https://github.com/official-stockfish/Stockfish/pull/3562

No functional change
2021-06-19 11:57:01 +02:00
Joost VandeVondele
adfb23c029 Make net nn-50144f835024.nnue the default
trained with the Python command

c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_8 --resume-from-model ./pt/nn-6ad41a9207d0.pt
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE 2 binpack and one large Training_Data binpack of approximately equal size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack.

nn-6ad41a9207d0.pt was derived from a net vondele ran which passed STC quickly,
but faltered in LTC. https://tests.stockfishchess.org/tests/view/60cba666457376eb8bcab443

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18792 W: 2068 L: 1889 D: 14835
Ptnml(0-2): 82, 1480, 6117, 1611, 106
https://tests.stockfishchess.org/tests/view/60ccda8b457376eb8bcab568

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 11376 W: 574 L: 454 D: 10348
Ptnml(0-2): 4, 412, 4747, 510, 15
https://tests.stockfishchess.org/tests/view/60ccf952457376eb8bcab58d

closes https://github.com/official-stockfish/Stockfish/pull/3568

Bench: 4900906
2021-06-18 23:50:26 +02:00
SFisGOD
86afb6a7cf Update default net to nn-aa9d7eeb397e.nnue
Optimization of vondele's nn-33c9d39e5eb6.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60ca68be457376eb8bcab28b
Setting: ck values are default based on how large the parameters are
The new values for this net are the raw values at the end of the tuning (80k games)

The significant changes are in buckets 1 and 2 (5-12 pieces), so the main difference is in playing endgames if we compare it to nn-33c9. There is also a change in bucket 7 (29-32 pieces), but it is not as substantial as the changes in buckets 1 and 2. If we interpret the changes based on an experiment a few months ago, this new net plays more optimistically during endgames and less optimistically during openings.

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 49504 W: 4246 L: 4053 D: 41205
Ptnml(0-2): 140, 3282, 17749, 3407, 174
https://tests.stockfishchess.org/tests/view/60cbd752457376eb8bcab478

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 88720 W: 4926 L: 4651 D: 79143
Ptnml(0-2): 105, 4048, 35793, 4295, 119
https://tests.stockfishchess.org/tests/view/60cc7828457376eb8bcab4fa

closes https://github.com/official-stockfish/Stockfish/pull/3566

Bench: 4758885
2021-06-18 21:29:14 +02:00
ap
14b673d90f New default net nn-3b20abec10c1.nnue
This net was created by @pleomati, who used a hex editor to manually edit
10 randomly chosen values in the LCSFNet10 net (nn-6ad41a9207d0.nnue) to
create this one. The LCSFNet10 net was trained by Joost VandeVondele from
a dataset combining Stockfish games and Leela games (16x10^9 positions from
SF self-play at depth 9, and 6.3x10^9 positions from Leela games, so overall
72% of Stockfish positions and 28% of Leela positions).

passed STC 10+0.1:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 50888 W: 5881 L: 5654 D: 39353
Ptnml(0-2): 281, 4290, 16085, 4497, 291
https://tests.stockfishchess.org/tests/view/60cbfa68457376eb8bcab49a

passed LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 25480 W: 1498 L: 1338 D: 22644
Ptnml(0-2): 36, 1155, 10193, 1325, 31
https://tests.stockfishchess.org/tests/view/60cc4af8457376eb8bcab4d4

closes https://github.com/official-stockfish/Stockfish/pull/3564

Bench: 4904930
2021-06-18 20:00:13 +02:00
Joost VandeVondele
8ec9e10866 New default net nn-33c9d39e5eb6.nnue
Like the previous net, this net is trained on Leela games as provided by borg.
See also https://lczero.org/blog/2021/06/the-importance-of-open-data/

The particular data set, which is a mix of T60 and T74 data, is now available as a single binpack:
https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing

The training command was:
python train.py ../../training_data_pylon.binpack ../../training_data_pylon.binpack --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 10 --features=HalfKAv2^   --lambda=1.0  --max_epochs=440 --seed $RANDOM --default_root_dir exp/run_2

passed STC:
https://tests.stockfishchess.org/tests/view/60c887cb457376eb8bcab054
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 12792 W: 1483 L: 1311 D: 9998
Ptnml(0-2): 62, 989, 4131, 1143, 71

passed LTC:
https://tests.stockfishchess.org/tests/view/60c8e5c4457376eb8bcab0f0
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 11272 W: 601 L: 477 D: 10194
Ptnml(0-2): 9, 421, 4657, 535, 14

also had strong LTC performance against another strong net of the series:
https://tests.stockfishchess.org/tests/view/60c8c40d457376eb8bcab0c6

closes https://github.com/official-stockfish/Stockfish/pull/3557

Bench: 5032320
2021-06-15 22:08:40 +02:00
JWmer
f8c779dbe5 Update default net to nn-8e47cf062333.nnue
This net is the result of training on data used by the Leela project. More precisely,
we shuffled T60 and T74 data kindly provided by borg (for different Tnn, the data is
a result of Leela selfplay with differently sized Leela nets).

The data is available at vondele's google drive:
https://drive.google.com/drive/folders/1mftuzYdl9o6tBaceR3d_VBQIrgKJsFpl.

The Leela data comes in small chunks of .binpack files. To shuffle them, we simply
used a small python script to randomly rename the files, and then concatenated them
using `cat`. As validation data we picked a file of T60 data. We will further investigate
T74 data.
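
A minimal sketch of such a shuffle-by-renaming step (illustrative, not the exact script that was used):

    # Illustrative shuffle-by-renaming: give every chunk a random name so that a
    # simple `cat *.binpack > training_data.binpack` concatenates them in random order.
    import os
    import uuid

    for name in os.listdir("."):
        if name.endswith(".binpack"):
            os.rename(name, f"{uuid.uuid4().hex}.binpack")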

The training for the NNUE architecture used 200 epochs with the Python trainer from
the Stockfish project. Unlike the previous run we tried with this data, this run does
not have adjusted scaling (not because we didn't want to, but because we forgot).
However, this training randomly skips 40% more positions than the previous run. The loss
was very spiky and decreased more slowly than usual.

Training loss: https://github.com/official-stockfish/images/blob/main/training-loss-8e47cf062333.png
Validation loss: https://github.com/official-stockfish/images/blob/main/validation-loss-8e47cf062333.png

This is the exact training command:
python train.py --smart-fen-skipping --random-fen-skipping 14 --batch-size 16384 --threads 4 --num-workers 4 --gpus 1 trainingdata\training_data.binpack validationdata\val.binpack

---

10k STC result:
ELO: 3.61 +-3.3 (95%) LOS: 98.4%
Total: 10000 W: 1241 L: 1137 D: 7622
Ptnml(0-2): 68, 841, 3086, 929, 76
https://tests.stockfishchess.org/tests/view/60c67e50457376eb8bcaae70

10k LTC result:
ELO: 2.71 +-2.4 (95%) LOS: 98.8%
Total: 10000 W: 659 L: 581 D: 8760
Ptnml(0-2): 22, 485, 3900, 579, 14
https://tests.stockfishchess.org/tests/view/60c69deb457376eb8bcaae98

Passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9648 W: 685 L: 545 D: 8418
Ptnml(0-2): 22, 448, 3740, 596, 18
https://tests.stockfishchess.org/tests/view/60c6d41c457376eb8bcaaecf

---

closes https://github.com/official-stockfish/Stockfish/pull/3550

Bench: 4877339
2021-06-14 09:24:07 +02:00
Joost VandeVondele
d53071eff4 Update default net to nn-7e66505906a6.nnue
Trained with pytorch using the master branch and recommended settings,
the data used is the previous 64B binpack enhanced with a 2B binpack
generated using an opening book of positions for which the static eval
is significantly different from the d9 search eval.

book           : https://drive.google.com/file/d/1rHcKY5rv34kwku6g89OhnE8Bkfq3UWau/view?usp=sharing
book generation: 3ce43ab0c4
binpack        : https://drive.google.com/file/d/1rHcKY5rv34kwku6g89OhnE8Bkfq3UWau/view?usp=sharing

-------

Data generation command:

generate_training_data depth 9 count 31250000 random_multi_pv 2 random_multi_pv_diff 100 random_move_max_ply 8 random_move_count 3 set_recommended_uci_options eval_limit 32000 output_file_name output.binpack book wrongNNUE.epd seed ${RANDOM}${RANDOM}

Training command:

python train.py ../../all_d9_fishd9_d8_d10_wrong_shuffle.binpack ../../all_d9_fishd9_d8_d10_wrong_shuffle.binpack  --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^   --lambda=1.0  --max_epochs=400 --seed $RANDOM --default_root_dir exp/run_5

-------

passed STC:
https://tests.stockfishchess.org/tests/view/60b7c79a457376eb8bcaa104
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 64592 W: 6254 L: 6028 D: 52310
Ptnml(0-2): 255, 4785, 22020, 4951, 285

passed LTC:
https://tests.stockfishchess.org/tests/view/60b85307457376eb8bcaa182
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 45560 W: 1998 L: 1826 D: 41736
Ptnml(0-2): 36, 1604, 19335, 1762, 43

closes https://github.com/official-stockfish/Stockfish/pull/3521

Bench: 4364128
2021-06-03 16:25:44 +02:00
Stéphane Nicolet
f193778446 Do not use lazy evaluation inside NNUE
This simplification patch implements two changes:

1. it simplifies away the so-called "lazy" path in the NNUE evaluation internals,
   where we trusted the psqt head alone to avoid the costly "positional" head in
   some cases;
2. it raises the NNUEThreshold1 in evaluate.cpp a little (from 682 to 800),
   which increases the limit where we switch from NNUE eval to Classical eval.

Both effects increase the number of positional evaluations done by our new net
architecture, but the results of our tests below seem to indicate that the loss
of speed will be compensated by the gain of eval quality.

STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 26280 W: 2244 L: 2137 D: 21899
Ptnml(0-2): 72, 1755, 9405, 1810, 98
https://tests.stockfishchess.org/tests/view/60ae73f112066fd299795a51

LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 20592 W: 750 L: 677 D: 19165
Ptnml(0-2): 9, 614, 8980, 681, 12
https://tests.stockfishchess.org/tests/view/60ae88e812066fd299795a82

closes https://github.com/official-stockfish/Stockfish/pull/3503

Bench: 3817907
2021-05-27 01:21:56 +02:00
Tomasz Sobczyk
9d53129075 Expose the lazy threshold for the feature transformer PSQT as a parameter.
Definition of the lazy threshold moved to evaluate.cpp where all others are.
Lazy threshold only used for real searches, not used for the "eval" call.
This preserves the purity of NNUE evaluation, which is useful to verify
consistency between the engine and the NNUE trainer.

closes https://github.com/official-stockfish/Stockfish/pull/3499

No functional change
2021-05-25 21:40:51 +02:00
Stéphane Nicolet
a2f01c07eb Sometimes change the (materialist, positional) balance
Our new nets output two values for the side to move in the last layer.
We can interpret the first value as a material evaluation of the
position, and the second one as the dynamic, positional value of the
location of pieces.

This patch changes the balance for the (materialist, positional) parts
of the score from (128, 128) to (121, 135) when the piece material is
equal between the two players, but keeps the standard (128, 128) balance
when one player is at least an exchange up.
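
As an illustration of the weighting described above (Python rather than the actual
evaluate.cpp code, and assuming the usual division by 128 so that (128, 128) is the
neutral balance):

    # Illustrative: blend the two NNUE outputs with weights that depend on whether
    # the piece material is balanced; the divide-by-128 normalisation is an assumption.
    def blended_eval(material_part, positional_part, material_balanced):
        w_mat, w_pos = (121, 135) if material_balanced else (128, 128)
        return (material_part * w_mat + positional_part * w_pos) // 128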

Passed STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 15936 W: 1421 L: 1266 D: 13249
Ptnml(0-2): 37, 1037, 5694, 1134, 66
https://tests.stockfishchess.org/tests/view/60a82df9ce8ea25a3ef0408f

Passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13904 W: 516 L: 410 D: 12978
Ptnml(0-2): 4, 374, 6088, 484, 2
https://tests.stockfishchess.org/tests/view/60a8bbf9ce8ea25a3ef04101

closes https://github.com/official-stockfish/Stockfish/pull/3492

Bench: 3856635
2021-05-22 21:09:22 +02:00
Joost VandeVondele
fb2d175f97 Update default net to nn-7756374aaed3.nnue
trained with pytorch using the master branch and recommended settings,
same data set as previously used:

python train.py ../../all_d9_fishd9_d8_d10_shuffle.binpack ../../all_d9_fishd9_d8_d10_shuffle.binpack \
        --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 \
        --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^   --lambda=1.0 \
        --max_epochs=400 --seed $RANDOM --default_root_dir exp/run_8

passed STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 21424 W: 2078 L: 1907 D: 17439
Ptnml(0-2): 80, 1512, 7385, 1627, 108
https://tests.stockfishchess.org/tests/view/60a6c749ce8ea25a3ef03f4d

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 67912 W: 2851 L: 2648 D: 62413
Ptnml(0-2): 40, 2348, 28984, 2537, 47
https://tests.stockfishchess.org/tests/view/60a722ecce8ea25a3ef03fb9

closes https://github.com/official-stockfish/Stockfish/pull/3489

Bench: 3779522
2021-05-22 07:35:39 +02:00
Tomasz Sobczyk
e8d64af123 New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters,
as obtained by a new pytorch trainer.

The network is already very strong at short TC, without regression at longer TC,
and has potential for further improvements.

https://tests.stockfishchess.org/tests/view/60a159c65085663412d0921d
TC: 10s+0.1s, 1 thread
ELO: 21.74 +-3.4 (95%) LOS: 100.0%
Total: 10000 W: 1559 L: 934 D: 7507
Ptnml(0-2): 38, 701, 2972, 1176, 113

https://tests.stockfishchess.org/tests/view/60a187005085663412d0925b
TC: 60s+0.6s, 1 thread
ELO: 5.85 +-1.7 (95%) LOS: 100.0%
Total: 20000 W: 1381 L: 1044 D: 17575
Ptnml(0-2): 27, 885, 7864, 1172, 52

https://tests.stockfishchess.org/tests/view/60a2beede229097940a03806
TC: 20s+0.2s, 8 threads
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 34272 W: 1610 L: 1452 D: 31210
Ptnml(0-2): 30, 1285, 14350, 1439, 32

https://tests.stockfishchess.org/tests/view/60a2d687e229097940a03c72
TC: 60s+0.6s, 8 threads
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 45544 W: 1262 L: 1214 D: 43068
Ptnml(0-2): 12, 1129, 20442, 1177, 12

The network has been trained (by vondele) using the https://github.com/glinscott/nnue-pytorch/ trainer (started by glinscott),
specifically the branch https://github.com/Sopel97/nnue-pytorch/tree/experiment_56.
The data used are 64 billion positions (193GB total) generated and scored with the current master net
d8: https://drive.google.com/file/d/1hOOYSDKgOOp38ZmD0N4DV82TOLHzjUiF/view?usp=sharing
d9: https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
d10: https://drive.google.com/file/d/1ZC5upzBYMmMj1gMYCkt6rCxQG0GnO3Kk/view?usp=sharing
fishtest_d9: https://drive.google.com/file/d/1GQHt0oNgKaHazwJFTRbXhlCN3FbUedFq/view?usp=sharing

This network also contains a few architectural changes with respect to the current master:

    Size changed from 256x2-32-32-1 to 512x2-16-32-1
        ~15-20% slower
        ~2x larger
        adds a special path for 16 valued ClippedReLU
        fixes affine transform code for 16 inputs/outputs, by using InputDimensions instead of PaddedInputDimensions
            this is safe now because the inputs are processed in groups of 4 in the current affine transform code
    The feature set changed from HalfKP to HalfKAv2
        Includes information about the kings like HalfKA
        Packs king features better, resulting in 8% size reduction compared to HalfKA
    The board is flipped for black's perspective, instead of rotated like in the current master
    PSQT values for each feature
        the feature transformer now outputs a part that is forwarded directly to the output and allows learning piece values more directly than the previous network architecture. The effect is visible for high imbalance positions, where the current master network outputs evaluations skewed towards zero.
        8 PSQT values per feature, chosen based on (popcount(pos.pieces()) - 1) / 4
        initialized to classical material values on the start of the training
    8 subnetworks (512x2->16->32->1), chosen based on (popcount(pos.pieces()) - 1) / 4 (see the sketch after this list)
        only one subnetwork is evaluated for any position, no or marginal speed loss
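
A small sketch of the bucket selection used for both the PSQT values and the subnetworks:

    # Bucket index from piece count, as described above: (popcount(pieces) - 1) / 4,
    # mapping 1..32 pieces on the board to buckets 0..7.
    def bucket_index(piece_count):
        return (piece_count - 1) // 4

    # e.g. 32 pieces (start position) -> bucket 7, 5..8 pieces -> bucket 1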

A diagram of the network is available: https://user-images.githubusercontent.com/8037982/118656988-553a1700-b7eb-11eb-82ef-56a11cbebbf2.png
A more complete description: https://github.com/glinscott/nnue-pytorch/blob/master/docs/nnue.md

closes https://github.com/official-stockfish/Stockfish/pull/3474

Bench: 3806488
2021-05-18 18:06:23 +02:00
Tomasz Sobczyk
58054fd0fa Exporting the currently loaded network file
This PR adds an ability to export any currently loaded network.
The export_net command now takes an optional filename parameter.
If the loaded net is not the embedded net the filename parameter is required.

Two changes were required to support this:

* the "architecture" string, which is really just a some kind of description in the net, is now saved into netDescription on load and correctly saved on export.
* the AffineTransform scrambles weights for some architectures and sparsifies them, such that retrieving the index is hard. This is solved by having a temporary scrambled<->unscrambled index lookup table when loading the network, and the actual index is saved for each individual weight that makes it to canSaturate16. This increases the size of the canSaturate16 entries by 6 bytes.

closes https://github.com/official-stockfish/Stockfish/pull/3456

No functional change
2021-05-11 19:36:11 +02:00
Tomasz Sobczyk
ca250e969c Add an UCI level command "export_net".
This command writes the embedded net to the file `EvalFileDefaultName`.
If there is no embedded net the command does nothing.

fixes #3453

closes https://github.com/official-stockfish/Stockfish/pull/3454

No functional change
2021-05-07 09:45:08 +02:00
Dieter Dobbelaere
7ffae17f85 Add Stockfish namespace.
fixes #3350 and is a small cleanup that might make it easier to use SF
in separate projects, like a NNUE trainer or similar.

closes https://github.com/official-stockfish/Stockfish/pull/3370

No functional change.
2021-03-07 14:26:54 +01:00
Joost VandeVondele
c4d67d77c9 Update copyright years
No functional change
2021-01-08 17:04:23 +01:00
SFisGOD
7364006757 Update default net to nn-62ef826d1a6d.nnue
Include scaling change as suggested by Dietrich Kappe,
the one who trained the net for Komodo. According to him,
some nets may require different scaling in order to utilize their full strength.

STC:
LLR: 2.93 (-2.94,2.94) {-0.25,1.25}
Total: 99856 W: 9669 L: 9401 D: 80786
Ptnml(0-2): 374, 7468, 34037, 7614, 435
https://tests.stockfishchess.org/tests/view/5fc2697642a050a89f02c8ec

LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 29840 W: 1220 L: 1081 D: 27539
Ptnml(0-2): 10, 969, 12827, 1100, 14
https://tests.stockfishchess.org/tests/view/5fc2ea5142a050a89f02c957

Bench: 3561701
2020-11-29 16:54:06 +01:00
SFisGOD
32edb1d009 Update default net to nn-c3ca321c51c9.nnue
Optimization of the net biases of the 32 x 32 layer and the output layer.

Tuning of 32 x 32 layer (200k games, 5 seconds TC)
https://tests.stockfishchess.org/tests/view/5f9aaf266a2c112b60691c68

STC:
LLR: 2.95 (-2.94,2.94) {-0.25,1.25}
Total: 41848 W: 4665 L: 4461 D: 32722
Ptnml(0-2): 239, 3308, 13659, 3446, 272
https://tests.stockfishchess.org/tests/view/5fa5ef5a936c54e11ec9954f

LTC:
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 88008 W: 4045 L: 3768 D: 80195
Ptnml(0-2): 69, 3339, 36908, 3622, 66
https://tests.stockfishchess.org/tests/view/5fa62a78936c54e11ec99577

closes https://github.com/official-stockfish/Stockfish/pull/3220

Bench: 3649288
2020-11-08 08:36:16 +01:00
mstembera
dfc7f88650 Update default net to nn-cb26f10b1fd9.nnue
Result of https://tests.stockfishchess.org/tests/view/5f9a06796a2c112b60691c0f tuning.

STC
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 53712 W: 5776 L: 5561 D: 42375
Ptnml(0-2): 253, 4282, 17604, 4431, 286
https://tests.stockfishchess.org/tests/view/5f9c7bbc6a2c112b60691d4d

LTC
LLR: 2.97 (-2.94,2.94) {0.25,1.25}
Total: 80184 W: 4007 L: 3739 D: 72438
Ptnml(0-2): 58, 3302, 33130, 3518, 84
https://tests.stockfishchess.org/tests/view/5f9d01f06a2c112b60691d87

closes https://github.com/official-stockfish/Stockfish/pull/3209

bench: 3517795
2020-11-01 08:02:40 +01:00
SFisGOD
6328135264 Update default net to nn-2eb2e0707c2b.nnue
Optimization of the net weights of the 32 x 32 layer (1024 parameters) and net biases of the 512 x 32 layer (32 parameters) using SPSA.

Tuning of 32 x 32 Layer (800,000 games, 5 seconds time control):
https://tests.stockfishchess.org/tests/view/5f942040d3978d7e86f1aa05

Tuning of 512 x 32 Layer (80,000 games, 20 seconds time control):
https://tests.stockfishchess.org/tests/view/5f8f926d2c92c7fe3a8c608b

STC:
LLR: 2.96 (-2.94,2.94) {-0.25,1.25}
Total: 17336 W: 1918 L: 1754 D: 13664
Ptnml(0-2): 79, 1344, 5672, 1480, 93
https://tests.stockfishchess.org/tests/view/5f9882346a2c112b60691b34

LTC:
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 37304 W: 1822 L: 1651 D: 33831
Ptnml(0-2): 27, 1461, 15501, 1640, 23
https://tests.stockfishchess.org/tests/view/5f98a4b36a2c112b60691b40

closes https://github.com/official-stockfish/Stockfish/pull/3201

Bench: 3403528
2020-10-28 08:13:34 +01:00
mstembera
281d520cc2 Update default net to nn-eba324f53044.nnue
The new net is based on the previous net 04cf2b4ed1da but with the biases
for the 1st hidden layer tuned with SPSA; see the SPSA session on fishtest here:
https://tests.stockfishchess.org/tests/view/5f875213dcdad978fe8c5211

Thanks to @vondele for writing out the net, see discussion in this thread:
432da86721

Passed STC:
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 15000 W: 1640 L: 1483 D: 11877
Ptnml(0-2): 50, 1183, 4908, 1278, 81
https://tests.stockfishchess.org/tests/view/5f8955e20fea1a44ec4f0a5d

Passed LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 81272 W: 3948 L: 3682 D: 73642
Ptnml(0-2): 64, 3194, 33856, 3456, 66
https://tests.stockfishchess.org/tests/view/5f89e8efeae8a6e60644d6e7

closes https://github.com/official-stockfish/Stockfish/pull/3187

Bench: 3762411
2020-10-18 13:43:26 +02:00
Joost VandeVondele
ba73f8ce0d Update default net to nn-04cf2b4ed1da.nnue
Further tune the net parameters, now the last but one layer (32x32).
To limit the number of parameters optimized, the network layer was
decomposed using SVD, and the singular values were treated
as parameters and tuned.
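
A small numpy sketch of the decomposition idea (illustrative only; the actual tuning used the branches linked below):

    # Illustrative: decompose the 32x32 layer, tune only the singular values,
    # then reconstruct the weight matrix (far fewer parameters than tuning W itself).
    import numpy as np

    W = np.random.randn(32, 32)            # stand-in for the trained layer weights
    U, S, Vt = np.linalg.svd(W)            # W == U @ np.diag(S) @ Vt
    S_tuned = S * 1.01                     # stand-in for values found by the tuner
    W_tuned = U @ np.diag(S_tuned) @ Vt    # reconstructed weights written back to the net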

Tuning branch: https://github.com/vondele/Stockfish/tree/svdTune
Tuner: https://github.com/vondele/nevergrad4sf

passed STC:
https://tests.stockfishchess.org/tests/view/5f83e82f8ea73fb8ddf83e4e
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 8488 W: 944 L: 795 D: 6749
Ptnml(0-2): 39, 609, 2811, 734, 51

passed LTC:
https://tests.stockfishchess.org/tests/view/5f83f4118ea73fb8ddf83e66
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 169016 W: 8043 L: 7589 D: 153384
Ptnml(0-2): 133, 6623, 70538, 7085, 129

closes https://github.com/official-stockfish/Stockfish/pull/3181

Bench: 3945198
2020-10-14 13:28:21 +02:00
SFisGOD
5efbaaba77 Update default net to nn-baeb9ef2d183.nnue
Further optimization of Sergio's nn-03744f8d56d8.nnue
This patch is the result of collaboration with Joost VandeVondele.

STC:
LLR: 2.96 (-2.94,2.94) {-0.25,1.25}
Total: 37000 W: 4145 L: 3947 D: 28908
Ptnml(0-2): 191, 3016, 11912, 3166, 215
https://tests.stockfishchess.org/tests/view/5f71e7983b22d6afa5069475

LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 60224 W: 2992 L: 2769 D: 54463
Ptnml(0-2): 48, 2420, 24956, 2637, 51
https://tests.stockfishchess.org/tests/view/5f722bb83b22d6afa506998f

closes https://github.com/official-stockfish/Stockfish/pull/3161

Bench: 3720073
2020-09-28 22:29:31 +02:00
Joost VandeVondele
36c2886302 Update default net to nn-04a843f8932e.nnue
An optimization of Sergio's nn-03744f8d56d8.nnue, tuning the output layer (33 parameters) on game play.

WIP code to make layer parameters tunable is https://github.com/vondele/Stockfish/tree/optionOutput
Optimization itself is using https://github.com/vondele/nevergrad4sf
Writing of the modified net using WIP code based on the learner code https://github.com/vondele/Stockfish/tree/evalWrite

Most parameters in the output layer are changed only little (~5 for int8_t).

passed STC:
https://tests.stockfishchess.org/tests/view/5f716f6b3b22d6afa506941a
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 15488 W: 1859 L: 1689 D: 11940
Ptnml(0-2): 79, 1260, 4917, 1388, 100

passed LTC:
https://tests.stockfishchess.org/tests/view/5f71908e3b22d6afa506942e
LLR: 2.93 (-2.94,2.94) {0.25,1.25}
Total: 8728 W: 518 L: 400 D: 7810
Ptnml(0-2): 7, 338, 3556, 456, 7

closes https://github.com/official-stockfish/Stockfish/pull/3158

Bench: 3789924
2020-09-28 16:55:40 +02:00
Stéphane Nicolet
9a64e737cf Small cleanups 12
- Clean signature of functions in namespace NNUE
- Add comment for countermove based pruning
- Remove bestMoveCount variable
- Add const qualifier to kpp_board_index array
- Fix spaces in get_best_thread()
- Fix indention in capture LMR code in search.cpp
- Rename TtmemDeleter to LargePageDeleter

Closes https://github.com/official-stockfish/Stockfish/pull/3063

No functional change
2020-09-21 10:41:10 +02:00
Sergio Vieri
7135678f71 Update default net to nn-03744f8d56d8.nnue
Equivalent to 20200914-1520

closes https://github.com/official-stockfish/Stockfish/pull/3123

Bench: 4222126
2020-09-15 07:21:04 +02:00
Sergio Vieri
9cc482c788 Update default net to nn-308d71810dff.nnue
equivalent to 20200903-1739

Net trained from scratch, so it has quite different features extracted compared to the previous net (82215d0fd0df).

STC:
LLR: 2.98 (-2.94,2.94) {-0.25,1.25}
Total: 108328 W: 14048 L: 13719 D: 80561
Ptnml(0-2): 842, 10039, 32062, 10390, 831
https://tests.stockfishchess.org/tests/view/5f50e053ba100690c5cc5f00

LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 13872 W: 1059 L: 890 D: 11923
Ptnml(0-2): 30, 724, 5270, 871, 41
https://tests.stockfishchess.org/tests/view/5f51821fba100690c5cc5f36

closes https://github.com/official-stockfish/Stockfish/pull/3104

Bench: 3832716
2020-09-04 08:03:43 +02:00