Commit graph

162 commits

Author SHA1 Message Date
SFisGOD
590447d7a1 Update default net to nn-517c4f68b5df.nnue
SPSA: https://tests.stockfishchess.org/tests/view/611cf0da4977aa1525c9ca03
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-ac5605a608d6.nnue
New net: nn-517c4f68b5df.nnue
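
For readers unfamiliar with SPSA: all 264 parameters (256 weights + 8 biases) are perturbed simultaneously, and the objective (the match score) is probed only twice per iteration. A minimal, illustrative sketch follows; fishtest's actual tuner, game-based objective, and a_k/c_k schedules differ.

```python
import random

def spsa_step(theta, objective, a=0.1, c=1.0):
    """One SPSA iteration (maximization): perturb all parameters at
    once with a random +/-1 vector, probe the objective twice, and
    step every parameter along the estimated gradient."""
    delta = [random.choice((-1, 1)) for _ in theta]
    f_plus = objective([t + c * d for t, d in zip(theta, delta)])
    f_minus = objective([t - c * d for t, d in zip(theta, delta)])
    g = (f_plus - f_minus) / (2.0 * c)  # shared gradient magnitude
    # 1/delta_i == delta_i for +/-1 perturbations
    return [t + a * g * d for t, d in zip(theta, delta)]
```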

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11600 W: 998 L: 851 D: 9751
Ptnml(0-2): 30, 705, 4186, 846, 33
https://tests.stockfishchess.org/tests/view/611f84524977aa1525c9cb5b

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 9360 W: 338 L: 243 D: 8779
Ptnml(0-2): 0, 220, 4151, 303, 6
https://tests.stockfishchess.org/tests/view/611f8c5b4977aa1525c9cb64

closes https://github.com/official-stockfish/Stockfish/pull/3667

Bench: 4844618
2021-08-22 09:09:58 +02:00
Torsten Hellwig
1946a67567 Update default net to nn-ac5605a608d6.nnue
This net was created with the nnue-pytorch trainer, using the previous master net as a starting point.

The training data includes all T60 data (https://drive.google.com/drive/folders/1rzZkgIgw7G5vQMLr2hZNiUXOp7z80613), all T74 data (https://drive.google.com/drive/folders/1aFUv3Ih3-A8Vxw9064Kw_FU4sNhMHZU-) and the wrongNNUE_02_d9.binpack (https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq). The Leela data were randomly named and then concatenated. All data was merged into one binpack using interleave_binpacks.py.
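
interleave_binpacks.py ships with the trainer's tools; purely as an illustration of what interleaving does (this is not the script's actual code), merging could be sketched as:

```python
import random

def interleave(sources):
    """Randomly interleave chunks from several datasets, weighting
    each draw by how much of that dataset remains, so the output
    stays proportionally mixed from start to finish."""
    pools = [list(s) for s in sources]
    merged = []
    while any(pools):
        i = random.choices(range(len(pools)),
                           weights=[len(p) for p in pools])[0]
        merged.append(pools[i].pop())
    return merged
```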

python3 train.py \
    ../data/t60_t74_wrong.binpack \
    ../data/t60_t74_wrong.binpack \
    --resume-from-model ../data/nn-e8321e467bf6.pt \
    --gpus 1 \
    --threads 4 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 300 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --seed $RANDOM \
    --default_root_dir ../output/exp_24

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 15320 W: 1415 L: 1257 D: 12648
Ptnml(0-2): 50, 1002, 5402, 1152, 54
https://tests.stockfishchess.org/tests/view/611c404a4977aa1525c9c97f

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 9440 W: 345 L: 248 D: 8847
Ptnml(0-2): 3, 222, 4175, 315, 5
https://tests.stockfishchess.org/tests/view/611c6c7d4977aa1525c9c996

LTC with UHO_XXL_+0.90_+1.19.epd:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 6232 W: 1638 L: 1459 D: 3135
Ptnml(0-2): 5, 592, 1744, 769, 6
https://tests.stockfishchess.org/tests/view/611c9b214977aa1525c9c9cb

closes https://github.com/official-stockfish/Stockfish/pull/3664

Bench: 5375286
2021-08-18 09:17:22 +02:00
Tomasz Sobczyk
d61d38586e New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters

The summary of the changes:

* Position for each perspective mirrored such that the king is on the e..h files. Cuts the feature transformer size in half, while preserving enough knowledge to be good (see the mirroring sketch after this list). See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is possibly mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8, to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq
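
A minimal sketch of the mirroring idea from the first bullet (illustrative only; the real HalfKAv2_hm feature indexing lives in the trainer and the engine). With squares numbered 0..63 and file = sq & 7, a horizontal flip is sq ^ 7:

```python
def orient(king_sq, sq):
    """Mirror the board horizontally when the perspective's king is on
    files a..d, so the king always ends up on files e..h. This halves
    the number of distinct king placements the feature transformer
    must cover."""
    if (king_sq & 7) < 4:     # king on files a..d -> flip both squares
        return king_sq ^ 7, sq ^ 7
    return king_sq, sq
```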

The AffineTransform code did not work out-of-the-box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Additional 0 padding is also added to the output for some archs that cannot process inputs in groups of <=8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in the registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (which is perhaps why the current way the affine transform is done even passed SPRT) (see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details) and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.

The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143

The training utilized 2 datasets.

    dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
    dataset B - as described in ba01f4b954

The training process was as follows:

    train on dataset A for 350 epochs, take the best net in terms of elo at 20k nodes per move (it's fine to take anything from later stages of training).
    convert the .ckpt to .pt
    --resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.

The first training command:

python3 train.py \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

The second training command:

python3 serialize.py \
    --features=HalfKAv2_hm^ \
    ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
    ../nnue-pytorch-training/experiment_$1/base/base.pt

python3 train.py \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=0.8 \
    --max_epochs=600 \
    --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4

LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea

LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
2021-08-15 12:05:43 +02:00
SFisGOD
73ef5b8c4a Update default net to nn-46832cfbead3.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6100e7f096b86d98abf6a832
Parameters: A total of 256 net weights and 8 net biases were tuned (output layer)
Base net: nn-56a5f1c4173a.nnue
New net: nn-ec3c8e029926.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/610733caafad2da4f4ae3da7
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-ec3c8e029926.nnue
New net: nn-46832cfbead3.nnue

STC:
LLR: 2.98 (-2.94,2.94) <-0.50,2.50>
Total: 50520 W: 3953 L: 3765 D: 42802
Ptnml(0-2): 138, 3063, 18678, 3235, 146
https://tests.stockfishchess.org/tests/view/610a79692a8a49ac5be793f4

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 57256 W: 1723 L: 1566 D: 53967
Ptnml(0-2): 12, 1442, 25568, 1589, 17
https://tests.stockfishchess.org/tests/view/610ac5bb2a8a49ac5be79434

Closes https://github.com/official-stockfish/Stockfish/pull/3642

Bench: 5359314
2021-08-05 08:52:07 +02:00
SFisGOD
e973eee919 Update default net to nn-56a5f1c4173a.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/60fd24efd8a6b65b2f3a796e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
New best values: Half of the changes from the tuning run
New net: nn-5992d3ba79f3.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/60fec7d6d8a6b65b2f3a7aa2
Parameters: A total of 128 net biases were tuned (hidden layer 1)
New best values: Half of the changes from the tuning run
New net: nn-56a5f1c4173a.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140392 W: 10863 L: 10578 D: 118951
Ptnml(0-2): 347, 8754, 51718, 9021, 356
https://tests.stockfishchess.org/tests/view/610037e396b86d98abf6a79e

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 14216 W: 454 L: 355 D: 13407
Ptnml(0-2): 4, 323, 6356, 420, 5
https://tests.stockfishchess.org/tests/view/61019995afad2da4f4ae3a3c

Closes #3633

Bench: 4801359
2021-07-29 07:35:13 +02:00
SFisGOD
237ed1ef8f Update default net to nn-26abeed38351.nnue
SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891

New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (approximate real TC is 2.5 seconds)
The rest is the same as described in #3593

The change from nodestime=600 to 300 was suggested by gekkehenker to prevent time losses for some slow workers
SFisGOD@94cd757#commitcomment-53324840

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9

Closes https://github.com/official-stockfish/Stockfish/pull/3630

Bench:  5124774
2021-07-26 07:52:59 +02:00
MichaelB7
b939c80513 Update the default net to nn-76a8a7ffb820.nnue.
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), building on top of previous developments by restarting from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:

The initial net nn-d8609abe8caf.nnue was trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then training, continuing from the master net, with lambda=0.2 and a sampling ratio of 1, starting with LR=2e-3 and dropping the LR by a factor of 0.5 until it reached LR=5e-4. in_scaling is set to 361. No other significant changes were made to the pytorch trainer.

Training data gen command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (note that this only trains for 20 epochs; train repeatedly until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.

Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:

The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run is thus started from Sergio Vieri's net nn-d8609abe8caf.nnue

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5, calculated as follows: 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro-refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro-optimization, I set the checkpoint period to "5" in train.py, which changes the checkpoint output so that only every 5th checkpoint file is created.
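
A quick check of that LR arithmetic (an illustration, not part of model.py):

```python
# 360 decays at gamma=0.992 starting from the usual 1.5e-3:
lr = 1.5e-3 * 0.992 ** 360
print(f"{lr:.2e}")  # ~8.3e-05, i.e. the quoted LR = 8.0e-5 up to rounding
```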

The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.

passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17

passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100.

closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
2021-07-24 18:04:59 +02:00
SFisGOD
8fc297c506 Update default net to nn-9e3c6298299a.nnue
Optimization of nn-956480d8378f.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60da2bf63beab81350ac9fe7

Same method as described in PR #3593

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17792 W: 1525 L: 1372 D: 14895
Ptnml(0-2): 28, 1156, 6401, 1257, 54
https://tests.stockfishchess.org/tests/view/60deffc59ea99d7c2d693c19

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36544 W: 1245 L: 1109 D: 34190
Ptnml(0-2): 12, 988, 16139, 1118, 15
https://tests.stockfishchess.org/tests/view/60df11339ea99d7c2d693c22

closes https://github.com/official-stockfish/Stockfish/pull/3601

Bench: 4687476
2021-07-03 10:03:32 +02:00
SFisGOD
49283d3a66 Update default net to nn-3475407dc199.nnue
Optimization of eight subnetwork output layers of Michael's nn-190f102a22c3.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60d5510642a522cc50282ef3

Parameters: A total of 256 net weights and 8 net biases were tuned
New best values: The raw values at the end of the tuning run were used (800k games, 5 seconds TC)
Settings: default ck value and SPSA A is 30,000 (3.75% of the total number of games)

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 29064 W: 2435 L: 2269 D: 24360
Ptnml(0-2): 72, 1857, 10505, 2029, 69
https://tests.stockfishchess.org/tests/view/60d8ea123beab81350ac9eb6

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 61848 W: 2055 L: 1884 D: 57909
Ptnml(0-2): 18, 1708, 27310, 1861, 27
https://tests.stockfishchess.org/tests/view/60d8f0393beab81350ac9ec6

closes https://github.com/official-stockfish/Stockfish/pull/3593

Bench: 4770936
2021-06-28 21:31:58 +02:00
MichaelB7
b94a651878 Make net nn-956480d8378f.nnue the default
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_18 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from a previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

passed STC:
https://tests.stockfishchess.org/tests/view/60d0c0a7a8ec07dc34c072b2
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 18440 W: 1693 L: 1531 D: 15216
Ptnml(0-2): 67, 1225, 6464, 1407, 57

passed LTC:
https://tests.stockfishchess.org/tests/view/60d762793beab81350ac9d72
LLR: 2.98 (-2.94,2.94) <0.50,3.50>
Total: 93120 W: 3152 L: 2933 D: 87035
Ptnml(0-2): 48, 2581, 41076, 2814, 41

passed LTC (rebased branch to current master):
https://tests.stockfishchess.org/tests/view/60d85eeb3beab81350ac9e2b
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 42688 W: 1347 L: 1206 D: 40135
Ptnml(0-2): 14, 1097, 18981, 1238, 14.

closes https://github.com/official-stockfish/Stockfish/pull/3592

Bench: 4906727
2021-06-28 21:20:05 +02:00
MichaelB7
9b82414b67 Make net nn-190f102a22c3.nnue the default net.
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_17 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from the previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

passed LTC
https://tests.stockfishchess.org/tests/view/60d09f52b4c17000d679517f
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 32184 W: 1100 L: 970 D: 30114
Ptnml(0-2): 10, 878, 14193, 994, 17

passed STC
https://tests.stockfishchess.org/tests/view/60d086c02114332881e7368e
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11360 W: 1056 L: 906 D: 9398
Ptnml(0-2): 25, 735, 4026, 853, 41

closes https://github.com/official-stockfish/Stockfish/pull/3576

Bench: 4631244
2021-06-21 23:16:55 +02:00
MichaelB7
ba01f4b954 Make net nn-75980ca503c6.nnue the default.
trained with the Python command

c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_10 --resume-from-model ./pt/nn-3b20abec10c1.pt
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

Net nn-3b20abec10c1.nnue was chosen as the --resume-from-model with the idea that, through learning, the manually hex-edited values would be absorbed and would not need to be manually adjusted going forward. They would also be fine-tuned by the learning process.

passed STC:
https://tests.stockfishchess.org/tests/view/60cdf91e457376eb8bcab66f
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18256 W: 1639 L: 1479 D: 15138
Ptnml(0-2): 59, 1179, 6505, 1313, 72

passed LTC:
https://tests.stockfishchess.org/tests/view/60ce2166457376eb8bcab6e1
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 18792 W: 654 L: 542 D: 17596
Ptnml(0-2): 9, 490, 8291, 592, 14

closes https://github.com/official-stockfish/Stockfish/pull/3570

Bench: 5020972
2021-06-19 23:24:35 +02:00
Tomasz Sobczyk
2e745956c0 Change trace with NNUE eval support
This patch adds some more output to the `eval` command. It adds a board display
with estimated piece values (method is remove-piece, evaluate, put-piece), and
splits the NNUE evaluation into (psqt, layers) contributions for each bucket of the NNUE net.
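
In pseudocode, the remove-piece/evaluate/put-piece method amounts to the sketch below (hypothetical position API; the actual implementation is C++ inside the eval trace):

```python
def derived_piece_value(pos, sq, evaluate):
    """Estimate a piece's value as the eval swing caused by removing
    it. `pos.remove_piece`/`pos.put_piece` are hypothetical helpers."""
    with_piece = evaluate(pos)
    piece = pos.remove_piece(sq)
    without_piece = evaluate(pos)
    pos.put_piece(piece, sq)            # restore the board
    return with_piece - without_piece   # shown per square in the table
```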

Example:

```

./stockfish
position fen 3Qb1k1/1r2ppb1/pN1n2q1/Pp1Pp1Pr/4P2p/4BP2/4B1R1/1R5K b - - 11 40
eval

 Contributing terms for the classical eval:
+------------+-------------+-------------+-------------+
|    Term    |    White    |    Black    |    Total    |
|            |   MG    EG  |   MG    EG  |   MG    EG  |
+------------+-------------+-------------+-------------+
|   Material |  ----  ---- |  ----  ---- | -0.73 -1.55 |
|  Imbalance |  ----  ---- |  ----  ---- | -0.21 -0.17 |
|      Pawns |  0.35 -0.00 |  0.19 -0.26 |  0.16  0.25 |
|    Knights |  0.04 -0.08 |  0.12 -0.01 | -0.08 -0.07 |
|    Bishops | -0.34 -0.87 | -0.17 -0.61 | -0.17 -0.26 |
|      Rooks |  0.12  0.00 |  0.08  0.00 |  0.04  0.00 |
|     Queens |  0.00  0.00 | -0.27 -0.07 |  0.27  0.07 |
|   Mobility |  0.84  1.76 |  0.01  0.66 |  0.83  1.10 |
|King safety | -0.99 -0.17 | -0.72 -0.10 | -0.27 -0.07 |
|    Threats |  0.27  0.27 |  0.73  0.86 | -0.46 -0.59 |
|     Passed |  0.00  0.00 |  0.79  0.82 | -0.79 -0.82 |
|      Space |  0.61  0.00 |  0.24  0.00 |  0.37  0.00 |
|   Winnable |  ----  ---- |  ----  ---- |  0.00 -0.03 |
+------------+-------------+-------------+-------------+
|      Total |  ----  ---- |  ----  ---- | -1.03 -2.14 |
+------------+-------------+-------------+-------------+

 NNUE derived piece values:
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |   Q   |   b   |       |   k   |       |
|       |       |       | +12.4 | -1.62 |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   r   |       |       |   p   |   p   |   b   |       |
|       | -3.89 |       |       | -0.84 | -1.19 | -3.32 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   p   |   N   |       |   n   |       |       |   q   |       |
| -1.81 | +3.71 |       | -4.82 |       |       | -5.04 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|   P   |   p   |       |   P   |   p   |       |   P   |   r   |
| +1.16 | -0.91 |       | +0.55 | +0.12 |       | +0.50 | -4.02 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   P   |       |       |   p   |
|       |       |       |       | +2.33 |       |       | +1.17 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |   P   |       |       |
|       |       |       |       | +4.79 | +1.54 |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |       |       |       |   B   |       |   R   |       |
|       |       |       |       | +4.54 |       | +6.03 |       |
+-------+-------+-------+-------+-------+-------+-------+-------+
|       |   R   |       |       |       |       |       |   K   |
|       | +4.81 |       |       |       |       |       |       |
+-------+-------+-------+-------+-------+-------+-------+-------+

 NNUE network contributions (Black to move)
+------------+------------+------------+------------+
|   Bucket   |  Material  | Positional |   Total    |
|            |   (PSQT)   |  (Layers)  |            |
+------------+------------+------------+------------+
|  0         |  +  0.32   |  -  1.46   |  -  1.13   |
|  1         |  +  0.25   |  -  0.68   |  -  0.43   |
|  2         |  +  0.46   |  -  1.72   |  -  1.25   |
|  3         |  +  0.55   |  -  1.80   |  -  1.25   |
|  4         |  +  0.48   |  -  1.77   |  -  1.29   |
|  5         |  +  0.40   |  -  2.00   |  -  1.60   |
|  6         |  +  0.57   |  -  2.12   |  -  1.54   | <-- this bucket is used
|  7         |  +  3.38   |  -  2.00   |  +  1.37   |
+------------+------------+------------+------------+

Classical evaluation   -1.00 (white side)
NNUE evaluation        +1.54 (white side)
Final evaluation       +2.38 (white side) [with scaled NNUE, hybrid, ...]

```

Also renames the export_net() function to save_eval() while there.

closes https://github.com/official-stockfish/Stockfish/pull/3562

No functional change
2021-06-19 11:57:01 +02:00
Joost VandeVondele
adfb23c029 Make net nn-50144f835024.nnue the default
trained with the Python command

c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_8 --resume-from-model ./pt/nn-6ad41a9207d0.pt
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE_2 binpack and one large Training_Data binpack of approximately equal size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

nn-6ad41a9207d0.pt was derived from a net vondele ran which passed STC quickly,
but faltered in LTC. https://tests.stockfishchess.org/tests/view/60cba666457376eb8bcab443

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18792 W: 2068 L: 1889 D: 14835
Ptnml(0-2): 82, 1480, 6117, 1611, 106
https://tests.stockfishchess.org/tests/view/60ccda8b457376eb8bcab568

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 11376 W: 574 L: 454 D: 10348
Ptnml(0-2): 4, 412, 4747, 510, 15
https://tests.stockfishchess.org/tests/view/60ccf952457376eb8bcab58d

closes https://github.com/official-stockfish/Stockfish/pull/3568

Bench: 4900906
2021-06-18 23:50:26 +02:00
SFisGOD
86afb6a7cf Update default net to nn-aa9d7eeb397e.nnue
Optimization of vondele's nn-33c9d39e5eb6.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60ca68be457376eb8bcab28b
Setting: ck values are default based on how large the parameters are
The new values for this net are the raw values at the end of the tuning (80k games)

The significant changes are in buckets 1 and 2 (5-12 pieces), so the main difference compared to nn-33c9 is in endgame play. There is also a change in bucket 7 (29-32 pieces), but it is not as substantial as the changes in buckets 1 and 2. If we interpret the changes based on an experiment a few months ago, this new net plays more optimistically during endgames and less optimistically during openings.
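
For context, the output bucket is selected from the total piece count as (popcount(pos.pieces()) - 1) / 4 (see the architecture commit further down this log), which is how the piece ranges quoted above fall out:

```python
def bucket(piece_count):
    # bucket index used to pick the PSQT value and subnetwork
    return (piece_count - 1) // 4

assert {bucket(n) for n in range(5, 13)} == {1, 2}    # 5-12 pieces
assert {bucket(n) for n in range(29, 33)} == {7}      # 29-32 pieces
```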

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 49504 W: 4246 L: 4053 D: 41205
Ptnml(0-2): 140, 3282, 17749, 3407, 174
https://tests.stockfishchess.org/tests/view/60cbd752457376eb8bcab478

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 88720 W: 4926 L: 4651 D: 79143
Ptnml(0-2): 105, 4048, 35793, 4295, 119
https://tests.stockfishchess.org/tests/view/60cc7828457376eb8bcab4fa

closes https://github.com/official-stockfish/Stockfish/pull/3566

Bench: 4758885
2021-06-18 21:29:14 +02:00
ap
14b673d90f New default net nn-3b20abec10c1.nnue
This net was created by @pleomati, who used a hex editor to manually edit
10 randomly chosen values in the LCSFNet10 net (nn-6ad41a9207d0.nnue) to
create this one. The LCSFNet10 net was trained by Joost VandeVondele from
a dataset combining Stockfish games and Leela games (16x10^9 positions from
SF self-play at depth 9, and 6.3x10^9 positions from Leela games, so overall
72% of Stockfish positions and 28% of Leela positions).

passed STC 10+0.1:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 50888 W: 5881 L: 5654 D: 39353
Ptnml(0-2): 281, 4290, 16085, 4497, 291
https://tests.stockfishchess.org/tests/view/60cbfa68457376eb8bcab49a

passed LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 25480 W: 1498 L: 1338 D: 22644
Ptnml(0-2): 36, 1155, 10193, 1325, 31
https://tests.stockfishchess.org/tests/view/60cc4af8457376eb8bcab4d4

closes https://github.com/official-stockfish/Stockfish/pull/3564

Bench: 4904930
2021-06-18 20:00:13 +02:00
Joost VandeVondele
8ec9e10866 New default net nn-33c9d39e5eb6.nnue
Like the previous net, this net is trained on Leela games as provided by borg.
See also https://lczero.org/blog/2021/06/the-importance-of-open-data/

The particular data set, which is a mix of T60 and T74 data, is now available as a single binpack:
https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing

The training command was:
python train.py ../../training_data_pylon.binpack ../../training_data_pylon.binpack --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 10 --features=HalfKAv2^   --lambda=1.0  --max_epochs=440 --seed $RANDOM --default_root_dir exp/run_2

passed STC:
https://tests.stockfishchess.org/tests/view/60c887cb457376eb8bcab054
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 12792 W: 1483 L: 1311 D: 9998
Ptnml(0-2): 62, 989, 4131, 1143, 71

passed LTC:
https://tests.stockfishchess.org/tests/view/60c8e5c4457376eb8bcab0f0
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 11272 W: 601 L: 477 D: 10194
Ptnml(0-2): 9, 421, 4657, 535, 14

also had strong LTC performance against another strong net of the series:
https://tests.stockfishchess.org/tests/view/60c8c40d457376eb8bcab0c6

closes https://github.com/official-stockfish/Stockfish/pull/3557

Bench: 5032320
2021-06-15 22:08:40 +02:00
JWmer
f8c779dbe5 Update default net to nn-8e47cf062333.nnue
This net is the result of training on data used by the Leela project. More precisely,
we shuffled T60 and T74 data kindly provided by borg (for different Tnn, the data is
a result of Leela selfplay with differently sized Leela nets).

The data is available at vondele's google drive:
https://drive.google.com/drive/folders/1mftuzYdl9o6tBaceR3d_VBQIrgKJsFpl.

The Leela data comes in small chunks of .binpack files. To shuffle them, we simply
used a small python script to randomly rename the files, and then concatenated them
using `cat`. As validation data we picked a file of T60 data. We will further investigate
T74 data.
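
The shuffling script itself is not part of the commit; a minimal sketch of the described approach (random renames, then concatenation; the directory name is hypothetical) might look like:

```python
import os, uuid

# Give every chunk a random name so that lexicographic order,
# and hence the later `cat`, is effectively a shuffle.
for name in os.listdir("leela-chunks"):          # hypothetical directory
    if name.endswith(".binpack"):
        os.rename(os.path.join("leela-chunks", name),
                  os.path.join("leela-chunks", uuid.uuid4().hex + ".binpack"))
# then: cat leela-chunks/*.binpack > training_data.binpack
```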

The training for the NNUE architecture used 200 epochs with the Python trainer from
the Stockfish project. Unlike the previous run we tried with this data, this run does
not have adjusted scaling — not because we didn't want to, but because we forgot.
However, this training randomly skips 40% more positions than the previous run. The loss
was very spiky and decreased more slowly than usual.

Training loss: https://github.com/official-stockfish/images/blob/main/training-loss-8e47cf062333.png
Validation loss: https://github.com/official-stockfish/images/blob/main/validation-loss-8e47cf062333.png

This is the exact training command:
python train.py --smart-fen-skipping --random-fen-skipping 14 --batch-size 16384 --threads 4 --num-workers 4 --gpus 1 trainingdata\training_data.binpack validationdata\val.binpack

---

10k STC result:
ELO: 3.61 +-3.3 (95%) LOS: 98.4%
Total: 10000 W: 1241 L: 1137 D: 7622
Ptnml(0-2): 68, 841, 3086, 929, 76
https://tests.stockfishchess.org/tests/view/60c67e50457376eb8bcaae70

10k LTC result:
ELO: 2.71 +-2.4 (95%) LOS: 98.8%
Total: 10000 W: 659 L: 581 D: 8760
Ptnml(0-2): 22, 485, 3900, 579, 14
https://tests.stockfishchess.org/tests/view/60c69deb457376eb8bcaae98

Passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9648 W: 685 L: 545 D: 8418
Ptnml(0-2): 22, 448, 3740, 596, 18
https://tests.stockfishchess.org/tests/view/60c6d41c457376eb8bcaaecf

---

closes https://github.com/official-stockfish/Stockfish/pull/3550

Bench: 4877339
2021-06-14 09:24:07 +02:00
Joost VandeVondele
d53071eff4 Update default net to nn-7e66505906a6.nnue
Trained with pytorch using the master branch and recommended settings,
the data used is the previous 64B binpack enhanced with a 2B binpack
generated using an opening book of positions for which the static eval
is significantly different from d9 search.

book           : https://drive.google.com/file/d/1rHcKY5rv34kwku6g89OhnE8Bkfq3UWau/view?usp=sharing
book generation: 3ce43ab0c4
binpack        : https://drive.google.com/file/d/1rHcKY5rv34kwku6g89OhnE8Bkfq3UWau/view?usp=sharing

-------

Data generation command:

generate_training_data depth 9 count 31250000 random_multi_pv 2 random_multi_pv_diff 100 random_move_max_ply 8 random_move_count 3 set_recommended_uci_options eval_limit 32000 output_file_name output.binpack book wrongNNUE.epd seed ${RANDOM}${RANDOM}

Training command:

python train.py ../../all_d9_fishd9_d8_d10_wrong_shuffle.binpack ../../all_d9_fishd9_d8_d10_wrong_shuffle.binpack  --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^   --lambda=1.0  --max_epochs=400 --seed $RANDOM --default_root_dir exp/run_5

-------

passed STC:
https://tests.stockfishchess.org/tests/view/60b7c79a457376eb8bcaa104
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 64592 W: 6254 L: 6028 D: 52310
Ptnml(0-2): 255, 4785, 22020, 4951, 285

passed LTC:
https://tests.stockfishchess.org/tests/view/60b85307457376eb8bcaa182
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 45560 W: 1998 L: 1826 D: 41736
Ptnml(0-2): 36, 1604, 19335, 1762, 43

closes https://github.com/official-stockfish/Stockfish/pull/3521

Bench: 4364128
2021-06-03 16:25:44 +02:00
Stéphane Nicolet
f193778446 Do not use lazy evaluation inside NNUE
This simplification patch implements two changes:

1. it simplifies away the so-called "lazy" path in the NNUE evaluation internals,
   where we trusted the psqt head alone to avoid the costly "positional" head in
   some cases;
2. it raises a little bit the NNUEThreshold1 in evaluate.cpp (from 682 to 800),
   which increases the limit where we switched from NNUE eval to Classical eval.

Both effects increase the number of positional evaluations done by our new net
architecture, but the results of our tests below seem to indicate that the loss
of speed will be compensated by the gain of eval quality.

STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 26280 W: 2244 L: 2137 D: 21899
Ptnml(0-2): 72, 1755, 9405, 1810, 98
https://tests.stockfishchess.org/tests/view/60ae73f112066fd299795a51

LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 20592 W: 750 L: 677 D: 19165
Ptnml(0-2): 9, 614, 8980, 681, 12
https://tests.stockfishchess.org/tests/view/60ae88e812066fd299795a82

closes https://github.com/official-stockfish/Stockfish/pull/3503

Bench: 3817907
2021-05-27 01:21:56 +02:00
Tomasz Sobczyk
9d53129075 Expose the lazy threshold for the feature transformer PSQT as a parameter.
Definition of the lazy threshold moved to evaluate.cpp where all others are.
Lazy threshold only used for real searches, not used for the "eval" call.
This preserves the purity of NNUE evaluation, which is useful to verify
consistency between the engine and the NNUE trainer.

closes https://github.com/official-stockfish/Stockfish/pull/3499

No functional change
2021-05-25 21:40:51 +02:00
Stéphane Nicolet
a2f01c07eb Sometimes change the (materialist, positional) balance
Our new nets output two values for the side to move in the last layer.
We can interpret the first value as a material evaluation of the
position, and the second one as the dynamic, positional value of the
location of pieces.

This patch changes the balance for the (materialist, positional) parts
of the score from (128, 128) to (121, 135) when the piece material is
equal between the two players, but keeps the standard (128, 128) balance
when one player is at least an exchange up.
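
A sketch of the resulting blend (illustrative Python; the engine code works on its internal Value type):

```python
def blended_eval(materialist, positional, material_equal):
    # master weights both parts (128, 128)/128; this patch shifts the
    # weighting to (121, 135)/128 when piece material is level
    wm, wp = (121, 135) if material_equal else (128, 128)
    return (materialist * wm + positional * wp) // 128
```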

Passed STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 15936 W: 1421 L: 1266 D: 13249
Ptnml(0-2): 37, 1037, 5694, 1134, 66
https://tests.stockfishchess.org/tests/view/60a82df9ce8ea25a3ef0408f

Passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13904 W: 516 L: 410 D: 12978
Ptnml(0-2): 4, 374, 6088, 484, 2
https://tests.stockfishchess.org/tests/view/60a8bbf9ce8ea25a3ef04101

closes https://github.com/official-stockfish/Stockfish/pull/3492

Bench: 3856635
2021-05-22 21:09:22 +02:00
Joost VandeVondele
fb2d175f97 Update default net to nn-7756374aaed3.nnue
trained with pytorch using the master branch and recommended settings,
same data set as previously used:

python train.py ../../all_d9_fishd9_d8_d10_shuffle.binpack ../../all_d9_fishd9_d8_d10_shuffle.binpack \
        --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 \
        --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^   --lambda=1.0 \
        --max_epochs=400 --seed $RANDOM --default_root_dir exp/run_8

passed STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 21424 W: 2078 L: 1907 D: 17439
Ptnml(0-2): 80, 1512, 7385, 1627, 108
https://tests.stockfishchess.org/tests/view/60a6c749ce8ea25a3ef03f4d

passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 67912 W: 2851 L: 2648 D: 62413
Ptnml(0-2): 40, 2348, 28984, 2537, 47
https://tests.stockfishchess.org/tests/view/60a722ecce8ea25a3ef03fb9

closes https://github.com/official-stockfish/Stockfish/pull/3489

Bench: 3779522
2021-05-22 07:35:39 +02:00
Tomasz Sobczyk
e8d64af123 New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters,
as obtained by a new pytorch trainer.

The network is already very strong at short TC, without regression at longer TC,
and has potential for further improvements.

https://tests.stockfishchess.org/tests/view/60a159c65085663412d0921d
TC: 10s+0.1s, 1 thread
ELO: 21.74 +-3.4 (95%) LOS: 100.0%
Total: 10000 W: 1559 L: 934 D: 7507
Ptnml(0-2): 38, 701, 2972, 1176, 113

https://tests.stockfishchess.org/tests/view/60a187005085663412d0925b
TC: 60s+0.6s, 1 thread
ELO: 5.85 +-1.7 (95%) LOS: 100.0%
Total: 20000 W: 1381 L: 1044 D: 17575
Ptnml(0-2): 27, 885, 7864, 1172, 52

https://tests.stockfishchess.org/tests/view/60a2beede229097940a03806
TC: 20s+0.2s, 8 threads
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 34272 W: 1610 L: 1452 D: 31210
Ptnml(0-2): 30, 1285, 14350, 1439, 32

https://tests.stockfishchess.org/tests/view/60a2d687e229097940a03c72
TC: 60s+0.6s, 8 threads
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 45544 W: 1262 L: 1214 D: 43068
Ptnml(0-2): 12, 1129, 20442, 1177, 12

The network has been trained (by vondele) using the https://github.com/glinscott/nnue-pytorch/ trainer (started by glinscott),
specifically the branch https://github.com/Sopel97/nnue-pytorch/tree/experiment_56.
The data used comprise 64 billion positions (193GB total) generated and scored with the current master net:
d8: https://drive.google.com/file/d/1hOOYSDKgOOp38ZmD0N4DV82TOLHzjUiF/view?usp=sharing
d9: https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
d10: https://drive.google.com/file/d/1ZC5upzBYMmMj1gMYCkt6rCxQG0GnO3Kk/view?usp=sharing
fishtest_d9: https://drive.google.com/file/d/1GQHt0oNgKaHazwJFTRbXhlCN3FbUedFq/view?usp=sharing

This network also contains a few architectural changes with respect to the current master:

    Size changed from 256x2-32-32-1 to 512x2-16-32-1
        ~15-20% slower
        ~2x larger
        adds a special path for 16 valued ClippedReLU
        fixes affine transform code for 16 inputs/outputs, by using InputDimensions instead of PaddedInputDimensions
            this is safe now because the inputs are processed in groups of 4 in the current affine transform code
    The feature set changed from HalfKP to HalfKAv2
        Includes information about the kings like HalfKA
        Packs king features better, resulting in 8% size reduction compared to HalfKA
    The board is flipped for black's perspective, instead of rotated like in the current master
    PSQT values for each feature
        the feature transformer now outputs a part that is forwarded directly to the output and allows learning piece values more directly than the previous network architecture (see the sketch after this list). The effect is visible for high imbalance positions, where the current master network outputs evaluations skewed towards zero.
        8 PSQT values per feature, chosen based on (popcount(pos.pieces()) - 1) / 4
        initialized to classical material values on the start of the training
    8 subnetworks (512x2->16->32->1), chosen based on (popcount(pos.pieces()) - 1) / 4
        only one subnetwork is evaluated for any position, no or marginal speed loss

A diagram of the network is available: https://user-images.githubusercontent.com/8037982/118656988-553a1700-b7eb-11eb-82ef-56a11cbebbf2.png
A more complete description: https://github.com/glinscott/nnue-pytorch/blob/master/docs/nnue.md
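
A rough sketch of how a single position is evaluated under this architecture (hypothetical names; see the diagram and docs above for the real data flow):

```python
def nnue_evaluate(piece_count, psqt, layer_stacks, transformed):
    """psqt[b] is the bucket's PSQT sum from the feature transformer;
    layer_stacks[b] is the bucket's 512x2->16->32->1 subnetwork."""
    b = (piece_count - 1) // 4            # one of the 8 buckets
    return psqt[b] + layer_stacks[b](transformed)
```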

closes https://github.com/official-stockfish/Stockfish/pull/3474

Bench: 3806488
2021-05-18 18:06:23 +02:00
Tomasz Sobczyk
58054fd0fa Exporting the currently loaded network file
This PR adds an ability to export any currently loaded network.
The export_net command now takes an optional filename parameter.
If the loaded net is not the embedded net, the filename parameter is required.

Two changes were required to support this:

* the "architecture" string, which is really just a some kind of description in the net, is now saved into netDescription on load and correctly saved on export.
* the AffineTransform scrambles weights for some architectures and sparsifies them, such that retrieving the index is hard. This is solved by having a temporary scrambled<->unscrambled index lookup table when loading the network, and the actual index is saved for each individual weight that makes it to canSaturate16. This increases the size of the canSaturate16 entries by 6 bytes.

closes https://github.com/official-stockfish/Stockfish/pull/3456

No functional change
2021-05-11 19:36:11 +02:00
Tomasz Sobczyk
ca250e969c Add a UCI-level command "export_net".
This command writes the embedded net to the file `EvalFileDefaultName`.
If there is no embedded net the command does nothing.

fixes #3453

closes https://github.com/official-stockfish/Stockfish/pull/3454

No functional change
2021-05-07 09:45:08 +02:00
Dieter Dobbelaere
7ffae17f85 Add Stockfish namespace.
fixes #3350 and is a small cleanup that might make it easier to use SF
in separate projects, like a NNUE trainer or similar.

closes https://github.com/official-stockfish/Stockfish/pull/3370

No functional change.
2021-03-07 14:26:54 +01:00
Joost VandeVondele
c4d67d77c9 Update copyright years
No functional change
2021-01-08 17:04:23 +01:00
SFisGOD
7364006757 Update default net to nn-62ef826d1a6d.nnue
Include scaling change as suggested by Dietrich Kappe,
the one who trained the net for Komodo. According to him,
some nets may require different scaling in order to utilize their full strength.

STC:
LLR: 2.93 (-2.94,2.94) {-0.25,1.25}
Total: 99856 W: 9669 L: 9401 D: 80786
Ptnml(0-2): 374, 7468, 34037, 7614, 435
https://tests.stockfishchess.org/tests/view/5fc2697642a050a89f02c8ec

LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 29840 W: 1220 L: 1081 D: 27539
Ptnml(0-2): 10, 969, 12827, 1100, 14
https://tests.stockfishchess.org/tests/view/5fc2ea5142a050a89f02c957

Bench: 3561701
2020-11-29 16:54:06 +01:00
SFisGOD
32edb1d009 Update default net to nn-c3ca321c51c9.nnue
Optimization of the net biases of the 32 x 32 layer and the output layer.

Tuning of 32 x 32 layer (200k games, 5 seconds TC)
https://tests.stockfishchess.org/tests/view/5f9aaf266a2c112b60691c68

STC:
LLR: 2.95 (-2.94,2.94) {-0.25,1.25}
Total: 41848 W: 4665 L: 4461 D: 32722
Ptnml(0-2): 239, 3308, 13659, 3446, 272
https://tests.stockfishchess.org/tests/view/5fa5ef5a936c54e11ec9954f

LTC:
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 88008 W: 4045 L: 3768 D: 80195
Ptnml(0-2): 69, 3339, 36908, 3622, 66
https://tests.stockfishchess.org/tests/view/5fa62a78936c54e11ec99577

closes https://github.com/official-stockfish/Stockfish/pull/3220

Bench: 3649288
2020-11-08 08:36:16 +01:00
mstembera
dfc7f88650 Update default net to nn-cb26f10b1fd9.nnue
Result of https://tests.stockfishchess.org/tests/view/5f9a06796a2c112b60691c0f tuning.

STC
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 53712 W: 5776 L: 5561 D: 42375
Ptnml(0-2): 253, 4282, 17604, 4431, 286
https://tests.stockfishchess.org/tests/view/5f9c7bbc6a2c112b60691d4d

LTC
LLR: 2.97 (-2.94,2.94) {0.25,1.25}
Total: 80184 W: 4007 L: 3739 D: 72438
Ptnml(0-2): 58, 3302, 33130, 3518, 84
https://tests.stockfishchess.org/tests/view/5f9d01f06a2c112b60691d87

closes https://github.com/official-stockfish/Stockfish/pull/3209

bench: 3517795
2020-11-01 08:02:40 +01:00
SFisGOD
6328135264 Update default net to nn-2eb2e0707c2b.nnue
Optimization of the net weights of the 32 x 32 layer (1024 parameters) and net biases of the 512 x 32 layer (32 parameters) using SPSA.

Tuning of 32 x 32 Layer (800,000 games, 5 seconds time control):
https://tests.stockfishchess.org/tests/view/5f942040d3978d7e86f1aa05

Tuning of 512 x 32 Layer (80,000 games, 20 seconds time control):
https://tests.stockfishchess.org/tests/view/5f8f926d2c92c7fe3a8c608b

STC:
LLR: 2.96 (-2.94,2.94) {-0.25,1.25}
Total: 17336 W: 1918 L: 1754 D: 13664
Ptnml(0-2): 79, 1344, 5672, 1480, 93
https://tests.stockfishchess.org/tests/view/5f9882346a2c112b60691b34

LTC:
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 37304 W: 1822 L: 1651 D: 33831
Ptnml(0-2): 27, 1461, 15501, 1640, 23
https://tests.stockfishchess.org/tests/view/5f98a4b36a2c112b60691b40

closes https://github.com/official-stockfish/Stockfish/pull/3201

Bench: 3403528
2020-10-28 08:13:34 +01:00
mstembera
281d520cc2 Update default net to nn-eba324f53044.nnue
The new net is based on the previous net 04cf2b4ed1da, but with the biases
for the 1st hidden layer tuned with SPSA; see the SPSA session on fishtest here:
https://tests.stockfishchess.org/tests/view/5f875213dcdad978fe8c5211

Thanks to @vondele for writing out the net, see discussion in this thread:
432da86721

Passed STC:
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 15000 W: 1640 L: 1483 D: 11877
Ptnml(0-2): 50, 1183, 4908, 1278, 81
https://tests.stockfishchess.org/tests/view/5f8955e20fea1a44ec4f0a5d

Passed LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 81272 W: 3948 L: 3682 D: 73642
Ptnml(0-2): 64, 3194, 33856, 3456, 66
https://tests.stockfishchess.org/tests/view/5f89e8efeae8a6e60644d6e7

closes https://github.com/official-stockfish/Stockfish/pull/3187

Bench: 3762411
2020-10-18 13:43:26 +02:00
Joost VandeVondele
ba73f8ce0d Update default net to nn-04cf2b4ed1da.nnue
Further tune the net parameters, now the last but one layer (32x32).
To limit the number of parameters optimized, the network layer was
decomposed using SVD, and the singular values were treated
as parameters and tuned.

Tuning branch: https://github.com/vondele/Stockfish/tree/svdTune
Tuner: https://github.com/vondele/nevergrad4sf

passed STC:
https://tests.stockfishchess.org/tests/view/5f83e82f8ea73fb8ddf83e4e
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 8488 W: 944 L: 795 D: 6749
Ptnml(0-2): 39, 609, 2811, 734, 51

passed LTC:
https://tests.stockfishchess.org/tests/view/5f83f4118ea73fb8ddf83e66
LLR: 2.94 (-2.94,2.94) {0.25,1.25}
Total: 169016 W: 8043 L: 7589 D: 153384
Ptnml(0-2): 133, 6623, 70538, 7085, 129

closes https://github.com/official-stockfish/Stockfish/pull/3181

Bench: 3945198
2020-10-14 13:28:21 +02:00
SFisGOD
5efbaaba77 Update default net to nn-baeb9ef2d183.nnue
Further optimization of Sergio's nn-03744f8d56d8.nnue
This patch is the result of collaboration with Joost VandeVondele.

STC:
LLR: 2.96 (-2.94,2.94) {-0.25,1.25}
Total: 37000 W: 4145 L: 3947 D: 28908
Ptnml(0-2): 191, 3016, 11912, 3166, 215
https://tests.stockfishchess.org/tests/view/5f71e7983b22d6afa5069475

LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 60224 W: 2992 L: 2769 D: 54463
Ptnml(0-2): 48, 2420, 24956, 2637, 51
https://tests.stockfishchess.org/tests/view/5f722bb83b22d6afa506998f

closes https://github.com/official-stockfish/Stockfish/pull/3161

Bench: 3720073
2020-09-28 22:29:31 +02:00
Joost VandeVondele
36c2886302 Update default net to nn-04a843f8932e.nnue
an optimization of Sergio's nn-03744f8d56d8.nnue tuning the output layer (33 parameters) on game play.

WIP code to make layer parameters tunable is https://github.com/vondele/Stockfish/tree/optionOutput
Optimization itself is using https://github.com/vondele/nevergrad4sf
Writing of the modified net using WIP code based on the learner code https://github.com/vondele/Stockfish/tree/evalWrite

Most parameters in the output layer are changed only a little (~5 for int8_t).

passed STC:
https://tests.stockfishchess.org/tests/view/5f716f6b3b22d6afa506941a
LLR: 2.94 (-2.94,2.94) {-0.25,1.25}
Total: 15488 W: 1859 L: 1689 D: 11940
Ptnml(0-2): 79, 1260, 4917, 1388, 100

passed LTC:
https://tests.stockfishchess.org/tests/view/5f71908e3b22d6afa506942e
LLR: 2.93 (-2.94,2.94) {0.25,1.25}
Total: 8728 W: 518 L: 400 D: 7810
Ptnml(0-2): 7, 338, 3556, 456, 7

closes https://github.com/official-stockfish/Stockfish/pull/3158

Bench: 3789924
2020-09-28 16:55:40 +02:00
Stéphane Nicolet
9a64e737cf Small cleanups 12
- Clean signature of functions in namespace NNUE
- Add comment for countermove based pruning
- Remove bestMoveCount variable
- Add const qualifier to kpp_board_index array
- Fix spaces in get_best_thread()
- Fix indention in capture LMR code in search.cpp
- Rename TtmemDeleter to LargePageDeleter

Closes https://github.com/official-stockfish/Stockfish/pull/3063

No functional change
2020-09-21 10:41:10 +02:00
Sergio Vieri
7135678f71 Update default net to nn-03744f8d56d8.nnue
Equivalent to 20200914-1520

closes https://github.com/official-stockfish/Stockfish/pull/3123

Bench: 4222126
2020-09-15 07:21:04 +02:00
Sergio Vieri
9cc482c788 Update default net to nn-308d71810dff.nnue
equivalent to 20200903-1739

Net trained from scratch, so it has quite different features extracted compared to the previous net (82215d0fd0df).

STC:
LLR: 2.98 (-2.94,2.94) {-0.25,1.25}
Total: 108328 W: 14048 L: 13719 D: 80561
Ptnml(0-2): 842, 10039, 32062, 10390, 831
https://tests.stockfishchess.org/tests/view/5f50e053ba100690c5cc5f00

LTC:
LLR: 2.96 (-2.94,2.94) {0.25,1.25}
Total: 13872 W: 1059 L: 890 D: 11923
Ptnml(0-2): 30, 724, 5270, 871, 41
https://tests.stockfishchess.org/tests/view/5f51821fba100690c5cc5f36

closes https://github.com/official-stockfish/Stockfish/pull/3104

Bench: 3832716
2020-09-04 08:03:43 +02:00
Stéphane Nicolet
406979ea12 Embed default net, and simplify using non-default nets
covers the most important cases from the user perspective:

It embeds the default net in the binary, so a download of that binary will result
in a working engine with the default net. The engine will be functional in the default mode
without any additional user action.

It allows non-default nets to be used, which will be looked for in up to
three directories (working directory, location of the binary, and optionally a specific default directory).
This mechanism is also kept for those developers that use MSVC,
the one compiler that doesn't have an easy mechanism for embedding data.

It is possible to disable embedding, and instead specify a specific directory, e.g. linux distros might want to use
CXXFLAGS="-DNNUE_EMBEDDING_OFF -DDEFAULT_NNUE_DIRECTORY=/usr/share/games/stockfish/" make -j ARCH=x86-64 profile-build

passed STC non-regression:
https://tests.stockfishchess.org/tests/view/5f4a581c150f0aef5f8ae03a
LLR: 2.95 (-2.94,2.94) {-1.25,-0.25}
Total: 66928 W: 7202 L: 7147 D: 52579
Ptnml(0-2): 291, 5309, 22211, 5360, 293

closes https://github.com/official-stockfish/Stockfish/pull/3070

fixes https://github.com/official-stockfish/Stockfish/issues/3030

No functional change.
2020-08-29 21:56:00 +02:00
nodchip
84f3e86790 Add NNUE evaluation
This patch ports the efficiently updatable neural network (NNUE) evaluation to Stockfish.

Both the NNUE and the classical evaluations are available, and can be used to
assign a value to a position that is later used in alpha-beta (PVS) search to find the
best move. The classical evaluation computes this value as a function of various chess
concepts, handcrafted by experts, tested and tuned using fishtest. The NNUE evaluation
computes this value with a neural network based on basic inputs. The network is optimized
and trained on the evaluations of millions of positions at moderate search depth.

The NNUE evaluation was first introduced in shogi, and ported to Stockfish afterward.
It can be evaluated efficiently on CPUs, and exploits the fact that only parts
of the neural network need to be updated after a typical chess move.
[The nodchip repository](https://github.com/nodchip/Stockfish) provides additional
tools to train and develop the NNUE networks.

This patch is the result of contributions of various authors, from various communities,
including: nodchip, ynasu87, yaneurao (initial port and NNUE authors), domschl, FireFather,
rqs, xXH4CKST3RXx, tttak, zz4032, joergoster, mstembera, nguyenpham, erbsenzaehler,
dorzechowski, and vondele.

This new evaluation needed various changes to fishtest and the corresponding infrastructure,
for which tomtor, ppigazzini, noobpwnftw, daylen, and vondele are gratefully acknowledged.

The first networks have been provided by gekkehenker and sergiovieri, with the latter
net (nn-97f742aaefcd.nnue) being the current default.

The evaluation function can be selected at run time with the `Use NNUE` (true/false) UCI option,
provided the `EvalFile` option points to the network file (depending on the GUI, with full path).

The performance of the NNUE evaluation relative to the classical evaluation depends somewhat on
the hardware, and is expected to improve quickly, but is currently more than 80 Elo on fishtest:

60000 @ 10+0.1 th 1
https://tests.stockfishchess.org/tests/view/5f28fe6ea5abc164f05e4c4c
ELO: 92.77 +-2.1 (95%) LOS: 100.0%
Total: 60000 W: 24193 L: 8543 D: 27264
Ptnml(0-2): 609, 3850, 9708, 10948, 4885

40000 @ 20+0.2 th 8
https://tests.stockfishchess.org/tests/view/5f290229a5abc164f05e4c58
ELO: 89.47 +-2.0 (95%) LOS: 100.0%
Total: 40000 W: 12756 L: 2677 D: 24567
Ptnml(0-2): 74, 1583, 8550, 7776, 2017

At the same time, the impact on the classical evaluation remains minimal, causing no significant
regression:

sprt @ 10+0.1 th 1
https://tests.stockfishchess.org/tests/view/5f2906a2a5abc164f05e4c5b
LLR: 2.94 (-2.94,2.94) {-6.00,-4.00}
Total: 34936 W: 6502 L: 6825 D: 21609
Ptnml(0-2): 571, 4082, 8434, 3861, 520

sprt @ 60+0.6 th 1
https://tests.stockfishchess.org/tests/view/5f2906cfa5abc164f05e4c5d
LLR: 2.93 (-2.94,2.94) {-6.00,-4.00}
Total: 10088 W: 1232 L: 1265 D: 7591
Ptnml(0-2): 49, 914, 3170, 843, 68

The needed networks can be found at https://tests.stockfishchess.org/nns
It is recommended to use the default one as indicated by the `EvalFile` UCI option.

Guidelines for testing new nets can be found at
https://github.com/glinscott/fishtest/wiki/Creating-my-first-test#nnue-net-tests

Integration has been discussed in various issues:
https://github.com/official-stockfish/Stockfish/issues/2823
https://github.com/official-stockfish/Stockfish/issues/2728

The integration branch will be closed after the merge:
https://github.com/official-stockfish/Stockfish/pull/2825
https://github.com/official-stockfish/Stockfish/tree/nnue-player-wip

closes https://github.com/official-stockfish/Stockfish/pull/2912

This will be an exciting time for computer chess, looking forward to seeing the evolution of
this approach.

Bench: 4746616
2020-08-06 16:37:45 +02:00
Joost VandeVondele
c6839a2615 Small cleanups
closes https://github.com/official-stockfish/Stockfish/pull/2546

No functional change.
2020-03-01 09:31:58 +01:00
Alain SAVARD
09bef14c76 Update lists of authors and contributors
Preparing for version 11 of Stockfish: update lists of authors,
contributors giving CPU time to the fishtest framework, etc.

No functional change
2020-01-09 01:43:47 +01:00
SFisGOD
31ac538f96 A combo of parameter tweaks
Joint work by SFisGOD, xoroshiro and Chess13234.

This combo consists of the following tweaks:
Assorted bonuses and penalties by SFisGOD
Bishop and Rook PSQT by SFisGOD
Tempo Value by xoroshiro
Futility pruning by Chess13234

STC:
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 9005 W: 2082 L: 1882 D: 5041
http://tests.stockfishchess.org/tests/view/5c11628c0ebc5902ba119e90

LTC:
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 44207 W: 7451 L: 7157 D: 29599
http://tests.stockfishchess.org/tests/view/5c1172a40ebc5902ba119fa3

Bench: 3332460
2018-12-13 13:35:35 +01:00
Stéphane Nicolet
cf5d683408 Stockfish 10-beta
Preparation commit for the upcoming Stockfish 10 version, giving a chance to catch last minute feature bugs and evaluation regression during the one-week code freeze period. Also changing the copyright dates to include 2019.

No functional change
2018-11-19 11:18:21 +01:00
Marco Costalba
4bd24da161 Slight tidy up in endgame machinery
No functional change.
2018-07-22 17:55:41 +02:00
Ondrej Mosnáček
c8ef80f466 Use per-thread dynamic contempt
We now use per-thread dynamic contempt. This patch has the following
effects:

 * for Threads=1: **non-functional**
 * for Threads>1:
   * with MultiPV=1: **no regression, little to no ELO gain**
   * with MultiPV>1: **clear improvement over master**

First, I tried testing at standard MultiPV=1 play with [0,5] bounds.
This yielded 2 yellow and 1 red test:

5+0.05, Threads=5:
LLR: -2.96 (-2.94,2.94) [0.00,5.00]
Total: 82689 W: 16439 L: 16190 D: 50060
http://tests.stockfishchess.org/tests/view/5aa93a5a0ebc5902952892e6

5+0.05, Threads=8:
LLR: -2.96 (-2.94,2.94) [0.00,5.00]
Total: 27164 W: 4974 L: 4983 D: 17207
http://tests.stockfishchess.org/tests/view/5ab2639b0ebc5902a6fbefd5

5+0.5, Threads=16:
LLR: -2.97 (-2.94,2.94) [0.00,5.00]
Total: 41396 W: 7127 L: 7082 D: 27187
http://tests.stockfishchess.org/tests/view/5ab124220ebc59029516cb62

Then, I tested with Skill Level=17 (implicitly MutliPV=4), showing
a clear improvement:

5+0.05, Threads=5:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 3498 W: 1316 L: 1135 D: 1047
http://tests.stockfishchess.org/tests/view/5ab4b6580ebc5902932aeca2

Next, I tested the patch with MultiPV=1 again, this time checking for
non-regression ([-3, 1]):

5+0.5, Threads=5:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 65575 W: 12786 L: 12745 D: 40044
http://tests.stockfishchess.org/tests/view/5ab4e8500ebc5902932aecb3

Finally, I ran some tests with fixed number of games, checking if
reverting dynamic contempt gains more elo with Skill Level=17 (i.e.
MultiPV) than applying the "prevScore" fix and this patch. These tests
showed, that this patch gains 15 ELO when playing with Skill Level=17:

5+0.05, Threads=3, "revert dynamic contempt" vs. "WITHOUT this patch":
ELO: -11.43 +-4.1 (95%) LOS: 0.0%
Total: 20000 W: 7085 L: 7743 D: 5172
http://tests.stockfishchess.org/tests/view/5ab636450ebc590295d88536

5+0.05, Threads=3, "revert dynamic contempt" vs. "WITH this patch":
ELO: -26.42 +-4.1 (95%) LOS: 0.0%
Total: 20000 W: 6661 L: 8179 D: 5160
http://tests.stockfishchess.org/tests/view/5ab62e680ebc590295d88524

---
***FAQ***

**Why should this be commited?**
I believe that the gain for multi-thread MultiPV search is a sufficient
justification for this otherwise neutral change. I also believe this
implementation of dynamic contempt is more logical, although this may
be just my opinion.

**Why is per-thread contempt better at MultiPV?**
A likely explanation for the gain in MultiPV mode is that during
search each thread independently switches between rootMoves and via
the shared contempt score skews each other's evaluation.

**Why were the tests done with Skill Level=17?**
This was originally suggested by @Hanamuke, and the idea is that with
Skill Level, Stockfish sometimes also plays moves it thinks are slightly
sub-optimal, and thus the quality of all moves offered by the MultiPV
search is checked by the test.

**Why are the ELO differences so huge?**
This is most likely because of the nature of Skill Level mode --
since it is slower and weaker than normal mode, bugs in evaluation have
a much greater effect.

---

Closes https://github.com/official-stockfish/Stockfish/pull/1515.

No functional change -- in single thread mode.
2018-03-30 10:48:57 +02:00
Ronald de Man
759b3c79cf Mark all compile-time constants as constexpr.
To more clearly distinguish them from "const" local variables, this patch
defines compile-time local constants as constexpr. This is consistent with
the definition of PvNode as constexpr in search() and qsearch(). It also
makes the code more robust, since the compiler will now check that those
constants are indeed compile-time constants.

We can go even one step further and define all the evaluation and search
compile-time constants as constexpr.

In generate_castling() I replaced "K" with "step", since K was incorrectly
capitalised (in the Chess960 case).

In timeman.cpp I had to make the non-local constants MaxRatio and StealRatio
constexpr, since otherwise gcc would complain when calculating TMaxRatio and
TStealRatio. (Strangely, I did not have to make Is64Bit constexpr even though
it is used in ucioption.cpp in the calculation of constexpr MaxHashMB.)

I have renamed PieceCount to pieceCount in material.h, since the values of
the array are not compile-time constants.

Some compile-time constants in tbprobe.cpp were overlooked. Sides and MaxFile
are not compile-time constants, so were renamed to sides and maxFile.

Non-functional change.
2018-03-18 23:48:16 +01:00
Stefano Cardanobile
cb1324312d Introduce dynamic contempt
Make contempt dependent on the current score of the root position.

The idea is that we now use a linear formula like the following to decide
on the contempt to use during a search :

    contempt = x + y * eval

where x is the base contempt set by the user in the "Contempt" UCI option,
and y * eval is the dynamic part which adapts itself to the estimation of
the evaluation of the root position returned by the search. In this patch,
we use x = 18 centipawns by default, and the y * eval correction can go
from -20 centipawns if the root eval is less than -2.0 pawns, up to +20
centipawns when the root eval is more than 2.0 pawns.

To summarize, the new contempt goes from -0.02 to 0.38 pawns, depending if
Stockfish is losing or winning, with an average value of 0.18 pawns by default.
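
In centipawn terms the defaults amount to the following sketch (illustrative only; the engine's actual code clamps in its internal units):

```python
def dynamic_contempt(root_eval_cp, base_cp=18):
    # y * eval: reaches +/-20 cp at +/-2.0 pawns (200 cp), i.e. slope 0.1
    correction = max(-20, min(20, root_eval_cp // 10))
    return base_cp + correction  # ranges from -2 cp to +38 cp
```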

STC:
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 110052 W: 24614 L: 23938 D: 61500
http://tests.stockfishchess.org/tests/view/5a72e6020ebc590f2c86ea20

LTC:
LLR: 2.97 (-2.94,2.94) [0.00,5.00]
Total: 16470 W: 2896 L: 2705 D: 10869
http://tests.stockfishchess.org/tests/view/5a76c5b90ebc5902971a9830

A second match at LTC was organised against the current master:

ELO: 1.45 +-2.9 (95%) LOS: 84.0%
Total: 19369 W: 3350 L: 3269 D: 12750
http://tests.stockfishchess.org/tests/view/5a7acf980ebc5902971a9a2e

Finally, we checked that there is no apparent problem with multithreading,
despite the fact that some threads might have a slightly different contempt
level than the main thread.

Match of this version against master, both using 5 threads, time control 30+0.3:
ELO: 2.18 +-3.2 (95%) LOS: 90.8%
Total: 14840 W: 2502 L: 2409 D: 9929
http://tests.stockfishchess.org/tests/view/5a7bf3e80ebc5902971a9aa2

Include suggestions from Marco Costalba, Aram Tumanian, Ronald de Man, etc.

Bench: 5207156
2018-02-09 19:07:19 +01:00
Joost VandeVondele
9afa1d7330 New Year 2018
Adjust copyright headers.

No functional change.
2018-01-01 13:18:10 +01:00