Add a logarithmic term to the optimism computation, increase
the maximal optimism and lower the contempt offset.
This makes the optimism more dynamic, giving a boost in
balanced positions without skewing the evaluation too much in
unbalanced ones (although this version will enter panic mode
faster than the previous master when behind, trying to steer
towards a draw sooner when slightly behind). This helps because
optimism is in general a good thing, for instance at LTC, but
too much optimism quickly contaminates play.
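A purely illustrative sketch of the shape described above (the constants
here are made up and are not the patch's values):

    #include <algorithm>
    #include <cmath>
    #include <cstdlib>

    // Hypothetical optimism curve: a linear term plus the new logarithmic
    // term and a contempt offset, capped at a maximal optimism so that
    // unbalanced positions are not skewed too much.
    int optimism(int rootEval, int contempt)
    {
        double raw = 0.1 * rootEval                                    // linear term
                   + 10.0 * std::log(1.0 + std::abs(rootEval) / 100.0) // logarithmic term
                   + contempt / 4.0;                                   // contempt offset
        return std::min(30, std::max(-30, int(raw)));                  // maximal optimism
    }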
passed STC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 159343 W: 34489 L: 33588 D: 91266
http://tests.stockfishchess.org/tests/view/5a8db9340ebc590297cc85b6
passed LTC:
LLR: 2.97 (-2.94,2.94) [0.00,5.00]
Total: 47491 W: 7825 L: 7517 D: 32149
http://tests.stockfishchess.org/tests/view/5a9456a80ebc590297cc8a89
It must be mentioned that a version of the PR with contempt 0
did not pass STC [0,5]. The version in this patch, which uses
the default contempt of 12, was found to be as strong as current
master in various matches against SF7 and SF8, both at STC and LTC.
One drawback is that it raises the draw rate in self-play
from 56% to 59%, giving SF developers a little less sensitivity
for finding evaluation improvements through self-play tests
in fishtest.
Possible further work:
• tune the values accurately, while keeping in mind the draw-rate issue
• check whether it is possible to remove the linear and offset terms
• try to simplify the S-shape curve
Bench: 5934644
Use an SPSA tuning session to optimize the time management
parameters.
With SPSA tuning it is not always possible to say where improvements
came from. Maybe some variables changed randomly, or because the result
was not sensitive enough to them. So my explanation of the changes will
not necessarily be correct, but here it is.
• When the reduction of thinking time for an unchanged best move over
several plies was added by Joost a few months ago, one more competing
indicator was introduced for the same purpose, alongside the increase
in score and the absence of fail lows at root. It seems that tuning
put relatively more importance on that new indicator, which allowed
time to be saved.
• Some of this saved time is distributed proportionally between all
moves, and some more time is given to moves where the score dropped a
lot or the best move changed.
• It also looks like SPSA redistributed more time from the beginning of
the game to later stages via other changes in variables, maybe because
contempt makes games last longer, or for whatever reason.
All of this is just small tweaks here and there (changes of a few percent).
STC (10+0.1):
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 18970 W: 4268 L: 4029 D: 10673
http://tests.stockfishchess.org/tests/view/5a9291a40ebc590297cc8881
LTC (60+0.6):
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 72027 W: 12263 L: 11878 D: 47886
http://tests.stockfishchess.org/tests/view/5a92d7510ebc590297cc88ef
Additional non-regression tests at other time controls:
Sudden death 60s:
LLR: 2.95 (-2.94,2.94) [-4.00,0.00]
Total: 14444 W: 2715 L: 2608 D: 9121
http://tests.stockfishchess.org/tests/view/5a9445850ebc590297cc8a65
40 moves repeating at LTC:
LLR: 2.95 (-2.94,2.94) [-4.00,0.00]
Total: 10309 W: 1880 L: 1759 D: 6670
http://tests.stockfishchess.org/tests/view/5a9566ec0ebc590297cc8be1
This is a functional patch only for time management, but the bench
does not reflect this because it uses a fixed-depth search, so the
number of nodes does not change during bench.
No functional change.
Make contempt dependent on the current score of the root position.
The idea is that we now use a linear formula like the following to decide
on the contempt to use during a search:
contempt = x + y * eval
where x is the base contempt set by the user in the "Contempt" UCI option,
and y * eval is the dynamic part which adapts itself to the estimation of
the evaluation of the root position returned by the search. In this patch,
we use x = 18 centipawns by default, and the y * eval correction can go
from -20 centipawns if the root eval is less than -2.0 pawns, up to +20
centipawns when the root eval is more than 2.0 pawns.
To summarize, the new contempt goes from -0.02 to 0.38 pawns, depending
on whether Stockfish is losing or winning, with a default average value
of 0.18 pawns.
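For illustration, here is a minimal C++ sketch of this formula (the
function name and the scaling constant are illustrative, not the actual
Stockfish code; all values are in centipawns):

    #include <algorithm>

    // Base contempt from the UCI option, plus a linear correction in the
    // root eval which saturates at +/- 20 cp once the root eval passes
    // +/- 2.0 pawns (200 cp).
    int dynamic_contempt(int baseContempt, int rootEval)
    {
        int dynamic = std::min(20, std::max(-20, rootEval / 10));
        return baseContempt + dynamic; // default 18 gives a [-2, 38] cp range
    }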
STC:
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 110052 W: 24614 L: 23938 D: 61500
http://tests.stockfishchess.org/tests/view/5a72e6020ebc590f2c86ea20
LTC:
LLR: 2.97 (-2.94,2.94) [0.00,5.00]
Total: 16470 W: 2896 L: 2705 D: 10869
http://tests.stockfishchess.org/tests/view/5a76c5b90ebc5902971a9830
A second match at LTC was organised against the current master:
ELO: 1.45 +-2.9 (95%) LOS: 84.0%
Total: 19369 W: 3350 L: 3269 D: 12750
http://tests.stockfishchess.org/tests/view/5a7acf980ebc5902971a9a2e
Finally, we checked that there is no apparent problem with multithreading,
despite the fact that some threads might have a slightly different contempt
level than the main thread.
Match of this version against master, both using 5 threads, time control 30+0.3:
ELO: 2.18 +-3.2 (95%) LOS: 90.8%
Total: 14840 W: 2502 L: 2409 D: 9929
http://tests.stockfishchess.org/tests/view/5a7bf3e80ebc5902971a9aa2
Include suggestions from Marco Costalba, Aram Tumanian, Ronald de Man, etc.
Bench: 5207156
Set the default contempt value of Stockfish to 20 centipawns.
The contempt feature of Stockfish tries to prevent the engine from
simplifying the position too quickly when it feels that it is very
slightly behind, instead keeping the tension a little bit longer.
Various tests in November 2017 showed that our current implementation
works well against SF7 (which is about 130 Elo weaker than current
master) and that the Elo gain is an increasing function of contempt,
going (against SF7) from +0 Elo when contempt is set at zero
centipawns to +30 Elo when contempt is 40 centipawns.
See pull request 1325 for details:
https://github.com/official-stockfish/Stockfish/pull/1325
This November discussion left open the decision of which "default"
value for contempt we should use for Stockfish, taking into account
the various uses of Stockfish (opening preparation for humans, computer
online tournaments, analysis tool for web pages, human/computer play,
etc.).
This pull request proposes to set the default contempt value of SF
to twenty centipawns, which turned out to be the highest value that
is not a regression against current master; this seemed to be a
good compromise between risk and safety. A couple of SPRT[-3..1]
tests were done to bisect this value:
Contempt 10: http://tests.stockfishchess.org/tests/view/5a5d42d20ebc5902977e2901 (PASSED)
Contempt 15: http://tests.stockfishchess.org/tests/view/5a5d41740ebc5902977e28fa (PASSED)
Contempt 20: http://tests.stockfishchess.org/tests/view/5a5d42060ebc5902977e28fc (PASSED)
Contempt 25: http://tests.stockfishchess.org/tests/view/5a5d433f0ebc5902977e2904 (FAILED)
Surprisingly, a test at "very long time control" hinted that using
contempt 20 is not only non-regressive against contempt 0, but may
actually exhibit some small Elo gain, giving a likelihood of
superiority of 88.7% after 8500 games:
VLTC:
ELO: 2.28 +-3.7 (95%) LOS: 88.7%
Total: 8521 W: 1096 L: 1040 D: 6385
http://tests.stockfishchess.org/tests/view/5a60b2820ebc590297b9b7e0
Finally, there were some concerns that a contempt value of 20 would
be worse than a value of 7, but a test with 20000 games at STC was
neutral:
STC:
ELO: 0.45 +-3.1 (95%) LOS: 61.2%
Total: 20000 W: 4222 L: 4196 D: 11582
http://tests.stockfishchess.org/tests/view/5a64d2fd0ebc590297903868
See the comments in pull request 1361 for the long, nice discussion
(180 entries :-)) leading to the decision to propose contempt 20 as
the default value:
https://github.com/official-stockfish/Stockfish/pull/1361
Whether Stockfish should strictly adhere to the Komodo and Houdini
semantics and add the UCI commands to force the contempt to be White
in the so-called "analysis mode" is still under discussion, and may
or may not be the object of a future commit.
Bench: 5783344
For efficiency reasons, current master only allows transposition table
sizes that are a power of two, N = 2^k: the index computation can then
be done efficiently, as (hash % N) can be written as (hash & (2^k - 1)).
On a typical computer (with 4, 8, ... GB of RAM), this implies that
roughly half the RAM is left unused during analysis.
This issue was mentioned on fishcooking by Mindbreaker:
http://tests.stockfishchess.org/tests/view/5a3587de0ebc590ccbb8be04
Recently a neat trick was proposed to map a hash into the range [0, N)
more efficiently than (hash % N) for general N, nearly as efficiently
as (hash % 2^k):
https://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/
namely computing (hash * N / 2^32) for 32-bit hashes. This patch
implements this trick and now allows for general hash sizes. Note that
for N = 2^k this just amounts to using a different subset of bits from
the hash: master uses the lower k bits, while this trick uses the upper
k bits (of the 32-bit hash).
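In code, the reduction is a single multiply and shift; a minimal sketch
of the idea (the index computation in the actual patch may differ in
details):

    #include <cstdint>

    // Map a 32-bit hash into [0, clusterCount) without a modulo by
    // computing (hash * clusterCount) / 2^32. For clusterCount = 2^k this
    // selects the upper k bits of the hash, whereas master's
    // (hash & (clusterCount - 1)) selects the lower k bits.
    inline uint32_t cluster_index(uint32_t hash, uint32_t clusterCount)
    {
        return uint32_t((uint64_t(hash) * uint64_t(clusterCount)) >> 32);
    }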
There is no slowdown as measured with [-3, 1] test:
http://tests.stockfishchess.org/tests/view/5a3587de0ebc590ccbb8be04
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 128498 W: 23332 L: 23395 D: 81771
There are two (smaller) caveats:
1) The patch is implemented for a 32-bit hash (so that a 64-bit
multiply can be used). This effectively limits the number of clusters
to 2^32, i.e. to a 128 GB transposition table. That is a change in the
maximum allowed TT size, which could bother those regularly using
256 GB or more.
2) Already in master, an excluded move is hashed into the position key
in a rather simple way, essentially only affecting the lower 16 bits
of the key. This is OK in master, since bits 0-15 end up in the index,
but not in the new scheme, which picks the higher bits. This is 'fixed'
by shifting the excluded move a few bits up, as in the sketch below.
Eventually a better hashing scheme seems wise.
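A sketch of the excluded-move fix mentioned in caveat 2 (the shift
amount is illustrative):

    #include <cstdint>

    // XOR the excluded move into higher bits of the position key, so that
    // the multiply-shift index (taken from the top bits of the hash)
    // actually distinguishes the excluded-move search from the normal one.
    uint64_t excluded_key(uint64_t posKey, uint16_t excludedMove)
    {
        return posKey ^ (uint64_t(excludedMove) << 16);
    }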
Despite these two caveats, I think this is a nice improvement in usability.
Bench: 5346341
This should reduce the time losses experienced by
users after the new time management code.
Verified for no regression at very short TC (4sec + 0.1):
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 35262 W: 7426 L: 7331 D: 20505
Bench 5322108
What this patch does is:
* increase the safety margin from 40ms to 60ms. It is worth noting that
the previous code not only used a 60ms incompressible safety margin,
but also an additional buffer of 30ms for each "move to go".
* remove a wart, integrating the extra 10ms into the Move Overhead
value instead. Additionally, this ensures that optimumTime doesn't
become bigger than maximumTime after the maximum time has been
artificially discounted by 10ms (see the sketch below). So it keeps
the code more logical.
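A minimal sketch of that clamping (illustrative names, not the patch
itself):

    #include <algorithm>

    // After discounting the maximum time by the Move Overhead, make sure
    // the optimum search time can never exceed the maximum.
    void apply_move_overhead(int& optimumTime, int& maximumTime, int moveOverhead)
    {
        maximumTime -= moveOverhead;
        optimumTime = std::min(optimumTime, maximumTime);
    }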
Tested at 3 different time controls:
Standard 10+0.1
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 58008 W: 10674 L: 10617 D: 36717
Sudden death 16+0
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 59664 W: 10945 L: 10891 D: 37828
Tournament 40/10
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 16371 W: 3092 L: 2963 D: 10316
bench: 5479946
Simplify out low-level sync stuff (mutex
and friends) and avoid using it directly
in many functions.
Also some renaming and better comments while
there.
No functional change.
A single Xeon Phi can present itself as a single NUMA node with up to 288 threads (4 threads per hardware core).
Tested to work as expected with a Xeon Phi CPU 7230 up to 256 threads.
No functional change.
Closes #1045
Allow specifying the log file name; this comes in
handy in the case of self-matches, so that each SF
instance writes into a different log file.
No functional change.
Simplify time management code by removing hard stops for an unchanging
first root move. Search is now stopped earlier at the end of an
iteration if there were no fail lows at root.
This simplification also fixes a pondering bug: the Ponder flag was
true by default, and cutechess-cli doesn't change it to false even
though no pondering is possible. Fix the issue by setting the default
value of the 'Ponder' flag to false.
10+0.1:
ELO: 3.51 +-3.0 (95%) LOS: 99.0%
Total: 20000 W: 3898 L: 3696 D: 12406
40+0.4:
ELO: 1.39 +-2.7 (95%) LOS: 84.7%
Total: 20000 W: 3104 L: 3024 D: 13872
60+0.06:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 37231 W: 5333 L: 5236 D: 26662
Stopped run at 100+1:
LLR: 1.09 (-2.94,2.94) [-3.00,1.00]
Total: 37253 W: 4862 L: 4856 D: 27535
Resolves #523, fixes #510
The only interesting change is moving
stack[MAX_PLY+4] back to its original position
in id_loop (now renamed Thread::search).
No functional change.
Introduce the helper function Search::reset(), which clears all kinds
of search memory in order to restore a deterministic search state.
Generalize TT.clear() into Search::reset() for the following use cases:
- bench: needed to guarantee a deterministic bench (i.e. if you call
bench from the interactive command line twice in a row you get the
same value).
- Clear Hash: restore clean search state, which is the purpose of this button.
- ucinewgame: ditto.
No functional change.
Resolves #346
Currently Zobrist::castling[] is not properly zeroed
and relies on the compiler to do this at startup, but this
makes Position::init() set different values every time
it is called!
This is a bit odd and, although not impacting normal usage,
can lead to subtle misbehaviour, very difficult to track
down, in case we happen to call it more than once for some
reason. I found this while developing tuning support, and
it took me a while to track it down.
So properly init Zobrist::castling[].
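A minimal illustration of the fix, with hypothetical names (the real
table is filled in Position::init()):

    #include <cstdint>

    static uint64_t castlingKeys[16];

    // Each castling key is built by XOR-ing one random key per castling
    // right set in cr. Without the explicit zeroing, a second call would
    // XOR on top of the values left by the first call, producing
    // different keys every time.
    void init_castling(uint64_t (*rand64)())
    {
        uint64_t bitKeys[4];
        for (int b = 0; b < 4; ++b)
            bitKeys[b] = rand64();

        for (int cr = 0; cr < 16; ++cr)
        {
            castlingKeys[cr] = 0; // the fix: reset before accumulating
            for (int b = 0; b < 4; ++b)
                if (cr & (1 << b))
                    castlingKeys[cr] ^= bitKeys[b];
        }
    }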
No functional change.
Resolves #329
When running more games in parallel, or simply when running a game
with a background process, due to how OS scheduling works there is no
guarantee that the CPU resources are allocated evenly between the two
players. This introduces noise that leads to unreliable results and in
the worst cases can even invalidate them. For instance, in the SF test
framework we avoid running on cloud virtual machines because they are
a known source of very unstable CPU speed.
To overcome this issue, without requiring changes to the GUI, the idea
is to use searched nodes instead of time, and to convert time to
available nodes upfront, at the beginning of the game.
When the nodestime UCI option is set to a given number of nodes per
millisecond (npmsec), at the beginning of the game (and only once),
the engine reads the available thinking time sent by the GUI with the
'go wtime x' UCI command. It then translates the time into available
nodes (nodes = npmsec * x), feeds the available nodes instead of time
to the time management logic and starts the search. During the search
the engine checks the searched nodes against the available ones in
such a way that all the time management logic still fully applies, and
the game mimics one played in real time. When the search finishes,
before returning the best move, the total available nodes are updated
by subtracting the nodes actually searched. After the first move, the
time information sent by the GUI is ignored, and the engine fully
relies on the updated total of available nodes to feed time management.
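A rough sketch of this bookkeeping (illustrative names; the real logic
is spread across the UCI and time management code):

    #include <cstdint>

    struct NodesTime {
        int64_t availableNodes = 0;

        // At the first 'go wtime x' of the game, convert the remaining
        // time in milliseconds into a node budget using the nodestime
        // (npmsec) option.
        void on_first_go(int64_t npmsec, int64_t wtimeMs) {
            availableNodes = npmsec * wtimeMs;
        }

        // After each search, subtract the nodes actually searched; time
        // management is fed availableNodes instead of remaining time.
        void on_search_finished(int64_t searchedNodes) {
            availableNodes -= searchedNodes;
        }
    };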
To avoid time losses, the speed of the engine (npmsec) must be set to
a value lower than the real speed, so that if the real TC is for
instance 30 seconds and npmsec is half the real speed, the game will
last on average 15 seconds, much less than the TC limit, providing a
safety 'time buffer'.
There are 2 main limitations with this mode.
1. Engine speed should be the same for both players, which limits the
approach mainly to parameter tuning patches.
2. Because npmsec is fixed while, in real engines, the speed increases
towards the endgame, this introduces an artifact equivalent to an
altered time management: it is as if the time management gives less
available time than it would in the standard case.
Maybe the second limitation could be mitigated in the future with a
smarter 'dynamic npmsec' approach.
Tests show that the standard deviation of the results with 'nodestime'
is lower than at standard TC, as expected, because all the noise
introduced by the random speed variability of the engines during the
game is now fully removed.
Original NIT idea by Michael Hoffman, who showed how to play in NIT
mode without requiring changes to the GUI. This implementation goes a
bit further; the key difference is that we read the TC from the GUI
only once upfront instead of re-reading it after every move as in
Michael's implementation.
No functional change.
Verified with perft there is no speed regression,
and the code is simpler. It is also conceptually correct,
because an extended move is just a move that happens
to also have a score.
No functional change.
Import C++11 branch from:
https://github.com/mcostalba/Stockfish/tree/c++11
The version imported is the last one as of today:
6670e93e50
The branch is fully equivalent to master, except that the syzygy
tablebases are missing (they will be added with the next commit).
bench: 8080602
On platforms where size_t is 32 bits, we
can have an overflow in this expression:
(mbSize * 1024 * 1024)
Fix it by setting the maximum hash size to 2 GB on platforms
where size_t is 32 bits.
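A sketch of the guard (illustrative, not the exact patch):

    #include <algorithm>
    #include <cstddef>

    // With a 32-bit size_t, mbSize * 1024 * 1024 wraps around once mbSize
    // exceeds 4096, so cap the hash size at 2 GB on such platforms.
    size_t hash_size_bytes(size_t mbSize)
    {
        const size_t maxMb = sizeof(size_t) == 4 ? 2048 : mbSize;
        return std::min(mbSize, maxMb) * 1024 * 1024;
    }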
A small rename while there: now struct Cluster
is defined inside class TranspositionTable, so
we can drop the redundant TT prefix.
No functional change.
Adds support for Syzygy tablebases to Stockfish. See
the Readme for information on using the tablebases.
Tablebase support can be enabled/disabled at the Makefile
level as well, by setting syzygy=yes or syzygy=no.
Big/little endian are both supported.
No functional change (if Tablebases are not used).
Resolves #6
Stockfish allocates the default hash (32 MB) in main(), before entering
UCI::loop(). If there is not enough memory, the program will crash even
before UCI::loop() is entered and the GUI is given a chance to specify
a lower Hash value.
This defective design could be resolved by doing a lazy allocation upon
the "isready" command, as the UCI protocol guarantees that "isready"
will be sent at least once before any search. But it is a bit
cumbersome, when using Stockfish "manually", to have to remember to
type "isready" every time.
So leave the current design, but reduce the default hash to 16 MB
instead of 32 MB. For such quick searches (depth = 13) there is no
reason to use so much hash anyway. Another benefit is to introduce a
bit of hash pressure in bench, which increases the chances of detecting
rare bugs related to TT replacement, for example.
This is not a functional change, although it obviously changes the bench.
bench 7461879
Despite being neutral at STC, it turned out to be regressive at LTC:
40k games at LTC with Hash=8
ELO: -2.06 +-1.9 (95%) LOS: 1.4%
Total: 39720 W: 5740 L: 5976 D: 28004
40k games at LTC with Hash=128
ELO: -2.69 +-1.9 (95%) LOS: 0.2%
Total: 39149 W: 5702 L: 6005 D: 27442
bench 7477963
Also raise the admissible bounds to (-100,100), as there is no reason to prevent users from using high
values if they want to.
Does not regress in self play:
ELO: 0.10 +-2.0 (95%) LOS: 53.7%
Total: 40000 W: 7084 L: 7073 D: 25843
master vs SF 3
ELO: 182.86 +-2.7 (95%) LOS: 100.0%
Total: 40000 W: 21843 L: 2541 D: 15616
Contempt = 20 vs SF 3
ELO: 189.25 +-2.8 (95%) LOS: 100.0%
Total: 40000 W: 22721 L: 2859 D: 14420
The diff is therefore 6.4 +/- 3.9 Elo against a 180-190 Elo weaker
engine, which is significantly positive, as expected. This Elo
difference is likely understated because of FishTest's aggressive draw
adjudication, though.
We could push Contempt further, but beyond 20 cp it would get in the
way of the FishTest draw adjudication rule and would likely reduce
testing throughput as a result.
bench 8198667
Book handling belongs to the GUI; we kept this code
for historical reasons, but nowadays there is
really no need for this old, (mostly) unused and,
above all, incorrectly designed functionality.
It is up to the GUI to choose the book (far easier for
the user) and to select the book parameters. Nowhere,
including fishtest, TCEC, rating lists, etc., is the
"own book" used; moreover, SF is currently released
without any book, and even if in the future we bundle
a book with the release package, it will be the GUI
that takes care of it.
This corrects a wrong design decision that Glaurung,
and later Stockfish, inherited from what was common
practice many years ago.
No functional change.
There is really little that a user can achieve (apart
from weakening the engine) by tweaking these parameters,
which are already tuned and have no immediate or visible
effect.
So better not to expose them to the user, and avoid the
typical "What is the best setup for my machine?" kind of
question (by far the most common, by far the most useless).
No functional change.
Retire current asymmetric king evaluation
and use a much simpler symmetric one.
As a side effect retire the infamous
'Aggressiveness' and 'Cowardice' UCI
options.
Tested in no-regression mode.
Passed both STC
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 33855 W: 5863 L: 5764 D: 22228
And LTC
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 40571 W: 5852 L: 5760 D: 28959
bench: 8321835
After Joona's last patch there is no measurable
difference between the option set and unset.
Tested by Andreas Strangmüller with 16 threads
on his Dual Opteron 6376.
After 5000 games at 15+0.05 the result is:
1 Stockfish_14050822_T16_on : 3003 5000 (+849,=3396,-755), 50.9 %
2 Stockfish_14050822_T16_off : 2997 5000 (+755,=3396,-849), 49.1 %
bench: 880215