Add a logarithmic term to the optimism computation, increase
the maximal optimism and lower the contempt offset.
This makes the optimism more dynamic, giving a boost in
balanced positions without skewing too much in unbalanced
positions (but this version will enter panic mode faster than
previous master when behind, trying to steer towards a draw
sooner when slightly behind). This helps, since optimism is in
general a good thing, for instance at LTC, but too much optimism
rapidly contaminates play.
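Purely as an illustration of the shape described above (the constants, names and exact functional form below are hypothetical, not the committed code):

  #include <algorithm>
  #include <cmath>
  #include <cstdlib>

  // Optimism as a bounded, sign-symmetric function of the root evaluation (in cp):
  // an offset plus a logarithmic term, clipped at the maximal optimism. The log
  // term reacts strongly near 0.00 (balanced positions) and flattens out for
  // clearly unbalanced ones.
  int optimism(int rootEval) {
      const int maxOptimism = 40;   // hypothetical cap
      const int offset      = 10;   // hypothetical contempt offset
      int sign = (rootEval > 0) - (rootEval < 0);
      int dyn  = sign * int(std::lround(15.0 * std::log1p(std::abs(rootEval) / 100.0)));
      return std::max(-maxOptimism, std::min(maxOptimism, offset + dyn));
  }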
passed STC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 159343 W: 34489 L: 33588 D: 91266
http://tests.stockfishchess.org/tests/view/5a8db9340ebc590297cc85b6
passed LTC:
LLR: 2.97 (-2.94,2.94) [0.00,5.00]
Total: 47491 W: 7825 L: 7517 D: 32149
http://tests.stockfishchess.org/tests/view/5a9456a80ebc590297cc8a89
It must be mentioned that a version of the PR with contempt 0
did not pass STC [0,5]. The version in the patch, which uses
the default contempt 12, was found to be as strong as current
master in various matches against SF7 and SF8, both at STC and
LTC. One possible drawback is that it raises the draw rate in
self-play from 56% to 59%, giving SF developers a little less
sensitivity for finding evaluation improvements via self-play
tests in fishtest.
Possible further work:
• tune the values accurately, while keeping in mind the draw-rate issue
• check whether it is possible to remove the linear and offset terms
• try to simplify the S-shape curve
Bench: 5934644
Use a recursive std::array with variadic template
parameters to get rid of the last redundancy.
The first template parameter T is the base type of
the array, the W parameter is the weight applied to
the bonuses when we update values with the << operator,
the D parameter limits the range of updates (range is
[-W * D, W * D]), and the last parameters (Size and
Sizes) encode the dimensions of the array.
This allows greater flexibility because we can now tweak
the range [-W * D, W * D] for each table.
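A simplified sketch of the construct (the real movepick.h differs in details such as the fill() helpers and the exact update formula):

  #include <array>
  #include <cstdint>
  #include <cstdlib>

  // Leaf entry: operator<< applies a weighted, self-damping update that keeps
  // the value inside [-W * D, W * D].
  template<typename T, int W, int D>
  struct Entry {
    T value = 0;
    void operator<<(int bonus) { value += bonus * W - value * std::abs(bonus) / D; }
    operator T() const { return value; }
  };

  // Recursive case: peel off one dimension per template instantiation
  template<typename T, int W, int D, int Size, int... Sizes>
  struct Stats : public std::array<Stats<T, W, D, Sizes...>, Size> {};

  // Base case: a 1-D array of entries
  template<typename T, int W, int D, int Size>
  struct Stats<T, W, D, Size> : public std::array<Entry<T, W, D>, Size> {};

  // Example instantiation in the spirit of movepick.h: [color][from-to] butterfly table
  using ButterflyHistory = Stats<int16_t, 32, 324, 2, 64 * 64>;

Updates then read like history[color][fromTo] << bonus, with the entry automatically staying inside [-W * D, W * D].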
The patch removes more lines than it adds and streamlines
the Stats soup in movepick.h.
Closes PR#1422 and PR#1421
No functional change.
The first depth 2 margin triggers the verification quiescence search.
This qsearch() result has to be better than the second, lower margin,
so we only skip the razoring when the qsearch gives a significant
improvement.
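A hedged sketch of the logic (margin names and exact conditions are illustrative, not the committed code):

  // Depth 1: razor directly into qsearch. Depth 2: the first margin triggers a
  // verification qsearch against a second, lower margin; we razor only when that
  // qsearch fails to show a significant improvement.
  if (!PvNode && depth == ONE_PLY && eval + RazorMargin1 <= alpha)
      return qsearch<NonPV, false>(pos, ss, alpha, alpha + 1);

  else if (!PvNode && depth == 2 * ONE_PLY && eval + RazorMargin2 <= alpha)
  {
      Value ralpha = alpha - RazorMargin3;
      Value v = qsearch<NonPV, false>(pos, ss, ralpha, ralpha + 1);
      if (v <= ralpha)
          return v;
  }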
Passed STC:
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 32133 W: 7395 L: 7101 D: 17637
http://tests.stockfishchess.org/tests/view/5a93198b0ebc590297cc8942
Passed LTC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 17382 W: 3002 L: 2809 D: 11571
http://tests.stockfishchess.org/tests/view/5a93b18c0ebc590297cc89c2
This Elo-gaining version was further simplified following a suggestion
of Marco Costalba:
STC:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 15553 W: 3505 L: 3371 D: 8677
http://tests.stockfishchess.org/tests/view/5a964be90ebc590297cc8cc4
LTC:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 13253 W: 2270 L: 2137 D: 8846
http://tests.stockfishchess.org/tests/view/5a9658880ebc590297cc8cca
How to continue after this patch?
Reformatting the razoring code (step 7 in search()) to unify the
depth 1 and depth 2 treatments seems quite possible; this could
lead to further simplifications.
Bench: 5765806
Use an SPSA tuning session to optimize the time management
parameters.
With SPSA tuning it is not always possible to say where the improvements
come from: maybe some variables changed randomly, or because the result
was not sensitive enough to them. So my explanation of the changes will
not necessarily be correct, but here it is.
• When Joost added, a few months ago, a decrease of thinking time
if the best move has not changed for several plies, one more competing
indicator was introduced for the same purpose, alongside the increase
in score and the absence of fail lows at the root. It seems that tuning
put relatively more importance on that new indicator, which allowed
time to be saved.
• Some of this saved time is distributed proportionally between all
moves, and some more time is given to moves where the score dropped a lot
or the best move changed.
• It also looks like SPSA redistributed time from the beginning of the
game to later stages via other changes in variables - maybe because
contempt makes games last longer, or for whatever other reason.
All of this amounts to small tweaks here and there (changes of a few percent).
STC (10+0.1):
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 18970 W: 4268 L: 4029 D: 10673
http://tests.stockfishchess.org/tests/view/5a9291a40ebc590297cc8881
LTC (60+0.6):
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 72027 W: 12263 L: 11878 D: 47886
http://tests.stockfishchess.org/tests/view/5a92d7510ebc590297cc88ef
Additional non-regression tests at other time controls
Sudden death 60s:
LLR: 2.95 (-2.94,2.94) [-4.00,0.00]
Total: 14444 W: 2715 L: 2608 D: 9121
http://tests.stockfishchess.org/tests/view/5a9445850ebc590297cc8a65
40 moves repeating at LTC:
LLR: 2.95 (-2.94,2.94) [-4.00,0.00]
Total: 10309 W: 1880 L: 1759 D: 6670
http://tests.stockfishchess.org/tests/view/5a9566ec0ebc590297cc8be1
This is a functional patch for time management only, but the bench
does not reflect this because it uses a fixed-depth search, so the number
of nodes does not change during bench.
No functional change.
This is the sequel to the previous patch: we now let the parent node
initialize the stat score to zero once for all grandchildren.
Initialize statScore to zero for the grandchildren of the current position.
So statScore is shared between all grandchildren and only the first grandchild
starts with statScore = 0. Later grandchildren start with the last calculated
statScore of the previous grandchild. This influences the reduction rules in
LMR, which are based on the statScore of the parent position.
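A minimal sketch of the placement (the surrounding code is only for orientation):

  // In search(), before the moves loop: the parent clears the statScore slot two
  // plies ahead once, instead of each grandchild clearing its own on entry.
  (ss+2)->statScore = 0;

  // ... moves loop: do_move() / search() / undo_move() as before ...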
Test results against the previous patch:
STC:
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 23676 W: 5417 L: 5157 D: 13102
http://tests.stockfishchess.org/tests/view/5a9423a90ebc590297cc8a46
LTC:
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 35485 W: 6168 L: 5898 D: 23419
http://tests.stockfishchess.org/tests/view/5a9435550ebc590297cc8a54
Bench: 5643520
Let the parent node initialize stat score to zero once for all siblings.
Initialize statScore to zero for the children of the current position.
So statScore is shared between sibling positions and only the first sibling
starts with statScore = 0. Later siblings start with the last calculated
statScore of the previous sibling. This influences the reduction rules
in LMR, which are based on the statScore of the parent position.
STC:
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 22683 W: 5202 L: 4946 D: 12535
http://tests.stockfishchess.org/tests/view/5a93315f0ebc590297cc894f
LTC:
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 48548 W: 8346 L: 8035 D: 32167
http://tests.stockfishchess.org/tests/view/5a933ba90ebc590297cc8962
Bench: 5833683
To compute discovered checks or pinned pieces we use some bitwise
operators that are not really needed, because they are already
accounted for at the call site.
For instance in evaluation we compute:
pos.pinned_pieces(Us) & s
Where pinned_pieces() is:
st->blockersForKing[c] & pieces(c)
So in this case the & operator with pieces(c) is useless,
given the outer '& s'.
There are many places where we can use the naked blockersForKing[]
instead of the full pinned_pieces() or discovered_check_candidates().
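A typical call-site rewrite then looks like this (assuming an accessor such as blockers_for_king() that returns the raw bitboard):

  // Before: pinned_pieces() masks with pieces(Us), which is redundant under the outer '& s'
  b = pos.pinned_pieces(Us) & s;

  // After: use the naked blockers bitboard directly
  b = pos.blockers_for_king(Us) & s;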
This patch is simpler than the original and gives around a 1% speed-up for me.
Also tested for speed by mstembera and snicolet (neutral in both cases).
No functional change.
When IID (internal iterative deepening) is invoked, the prior value of ttValue is
not guaranteed to be VALUE_NONE. As such, it is currently possible to enter a state
in which ttValue has a specific value which is inconsistent with tte->bound() and
tte->depth(). Currently, ttValue is only used within the search in a context that
prevents this situation from making a difference (and so this change is non-functional),
but this is not guaranteed to remain the case in the future.
For instance, just changing the tt depth condition in singular extension node to be
tte->depth() >= depth - 4 * ONE_PLY
instead of
tte->depth() >= depth - 3 * ONE_PLY
interacts badly with the absence of ttMove in IID. For the ttMove to become a singular
extension candidate, singularExtensionNode needs to be true. With the current master,
this requires that tte->depth() >= depth - 3 * ONE_PLY. This is not currently possible
if tte comes from IID, since the depth 'd' used for the IID search is always less than
depth - 4 * ONE_PLY for depth >= 8 * ONE_PLY (below depth 8 singularExtensionNode can
never be true anyway). However, with DU-jdto/Stockfish@251281a, this condition can be
met, and it is possible for singularExtensionNode to become true after IID. There are
then two mechanisms by which this patch can affect the search:
• If ttValue was VALUE_NONE prior to IID, the fact that this patch sets ttValue allows
the 'ttValue != VALUE_NONE' condition of singularExtensionNode to be met.
• If ttValue wasn't VALUE_NONE prior to IID, the fact that this patch modifies ttValue's
value causes a different 'rBeta' to be calculated if the singular extension search is
performed.
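For reference, a hedged sketch of the IID step with the extra refresh (the guarding condition and reduced depth are simplified here):

  // After the internal search, re-probe the TT and refresh everything cached from
  // the entry, not only ttMove, so ttValue stays consistent with tte->bound()/depth().
  if (depth >= 6 * ONE_PLY && !ttMove)
  {
      Depth d = 3 * depth / 4 - 2 * ONE_PLY;
      search<NT>(pos, ss, alpha, beta, d, cutNode, true);

      tte     = TT.probe(posKey, ttHit);
      ttMove  = ttHit ? tte->move() : MOVE_NONE;
      ttValue = ttHit ? value_from_tt(tte->value(), ss->ply) : VALUE_NONE;  // the added refresh
  }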
Tested at STC for non-regression:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 76981 W: 17060 L: 17048 D: 42873
http://tests.stockfishchess.org/tests/view/5a7738b70ebc5902971a9868
No functional change
The razoring heuristic is quite a drastic pruning technique,
using a depth 0 search at internal nodes of the search tree
to estimate the true value of depth n nodes. This patch limits
this razoring to the case of internal nodes of depth 1.
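A hedged sketch of the resulting step 7 (the margin name is illustrative):

  // Razoring, now only at internal nodes of depth 1: if the static eval plus a
  // margin cannot reach alpha, drop directly into quiescence search.
  if (   !PvNode
      &&  depth == ONE_PLY
      &&  eval + RazorMargin <= alpha)
      return qsearch<NonPV, false>(pos, ss, alpha, alpha + 1);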
Author: Jarrod Torriero (DU-jdto)
STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 8043 W: 1865 L: 1716 D: 4462
http://tests.stockfishchess.org/tests/view/5a90a9290ebc590297cc86c1
LTC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 32890 W: 5577 L: 5476 D: 21837
http://tests.stockfishchess.org/tests/view/5a90c8510ebc590297cc86d5
Opportunities opened by this patch: it would be interesting to
know whether re-introducing razoring or soft razoring at depth >= 2
brings Elo, maybe using a larger margin to compensate for the
increased pruning effect.
Bench: 5227124
Make contempt dependent on the current score of the root position.
The idea is that we now use a linear formula like the following to decide
on the contempt to use during a search:
contempt = x + y * eval
where x is the base contempt set by the user in the "Contempt" UCI option,
and y * eval is the dynamic part which adapts itself to the estimation of
the evaluation of the root position returned by the search. In this patch,
we use x = 18 centipawns by default, and the y * eval correction can go
from -20 centipawns if the root eval is less than -2.0 pawns, up to +20
centipawns when the root eval is more than 2.0 pawns.
To summarize, the new contempt goes from -0.02 to 0.38 pawns, depending on
whether Stockfish is losing or winning, with an average value of 0.18 pawns by default.
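Expressed in (hedged) code, with rootEval standing in for the evaluation returned for the root position, in centipawns:

  #include <algorithm>

  // contempt = x + y * eval, with the dynamic part saturating at +-20 cp
  // once the root evaluation reaches +-2.0 pawns.
  int dynamic_contempt(int rootEval, int baseContempt = 18 /* the "Contempt" UCI option, in cp */) {
      int dynamic = std::max(-20, std::min(20, 20 * rootEval / 200));
      return baseContempt + dynamic;
  }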
STC:
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 110052 W: 24614 L: 23938 D: 61500
http://tests.stockfishchess.org/tests/view/5a72e6020ebc590f2c86ea20
LTC:
LLR: 2.97 (-2.94,2.94) [0.00,5.00]
Total: 16470 W: 2896 L: 2705 D: 10869
http://tests.stockfishchess.org/tests/view/5a76c5b90ebc5902971a9830
A second match at LTC was organised against the current master:
ELO: 1.45 +-2.9 (95%) LOS: 84.0%
Total: 19369 W: 3350 L: 3269 D: 12750
http://tests.stockfishchess.org/tests/view/5a7acf980ebc5902971a9a2e
Finally, we checked that there is no apparent problem with multithreading,
despite the fact that some threads might have a slightly different contempt
level than the main thread.
Match of this version against master, both using 5 threads, time control 30+0.3:
ELO: 2.18 +-3.2 (95%) LOS: 90.8%
Total: 14840 W: 2502 L: 2409 D: 9929
http://tests.stockfishchess.org/tests/view/5a7bf3e80ebc5902971a9aa2
Includes suggestions from Marco Costalba, Aram Tumanian, Ronald de Man and others.
Bench: 5207156
This patch simplifies the time management code, removing the extra
thinking time for moves with draw PV and increasing thinking time
for all moves proportionally by around 4%.
The last time the time management was carefully tuned was 1.5-2 years
ago. As new patches were added, time management drifted away from the
optimum. This happens because, as the search becomes more precise, the PV
and score become more stable, there are fewer fail lows, the best move is
picked earlier and there are fewer best move changes. All these factors
enter into time management, and the average time per move decreases as
more and more good patches are added. For an individual patch such an
effect is small (with some exceptions) and may go either way, but when
there are many of them the effect is more substantial. In the same way,
the benchmark slowly drifts down on average as patches accumulate.
So my understanding is that back in October, adding more thinking time
for a draw PV showed a positive Elo gain because time management was not
well tuned: there was more time available, and the think_hard patch applied
this additional time to moves with a draw PV, while simply retuning back to
the optimum would have recovered the Elo anyway. It is possible that the
absence of contempt also helped, as SF9 shows fewer 0.0 scores than the
October version.
Anyway, it seems to me that the proper place to deal with a draw PV is the
search, and contempt sounds like a much better solution. In time management
there is little additional Elo to gain, and if some code is not helping, like
the code removed here, it is better to discard it. It is simpler to find
genuine improvements if the code is clean.
• Passed STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 20487 W: 4558 L: 4434 D: 11495
http://tests.stockfishchess.org/tests/view/5a7706ec0ebc5902971a9854
• Passed LTC:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 41960 W: 7145 L: 7058 D: 27757
http://tests.stockfishchess.org/tests/view/5a778c830ebc5902971a9895
• Passed an additional non-regression [-5..0] test at the time control
of 60sec for the game (sudden death) with disabled draw adjudication:
LLR: 2.95 (-2.94,2.94) [-5.00,0.00]
Total: 8438 W: 1675 L: 1586 D: 5177
http://tests.stockfishchess.org/tests/view/5a7c3d8d0ebc5902971a9ac0
• Passed an additional non-regression [-5..0] test at the time control
of 1sec+1sec per move with disabled draw adjudication:
LLR: 2.97 (-2.94,2.94) [-5.00,0.00]
Total: 27664 W: 5575 L: 5574 D: 16515
http://tests.stockfishchess.org/tests/view/5a7c3e820ebc5902971a9ac3
This is a functional change for the time management code.
Bench: 4983414
It seems to be a waste of time to loop through all remaining root moves
after finishing each PV line. This patch skips this until we have reached
the last PV line (this is the way it was done in Glaurung and very early
versions of Stockfish).
No functional change in Single PV mode.
MultiPV=3 STC and LTC tests
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 3113 W: 1248 L: 1064 D: 801
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 2260 W: 848 L: 679 D: 733
Bench: 5023629
It does the following:
- If a TB win or loss value allows an alpha or beta cutoff, the cutoff is taken.
- Otherwise, the search of the current subtree continues. In PV nodes, the final value returned is adjusted to reflect that the position is a TB win (or loss).
The patch also fixes a potential problem caused by root_probe() and root_probe_wdl() dirtying the root-move scores.
This patch removes the limitation of current master that a mate is never found if the root position is not yet in the TBs, but the path to mate at some point enters the TBs. The patch is intended to preserve the efficiency and effectiveness of the current TB probing approach.
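A hedged sketch of how a WDL probe can be folded into the regular alpha-beta bounds (names such as VALUE_TB_WIN are illustrative, not the exact constants of the patch):

  // Illustrative only: convert the WDL result into a value and a bound, take the
  // cutoff when the bound allows it, otherwise keep searching the subtree.
  Value tbValue =  wdl < 0 ? -VALUE_TB_WIN + ss->ply   // TB loss: prefer longer resistance
                :  wdl > 0 ?  VALUE_TB_WIN - ss->ply   // TB win: prefer faster progress
                           :  VALUE_DRAW;

  Bound b =  wdl < 0 ? BOUND_UPPER
           : wdl > 0 ? BOUND_LOWER : BOUND_EXACT;

  if (b == BOUND_EXACT || (b == BOUND_LOWER ? tbValue >= beta : tbValue <= alpha))
      return tbValue;

  // otherwise the subtree is searched normally; in PV nodes the value that is
  // finally returned is clamped so that it still reflects the TB win or loss.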
No functional change (without TB)
1. Avoid the recursive call of the verification search.
For the interested side to move, recursion makes no sense.
For the other side it could make sense in case of mutual zugzwang,
but I was not able to find any concrete problematic position.
This allows the removal of two local variables.
2. Avoid the further reduction by removing R += ONE_PLY;
Benchmark with zugzwang-suite (see #1338), max 45 secs per position:
Patch solves 33 out of 37
Master solves 31 out of 37
STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 76188 W: 13866 L: 13840 D: 48482
http://tests.stockfishchess.org/tests/view/5a5612ed0ebc590297da516c
LTC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 40479 W: 5247 L: 5152 D: 30080
http://tests.stockfishchess.org/tests/view/5a56f7d30ebc590299e4550e
bench: 5340015
The heuristic to avoid thread binding if fewer than 8 threads are requested resulted in the first 7 threads not being bound.
The branch was verified to yield a roughly 13% speedup by @CoffeeOne on the appropriate hardware and OS, and an earlier version of this patch tested well on his machine:
http://tests.stockfishchess.org/tests/view/5a3693480ebc590ccbb8be5a
ELO: 9.24 +-4.6 (95%) LOS: 100.0%
Total: 5000 W: 634 L: 501 D: 3865
To make sure all threads (including mainThread) are bound as soon as the total number exceeds 7, recreate all threads on a change of thread number.
To do this, Threads::init, Threads::exit and Threads::set are unified in a single Threads::set function that goes through the needed steps.
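A hedged sketch of the unified function (simplified; the real threads.cpp does a bit more bookkeeping):

  // Destroy and recreate all threads whenever the requested number changes, so the
  // binding decision made when each thread starts always sees the final thread count.
  void ThreadPool::set(size_t requested) {

    if (size() > 0) {                      // destroy any existing threads
        main()->wait_for_search_finished();
        while (size() > 0)
            delete back(), pop_back();
    }

    if (requested > 0) {                   // create the new threads
        push_back(new MainThread(0));
        while (size() < requested)
            push_back(new Thread(size()));
    }
  }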
The code includes several suggestions from @joergoster.
Fixes issue #1312
No functional change
For efficiency reasons, current master only allows transposition table sizes N = 2^k: the index computation (hash % N) can then be written efficiently as (hash & (2^k - 1)). On a typical computer (with 4, 8, ... GB of RAM), this implies roughly half the RAM is left unused in analysis.
This issue was mentioned on fishcooking by Mindbreaker:
http://tests.stockfishchess.org/tests/view/5a3587de0ebc590ccbb8be04
Recently a neat trick was proposed to map a hash into the range [0,N[ more efficiently than (hash % N) for general N, nearly as efficiently as (hash % 2^k):
https://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/
namely computing (hash * N / 2^32) for 32-bit hashes. This patch implements this trick and now allows for general hash sizes. Note that for N = 2^k this just amounts to using a different subset of bits from the hash: master uses the lower k bits, while this trick uses the upper k bits (of the 32-bit hash).
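In code the index computation becomes something like (the committed version may differ slightly):

  // Multiply-shift: map the 32-bit hash into [0, clusterCount) without a modulo,
  // using the high half of a 64-bit product.
  TTEntry* TranspositionTable::first_entry(const Key key) const {
    return &table[(uint32_t(key) * uint64_t(clusterCount)) >> 32].entry[0];
  }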
There is no slowdown as measured with [-3, 1] test:
http://tests.stockfishchess.org/tests/view/5a3587de0ebc590ccbb8be04
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 128498 W: 23332 L: 23395 D: 81771
There are two (smaller) caveats:
1) the patch is implemented for a 32-bit hash (so that a 64-bit multiply can be used); this effectively limits the number of clusters that can be used to 2^32, or 128 GB of transposition table. That's a change in the maximum allowed TT size, which could bother those regularly using 256 GB or more.
2) Already in master, an excluded move is hashed into the position key in a rather simple way, essentially only affecting the lower 16 bits of the key. This is OK in master, since bits 0-15 end up in the index, but not in the new scheme, which picks the higher bits. This is 'fixed' by shifting the excluded move a few bits up. Eventually a better hashing scheme seems wise.
Despite these two caveats, I think this is a nice improvement in usability.
Bench: 5346341
* A better contempt implementation for Stockfish
The round 2 of TCEC season 10 demonstrated the benefit of having a nice contempt implementation: it gives the strongest programs in the tournament the ability to slow down the game when they feel the position is slightly worse, preferring to stay in a complicated (even if slightly risky) middle game rather than simplifying by force into a drawn endgame.
The current contempt implementation of Stockfish is inadequate, and this patch is an attempt to provide a better one.
Passed STC non-regression test against master:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 83360 W: 15089 L: 15075 D: 53196
http://tests.stockfishchess.org/tests/view/5a1bf2de0ebc590ccbb8b370
This contempt implementation is showing promising results in certain situations. For instance, it obtained a nice +30 Elo gain when playing with contempt=40 against Stockfish 7, compared to current master:
• master against SF 7 (20000 games at LTC): +121.2 Elo
• this patch with contempt=40 (20000 games at LTC): +154.11 Elo
This was the result of real cooperative work from the Stockfish team, with key ideas coming from Stefan Geschwentner (locutus2) and Chris Cain (ceebo) while most of the community helped with feedback and computer time.
In this commit the bench is unchanged by default, but you can test at home with the new contempt in the UCI options. The style of play will change a lot when using a contempt different from zero (I repeat: not done in this version by default, however)!
The Stockfish team is still deliberating over the best default contempt value in self-play and the best contempt modeling strategy, to help users choose a contempt value when playing against much weaker programs. This information will be given in future commits when available :-)
Bench: 5051254
* Remove the prefetch
No functional change.
Four very minor edits. Note that tte->save() uses posKey and
not pos.key() in other places.
Originally I also added a futility_move_counts() function to
make things more consistent with the futility_margin() and
reduction() functions. But then razor_margin[] should probably
also be turned into a function, etc. Maybe a good idea, maybe not.
So I did not include it.
Non functional change.
Stockfish currently relies on the "filter_root_moves" function also
having the side effect of clamping Cardinality against MaxCardinality
(the actual piece count in the tablebases). So if we skip this function,
we will end up probing in the search even without tablebases installed.
We cannot bail out of this function before this check is done, so move
the MultiPV hack a few lines below.
If the PV leads to a draw (3-fold repetition / 50-move rule) position
and we're ahead of time, think a little longer, possibly
finding a better way.
As this is most likely effective at higher draw rates,
we tried a speculative LTC after a yellow STC:
STC:
http://tests.stockfishchess.org/tests/view/59eb173a0ebc590ccbb8975d
LLR: -2.95 (-2.94,2.94) [0.00,5.00]
Total: 56095 W: 10013 L: 9902 D: 36180
elo = 0.688 +- 1.711 LOS: 78.425%
LTC:
http://tests.stockfishchess.org/tests/view/59eba1670ebc590ccbb897b4
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 59579 W: 7577 L: 7273 D: 44729
elo = 1.773 +- 1.391 LOS: 99.381%
bench: 5234652
A band-aid patch to work around current TB code
limitations with MultiPV.
Hopefully this will be removed after committing the
big update of the TB implementation, now under discussion.
No functional change.
If the search is quit before skill.pick_best is called,
skill.best_move might be MOVE_NONE.
Ensure skill.best is always assigned in any case.
Also retire the tricky best_move() and make the underlying
semantics clear and explicit.
No functional change.
The first change (ss->statScore >= 0) does nothing.
The second change ((ss-1)->statScore >= 0) has a massive effect.
(ss-1)->statScore is not set until (ss-1) begins to apply LMR to moves.
So we now increase the reduction for bad quiets when our opponent is
running through the first captures and the hash move.
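In code, the tweak lives in the LMR reduction adjustment and roughly looks like this (surrounding constants omitted):

  // Decrease the reduction when our statScore is good and the opponent's is bad,
  // increase it in the opposite case (this is the '>= 0' comparison discussed above).
  if (ss->statScore >= 0 && (ss-1)->statScore < 0)
      r -= ONE_PLY;
  else if ((ss-1)->statScore >= 0 && ss->statScore < 0)
      r += ONE_PLY;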
STC
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 57762 W: 10533 L: 10181 D: 37048
LTC
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 19973 W: 2662 L: 2480 D: 14831
Bench: 5037819
This patch lets ss->ply be equal to 0 at the root of the search.
Currently, the root has ss->ply == 1, which is less intuitive:
- Setting the rootNode bool has to check (ss-1)->ply == 0.
- All mate values are off by one: the code seems to assume that mated-in-0
is -VALUE_MATE, mate-in-1-ply is VALUE_MATE-1, mated-in-2-ply is -VALUE_MATE+2, etc.
But the mate_in() and mated_in() functions are called with ss->ply, which is 1
at the root.
- The is_draw() function currently needs to explain why it has "ply - 1 > i" instead
of simply "ply > i".
- The ss->ply >= MAX_PLY tests in search() and qsearch() already assume that
ss->ply == 0 at the root. If we start at ss->ply == 1, it would make more sense to
go up to and including ss->ply == MAX_PLY, so stop at ss->ply > MAX_PLY. See also
the asserts testing for 0 <= ss->ply && ss->ply < MAX_PLY.
The reason for ss->ply == 1 at the root is the line "ss->ply = (ss-1)->ply + 1" at
the start for search() and qsearch(). By replacing this with "(ss+1)->ply = ss->ply + 1"
we keep ss->ply == 0 at the root. Note that search() already clears killers in (ss+2),
so there is no danger in accessing ss+1.
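For reference, the relevant pieces look roughly like this (the mate helpers are the standard ones from types.h):

  // Mate value helpers: with ss->ply == 0 at the root, mate scores line up with
  // the intuitive ply count.
  inline Value mate_in(int ply)  { return  VALUE_MATE - ply; }
  inline Value mated_in(int ply) { return -VALUE_MATE + ply; }

  // At the top of search()/qsearch(): set the children's ply instead of our own,
  // so the root keeps ss->ply == 0.
  (ss+1)->ply = ss->ply + 1;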
I have NOT changed pv[MAX_PLY + 1] to pv[MAX_PLY + 2] in search() and qsearch().
It seems to me that MAX_PLY + 1 is exactly right:
- MAX_PLY entries for ss->ply running from 0 to MAX_PLY-1, and 1 entry for the
final MOVE_NONE.
I have verified that mate scores are reported correctly. (They were already reported
correctly due to the extra ply being rounded down when converting to moves.)
The value of seldepth output to the user should probably not change, so I add 1 to it.
(Humans count from 1, computers from 0.)
A small optimisation I did not include: instead of setting ss->ply in every invocation
of search() and qsearch(), it could be set once for all plies at the start of
Thread::search(). This saves a couple of instructions per node.
No functional change (unless the search searches a branch MAX_PLY deep), so bench
does not change.
After increasing the number of threads, the histories were not cleared,
resulting in uninitialized memory usage.
This patch fixes this by clearing the thread's histories in the Thread
constructor, as is the idiomatic way.
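A minimal sketch of the fix (clear() stands for whatever helper zeroes the history and counter-move tables):

  Thread::Thread(size_t n) : idx(n), stdThread(&Thread::idle_loop, this) {

    wait_for_search_finished();
    clear();  // assumed helper: fill histories and counter-move tables with zeroes
  }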
This fixes issue 1227
No functional change.
If any thread finds a 'mate in x', stop the search. Previously only
mainThread would do so. This requires the bestThread selection to be
adjusted to always prefer mate scores, even if the search depth is lower.
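A hedged sketch of the adjusted selection (close to, but not necessarily identical to, the committed code):

  // Select the thread with the best score; a mate score is preferred even if it
  // was found at a lower completed depth.
  Thread* bestThread = this;
  for (Thread* th : Threads)
  {
      Depth depthDiff = th->completedDepth - bestThread->completedDepth;
      Value scoreDiff = th->rootMoves[0].score - bestThread->rootMoves[0].score;

      if (   scoreDiff > 0
          && (depthDiff >= 0 || th->rootMoves[0].score >= VALUE_MATE_IN_MAX_PLY))
          bestThread = th;
  }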
I've tried to collect some data for this patch. On 30 cores, mate finding
seems 5-30% faster on average. It is not so easy to get numbers for this,
as the time to find a mate fluctuates significantly with multi-threaded runs,
so it is an average over 100 searches for the same position. Furthermore,
hash size and position make a difference as well.
Bench: 5965302
Avoid constructing, passing as a parameter and binding a useless empty tuple of pointers in the qsearch move picker constructor.
Also reformat the scoring function in movepick.cpp and do some cleaning in evaluate.cpp while there.
No functional change.
Rewrite perft so that it sits naturally inside the new
bench code. In particular, we no longer have special
custom code to run perft; instead, perft is
just a new parameter of the 'go' command.
So the user API has changed: the old style command
$perft 5
becomes
$go perft 4
No functional change.