Don't update the refutation table when the previous
move is MOVE_NULL or MOVE_NONE, and don't try the
refutation if it is already a killer move.
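A minimal sketch of the intended guards; update_countermove() and movesToTry are hypothetical stand-ins for the real code:

    // Skip the refutation (countermove) update when there is no real previous move
    if ((ss-1)->currentMove != MOVE_NULL && (ss-1)->currentMove != MOVE_NONE)
        update_countermove((ss-1)->currentMove, move);   // hypothetical helper

    // Try the countermove as a refutation only if it is not already a killer
    if (   counterMove != ss->killers[0]
        && counterMove != ss->killers[1])
        movesToTry.push_back(counterMove);               // hypothetical container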
Pass both short TC
LLR: 2.96 (-2.94,2.94)
Total: 4310 W: 953 L: 869 D: 2488
And long one
LLR: 2.95 (-2.94,2.94)
Total: 6707 W: 1254 L: 1184 D: 4269
bench: 4785954
Very good result both at short TC 15+0.05
LLR: 2.95 (-2.94,2.94)
Total: 2803 W: 596 L: 483 D: 1724
And at long TC 60+0.05
LLR: 2.95 (-2.94,2.94)
Total: 2862 W: 548 L: 431 D: 1883
bench: 4329221
This info is normally printed together with
PV info in uci_pv() but when search is stopped,
for instance when max search time is reached,
uci_pv is not called and we miss these bits.
Suggested by gravy_train
No functional change.
This fixes an assert while testing with debug on.
The assert was due to static null pruning returning the value
eval - futility_margin(depth, (ss-1)->futilityMoveCount)
that was sometimes higher than VALUE_INFINITE, triggering
an assert at the caller site.
Because eval can be equal to ttValue, and in any case is read from
the TT, which can be corrupted in the SMP case, we need to sanity
check it before use.
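A hedged sketch of the kind of sanity check meant here; evaluate(pos) stands in for whatever evaluation call is used at that point, and the exact condition in the real patch may differ:

    // ttValue comes from the transposition table and, under SMP, may be
    // corrupted or out of range; fall back to a fresh evaluation if it does
    // not look like a sane score before using it as the static eval.
    Value eval = ttValue;
    if (eval == VALUE_NONE || abs(eval) >= VALUE_INFINITE)
        eval = evaluate(pos);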
bench: 4176431
The crash is due to an uninitialized ss->futilityMoveCount that,
when it happens to be negative, leads to an out-of-range
access in futility_margin().
The bug is subtle because it shows itself only in the SMP case.
Indeed, in single-threaded mode we only use the
Stack ss[MAX_PLY_PLUS_2];
allocated at the beginning of id_loop(), and by pure
(bad) luck it happens that for all the MAX_PLY_PLUS_2
elements, ss[i].futilityMoveCount >= 0
Note that the patch does not prevent futilityMoveCount
from being overwritten afterwards, for instance by singular search
or null verification, but to keep things readable and
because the effect is almost unmeasurable, we here
prefer a slightly incorrect but simpler patch.
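A hypothetical sketch of the fix described above:

    // Initialize the field on entry to search(), so that a freshly allocated
    // (SMP) stack entry can never feed a negative index into futility_margin()
    ss->futilityMoveCount = 0;

    // Only later, inside the shallow-depth pruning block, is it set to its
    // real value:  ss->futilityMoveCount = moveCount;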
bench: 4311634
Instead of a pointer. This should fix the issue of
being left with a stale pointer when, for instance, calling
IID, but also null search verification, singular search
and razoring, where we call search() with the same ss
pointer. In these cases ss->ei is overwritten in the
search() call and remains stale upon returning.
This patch could have a performance hit because Eval::Info
is big (176 bytes) and during splitting we copy 4 ss entries.
On the plus side, this patch is a clean solution.
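A sketch of the layout change (the member list is illustrative, not the full Stack definition):

    struct Stack {
        Move currentMove;   // other per-ply fields as before
        Eval::Info ei;      // now stored by value: copied together with the
                            // ss entries at split time, so it can never dangle
    };
    // previously:  Eval::Info* ei;  // could be left stale after a nested search()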
Proposed by Gary.
No functional change.
Allow the EvalInfo struct, populated by
evaluate(), to be used in search.
In particular, we allocate Eval::Info on the stack
and pass a pointer to it to evaluate().
Also add to Search::Stack a pointer to Eval::Info;
this allows referencing the eval info of previous/next
nodes.
WARNING: Eval::Info is NOT initialized and is populated
by evaluate() only if the latter is called, which does
not happen in all code paths, so care should be
taken when accessing this struct.
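An illustrative sketch of the wiring; the evaluate() signature shown is an assumption, not necessarily the exact one:

    Eval::Info ei;                   // local, NOT zero-initialized
    ss->ei = &ei;                    // reachable from neighbours as (ss-1)->ei, (ss+1)->ei
    Value eval = evaluate(pos, &ei); // ei is filled only if this call is reached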
No functional change.
The idea is to fail high more easily in the static
null test if in the parent node we are already
very deep in the move list, so the probability
of failing high there is very low.
[edit: I have slightly changed the functionality by moving
ss->futilityMoveCount = moveCount;
to the end of the pruning code. This should not affect
ELO in any way, but makes the code more natural and logical.]
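A hedged sketch of the mechanism; the margin constants are made up, only the use of the parent's futilityMoveCount matches the description:

    // The margin shrinks as the parent's move count grows, so when the parent
    // is deep in its move list this node fails high more easily.
    Value futility_margin_sketch(Depth d, int parentMoveCount) {
        return Value(100 * int(d) - 2 * parentMoveCount);
    }

    // Static null test, using the counter stored by the parent node:
    //   if (eval - futility_margin_sketch(depth, (ss-1)->futilityMoveCount) >= beta)
    //       return eval - futility_margin_sketch(depth, (ss-1)->futilityMoveCount);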
Test with SPRT is good at 15+0.05
LLR: 2.96 (-2.94,2.94)
Total: 50653 W: 10024 L: 9780 D: 30849
And at 60+0.05
LLR: 2.97 (-2.94,2.94)
Total: 40799 W: 7227 L: 6921 D: 26651
bench: 4530093
It seems stronger both at the fast 15+0.05 TC with a fixed-game-number test:
ELO: 2.74 +-2.7 (95%) LOS: 97.6%
Total: 24000 W: 4698 L: 4509 D: 14793
And also at long 60+0.05 TC with SPRT
LLR: 3.05 (-2.94,2.94)
Total: 38986 W: 6845 L: 6547 D: 25594
bench: 5157061
Use an optional argument instead of a template
parameter. Interestingly, not only is it simpler,
but it is also faster, perhaps due to less L1 instruction
cache pressure, because we don't duplicate the heavily
used SEE code path.
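A hedged illustration of the pattern with a toy function; this is not the actual SEE signature:

    // Before: the flag is a template parameter, so the compiler instantiates
    // (and the binary carries) two copies of the hot function body.
    template<bool UseThreshold>
    int see_templated(int gain, int threshold) {
        return UseThreshold && gain < threshold ? 0 : gain;
    }

    // After: the flag becomes an ordinary defaulted argument; a single copy
    // of the code serves every call site, which is friendlier to the I-cache.
    int see_plain(int gain, int threshold = 0) {
        return threshold && gain < threshold ? 0 : gain;
    }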
No functional change.
Increasing the depth limit to 10 plies seems stronger
after 16K games at 15"+0.05 (ELO: +3.56), and also
when repeating the test at 60"+0.05 TC:
ELO: 2.08 +-3.1 (95%) LOS: 90.9%
Total: 16000 W: 2641 L: 2545 D: 10814
Moreover, setting the limit to 12 proved stronger
than the limit set to 10 in a direct SPRT test at 15"+0.05:
ELO: 2.56 +-2.0 (95%) LOS: 99.5%
Total: 46568 W: 9240 L: 8897 D: 28431
So we directly set the limit to 12, the strongest setup.
bench: 4361224
Master's IID formula is depth / 2
The previous patch used depth - 4 * ONE_PLY
This one is the middle way:
(depth/2 + depth - 4*ONE_PLY)/2 -> depth - 2*ONE_PLY - depth/4
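In exact arithmetic the two forms agree: (depth/2 + depth - 4*ONE_PLY)/2 = (3*depth/2 - 4*ONE_PLY)/2 = 3*depth/4 - 2*ONE_PLY = depth - depth/4 - 2*ONE_PLY (with integer Depth arithmetic the results may differ by rounding).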
After 16000 games at 60+0.05 with 1 thread
ELO: 4.08 +-3.1 (95%) LOS: 99.5%
Total: 16000 W: 2742 L: 2554 D: 10704
bench: 4781239
Same idea as 5af8179647, which was in qsearch(),
but applied to search().
After 15500 games at 15+0.05
ELO: 4.48 +-3.4 (95%) LOS: 99.5%
Total: 15500 W: 3061 L: 2861 D: 9578
bench: 4985829
Before pruning a move, it is always important to check that:
bestValue > VALUE_MATED_IN_MAX_PLY
See the example position:
8/2p1p3/P1NpP3/3k4/1P1BN3/2P1P3/2Q5/6K1 w - - 0 1
http://support.stockfishchess.org/discussions/problems/268-wrong-declaring-a-forced-mate-in-3-moves
This problem was present in 2.3.1; it was then fixed by my patch.
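A sketch of where the guard sits; the companion condition is illustrative:

    // Prune quiet moves only while bestValue is not a "mated" score; otherwise
    // a pruned line can be wrongly reported as a forced mate, as in the
    // position above.
    if (   !captureOrPromotion
        && bestValue > VALUE_MATED_IN_MAX_PLY)
    {
        // futility / move-count / SEE pruning of this move may run here
    }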
After 24000 games at 15+0.05
ELO: 2.40 +-4.4 (95%) LOS: 95.7%
Total: 24000 W: 4774 L: 4608 D: 14618
bench: 4465997
This reverts commit a24da071f0
It seems to be a regression when tested against 2.3.1.
With this patch, after 20000 games at 60+0.05, we have
ELO: 13.42 +-4.8 (95%) LOS: 100.0%
Total: 20000 W: 3746 L: 2974 D: 13280
Instead with the patch reverted:
ELO: 16.62 +-4.8 (95%) LOS: 100.0%
Total: 20000 W: 3816 L: 2860 D: 13324
Although we are within error bounds here, we take the conservative
approach of not introducing changes that are not proven stronger.
This doesn't mean that the change is necessarily weaker, simply that we
don't want to take any risk.
No functional change.
Here the rationale seems to be that if easy move detection
fails after one try, then the easy move is not so easy :-)
After 15563 games at 60+0.05
ELO: 3.04 +-5.5 (95%) LOS: 97.0%
Total: 15563 W: 2664 L: 2528 D: 10371
No functional change.
This seems good at short TC controls.
After 10000 games at 20+0.05
ELO: 9.56 +-6.8 (95%) LOS: 100.0%
Total: 10000 W: 1949 L: 1674 D: 6377
Testing at long TC and regression testing are still
ongoing, so this is a somewhat speculative commit and
could be reverted in the future.
Also, when re-tested at long TC, the SEE pruning in PV nodes
seems less effective (perhaps even a regression, but testing is
still ongoing), so it is disabled for now.
bench: 4968764
This reverts commit 0d68b523a3.
After the easy move simplification this machinery is not
needed anymore (because we don't need to know if a
root move is a recapture).
No functional change.
Detect a move as easy only if it is the only one ;-)
or if it is much better than the remaining ones after we
have spent 20% of the search time.
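A hedged sketch of the condition; the helper names and the margin are illustrative:

    // Play the move "instantly" only if it is forced, or if it still clearly
    // dominates the alternatives once 20% of the allotted time has elapsed.
    bool playEasyMove =
           rootMoves.size() == 1
        || (   elapsed_time() > available_time() * 20 / 100
            && rootMoves[0].score > rootMoves[1].score + EasyMoveMargin);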
Tests are ongoing, but it seems this simplification
stands. Anyhow, it is experimental for now and could
be reverted/improved by further work Gary is
testing right now.
No functional change.
After the previous patch, if the split point master is
waiting for a job and "Use Sleeping Threads" is
false (our setting for official releases), then
it will lock/unlock the splitPoint mutex in a very
tight loop, badly affecting performance.
Rewrite the code to lock only when we are about
to finish. Note that the race condition on slavesMask
is fixed anyhow.
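A hedged sketch of the pattern; the helper names are hypothetical:

    // Spin without the mutex while helper threads are still searching; the
    // unlocked read of slavesMask is a benign race costing at most one extra
    // iteration. Take the lock only for the final bookkeeping.
    while (sp->slavesMask)
        help_other_split_points_or_yield();   // hypothetical helper

    sp->mutex.lock();                         // lock only when about to finish
    finalize_split_point(sp);                 // hypothetical helper
    sp->mutex.unlock();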
No functional change.
There is no point searching a move that is forced.
It wastes time while allowing computer opponents to
fill hash with 100% accuracy.
[edit: Condition moved together with "easy move" ones]
Bench identical: 4922272
We detect an easy move as a recapture with a
high margin over the second best move.
Unfortunately, the recapture detection is broken
because we identify as a recapture any move that
follows an opponent's previous capture!
This patch fixes the logic to correctly detect a
real recapture.
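A hedged sketch of the corrected test; the flag and move names are illustrative:

    // A real recapture takes back on the very square where the opponent has
    // just captured, instead of merely following any capture.
    bool isRecapture =   lastMoveWasCapture                   // hypothetical flag
                      && to_sq(ourMove) == to_sq(lastMove);   // same target square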
No functional change.
Note that we read shared data without lock
protection, so the code is theoretically prone to
torn reads. But, first, the splitPoint pointer
never changes, and second, alpha is of integer type,
so it is read in a single DWORD access.
No functional change.
This patch is actually the sum of two contributions that
have been tested independently:
1) Pruning of negative SEE moves in PV
After 10000 games at 20+0.05
ELO: 5.18 +-7 (95%) LOS: 99.2%
Total: 10000 W: 1952 L: 1803 D: 6245
2) Removal of the bestValue > VALUE_MATED_IN_MAX_PLY condition
After 23000 games at 20+0.05
ELO: 1.63 +-4 (95%) LOS: 88.1%
Total: 23000 W: 4232 L: 4124 D: 14644
The whole patch has been re-tested at long TC with positive results:
After 10000 games at 60+0.05
ELO: 4.31 +-7 (95%) LOS: 98.3%
Total: 10000 W: 1765 L: 1641 D: 6594
Rename startPosPly to gamePly and increment/decrement
the variable after each do/undo move. This adds a little
overhead in do_move(), but we will need to have the
game ply during the search for the next patches now
under test.
Currently we don't increment gamePly in do_null_move()
because it is not needed at the moment. This could change
in the future.
As a nice side effect we can now remove a hack in
startpos_ply_counter().
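A simplified sketch of the bookkeeping, showing only the counter updates:

    void Position::do_move(Move m, StateInfo& st) {
        ++gamePly;             // counter now follows every move made...
    }
    void Position::undo_move(Move m) {
        --gamePly;             // ...and is restored on take-back
    }
    // do_null_move() intentionally leaves gamePly untouched for now.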
No functional change.
Rename to insertion_sort so as to avoid confusion
with std::sort, also move it to movepicker.cpp,
and use the slightly slower std::stable_sort in
search.cpp, where it is used in non-performance-
critical paths.
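A sketch of a descending-order insertion sort of the kind meant here (stable, since equal elements are never shifted past each other):

    template<typename T>
    void insertion_sort(T* begin, T* end) {
        for (T* p = begin + 1; p < end; ++p)
        {
            T tmp = *p;
            T* q = p;
            for (; q != begin && *(q - 1) < tmp; --q)
                *q = *(q - 1);   // shift lower-scored elements towards the back
            *q = tmp;            // so the best-scored elements end up first
        }
    }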
No functional change.