Use a more traditional approach, along the same lines
as do_move().
It is true that we copy more in do_null_move(), but we
save the work in undo_null_move(). Speed test shows the
new code to be even a bit faster.
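A rough sketch of the idea (not the literal diff; field names
follow the Position/StateInfo layout of the time):

    // do_move() style: copy the whole state up front...
    void Position::do_null_move(StateInfo& newSt) {
        std::memcpy(&newSt, st, sizeof(StateInfo)); // needs <cstring>
        newSt.previous = st;
        st = &newSt;
        sideToMove = ~sideToMove;
        // ...then update hash key, en-passant square, rule-50 counter, etc.
    }

    // ...so that undoing the null move is just a pointer restore
    void Position::undo_null_move() {
        st = st->previous;
        sideToMove = ~sideToMove;
    }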
No functional change.
Fix TimerThread::idle_loop() again to prevent a
theoretical race with the 'exit' flag in ~Thread().
Indeed, in the Thread destructor we raise 'exit' and
then call notify(), which is lock protected, so we
have to check 'exit' again before going to sleep in
idle_loop().
The same change is applied to Thread::idle_loop(),
where we now check 'exit' before going to sleep.
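In rough form the intended pattern is (identifiers follow the
threading code of the time; details may differ):

    void TimerThread::idle_loop() {
        while (!exit)
        {
            mutex.lock();

            // Re-check 'exit' while holding the lock: a notify() fired
            // from ~Thread() between the while-condition and the wait
            // cannot be missed this way.
            if (!exit)
                sleepCondition.wait_for(mutex, msec ? msec : INT_MAX); // <climits>

            mutex.unlock();

            if (msec)
                check_time();
        }
    }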
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Great code simplification: instead, simply do not futility
prune threat refutations. allows_move() is therefore removed.
4000 games at 50,000 nodes/move:
1085-989-1926 [51.2%] LOS=98.3%
4000 games in 10"+0.1"
756-751-2493 [50.1%] LOS=55.1%
EDIT: I have retested Lucas's patch in a slightly different form
(without pruning in PvNode) and the test more or less confirms
that 60 lines of code are totally useless:
After 6195 games at 15"+0.05"
1333 - 1325 - 3537 ELO 0
bench 5140990
Subclass MainThread and TimerThread and declare
idle_loop() virtual. This allows us to cleanly
remove a good bunch of hacks, relying on C++
polymorphism to do the job.
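The resulting layout, roughly (the real classes carry more data):

    struct Thread {
        virtual ~Thread() {}
        virtual void idle_loop();     // generic loop: wait for work, then search
        // shared stuff: mutex, sleep condition, 'exit' flag, ...
    };

    struct MainThread : public Thread {
        virtual void idle_loop();     // waits for UCI 'go', then starts thinking
    };

    struct TimerThread : public Thread {
        virtual void idle_loop();     // wakes up periodically to call check_time()
    };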
No functional change.
Rename it to is_finished and use it only in the main
thread to signal that the search is finished. This
allows us to simplify the complex SMP logic.
Ultra-tricky patch: deep testing is required under
wide conditions, like pondering on and the option
"Use Sleeping Threads" set to false.
No functional change.
Revert patch c581b7ea36
It seems to be a regression after testing from Gary:
ELO: 7.24 +- 99%: 17.03 95%: 12.93
LOS: 97.86%
Wins: 439 Losses: 381 Draws: 1962
And mine:
After 5410 games at 15"+0.05
Wins: 936 Losses: 1141 Draws: 3333 ELO -13
Moreover we know that there is a regression in the range
of patches which include the fromNull patch.
Probably this is not the only regression since 2.3.1, and
perhaps the idea behind fromNull is good, but at the moment,
while deep in regression hunting, it is better to be on the
safe side and revert it entirely.
My guess on why this is a regression is that using the
negated evaluation of the previous ply in case of a null
search fails to take into account the king-safety asymmetry
between the two colors. This is of course just a guess.
bench 5503830
Sometimes it is faster, but not always, and on very long mates
it produces strange scores, probably due to PV truncation
artifacts.
So simply perform a normal search also in case of the UCI
'mate x' command, with the only difference that when a mate
in x is found the search returns immediately.
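A hedged sketch of that early exit (assuming Limits.mate holds
the x sent by the GUI and mate scores are encoded as VALUE_MATE
minus ply):

    // Stop the search as soon as a mate in at most x moves is found
    if (   Limits.mate
        && bestValue >= VALUE_MATE_IN_MAX_PLY
        && VALUE_MATE - bestValue <= 2 * Limits.mate)
        Signals.stop = true;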
No functional change.
Following a user request I added the handling of UCI:
go mate x
Currently we just return from a PV node if x moves have been
done. This is probably not the best approach. I have looked at
the Fruit/Toga sources and there it is even simpler: the engine
falls back on a fixed-depth search.
No functional change.
And return to using the TT as backing store for position
evaluations.
Tests (even on a single thread) show the eval cache was a
regression. With multiple threads the result should be even
worse, because the eval cache is a per-thread struct, while
the TT is shared.
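The scheme, in rough form (TT API names of the time, shown only
as an illustration):

    if (tte && tte->static_value() != VALUE_NONE)
    {
        // Reuse the evaluation already saved in the shared TT
        ss->staticEval = eval = tte->static_value();
        ss->evalMargin = tte->static_value_margin();
    }
    else
    {
        // Evaluate and store it, so other threads can reuse it too
        ss->staticEval = eval = evaluate(pos, ss->evalMargin);
        TT.store(posKey, VALUE_NONE, BOUND_NONE, DEPTH_NONE, MOVE_NONE,
                 ss->staticEval, ss->evalMargin);
    }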
After 4957 games at 15"+0.05 (single thread)
eval cache vs master 969 - 1093 - 2895 -9 ELO
So the previously reported result of +18 ELO was probably due
to an issue in the testing framework (a bug in cutechess-cli)
that has been fixed in the meantime.
bench: 5386711
In case a null search at low depth returns a fail low due
to a threat, then, rather than returning beta-1 (to cause
a re-search at full depth in the parent node), we set a flag
threatExtension = true (false by default) that will cause
moves that prevent the threat to be extended by one ply
in the following search.
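In sketch form (identifiers approximate, this is not the
literal diff):

    // In the node that did the null search: remember the threat,
    // rather than returning beta - 1 to force a full-depth re-search
    // in the parent
    if (nullValue < beta && threatMove != MOVE_NONE)
        threatExtension = true;

    // Later, in the moves loop of the same node: extend by one ply
    // the moves that defend against that threat
    if (threatExtension && refutes(pos, move, threatMove))
        ext = ONE_PLY;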
Idea and patch are by Lucas Braesch.
Lucas also did the tests:
1500 games in 5"+0.05":
SF_threatExtension vs SF_20121222: 366 - 331 - 803 [51.2%] LOS=90.8%
3000 games in 10"+0.1":
SF_threatExtension vs SF_20121222: 610 - 559 - 1831 [50.8%] LOS=93.2%
Tests confirmed by Gary after 10570 games,
ELO: 2.79 +- 99%: 8.72 95%: 6.63
LOS: 94.08%
Wins: 1523 Losses: 1438 Draws: 7607
And finally by me at 15"+0.05, single thread, 3824 games
threatExtension vs master 768 - 692 - 2364 +7 ELO
bench 4918443
A test by Joona proves the new feature is not worth the 70 added lines of code.
Grand totals after 10040 games (crashes: 0) for tt_both
master_9edc7 - 6a93488_6a934: 1756 - 1688 - 6596 ELO +2 (+- 2.7)
Confirmed by test of Gary:
After 8680 games:
ELO: 0.80 +- 99%: 9.62 95%: 7.31
LOS: 65.38%
Wins: 1288 Losses: 1268 Draws: 6130
Thanks a lot to both for testing it !!!
bench 5149248
This is more complex than I'd like, but I was unable
to split it into small chunks.
Here we add 2 slots to TTEntry (valueUpper and depthUpper)
so that sizeof(TTEntry) returns to the original 16 bytes
and we can pack exactly 4 entries in a 64-byte cache line.
Now we save an upper-bound score alongside a lower (exact)
score. The idea is to increase the TT cut-off rate because
there is now a higher probability for a node to use TT info.
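For reference, the packing being aimed at looks roughly like
this (field widths illustrative, using <cstdint> fixed-width
types):

    struct TTEntry {
        uint32_t key32;                   //  4 bytes
        uint16_t move16;                  //  2 bytes
        uint8_t  bound8, generation8;     //  2 bytes
        int16_t  value16, depth16;        //  4 bytes: lower (exact) score
        int16_t  valueUpper, depthUpper;  //  4 bytes: new upper-bound slots
    };                                    // 16 bytes -> 4 entries per 64-byte line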
This patch is highly experimental and probably needs further
steps, as hinted by an unrealistic bench number:
bench: 2022385
In search(), after we evaluate the position, in case there
isn't any TT entry we create one with just the evaluation
score.
This patch removes that code. The reason the patch deserves
its own commit is that it introduces a (very small) functional
change: the total number of TT stores is lower now, and this
slightly alters the TT hits of our benchmark.
bench: 4983262
Rely fully on the eval cache. Note that we still save eval
info to the TT; this is not needed at this moment and will be
removed in future patches. We keep it so as to have a "non
functional change" patch.
No functional change.
In case of a RootNode or an SpNode the move has already
been checked for legality, so we can skip a redundant
check.
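Sketch of the shortcut (the shape of the real condition may
differ slightly):

    // At the root and at split points the move comes from an
    // already fully legal list, so only check legality elsewhere
    if (!RootNode && !SpNode && !pos.pl_move_is_legal(move, ci.pinned))
    {
        moveCount--;
        continue;
    }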
Spotted by Frank Genot.
No functional change.
In qsearch we should update bestValue, as we do in the
futilityValue < beta case, also when pruning moves with
non-positive SEE.
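In sketch form (guards abbreviated; 'prunableMove' is just a
placeholder for the existing conditions, not a real identifier):

    if (prunableMove && pos.see_sign(move) <= 0)   // non-positive SEE
    {
        // Keep the bound information even when skipping the move,
        // mirroring what the futilityValue < beta branch already does
        // (the exact bound raised here follows the patch, roughly futilityBase)
        bestValue = std::max(bestValue, futilityBase);
        continue;
    }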
Spotted by Lucas Braesch
Bench: 5695710
Send the PV lines to the GUI only once, at the end of the
PV search loop, or just in case of long searches.
We also need to sync the sending of "currmove" info, to
avoid sending info on the current move without first
informing the GUI of the PV line we are searching on.
No functional change.
At this point we have already verified (value > alpha),
and this implies, in case of a non-PV node, where the
search window has zero size (beta = alpha + 1), that
value >= beta.
This is not so self-evident, so document the code with
an assert condition.
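In sketch form, the invariant being documented:

    // Non-PV nodes are searched with a null window (beta == alpha + 1),
    // hence value > alpha already implies value >= beta there
    assert(value > alpha);
    assert(PvNode || value >= beta);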
No functional change.
Let the caller decide where to redirect (cout or cerr) the
ASCII representation of the position. Rename the function to
reflect this.
Also rename from_fen() and to_fen() to set() and fen() respectively.
No functional change.
In case a PV node has a static evaluation above alpha but
no available moves, we want to flag the node as BOUND_EXACT,
not as BOUND_UPPER as is currently done.
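Roughly, the bound selection becomes (surrounding TT store call
omitted, names approximate):

    Bound b =  bestValue >= beta              ? BOUND_LOWER
             : PvNode && bestValue > oldAlpha ? BOUND_EXACT
             :                                  BOUND_UPPER;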
The behaviour was recently introduced with patch d471c49700
of 3/10/2012
Spotted by Hongzhi Cheng.
bench: 5558464
Both Lucas's re-test and Jean-Francois's confirm it
is a regression.
Here are Jean-Francois's results after 3600 games:
Score of 96d3b1c92b vs 3b87314: 690 - 729 - 2181 [0.495] 3600
ELO: -3.86 +- 99%: 14.94 95%: 11.35
LOS: 15.03%
Wins: 690 Losses: 729 Draws: 2181 Total: 3600
Bench: 5404066
Don't prune, and possibly extend, check moves of type
DISCO_CHECK (pun intended, Lucas will understand :-) ).
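One way such a move can be recognised, as a rough illustration
only (the real detection is more careful about the piece moving
off the king's line):

    CheckInfo ci(pos);
    bool discoCheck =  givesCheck
                    && (ci.dcCandidates & from_sq(move));

    // A DISCO_CHECK is then exempted from pruning and is a candidate
    // for the check extension even when its SEE is negative.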
Patch from Lucas Braesch, who has also tested it:
Result: 879-661-2137, score=52.96%, LOS=100.00% (time 10"+0.1")
I have just started a verification test.
bench: 6004966
In this case we try a rather drastic approach: we simply
don't futility prune in qsearch when arriving from a null
move.
So we save the evaluation and also avoid messing with eval
margins at all, because the margin is used only in futility
pruning.
Accuracy should not be affected either; actually it improves,
because we don't prune anything anymore there.
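The guard, in rough form (only the shape matters; the patch
edits the existing futility-pruning condition in qsearch):

    bool fromNull = (ss-1)->currentMove == MOVE_NULL;

    if (   !PvNode
        && !InCheck
        && !givesCheck
        && !fromNull            // no futility pruning right after a null move
        && move != ttMove)
    {
        // futility value computation and pruning as before
    }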
bench: 5404066
It performs well at a very short TC of 40/4+0.05 (courtesy of Jean-Francois):
Wins: 2503 Losses: 2146 Draws: 5581 Total: 10230 +12 ELO
But it is poor at a longer TC of 20"+0.05:
Wins: 321 Losses: 373 Draws: 1141 Total: 1808 -10 ELO
The patch was clearly a trade-off between speed and accuracy,
and the most interesting part of it is the test results, which
can be commented on as follows:
- A short TC is very sensitive to any speed increase
- A longer TC is more sensitive to accuracy and less to speed
So a patch that does not change speed is suitable to be tested at
a short TC, while a speed/accuracy compromise patch is IMO better
tested at a longer TC to verify that the loss of accuracy can be
tolerated.
In this case the revert is only temporary. We will come back again
once we are able to preserve the evaluation margin.
bench: 5809010
Reuse the evaluation of the parent with inverted sign and
set the margin to zero (this is a hack!).
This is done only in qsearch, where almost 15% of calls are
from a null move. In normal search the number of nodes where
(ss-1)->currentMove == MOVE_NULL
is almost zero, so there is no need to use this trick.
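In rough form the hack looks like this:

    if ((ss-1)->currentMove == MOVE_NULL)
    {
        // Parent's evaluation with the sign flipped; the margin is
        // unknown here, so it is simply zeroed (this is the hack)
        ss->staticEval = eval = -(ss-1)->staticEval;
        ss->evalMargin = VALUE_ZERO;
    }
    else
        ss->staticEval = eval = evaluate(pos, ss->evalMargin);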
The big advantage of this patch is a speed-up due to skipping
evaluate() calls, which are very costly.
Functionality is of course affected and we will need to properly
test it later. For now we just register a 3-4% speed-up.
Suggested by Hongzhi Cheng.
bench: 5051328