It is bogusly assigned from moves[i].move instead of from
mlist[i].move or, equivalently, moves[count].move, which seems
clearer to me.
The bug is hidden as long as all the moves are included: in that
default case moves[i].move and mlist[i].move hold the same value.
Also a bit of cleanup while there.
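A minimal sketch of the corrected loop, assuming a filtering copy
from mlist[] into moves[] (the predicate is hypothetical):

    int count = 0;
    for (int i = 0; i < numOfMoves; i++)
        if (is_included(mlist[i].move))           // hypothetical filter
            moves[count++].move = mlist[i].move;  // was read from moves[i].move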
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
After test games against 3 different engines these
are the results:
Stockfish - Toga II 1.4.1SE +130 -132 =132 49.75%
Stockfish - Deep Sjeng 3.0 +145 -110 =150 54.45%
Stockfish - HIARCS 12 MP +94 -149 =150 43.00%
So it seems no regression occurs, although no
improvement either. But anyhow this patch increases Stockfish's
strength against itself, so merge it.
Note that this patch not only adds back razoring at depth
one, but also increases the razor depth limit from 3 to 4,
because the hard-coded depth 4 limit is no longer overwritten
by the UCI parameter that otherwise defaults to 3.
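A sketch of the resulting guard with the limit hard-coded (the
margin name and the qsearch hand-off are assumptions):

    // razor only near the leaves; the depth bound is now fixed at 4 plies
    if (depth < 4 * OnePly
        && approximateEval < beta - RazorMargin)  // hypothetical margin
    {
        // drop into qsearch() and trust a result that stays below beta
    }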
Because the futility margins array has a fixed size we cannot
arbitrarily choose or change the SelectiveDepth parameter,
otherwise we get a crash for values bigger than the array size.
On the other hand, tweaking this parameter requires some
modification of the hard-coded margins, so it makes sense to
hard-code this very bounded parameter as well.
Whoever wants to experiment is of course free to change the sources.
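A sketch of why an unbounded SelectiveDepth would crash (array size
and indexing are illustrative only):

    // one margin per ply of remaining depth
    static const Value FutilityMargins[7] = { /* illustrative values */ };

    // any SelectiveDepth that lets depth / OnePly reach 7 here
    // would index past the end of the array
    Value margin = FutilityMargins[int(depth) / OnePly];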
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Currently futility is allowed when depth < SelectiveDepth,
and SelectiveDepth is 7*OnePly, so the comparison is
always true.
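In other words (the guard's surroundings are assumed):

    // futility pruning is only tried at depths below 7 * OnePly,
    // so with SelectiveDepth == 7 * OnePly this test cannot fail
    if (depth < SelectiveDepth)
        ok_to_prune = true;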
The patch could introduce a functional change only if
we choose to increase SelectiveDepth.
Currently no functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
When testing at 1'+0" time control, results are still
reasonably good. We have run two sessions on two
different PCs.
After 840 games Mod - Orig: +221 -194 =425 +10 ELO (two CPU)
After 935 games Mod - Orig: +246 -222 =467 +9 ELO (single CPU)
So it seems that with a fast CPU and/or longer time controls
the benefits of the patch are a bit reduced. This could be due
to the fact that only 3% of nodes are pruned by razoring at
depth one, and these nodes are very shallow ones that mostly
get pruned anyway at only a slight additional cost, even
without performing any do_move() call.
Another reason is that sometimes (0.3% of cases) a possibly
good move is missed, typically in positions where the moving
side gives check, as in the following one:
3r2k1/pbpp1nbp/1p6/3P3q/6RP/1P4P1/P4Pb1/3Q2K1 w - -
The winning move Rxg7+ is missed.
Bottom line is that the patch seems good for blitz times, and
perhaps also for longer ones. We should test against a third
engine (Toga?) to have a final answer regarding this new setup.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Some time ago it was found by Marco Costalba that it's better
to disable razoring at depth one because, given the very low
evaluation of the node, futility pruning would already do
the job at very low cost while avoiding missing important moves.
Now enable razoring there again, but only when our quickly
evaluated material advantage is more than a rook. The idea is
to try razoring only when it's extremely likely to succeed.
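A sketch of one plausible shape of the test (the names and the exact
comparison are assumptions; since razoring verifies an expected
fail-low, the rook-sized gap is measured against beta):

    if (depth == OnePly
        && approximateEval < beta - RookValueMidgame)
        // razoring is extremely likely to succeed here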
An extremely fast time control test shows a promising result:
Orig - Mod: +1285 =1495 -1348
This needs to be tested with longer time controls though.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Restructure RazorMargins and FutilityMargins arrays so that their
values can be more easily tuned.
Add a RazorApprMargins array which replaces the razorAtDepthOne
concept, because setting RazorApprMargin to a very high value at
ply one is the same as not razoring there at all.
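A sketch of the idea (array length and values are illustrative only):

    // index 0 is ply one: a huge margin there means the razoring
    // condition can never trigger, replacing the old boolean flag
    static const Value RazorApprMargins[4] = {
        Value(0x7FFF), Value(300), Value(300), Value(300)
    };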
Comment out setting razoring and futility margins through UCI to
avoid errors while tuning.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
If a variable will be populated by reading a UCI option
then do not hard-code its default value.
This avoids misleading readers of the sources.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
And upon reentering the same position try it first.
Normally qsearch move ordering is already very good: the first
move is the cut-off move in almost 90% of cases. With this
patch, we get a cut-off on the TT move in 98% of cases.
Another good side effect is that we don't generate captures
and/or checks when we already have a TT move.
Unfortunately we find a TT move in only 1% of cases, so the
real impact of this patch is relatively low.
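A sketch of the qsearch entry (the TT interface names approximate
the sources of the time and should be treated as assumptions):

    const TTEntry* tte = TT.retrieve(pos);  // hypothetical signature
    Move ttMove = (tte ? tte->move() : MOVE_NONE);

    // search ttMove first; captures and checks are generated only
    // if it does not produce a cut-off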
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Restore the old behaviour that after the search is aborted we call
insert_pv before breaking out of the iterative deepening loop.
Remove one useless FIXME and document another one better.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Check all fail highs in the assumed PV with greater care (Fruit/Toga already do this).
Add a flag when aspiration search fails high at ply 1 to prevent
the search from being terminated prematurely.
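A sketch of the flag (its name is an assumption):

    // at the root, remember an unresolved fail-high so that time
    // management will not stop the search while it is pending
    if (value >= beta && ply == 1)
        FailHighPly1 = true;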
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This reverts commit 55dd98f7c717b94a659931cd20e088334b1cf7a6.
Revert fallback-system for root_search
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
However this patch too is immediately reverted, for three reasons:
1) The case it affects is very rare (and then we are likely to lose anyway),
so we can well live without this.
2) Because the case is so rare, it's hard to test this change properly.
3) To perform the fallback search we must reset so many global variables
that this patch is very likely both buggy and extremely bad style.
Consider including this again if we clean up global variables one day...
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
However, just after finishing this patch I realized that this
is not the way to go. So it will be immediately reverted.
(Just saving this here in git in case I change my mind later :) )
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Implement a system where the aspiration search window is calculated
using values from previous iterations.
And then some crazy experimental stuff: if the search fails low at
the root, don't widen the window, but continue and hope we will find
a better move within the given window. If the search fails high at
the root, cut immediately, add some more time and start a new iteration.
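A sketch of the window seeding (the formula is an assumption, not
necessarily the patch's exact scheme):

    // centre the window on the last iteration's value and size it by
    // how much the value moved between the two previous iterations
    Value prev  = ValueByIteration[i - 1];
    Value delta = Max(Value(abs(prev - ValueByIteration[i - 2])), MinDelta);
    Value alpha = prev - delta;
    Value beta  = prev + delta;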
Note: this patch is not a complete implementation, but a first test
of this idea. There are many FIXMEs left around, most importantly
how to deal with the situation when we don't have any move!
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
In the null move search do not jump directly into
qsearch() from depth(4*OnePly), but only
from depth(3*OnePly).
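One way to realize this (the reduction scheme shown is an assumption,
and the search call only approximates the real signature):

    // with R = 3 * OnePly the null search reaches qsearch() only from
    // depth 3 * OnePly; from 4 * OnePly one ply of real search remains
    Depth R = 3 * OnePly;
    Value nullValue = -search(pos, ss, -(beta - 1), depth - R, ply + 1, threadID);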
After 999 games at 1+0: +248 -224 =527, +8 ELO
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It doesn't seem to work against Toga. After more than 400 games
we are at -13 ELO, while without it we are at +5 ELO after 1000
games.
So revert for now to keep code simple.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Strangely enough, it seems that the optimization doesn't work.
After 760 games at 1+0: +155 -184 =421, -13 ELO
Probably the overhead for setting the flag, although small,
is not compensated by the saved evaluation call.
This could be due to the fact that after a TT value is stored,
if and when we hit the position again the stored TT value is
actually used as a cut-off, so we don't need to go on
with another search and evaluation is avoided in any case.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
When the stored TT value equals the static value, set a
proper flag so as not to call evaluation() if we hit the
same position again, but use the stored TT value instead.
This is another trick to avoid calling the costly evaluation()
in qsearch.
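A sketch of the intended flow in qsearch (the flag and accessor
names are assumptions):

    if (tte && tte->static_value_is_cached())       // hypothetical flag
        staticValue = value_from_tt(tte->value(), ply);
    else
        staticValue = evaluate(pos, ei, threadID);  // the costly call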
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
After testing it seems the patch is bad:
After 999 games at 1+0: +242 -271 =486, -10 ELO
So restore saving to TT at the end, but use Joona's new
idea of storing as VALUE_TYPE_UPPER/VALUE_TYPE_LOWER instead
of VALUE_TYPE_EXACT.
Some optimization is still possible, but it is better to test
new ideas one by one.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Instead of just dropping the evaluation score after the stand pat
logic, save it in the TT so it can be reused if the same position
occurs again.
Note that we NEVER use the cached value except to avoid an
evaluation call; in particular, we never return to the caller
after a successful TT hit.
To accommodate this a new value type VALUE_TYPE_EVAL has been
introduced, for which ok_to_use_TT() always returns false.
With this patch we cut about 15% of total evaluation calls.
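A sketch of the two ends of the trick (the TT interface shown is an
approximation of the sources):

    // stand pat in qsearch: cache the evaluation under a type that
    // ok_to_use_TT() always rejects, so it can never cut off a search
    TT.store(pos, staticValue, VALUE_TYPE_EVAL, Depth(0), MOVE_NONE);

    // on a later visit: reuse the cached score instead of re-evaluating
    if (tte && tte->type() == VALUE_TYPE_EVAL)
        staticValue = value_from_tt(tte->value(), ply);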
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It is now disabled by default due to bad results
against a pool of engines... more testing is needed though.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is the only "correct" exact value we can store.
Otherwise there could be spurious fail-high/fail-low nodes.
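For reference, a sketch of the classic bound selection this refers to:

    ValueType vt =   value >= beta  ? VALUE_TYPE_LOWER   // fail high
                   : value <= alpha ? VALUE_TYPE_UPPER   // fail low
                   : VALUE_TYPE_EXACT;                   // true exact score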
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I have just made a new rule that no modification
that increases pruning is allowed if after 1000 games
ELO is not increased by at least 10 points (it was +5 in this case).
Yes, I like this kind of nonsense rules :-)
The previous setup didn't change anything:
After 996 games at 1+0: +267 -261 =468, +2 ELO
Now with this new setup we have:
After 999 games at 1+0: +277 -245 =477, +11 ELO
Seems reasonable...
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Reduce by two plies near the leaves, and only when we still
have enough depth to go, so as to limit horizon effects.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Note that some pawn and material info has been switched
from int8_t to int.
This is a waste of space, but it is not clear whether the code
is now faster or slower (or unchanged); some testing is needed.
A few warnings are still alive.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It does not seem to clearly improve things, and in any
case it is disabled by default, so retire it for now.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Prune more moves after a null search because of
a lower beta limit than in the main search.
In test positions this reduces the searched nodes by 30% !!!!
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It is slower than the previous uglier but faster code.
So completely restore the old one for now :-(
Just leave in the rework of the status backup/restore in do_move().
We will cherry-pick bits of the previous work once we are sure
we have fixed the performance regression.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
We don't backup anymore, but use the renamed StateInfo
argument passed to do_move() to store the new position
state when doing a move.
Undoing is now just reverting to the previous StateInfo,
which we know because we store a pointer to it.
Note that the backing store is now up to the caller; Position
is stateless in that regard, and state is accessed through a pointer.
This patch will let us remove all the backup/restore copying;
just a pointer switch is now necessary.
Note that do_null_move() still uses StateInfo as backup.
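A simplified sketch of the end state this enables:

    void Position::do_move(Move m, StateInfo& newSt) {
        newSt.previous = st;   // remember where the old state lives
        st = &newSt;           // the caller's object now holds our state
        // ... incrementally fill *st while making the move ...
    }

    void Position::undo_move(Move m) {
        // ... take back the move on the board ...
        st = st->previous;     // a pointer switch instead of copying back
    }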
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
We store it now in the same UndoInfo struct as a 'previous'
field, so when doing a move we also know where to get
the previous info when undoing the move.
This is needed for future patches and is a nice cleanup anyway.
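A sketch of the layout (all fields other than 'previous' are
illustrative):

    struct UndoInfo {
        // castle rights, en passant square, rule-50 counter,
        // captured piece, hash key: whatever undo must restore
        UndoInfo* previous;   // set in do_move(), followed when undoing
    };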
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>