However, just after finishing this patch I realized that this
is not the way to go, so it will be immediately reverted.
(Just saving this here in git in case I change my mind later :) )
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Implement a system where the aspiration search window is calculated
using values from previous iterations.
And then some crazy experimental stuff: if the search fails low at the
root, don't widen the window, but continue and hope we will find a
better move within the given window. If the search fails high at the
root, cut off immediately, add some more time and start a new iteration.
Note: this patch is not a complete implementation, but a first test
of this idea. There are many FIXMEs left around, most importantly
how to deal with the situation when we don't have any move!
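A minimal sketch of the idea (all names, numbers and the time handling
below are placeholders, not the actual patch):

    #include <algorithm>
    #include <cstdlib>

    typedef int Value;

    // Trivial stand-in for the real alpha-beta search from the root.
    static Value root_search(int depth, Value alpha, Value beta) {
        (void)alpha; (void)beta;
        return 10 * depth;              // pretend score, grows each iteration
    }

    int main() {
        const Value AspirationDelta = 30;    // hypothetical base half-width
        Value ValueByIteration[64] = {0};    // scores of finished iterations

        for (int iter = 2; iter < 20; ++iter) {
            // Window centred on the last score, widened by how much the
            // score moved between the two previous iterations.
            Value prev  = ValueByIteration[iter - 1];
            Value swing = std::abs(ValueByIteration[iter - 1] - ValueByIteration[iter - 2]);
            Value alpha = prev - AspirationDelta - swing;
            Value beta  = prev + AspirationDelta + swing;

            Value v = root_search(iter, alpha, beta);

            if (v >= beta) {
                // Fail high at the root: cut immediately, allocate some
                // extra time (not shown) and start a new iteration.
                ValueByIteration[iter] = beta;
                continue;
            }
            // Fail low or inside the window: do NOT widen the window,
            // just keep the result and hope a root move fits inside it.
            ValueByIteration[iter] = v;
        }
        return 0;
    }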
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
In null move search do not jump directly into
qsearch() from depth(4*OnePly), but only
from depth(3*OnePly).
After 999 games at 1+0: +248 -224 =527 +8 ELO
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It doesn't seem to work against Toga. After more than 400 games
we are at -13 ELO, while without it we are at +5 ELO after 1000
games.
So revert for now to keep the code simple.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Strangely enough, it seems that the optimization doesn't work.
After 760 games at 1+0: +155 -184 =421 -13 ELO
Probably the overhead, although small, of setting the flag
is not compensated by the saved evaluation call.
This could be because once a TT value is stored, if and when
we hit the position again the stored TT value is actually used
as a cut-off, so that we don't need to go on with another search
and the evaluation is avoided in any case.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
When the stored TT value equals the static value, set a
proper flag so as not to call evaluation() if we hit the
same position again, but use the stored TT value instead.
This is another trick to avoid calling the costly evaluation()
in qsearch.
No functional change.
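A rough sketch of the trick (the field and function names here are made
up for illustration, they are not the actual code):

    struct TTEntry {
        int  value;
        bool valueIsStaticEval;   // set at store time when value == static eval
    };

    static int static_value(const TTEntry* tte) {
        if (tte && tte->valueIsStaticEval)
            return tte->value;    // reuse the stored value, skip evaluation()
        return 0;                 // placeholder for the real evaluation(pos)
    }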
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
After testing it seems the patch is bad:
After 999 games at 1+0: +242 -271 =486 -10 ELO
So restore saving to the TT at the end, but using Joona's new
idea of storing as VALUE_TYPE_UPPER/VALUE_TYPE_LOWER instead
of VALUE_TYPE_EXACT.
Some optimization is still possible, but it is better to test
new ideas one by one.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Instead of just dropping the evaluation score after the stand pat
logic, save it in the TT so it can be reused if the same position
occurs again.
Note that we NEVER use the cached value except to avoid an
evaluation call; in particular we never return to the caller
after a successful TT hit.
To accommodate this a new value type VALUE_TYPE_EVAL has been
introduced, for which ok_to_use_TT() always returns false.
With this patch we cut about 15% of the total evaluation calls.
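A heavily simplified sketch of the mechanism (only VALUE_TYPE_EVAL and
ok_to_use_TT() come from the patch, the rest is made up for illustration):

    typedef int Value;
    typedef int Depth;

    enum ValueType { VALUE_TYPE_NONE, VALUE_TYPE_UPPER, VALUE_TYPE_LOWER,
                     VALUE_TYPE_EXACT, VALUE_TYPE_EVAL };

    struct TTEntry { Value value; ValueType type; };

    // VALUE_TYPE_EVAL entries only cache the static evaluation: they must
    // never produce a cut-off, so ok_to_use_TT() rejects them outright.
    static bool ok_to_use_TT(const TTEntry& tte, Depth depth, Value beta) {
        if (tte.type == VALUE_TYPE_EVAL)
            return false;
        (void)depth; (void)beta;
        return true;              // the real depth/bound checks go here
    }

    // Stand-pat fragment of qsearch(), heavily simplified:
    static Value stand_pat(TTEntry* tte, Value alpha, Value beta) {
        Value staticValue = (tte && tte->type == VALUE_TYPE_EVAL)
                          ? tte->value   // reuse the cached evaluation
                          : 0;           // placeholder for evaluation(pos)

        if (staticValue >= beta)
            return staticValue;          // stand pat cut-off

        if (tte && tte->type == VALUE_TYPE_NONE) {
            tte->value = staticValue;    // cache it for the next visit
            tte->type  = VALUE_TYPE_EVAL;
        }
        return alpha > staticValue ? alpha : staticValue; // then search captures...
    }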
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It is now disabled by default due to bad results
against a pool of engines... more testing is needed though.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is the only "correct" exact value we can store.
Otherwise there could be spurious fail-high/fail-low nodes.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I have just made a new rule that no modification
that increases pruning is allowed if after 1000 games
ELO is not increased by at least 10 points (it was +5 in this case).
Yes, I like this kind of nonsense rules :-)
The previous setup didn't change anything:
After 996 games at 1+0: +267 -261 =468 +2 ELO
Now with this new setup we have:
After 999 games at 1+0: +277 -245 =477 +11 ELO
Seems reasonable...
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Reduce by two plies near the leaves, and only when we still
have enough depth to go, so as to limit horizon effects.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Note that some pawn and material info has been switched
to int from int8_t.
This is a waste of space, but it is not clear whether the code
is faster or slower (or unchanged); some testing is needed.
A few warnings are still alive.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It does not seem to clearly improve things and
in any case is disabled by default, so retire it for now.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Prune more moves after a null search because of
a lower beta limit than in the main search.
On test positions this reduces the searched nodes by 30%!!!!
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It is slower than the previous uglier but faster code,
so completely restore the old one for now :-(
Just leave in the rework of the state backup/restore in do_move().
We will cherry-pick bits of the previous work once we are sure
we have fixed the performance regression.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
We don't back up anymore, but use the renamed StateInfo
argument passed to do_move() to store the new position
state when doing a move.
Undoing is now just reverting to the previous StateInfo, which
we know because we store a pointer to it.
Note that the backing store is now up to the caller; Position is
stateless in that regard, state is accessed through a pointer.
This patch will let us remove all the backup/restore copying,
just a pointer switch is now necessary.
Note that do_null_move() still uses StateInfo as a backup.
No functional change.
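A small sketch of the pattern (simplified and with made-up fields, not
the real Position code):

    #include <cstdint>

    typedef int Move;

    // The caller owns the per-move state; Position only keeps a pointer
    // to the current one.
    struct StateInfo {
        std::uint64_t key;       // example fields
        int           rule50;
        StateInfo*    previous;  // state of the position before the move
    };

    class Position {
        StateInfo* st;           // state of the current position
    public:
        explicit Position(StateInfo& rootSt) : st(&rootSt) { st->previous = 0; }

        void do_move(Move m, StateInfo& newSt) {
            (void)m;
            newSt          = *st;    // start from the current state
            newSt.previous = st;     // remember where we came from
            st             = &newSt; // from now on this is the current state
            // ... update key, rule50, etc. according to the move ...
        }

        void undo_move(Move m) {
            (void)m;
            st = st->previous;       // no copy back, just a pointer switch
            // ... put the pieces back on the board ...
        }
    };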
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
We now store it in the same UndoInfo struct, as a 'previous'
field, so when doing a move we also know where to get
the previous info when undoing the move.
This is needed for future patches and is a nice cleanup anyway.
No functional change.
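Roughly the shape of the change (the fields other than 'previous' are
just examples):

    struct UndoInfo {
        // ... castling rights, en-passant square, captured piece, ...
        UndoInfo* previous;   // where to find the previous info when undoing
    };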
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Pass the value as an argument instead of recalculating it.
Although the call is cheap, this is a very hot path, so with
this patch the total time spent in move_is_capture() is almost
halved.
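A generic illustration of the pattern (these are not the actual
signatures): the caller already knows the relevant value, so it hands it
over instead of having the hot-path helper recompute it.

    enum Piece { NO_PIECE, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

    // Before: the helper recomputed the captured piece on every call.
    // After: the caller passes the value it already has.
    inline bool move_is_capture(Piece capturedPiece) {
        return capturedPiece != NO_PIECE;
    }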
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is somewhat taken from Stockfish 1.2 Default;
only the razoring thresholds are updated, not the
razoring depth.
In the end razoring is a bit more aggressive. Results
seem slightly positive.
After 999 games: +239 =536 -224 +5 ELO
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Optimistic razoring settings. They are stronger against
most engines but weaker against some.
The default is instead more solid and uniform against all
the opponents.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Also add the possibility to razor at ply one.
It is disabled by default, but it seems stronger
against Stockfish itself. It is still not clear whether
it is stronger against other engines. For now leave it
disabled.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Prune less at the bottom and in the middle, a bit more
at the top.
After 747 games: +215 =345 -187 +13 ELO
Also introduced a vector of margins; now that they start to be
a lot, it is a more flexible solution.
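Something along these lines (the numbers are placeholders, only the
data structure is the point):

    #include <vector>
    #include <algorithm>
    #include <cstddef>

    typedef int Value;
    typedef int Depth;   // in plies, just for this sketch

    // Smaller margins near the bottom, growing towards the top.
    static const std::vector<Value> PruningMargins = { 0, 80, 160, 250, 350, 450 };

    inline Value pruning_margin(Depth d) {
        std::size_t i = std::min<std::size_t>(d, PruningMargins.size() - 1);
        return PruningMargins[i];
    }

    // Typical use: prune if the static eval plus the depth-dependent
    // margin still cannot reach beta:
    //     if (staticValue + pruning_margin(depth) < beta) ...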
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Because razoring verification after qsearch() cuts more
than 40% of the candidates, do not waste a costly qsearch on
nodes at depth one that will probably be discarded by
futility anyway.
Also tighten the razoring conditions to keep dangerous false
negatives below 0.05%. It is still not clear if this is enough.
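A sketch of the intended shape of the condition (the margin and the
depth bounds are placeholders, not the real values):

    typedef int Value;
    typedef int Depth;
    const Depth OnePly = 2;

    static Value qsearch(Value alpha, Value beta) { (void)beta; return alpha; } // stub

    static bool try_razoring(Depth depth, Value staticValue, Value beta, Value& result) {
        const Value RazorMargin = 300;   // placeholder

        // Depth one is left to futility pruning: the verifying qsearch
        // would discard most of these nodes anyway, so don't pay for it.
        if (depth <= OnePly)
            return false;

        // Tightened condition: only razor when the static eval is well
        // below beta and the remaining depth is small.
        if (depth < 4 * OnePly && staticValue + RazorMargin < beta) {
            Value v = qsearch(beta - 1, beta);   // verifying search
            if (v < beta) {                      // verification agrees: prune
                result = v;
                return true;
            }
        }
        return false;
    }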
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Bug fix merged from Glaurung 2.2 for search_pv().
Also added the same fix to sp_search_pv(), where it
was missing.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Use a margin when comparing with beta, so that positions
that have gained a lot of points after the verifying qsearch
are not discarded just because they are not above beta.
Also remove the second condition on depth <= OnePly; it
was too risky and added only 2% more pruned nodes.
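The comparison change, sketched (the margin value is a placeholder):

    typedef int Value;

    inline bool razor_prune(Value qsearchValue, Value beta) {
        const Value RazorVerifyMargin = 60;            // placeholder
        return qsearchValue < beta - RazorVerifyMargin;
        // old behaviour:  return qsearchValue < beta;
    }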
Signed-off-by: Marco Costalba <mcostalba@gmail.com>