The compiler complains that 'cnt' is initialized but
unused (in the !CheckThreeFold case). Moving the
definition of 'cnt' out of the loop seems to do
the trick.
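A minimal, hypothetical sketch of the structure in question (not the
actual is_draw() code), assuming a boolean template parameter like
CheckThreeFold:

  template<bool CheckThreeFold>
  bool has_repetition(const unsigned long long* keys, int size,
                      unsigned long long key) {

      int cnt = 0;  // defined once, before the loop, as in the patch

      for (int i = 0; i < size; ++i)
          // When CheckThreeFold is false the '++cnt' branch is compiled
          // out, which is what makes the variable look unused.
          if (keys[i] == key && (!CheckThreeFold || ++cnt >= 2))
              return true;

      return false;
  }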
No functional change.
Instead of using a variable, so as to resolve many conditions
already at compile time. Quiesce is also where we
have most of the InCheck nodes, and it is one of the most
performance-critical code paths.
Speed-up of 1.5% with Clang and 1% with gcc.
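A minimal illustrative sketch of the technique (names and shapes are
made up, not the actual qsearch() signature):

  // The in-check condition becomes a template parameter, so every
  // 'if (InCheck)' inside the function is resolved at compile time
  // and the dead branch disappears from the generated code.
  template<bool InCheck>
  int visit(int score) {
      if (InCheck)
          return score + 1;   // e.g. the evasions-only path
      return score - 1;       // e.g. the stand-pat / captures path
  }

  int dispatch(bool inCheck, int score) {
      // The runtime check happens only once, at the call site.
      return inCheck ? visit<true>(score) : visit<false>(score);
  }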
Suggested by Hongzhi Cheng.
No functional change.
When checking if a move defends the threatened piece,
we already correctly remove the moved piece from the
occupancy bitboard. This patch also removes the
threatening piece from the occupancy, so as to consider
moves that defend the threatened piece by x-raying
through the threatening piece.
As an example, in this position:
r3k2r/p1ppqp2/Bn4p1/3p1n2/4P1N1/5Q1P/PPP2P1P/R3K2R w KQkq - 1 10
the black threat move is dxe4. With this patch we include
(and so don't prune) white's Bb7, which would be pruned otherwise.
The number of affected positions is very low, around 1% of cases,
so we don't expect Elo changes; nevertheless this is the logical
and natural thing to do.
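A rough sketch of the idea (helper names and operators are approximate,
not the exact code):

  // When checking whether 'move' defends the piece attacked by the
  // threat move, clear both the defender's from-square and the
  // threatening piece's from-square from the occupancy, so that
  // defenses which x-ray through the threatening piece (like Bb7
  // through the d5 pawn above) are detected too.
  Square mfrom = from_sq(move),   mto = to_sq(move);
  Square tfrom = from_sq(threat), tto = to_sq(threat);

  Bitboard occ = pos.pieces() ^ mfrom ^ tfrom;  // previously only ^ mfrom

  if (attacks_bb(pos.piece_on(mfrom), mto, occ) & tto)
      return true;  // the move defends the threatened piece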
Patch suggested by Hongzhicheng.
new bench: 5323798
ReducedStateInfo is a redundant struct that is also
error-prone: it must be updated every time StateInfo
is changed. It is a trick used to partially copy a
StateInfo object in do_move().
This patch takes advantage of the offsetof() macro
to directly calculate the number of quad words to copy.
Note that we still use memcpy to do the actual job of
copying the data (48 bytes).
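A self-contained sketch of the mechanism (the struct below is a
simplified stand-in, not the real StateInfo):

  #include <cstddef>   // offsetof
  #include <cstdint>
  #include <cstring>   // std::memcpy

  // Everything up to and including 'key' has to be copied on
  // do_move(); the members after it are recomputed or set explicitly.
  struct State {
      uint64_t pawnKey, materialKey;
      int      castleRights, rule50, pliesFromNull;
      uint64_t key;         // last member that needs to be copied

      uint64_t checkersBB;  // recomputed after the move, not copied
      State*   previous;    // set explicitly, not copied
  };

  // Number of quad words to copy, derived from the layout itself, so
  // no separate ReducedStateInfo has to be kept in sync with StateInfo.
  const size_t CopySize64 = offsetof(State, key) / sizeof(uint64_t) + 1;

  void partial_copy(State& dst, const State& src) {
      std::memcpy(&dst, &src, CopySize64 * sizeof(uint64_t));
  }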
Idea by Richard Vida.
No functional and no performance change.
Only in non-performance-critical code like pretty_pv();
otherwise continue to use the good old C-style arrays,
as in extract/insert PV, where I have done some code
refactoring anyhow.
No functional change.
A silly typo (introduced in e304db9d1e) completely
messed up move notation in case of promotions, causing
an "Illegal move" warning in cutechess-cli.
Reported by Jörg Oster.
No functional change.
In multi-threaded runs with debug on we experience some
asserts due to the fact that TT access is intrinsically
racy and its contents cannot always be trusted, so they
must be validated before being used; this is what the
patch does.
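A sketch of the kind of validation involved (illustrative, function
names are approximate):

  // Because TT probing is lock-free, a concurrently written entry can
  // hold a move that does not belong to the current position, so it
  // must not be trusted blindly in asserts or in move ordering.
  Move ttMove = tte ? tte->move() : MOVE_NONE;

  if (ttMove != MOVE_NONE && !pos.pseudo_legal(ttMove))
      ttMove = MOVE_NONE;  // stale or corrupted entry: drop the move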
No functional change.
And restore the old behaviour of not returning from a RootNode
without updating RootMoves[].
Also renamed the is_draw() template parameters to reflect
'positive' logic (Check instead of Skip), which is easier
to follow.
New bench: 5312693
When the 'stop' signal is raised we return bestValue,
which could still be set to -VALUE_INFINITE, and
this triggers an assert. Fix it by returning
a value we know for sure is not +-VALUE_INFINITE.
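Something along these lines (illustrative; the exact value does not
matter, only that it is finite):

  // Once 'stop' is set the returned score will be discarded anyway,
  // so return any value strictly inside (-VALUE_INFINITE, VALUE_INFINITE)
  // instead of bestValue, which may still be -VALUE_INFINITE here.
  if (Signals.stop)
      return VALUE_ZERO;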
Reported by 平岡拓也 Hiraoka.
No functional change.
Note that the actual pickup is done in the class destructor,
so as to be sure it is always triggered, even
in case of a sudden exit due to a 'stop' signal.
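In general terms (the note does not say what is picked up, so the
names below are made up):

  // Work placed in a destructor runs on every path out of the
  // enclosing scope, including an early return on a 'stop' signal.
  struct PickupGuard {
      ~PickupGuard() { pickup(); }            // always triggered on scope exit
      void pickup() { /* collect results */ }
  };

  void worker(bool stopped) {
      PickupGuard guard;
      if (stopped)
          return;      // pickup() still runs, via the destructor
      // ... normal processing ...
  }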
No functional change.
Function move_to_san() requires the Position to be
passed by reference because a do/undo move pair is done
inside the function to detect a possible mate and to
add the corresponding '#' suffix to the SAN string.
Instead of passing a copy of the current position, pass
the original position object directly after const
casting it. This has the advantage of avoiding a costly
Position copy; on the down side, a bench test could
report different searched-node counts if print(move) is
used, due to the additional do_move() calls.
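A toy illustration of the trade-off (stand-in types, not the real
signatures):

  #include <string>

  struct Position {               // stand-in for the real class
      void do_move(int m)   {}
      void undo_move(int m) {}
  };

  // Needs a non-const Position because of the do/undo pair, but the
  // position is restored exactly before returning.
  std::string move_to_san(Position& pos, int m) {
      pos.do_move(m);             // temporarily modify to detect mate
      bool mate = false;          // ... mate detection would go here ...
      pos.undo_move(m);
      return mate ? "#" : "";
  }

  std::string print(const Position& pos, int m) {
      // Const-cast instead of copying: cheaper, but the extra do_move()
      // calls can change the node counts reported by bench.
      return move_to_san(const_cast<Position&>(pos), m);
  }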
No functional change.
The existing Makefile is buggy for PowerPC: it has no
SSE, yet SSE gets enabled for it when prefetch is enabled,
because it isn't ARMv7.
Patch from Matthew Brades.
No functional change.
We can have ttValue == VALUE_NONE when we use a TT
slot just to save a position's static evaluation, but
in this case we also save DEPTH_NONE so as to avoid
using the ttValue in search. This happens to work,
but only thanks to a number of lucky and tricky cases that
we now document through a bunch of asserts and a
little change to value_from_tt() that has no real
effect apart from clarifying the code.
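For reference, the function now reads roughly like this (reconstructed
from memory, thresholds approximate):

  // A value of VALUE_NONE (a slot holding only a static evaluation) is
  // passed through explicitly instead of relying on it falling outside
  // the mate ranges; mate scores are still adjusted by the current ply
  // when read back from the TT.
  static Value value_from_tt(Value v, int ply) {

    return  v == VALUE_NONE             ? VALUE_NONE
          : v >= VALUE_MATE_IN_MAX_PLY  ? v - ply
          : v <= VALUE_MATED_IN_MAX_PLY ? v + ply : v;
  }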
No functional change.
On some platforms 128 MB of RAM for TT is too much,
so run 'bench' with the default 32 MB size.
No functional change, although of course 'bench' now
reports a different number: 5545018
Use popcount() instead at the only calling place.
It is used only at initialization, so there is no
speed regression, and anyhow even initialization
itself is not slowed down: magic bitboard setup
stays at around 175 msec on my slow 32-bit Core Duo.
No functional change.
Simplify the code and, in doing so, introduce a couple
of (very small) functional changes:
- Always compare to depth, even in the "mate value" condition
- TT cut-off in qsearch also in case of a PvNode, as in search (see the sketch below)
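An approximate sketch of the resulting condition (not the exact lines):

  // The depth requirement now applies unconditionally, and PV nodes
  // take the cut-off too, but only on an exact bound, mirroring what
  // search() already does.
  if (   tte
      && tte->depth() >= ttDepth
      && ttValue != VALUE_NONE   // only entries carrying a real score
      && (           PvNode ? tte->type() == BOUND_EXACT
          : ttValue >= beta ? (tte->type() & BOUND_LOWER)
                            : (tte->type() & BOUND_UPPER)))
      return ttValue;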
Verified against regression with 2500 games at 30"+0.05
on 2 threads: 451 - 444 - 1602
Functional change: new bench is 5544977
As Joona says: "The problem is that when doing a full
window search (-VALUE_INFINITE, VALUE_INFINITE) and
pruning all the moves, we will return a fail low which is
a mate score, because the only clause touching alpha is
mate distance pruning. So we are returning a mate score
although we are just pruning all the moves. In reality
there probably is no mate in sight."
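For context, mate distance pruning looks roughly like this (approximate),
and it is indeed the only clause that can raise alpha before the move loop:

  // Clamp the window to the best mate-in / worst mated-in scores
  // reachable from the current ply; with a full window nothing else
  // has touched alpha yet.
  alpha = std::max(mated_in(ss->ply), alpha);
  beta  = std::min(mate_in(ss->ply + 1), beta);
  if (alpha >= beta)
      return alpha;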
Bug spotted and fixed by Joona.
Implement lsb/msb using ARMv7 assembly instructions.
msb is the easiest one, using a gcc intrinsic that generates
code based on ARM's clz instruction. lsb also uses the
clz instruction, but with the help of ARM's 'rbit' (bit
reversing) instruction. This leads to a >2% speed gain.
I also renamed 'arm-32' to the more meaningful 'armv7' in the Makefile.
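The 32-bit shape of the trick (illustrative; the committed code also
has to handle 64-bit bitboards):

  #include <cstdint>

  // msb: clz counts leading zeros, so the index of the highest set bit
  // is 31 minus that count (b must be non-zero).
  inline int msb32(uint32_t b) {
      return 31 - __builtin_clz(b);
  }

  // lsb: rbit reverses the bit order, turning the lowest set bit into
  // the highest one, after which clz yields its original index.
  inline int lsb32(uint32_t b) {
      uint32_t r;
      __asm__("rbit %0, %1" : "=r"(r) : "r"(b));
      return __builtin_clz(r);
  }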
No functional change.
Port to qsearch() the same changes we recently
added to search().
Overall this search refactoring series shows
an almost 2% speed-up with the gcc build.
No functional change.
When using the Makefile (as for the mingw case),
IS_64BIT and USE_BSFQ are already set with
ARCH=x86-64 and do not need to be redefined.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
First disable the Contempt Factor during analysis, then
calculate the modified draw score from the point of
view of the player, i.e. from the point of view of the
RootPosition color.
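Roughly the shape of the resulting code (option handling and scaling
reconstructed from memory, so take the details as approximate):

  // Contempt is zeroed in analysis mode; otherwise the draw score is
  // shifted against the side to move at the root, i.e. the player.
  int cf = Options["UCI_AnalyseMode"] ? 0
         : Options["Contempt Factor"] * PawnValueMg / 100;

  DrawValue[ RootColor] = VALUE_DRAW - Value(cf);  // draws count against us
  DrawValue[~RootColor] = VALUE_DRAW + Value(cf);  // and for the opponent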
Thanks to Ryan Taker for suggesting the fixes.
No functional change.
This kind of arch-specific code is really nasty
to get right, because you need to verify it on
all the platforms.
It should now compile properly also on ARM.
Reported by Jean-Francois.
No functional change.
In particular on ARM processors. Original patch by
Jean-Francois, slightly modified by me to preserve
the meaning of the NO_PREFETCH flag.
Verified with gcc, clang and icc that the prefetch
instruction is correctly generated.
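The portable shape is roughly this (reconstructed, not the exact code;
NO_PREFETCH stays a compile-time flag):

  // On GCC-compatible compilers the builtin emits the target's
  // prefetch instruction when one exists (e.g. PLD on ARM); with
  // NO_PREFETCH defined the function collapses to a no-op.
  void prefetch(char* addr) {

  #ifdef NO_PREFETCH
      (void)addr;
  #else
      __builtin_prefetch(addr);
  #endif
  }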
No functional change.