Move prefetching code inside do_move() to allow very
early prefetching and to put as many instructions as
possible between the prefetch and the following retrieve().
With this patch retrieve() times are cut by another 25%.
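A minimal sketch of the idea, with made-up names (TTEntry, entries, size
are assumptions, not the real transposition table layout): the prefetch is
issued as soon as the new hash key is known, so the cache line is already
warm when retrieve() later probes the same entry.

    #include <xmmintrin.h>   // _mm_prefetch
    #include <cstdint>
    #include <cstddef>

    typedef uint64_t Key;

    struct TTEntry { uint32_t key32; int16_t value16; int16_t depth16; };

    struct TranspositionTable {
        TTEntry* entries;
        size_t   size;

        // Ask the hardware to start loading the entry for 'key' now, so the
        // later retrieve() finds the data already in cache.
        void prefetch(Key key) const {
            _mm_prefetch((const char*)&entries[key % size], _MM_HINT_T0);
        }
    };

In do_move() the new position key can be computed early, the prefetch issued
right after, and the rest of the board update then overlaps with the memory
latency.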
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Most of the time we are interested only in the sign of SEE,
namely whether a capture is negative or not.
If the capturing piece is smaller than the captured one we
already know SEE cannot be negative, and this information
is enough most of the time. And of course it is much
faster to compute than a full SEE.
Note that when see_sign() is negative the returned value
is exactly the see() value; this is very important,
especially for ordering capturing moves.
With this patch the calls to the costly see() are reduced
by almost 30%.
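A minimal sketch of the shortcut, with illustrative piece values and a
stubbed see(); the real function of course works on a Move and the Position:

    // Illustrative values; the real code uses the engine's piece values.
    enum { PawnValue = 1, KnightValue = 3, BishopValue = 3,
           RookValue = 5, QueenValue = 9 };

    int see() { return 0; }  // stand-in for the full static exchange evaluation

    int see_sign(int capturingValue, int capturedValue) {
        // Capturing an equal or more valuable piece can never lose material,
        // so any non-negative placeholder carries the needed information.
        if (capturedValue >= capturingValue)
            return 1;

        // Otherwise return the exact see() score: when it is negative it is
        // the true value, which matters for ordering bad captures.
        return see();
    }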
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Instead of adding and subtracting the piece-square table
values corresponding to the move's source and destination
squares, do it in one go with the helper function pst_delta<>().
This simplifies the code and also better documents that what
we need is a delta value, not an absolute one.
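A minimal sketch of such a helper, assuming a single flat table indexed by
piece and square (the real pst_delta<>() is a template and uses the engine's
own tables):

    typedef int Value;

    enum { PIECE_NB = 16, SQUARE_NB = 64 };

    Value PSQT[PIECE_NB][SQUARE_NB];  // piece-square table (illustrative layout)

    // Return the change in piece-square score for 'piece' moving from
    // 'from' to 'to' in one call, instead of a separate subtract and add
    // at every call site.
    inline Value pst_delta(int piece, int from, int to) {
        return PSQT[piece][to] - PSQT[piece][from];
    }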
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Rename to move_is_promotion() to be clearer, and also add
a new function move_promotion_piece() to get the
promotion piece type in the few places where it is needed.
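A rough sketch of the pair of accessors under an assumed move encoding; the
bit layout below is purely illustrative, not the engine's actual Move format:

    #include <cstdint>

    typedef uint32_t Move;
    enum PieceType { NO_PIECE_TYPE = 0, KNIGHT = 2, BISHOP = 3, ROOK = 4, QUEEN = 5 };

    // Assumed layout: bits 12-14 hold the promotion piece type, 0 otherwise.
    inline bool move_is_promotion(Move m) {
        return ((m >> 12) & 7) != 0;
    }

    inline PieceType move_promotion_piece(Move m) {
        return PieceType((m >> 12) & 7);
    }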
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Micro-optimization in do_move(): a quick check lets
us skip updating the castle rights in almost 90%
of cases.
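A sketch of the kind of pre-test meant here, with an assumed mask table
(each entry holds the rights lost when that square is touched); in the real
code the update also touches the hash key, which is what makes skipping it
worthwhile:

    enum { SQUARE_NB = 64 };

    int castleRights;                 // rights still available in the position
    int castleRightsMask[SQUARE_NB];  // rights lost when a given square is touched

    inline void update_castle_rights(int from, int to) {
        int touched = castleRightsMask[from] | castleRightsMask[to];

        // Quick pre-test: for most moves 'touched' is zero, or no relevant
        // rights are left, so the update is skipped entirely.
        if (castleRights & touched)
            castleRights &= ~touched;
    }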
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
When the SEE piece values were changed in aaad48464b
of 9/12/2008 we forgot to update the value assigned in
case of a captured king.
In that patch we changed the SEE piece values without
proper testing. It is probably a good idea to run some
tests with the old Glaurung values.
Bug spotted by Joona.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
In Position we store a pointer to a StateInfo record
kept outside of the Position object.
When copying a position we also copy that pointer, so
after the copy we have two Position objects pointing
to the same StateInfo record. This can be dangerous,
so fix it by also copying the StateInfo record inside
the new Position object and letting the new st pointer
point to it. This completely detaches the copied
Position from the original one.
Also rename setStartState() to saveState() and clean up
the API to state more clearly what the function does.
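A minimal sketch of the detach-on-copy idea, with made-up member names (the
real StateInfo and Position carry many more fields):

    struct StateInfo { /* incrementally updated data: keys, rule50, ... */ };

    struct Position {
        StateInfo  startState;  // storage owned by this Position
        StateInfo* st;          // points to the current state record

        Position() : st(&startState) {}

        Position(const Position& other) {
            startState = *other.st;  // deep-copy the possibly shared record
            st = &startState;        // detach: point to our own copy
        }
    };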
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
We don't need different names for a function and a
template; the compiler knows when to use one or the other.
This lets us restore the original count_1s_xx() names instead
of sw_count_1s_xxx, simplifying the code a bit.
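A small sketch of a plain function and a function template sharing one name
(the selector parameter and the GCC-style builtin are just for illustration):
count_1s(b) resolves to the plain function, count_1s<true>(b) to the template.

    #include <cstdint>

    typedef uint64_t Bitboard;

    // Software popcount as a plain function...
    inline int count_1s(Bitboard b) {
        int n = 0;
        for (; b; b &= b - 1)
            n++;
        return n;
    }

    // ...and a template with the same name selecting an implementation at
    // compile time.
    template<bool UseHardware>
    inline int count_1s(Bitboard b) {
        return UseHardware ? __builtin_popcountll(b) : count_1s(b);
    }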
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Unfortunately, due to Chess960 compatibility, we cannot
also extend this to castling, where the destination squares
are not guaranteed to be empty.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Avoid a clear_bit() + set_bit() sequence and update the
bitboards with a single xor instruction instead.
This is faster and simplifies the code.
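The trick in isolation, as a minimal helper (the name is illustrative):
xoring a mask that holds both squares clears the from-bit and sets the
to-bit in one operation, provided the piece really sits on 'from' and 'to'
is empty on that bitboard.

    #include <cstdint>

    typedef uint64_t Bitboard;

    inline void move_bit(Bitboard& b, int from, int to) {
        // One xor instead of clear_bit(b, from) followed by set_bit(b, to).
        b ^= (1ULL << from) | (1ULL << to);
    }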
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Use pieces_of_type() instead of pieces_of_color_and_type()
in a hot loop, cutting SEE execution time by almost 10%.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It also breaks Glaurung 2 book parsing.
We really need to work on book.cpp, but for now just
keep compatibility with Glaurung 2 books only.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is a very hot path function; profiling with the Intel
compiler shows that inlining cuts the overhead in half.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Instead of looping across all legal moves to find a mate,
loop across possible checking moves only.
This reduces by more than 10 times the number of moves to
examine when looking for a possible mate.
Also rename generate_checks() to generate_non_capture_checks()
to better clarify what the function does.
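A pseudocode-level sketch of the loop, with stand-in types and stubbed
helpers (only generate_non_capture_checks() comes from this patch); capturing
checks are assumed to be covered by the normal capture search:

    #include <vector>

    struct Move {};

    struct Position {
        std::vector<Move> generate_non_capture_checks() const { return {}; }  // stub
        bool move_gives_mate(const Move&) const { return false; }             // stub
    };

    bool has_mate_in_one(const Position& pos) {
        // Only moves that give check can possibly mate, so the candidate
        // list is an order of magnitude smaller than all legal moves.
        for (const Move& m : pos.generate_non_capture_checks())
            if (pos.move_gives_mate(m))
                return true;
        return false;
    }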
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
MSVC gives:
warning C4146: unary minus operator applied to unsigned type,
result still unsigned
It is triggered when the compiler finds -b where b is an unsigned
integer. So rewrite the two's complement in a way that avoids the
warning. In theory the new version is slower, but in practice
nothing changes.
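For instance (the surrounding context is invented, only the rewrite itself
is the point), isolating the least significant bit with b & -b triggers the
warning, while spelling out the two's complement does not:

    #include <cstdint>

    inline uint64_t lsb_isolate(uint64_t b) {
        return b & (~b + 1);   // same result as b & -b, but no C4146 on MSVC
    }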
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Some changes to the zobrist keys, to make them identical
to those used by Glaurung 1.
The only purpose is to make it possible for both programs
to use the same opening book.
Merged from Glaurung current development snapshot.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Note that some pawn and material info has been switched
from int8_t to int.
This is a waste of space, but it is not clear whether the code
is now faster or slower (or unchanged); some testing would be
needed.
A few warnings still remain.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It seems that these compilers do not like inline functions
that call a template when the template definition is not in
scope, so move those functions from the header into the *.cpp file.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Quickly filter out some calls to the slider attack
functions when we already know they will fail for sure.
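A hedged sketch of such a filter (table and function names are assumptions):
the empty-board attack mask is a superset of any occupancy-dependent attack
set, so if it already misses the target squares the expensive computation
can be skipped.

    #include <cstdint>

    typedef uint64_t Bitboard;

    Bitboard RookPseudoAttacks[64];  // attacks on an empty board, precomputed

    // Stand-in for the expensive, occupancy-dependent attack lookup.
    Bitboard rook_attacks_bb(int, Bitboard) { return 0; }

    inline bool rook_attacks_any(int s, Bitboard targets, Bitboard occupied) {
        if (!(RookPseudoAttacks[s] & targets))  // cheap pre-filter
            return false;

        return (rook_attacks_bb(s, occupied) & targets) != 0;
    }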
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Rewrite hidden_checkers() to avoid calling the slider
attack functions and use the much faster squares_between()
instead.
Also a nice code simplification.
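The core test, sketched with an assumed precomputed between-squares table:
a slider aligned with the king hides a check (or pins a piece) exactly when
a single piece stands between the two squares.

    #include <cstdint>

    typedef uint64_t Bitboard;

    Bitboard BetweenBB[64][64];  // squares strictly between two aligned squares

    inline int count_1s(Bitboard b) { int n = 0; for (; b; b &= b - 1) n++; return n; }

    // 'sliderSq' is assumed to hold a slider moving along the line towards
    // 'kingSq'; the caller has already checked piece type and alignment.
    inline bool hides_checker(int sliderSq, int kingSq, Bitboard occupied) {
        return count_1s(BetweenBB[sliderSq][kingSq] & occupied) == 1;
    }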
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Instead of copying everything, copy only the fields that
are updated incrementally, not the ones that are
recalculated from scratch anyway.
This reduces the copy overhead by 30%.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It is slower than the previous, uglier but faster code,
so completely restore the old one for now :-(
Just keep the rework of the state backup/restore in do_move().
We will cherry-pick bits of the previous work once we are sure
we have fixed the performance regression.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Another optimization that lets us remove another half
of the find_hidden_checks(them, DcCandidates) calls.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This lets us calculate only pinners when we know that
dc candidates are not possible.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Use a stricter condition to check whether we have captured
an opponent's pinner or hidden checker.
With this patch the occurrences of checkerCaptured == true are
reduced by 50%.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
In do_move(), use the previous pinned bitboard values to compute
the new ones after the move. In particular, we end up with the
same bitboards in most cases, so detect these cases and just
keep the old values.
This should greatly speed up this slow computation in a very hot
path, so that we can use this important info everywhere in the
code at very cheap cost.
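One conservative sufficient condition for reuse, sketched with an assumed
empty-board queen-attack table (special moves like castling and en passant
excluded): if the move touches no line through either king, occupancy on
those lines is unchanged, so the old pinned and dc-candidate bitboards are
still valid.

    #include <cstdint>

    typedef uint64_t Bitboard;

    Bitboard QueenPseudoAttacks[64];  // empty-board queen attacks, precomputed

    inline bool can_reuse_pinned_info(int from, int to, int wKingSq, int bKingSq) {
        Bitboard kingLines = QueenPseudoAttacks[wKingSq] | QueenPseudoAttacks[bKingSq];
        Bitboard fromTo    = (1ULL << from) | (1ULL << to);

        // No bit of the move lies on a line through a king: pins unchanged.
        return (kingLines & fromTo) == 0;
    }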
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
There was one occurrence where a StateInfo variable went
out of scope before the corresponding Position object.
This leads to a crash. The bug was not hit before because it
occurs only when using a UCI interface and not the usual benchmark.
The fix consists in internally copying the contents of the
about-to-go-stale StateInfo.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Remove pinned pieces from the attack set when calculating
the SEE value.
The algorithm is not perfect: there should be no false
positives, but it can happen that we miss removing a
pinned piece. Currently we don't catch 100% of cases,
but this is a tradeoff between speed and accuracy. In any
case we stay on the safe side, removing an attacker only
when we are sure it is pinned.
Only about 0.5% of cases are affected by this patch, not
a lot given the hard work: this was a difficult patch!
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Instead of copying the whole old state into the new one, copy
only the fields that will be updated incrementally, not the
ones that will be recalculated anyway.
This lets us copy 13 bytes instead of 28 for each do_move()
call.
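A sketch of how such a partial copy can be expressed, with an invented field
layout (the real StateInfo and its 13-of-28-byte split look different): the
incrementally updated fields are grouped at the front of the struct and only
that prefix is copied.

    #include <cstring>
    #include <cstddef>
    #include <cstdint>

    struct StateInfo {
        // --- updated incrementally, copied on do_move() ---
        uint64_t pawnKey;
        uint32_t castleRights;
        uint8_t  rule50;
        // --- recomputed from scratch, no need to copy ---
        uint64_t key;
        uint64_t checkersBB;
    };

    inline void partial_copy(StateInfo& newSt, const StateInfo& oldSt) {
        // Copy only up to and including the last incrementally updated field.
        std::memcpy(&newSt, &oldSt, offsetof(StateInfo, rule50) + sizeof(uint8_t));
    }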
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It is probably slightly slower, but the code is surely better
this way. We will optimize for speed later.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>