mirror of https://github.com/sockspls/badfish synced 2025-04-30 08:43:09 +00:00
Commit graph

564 commits

Author SHA1 Message Date
atumanian
073eed590e Optimisation of Position::see and Position::see_sign
Stephane's patch removes the only usage of Position::see where the
returned value isn't immediately compared with another value. So I
replaced this function with its optimised and more specific version,
see_ge, which also supersedes Position::see_sign.

bool Position::see_ge(Move m, Value v) const;

This function tests whether the SEE of a move is greater than or
equal to a given value. We use forward iteration over the captures
instead of backward iteration, so we don't need the swapList array.
We also stop as soon as we have enough information to determine the
result, avoiding unnecessary calls to the min_attacker function.
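
For illustration, the call-site transformation looks roughly like
this (hypothetical call site, not quoted from the patch):

  if (pos.see(m) >= VALUE_ZERO)       // before: compute full SEE, then compare
  if (pos.see_sign(m) >= VALUE_ZERO)  // before: the sign-only variant
  if (pos.see_ge(m, VALUE_ZERO))      // after: one predicate with early exit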

Speed tests (Windows 7), 20 runs for each engine:
Test engine: mean 866648, st. dev. 5964
Base engine: mean 846751, st. dev. 22846
Speedup: 1.023

Speed test by Stephane Nicolet

Fishtest STC test:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 26040 W: 4675 L: 4442 D: 16923
http://tests.stockfishchess.org/tests/view/57f648990ebc59038170fa03

No functional change.
2016-10-08 06:38:36 +02:00
Stéphane Nicolet
28240d375c Simplify pinners conditions in SEE()
Use the following transformations:

- to check that A is included in B, testing "(A & ~B) == 0" is faster
than "(A & B) == A"

- to remove the intersection of A and B from A, doing "A &= ~B;" is as
fast as "if (A & B) A &= ~B;" but is simpler.

Overall, the simpler patch version is 0.3% faster than current master.

No functional change.
2016-09-22 08:31:23 +02:00
Guenther Demetz
943ae89be1 Fix pin-aware SEE
Correct pinners calculation and fix bug with pinned
pieces giving check. With this patch 'pinners' only
returns sliders with exactly one defensive piece between
the slider and the attacked square (in other words, pinners
returns exact pinners).
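
The "exactly one piece in between" rule can be sketched as follows
(between_bb(), pop_lsb() and more_than_one() are existing Stockfish
helpers; the fragment itself is illustrative, not the commit's code):

  while (snipers) {
      Square sniperSq = pop_lsb(&snipers);
      Bitboard b = between_bb(s, sniperSq) & occupied;
      if (b && !more_than_one(b))  // exactly one piece in between
          pinners |= sniperSq;     // -> an exact pinner
  }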

This was a co-operation between Marco Costalba,
Stéphane Nicolet and me.

Special thanks to Ronald de Man for reporting the bug with
pinned pieces giving check, discussed here:
https://groups.google.com/forum/?fromgroups=#!topic/fishcooking/S_4E_Xs5HaE

STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 132118 W: 23578 L: 23645 D: 84895

LTC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 36424 W: 4770 L: 4670 D: 26984

bench: 6272231
2016-09-21 08:42:25 +02:00
Marco Costalba
057d710fc2 Fix indentation in struct FromToStats
And other little trivial stuff.

No functional change.
2016-09-17 09:51:20 +02:00
Guenther Demetz
ace8e951d7 Simplify code for pinaware SEE
This is the most compact and neatest version
I was able to produce.

On normal builds I have a small slowdown:
normal builds base vs. simplification (gcc 4.8.1 Win7-64 i7-3770 @ 3.4GHz x86-64-modern)
Results for 20 tests for each version:

        Base      Test      Diff
Mean    1974744   1969333   5411
StDev   11825     10281     5874
p-value: 0.178
speedup: -0.003

On pgo-builds however I measure a nice 1.1% speedup

pgo-builds base vs. simplification
Results for 20 tests for each version:

        Base      Test      Diff
Mean    1974119   1995444   -21325
StDev   8703      5717      4623
p-value: 1
speedup: 0.011

No functional change.
2016-09-12 15:45:00 +02:00
Guenther Demetz
90ce24b11e Pinned aware SEE
Don't allow pinned pieces to attack the exchange square as long
as all pinners (including potential ones) are on their original
squares.
As soon as a pinner moves to the exchange square or gets captured
on it, we fall back to standard SEE behaviour.

This correctly handles the majority of cases with absolute pins.
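
In pseudocode, the rule amounts to something like this (bitboard
names assumed):

  // While every pinner still stands on its original square, pinned
  // pieces may not take part in the exchange; otherwise plain SEE.
  if ((pinners & occupied) == pinners)
      attackers &= ~pinned;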

bench: 6883133
2016-09-12 09:31:09 +02:00
Marco Costalba
e340ce221c Syntactic sugar to loop across pieces
Also add some comments to the new operator~(Piece).
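
The operator relies on the color bit of the piece encoding; a sketch
of the idea (not necessarily the commit's exact code):

  inline Piece operator~(Piece pc) {
      return Piece(pc ^ 8);  // flip color: W_KNIGHT <-> B_KNIGHT, etc.
  }

  for (Piece pc : Pieces)    // the sugar: one range-for over all pieces
      psq[pc] = ...;         // instead of nested color/type loops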

No functional change.
2016-09-04 15:33:17 +02:00
syzygy
ca6c9f85a5 Change from [Color][PieceType] to [Piece]
Speed up of almost 1% in both normal and
pgo builds.
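
The shape of the change, schematically (names illustrative):

  // before: Bitboard table[COLOR_NB][PIECE_TYPE_NB];  b = table[c][pt];
  // after:  Bitboard table[PIECE_NB];                 b = table[make_piece(c, pt)];

The single [Piece] index saves an address computation per access, and
often the Piece value is already at hand.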

No functional change.
2016-09-04 09:22:09 +02:00
lucasart
13b4444d9e Change exclusion key setup
The exclusion key should depend on which move is excluded. This
allows us to remove the dedicated Position::exclusion_key().
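
A sketch of the new setup (the mixing formula shown is illustrative):

  Key posKey = excludedMove ? pos.key() ^ Key(excludedMove << 16)
                            : pos.key();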

STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 59814 W: 11136 L: 11083 D: 37595

LTC:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 31023 W: 4187 L: 4080 D: 22756

bench 7553379
2016-09-02 08:37:01 +02:00
Marco Costalba
1ee2838214 Retire CheckInfo
Move its content directly under StateInfo.

Verified for no speed regression.

No functional change.
2016-08-28 08:08:13 +02:00
Stéphane Nicolet
805afcbf3d Move CheckInfo under StateInfo
This greatly simplifies usage because it hides the
implementation-specific CheckInfo from the search.

This is based on the work done by Marco in pull request #716,
implementing on top of it the ideas in the discussion: caching
the calls to slider_blockers() in the CheckInfo structure,
and simplifying the slider_blockers() function by removing its
first parameter.

Compared to master, bench is identical but the number of calls
to slider_blockers() during bench goes down from 22461515 to 18853422,
hopefully being a little bit faster overall.
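
The cached data ends up looking roughly like this inside StateInfo
(field names assumed for illustration):

  struct StateInfo {
      // ...
      Bitboard blockersForKing[COLOR_NB];  // cached slider_blockers() output
      Bitboard pinnersForKing[COLOR_NB];
      Bitboard checkSquares[PIECE_TYPE_NB];
      // ...
  };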

archlinux, gcc-6
make profile-build ARCH=x86-64-bmi2
50 runs each

bench:
base = 2356320 +/- 981
test = 2403811 +/- 981
diff = 47490 +/- 1828

speedup = 0.0202
P(speedup > 0) = 1.0000

perft 6:
base = 175498484 +/- 429925
test = 183997959 +/- 429925
diff = 8499474 +/- 469401

speedup = 0.0484
P(speedup > 0) = 1.0000

perft 7 (but only 10 runs):
base = 185403228 +/- 468705
test = 188777591 +/- 468705
diff = 3374363 +/- 476687

speedup = 0.0182
P(speedup > 0) = 1.0000

$ ./pyshbench ../Stockfish/master ../Stockfish/test 20
run base     test     diff
...

base = 2501728 +/- 182034
test = 2532997 +/- 182034
diff = 31268 +/- 5116

speedup = 0.0125
P(speedup > 0) = 1.0000

No functional change.
2016-08-27 09:53:26 +02:00
Stéphane Nicolet
abac509ccb Teach check_blockers to check also non-king pieces
This is a prerequisite for the next patch.

No functional change.
2016-05-28 14:52:21 +02:00
DU-jdto
c737062436 Remove some pointless micro-optimizations
Seems to give around 1% speed-up for CPUs with popcnt support.
Seems to give a very minor speed-up for CPUs without popcnt.

No functional change

Resolves #646
2016-04-23 02:04:28 +01:00
Marco Costalba
7eaea3848c StateInfo is usually allocated on the stack by search()
And passed to do_move(); this ensures maximum efficiency and
speed, and at the same time an unlimited number of moves.

The drawback is that, to handle Position init, we need to
reserve a StateInfo inside Position itself and use it at
init time and when copying from another Position.

After lazy SMP we no longer need this gimmick, and we can
get rid of this special case and always pass an external
StateInfo to the Position object.

Also rewritten and simplified Position constructors.
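
The resulting idiom at a call site (a sketch, not a verbatim excerpt):

  StateInfo st;               // lives on search()'s stack frame
  pos.do_move(m, st);
  value = -search(pos, ...);  // recursion depth bounded only by the stack
  pos.undo_move(m);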

Verified it does not regress with a 3 threads SMP test:
ELO: -0.00 +-12.7 (95%) LOS: 50.0%
Total: 1000 W: 173 L: 173 D: 654

No functional change.
2016-04-17 08:29:33 +02:00
Marco Costalba
d30994ecd5 Hide global visibility when not needed
Also move PieceValue definition in psqt.cpp,
where it is initialized.

Fix a warning in popcount16() with Intel compiler

No functional change.
2016-04-09 10:42:04 +02:00
mstembera
8fb45caade Simplify popcnt
Also a speedup(about 1%) on 64-bit w/o hardware popcnt

Retire Max15 and Full template parameters
(Contributed by Marco Costalba)

Now that we have just SW and HW versions, use
template default parameter to get rid of explicit
template parameters.

Retire bitcount.h and move the only defined
function to bitboard.h
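
The retained software fallback has roughly this shape (close to the
actual bitboard.h code, shown here as a sketch):

  uint8_t PopCnt16[1 << 16];  // filled once at startup, e.g. with
                              // std::bitset<16>(i).count()

  int popcount(uint64_t b) {
      union { uint64_t bb; uint16_t u[4]; } v = { b };
      return PopCnt16[v.u[0]] + PopCnt16[v.u[1]]
           + PopCnt16[v.u[2]] + PopCnt16[v.u[3]];
  }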

No functional change

Resolves #620
2016-04-08 18:52:15 +01:00
ppigazzini
d4af15f682 Update AUTHORS and copyright notice
No functional change

Resolves #555
2016-01-02 09:43:51 +00:00
Marco Costalba
9742fb10fd Update Copyright year
No functional change.

Resolves #554
2016-01-01 10:17:36 +00:00
lucasart
fca8dbc029 Avoid friend
operator<<(os, pos) does not need to access any private members of pos.
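
The point in miniature (hypothetical body):

  std::ostream& operator<<(std::ostream& os, const Position& pos) {
      return os << pos.fen();  // public accessors only, no friend needed
  }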

No functional change.

Resolves #492
2015-11-10 21:46:02 +00:00
Stéphane Nicolet
80d7556af7 Some code and comment cleanup
- Remove all references to split points
- Some grammar and spelling fixes

No functional change

Resolves #478
2015-10-29 15:28:59 +00:00
mstembera
3e2591d83c Minor clean up of some function parameters
No functional change

Resolves #416
2015-09-07 20:17:39 +01:00
Marco Costalba
e6310b3469 Rename Position::list
Use Position::square and Position::squares instead.

This allows us to remove king_square(), simplify
endgames and have more naming uniformity.

Moreover, this is a prerequisite step in case
we decide in the future to retire the piece lists
altogether and use pop_lsb() to loop across
pieces and serialize the moves. In that case
we would just need to change the definition of
Position::square to something like:

template<PieceType Pt> inline
Square Position::square(Color c) const {
  return lsb(byColorBB[c]);
}
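
Typical uses of the renamed accessors (sketch):

  Square ksq = pos.square<KING>(c);         // was pos.king_square(c)
  const Square* pl = pos.squares<PAWN>(c);  // piece list, SQ_NONE-terminated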

No functional change.
2015-08-04 09:51:06 +02:00
Marco Costalba
2795aedbc3 Checking for rook color when setting castling
In Chess960 we can have legal positions with an
opponent rook on the A or H file and with castling
available, for instance:

4k3/pppppppp/8/8/8/8/PPPPPPPP/rR2K3 w Q - 0 1

In those cases we pick up the wrong rook when
setting castling.

Fix it by checking the color of the rook.
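
The essence of the fix, sketched (the real code scans from the
corner square):

  while (type_of(piece_on(rsq)) != ROOK) ...        // before: any rook matches
  while (piece_on(rsq) != make_piece(c, ROOK)) ...  // after: must be our rook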

Bug reported by Matthew Lai.

No functional change.
2015-05-29 05:38:40 +02:00
Marco Costalba
ee0371f86e Cleanup work in misc.cpp
Also some code style tidy up of latest patches.

Also renamed checkSq -> checkSquares because it
is a bitboard and not a square.

No functional change.
2015-05-10 09:42:26 +02:00
Marco Costalba
578b21bbee Split PSQT init from Position init
Easier for tuning psq tables:

TUNE(myParameters, PSQT::init);

Also move the PSQT code to a new *.cpp file, and retire the
old and hacky psqtab.h, which had to be included only once
to work correctly; that is not idiomatic for a header
file.

Give wide visibility to the psq tables (previously visible only
in position.cpp); this will ease the use of psq tables outside
Position, for instance in move ordering.

Finally trivial code style fixes of the latest patches.

Original patch of Lucas Braesch.

No functional change.
2015-05-03 20:07:15 +02:00
lucasart
6c040c821a Retire FORCE_INLINE
No speed regression on my machine (i7-3770k, gcc 4.9.1, linux 3.16):

        stat        test     master   diff
        mean   2,482,415  2,474,987  7,906
        stdev      4,603      5,644  2,497

        speedup        0.32%
        P(speedup>0)  100.0%

Fishtest 9+0.03:

ELO: 0.26 +-1.8 (95%) LOS: 61.2%
Total: 60000 W: 12437 L: 12392 D: 35171

No functional change.

Resolves #334
2015-04-15 21:21:45 +01:00
Marco Costalba
a66c73deef Allow Position::init() to be called more than once
Currently Zobrist::castling[] is not properly zeroed; we
rely on the compiler to zero it at startup, but this makes
Position::init() set different values every time it is called!

This is a bit odd, and although it does not impact normal usage,
it can lead to subtle misbehaviour, very difficult to track
down, in case we happen to call init() more than once for some
reason. I found this while developing tuning support and
it took me a while to track it down.

So properly init Zobrist::castling[].
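
Sketch of the fix (illustrative):

  for (int cr = NO_CASTLING; cr <= ANY_CASTLING; ++cr) {
      Zobrist::castling[cr] = 0;  // explicit: don't rely on static zeroing,
                                  // which happens only before the first call
      // ... XOR in the keys of the individual castling rights ...
  }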

No functional change.

Resolves #329
2015-04-10 20:39:15 +01:00
joergoster
c6f987d1ad Fix the comment for Position::is_draw()
We no longer check for insufficient material.

No functional change

Resolves #299
2015-03-18 20:30:50 +00:00
Marco Costalba
686b45e121 Retire one do_move() overload
After Lucas patch it is almost useless.

No functional change.
2015-02-15 12:23:03 +01:00
lucasart
dc13004283 Compute checkers from scratch
This micro-optimization only complicates the code and provides no benefit.
Removing it is even a speedup on my machine (i7-3770k, linux, gcc 4.9.1):

stat        test     master    diff
mean   2,403,118  2,390,904  12,214
stdev     12,043     10,620   3,677

speedup       0.51%
P(speedup>0) 100.0%

No functional change.
2015-02-15 12:11:05 +01:00
Marco Costalba
8f10f6c9cd Shuffle put_piece() and friends signatures
It is more consistent with the others member functions.

No functional change.
2015-02-08 18:17:08 +01:00
Marco Costalba
3184852bdc Small tweaks in do_move and friends
Also remove the useless StateCopySize64 optimization:
the compiler uses the SSE movups instruction anyhow and
does not need this trick (verified with fishbench).

No functional change.
2015-02-08 13:09:29 +01:00
Marco Costalba
99c9cae586 Avoid casting to char* in prefetch()
Funnily enough, gcc's __builtin_prefetch() already
expects a void*, whereas Windows's _mm_prefetch()
requires a char*.

The patch allows us to remove ugly casts from call
sites.
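
A self-contained wrapper in that spirit (close in shape to Stockfish's
prefetch(), written here as a sketch):

  #if defined(_MSC_VER)
  #  include <xmmintrin.h>
  #endif

  void prefetch(void* addr) {
  #if defined(_MSC_VER)
      _mm_prefetch((char*)addr, _MM_HINT_T0);  // MSVC wants char*
  #else
      __builtin_prefetch(addr);                // gcc/clang take void*
  #endif
  }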

No functional change.
2015-02-07 19:13:41 +01:00
Marco Costalba
152a4dc5cd Rewrite pos_is_ok()
No functional change.
2015-02-07 15:02:28 +01:00
Marco Costalba
47a0768102 Micro-optimize SEE
Results for 10 tests for each version (gcc 4.8.3 on mingw):

            Base      Test      Diff
    Mean    1502447   1507917   -5470
    StDev   3119      1364      4153

p-value: 0.906
speedup: 0.004

Results for 10 tests for each version (MSVC 2013):

            Base      Test      Diff
    Mean    1400899   1403713   -2814
    StDev   1273      2804      2700

p-value: 0.851
speedup: 0.002

No functional change.
2015-02-07 12:21:39 +01:00
Marco Costalba
8b0fee9998 Rename dbg_hit_on_c() to dbg_hit_on()
Use an overload instead of a new named function.

I have found this handier and easier when adding
some quick debug code.

No functional change.
2015-02-07 11:15:38 +01:00
Marco Costalba
2ca2c3f35b Fun with lambdas
Use lambda functions instead of has_positive_value()
and toggle_case()
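
Flavor of the change (illustrative fragment; needs <algorithm> and
<cctype>):

  std::transform(f.begin(), f.end(), f.begin(), [](char c) {
      return char(islower(c) ? toupper(c) : tolower(c));  // was toggle_case()
  });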

No functional change.
2015-01-21 11:33:53 +01:00
Marco Costalba
4eb2d8ce09 Assorted headers cleanup
Mostly comments fixing and other small things.

No functional change.
2015-01-11 22:56:35 +01:00
Marco Costalba
42b48b08e8 Update copyright year
No functional change.
2015-01-10 11:46:28 +01:00
Marco Costalba
aea2fde611 Assorted formatting and comment tweaks in position.h
No functional change.
2015-01-07 09:09:41 +01:00
Marco Costalba
91cc82aa25 Let material probing to access per-thread table
It is up to the material (and pawn) table look-up
code to know where the per-thread tables are,
so change the API to reflect this.

Also some comment fixes while there.

No functional change.
2015-01-02 21:31:02 +01:00
Marco Costalba
b8fd1a78dc Improve comments in UCI
And simplify naming while there.

No functional change.

Resolves #159
2014-12-14 23:50:33 +00:00
Marco Costalba
5943600a89 Assorted nitpicking code-style
No functional change.
2014-12-10 12:38:13 +01:00
Ernesto Gatti
158864270a Simpler PRNG and faster magics search
This patch replaces RKISS by a simpler and faster PRNG, xorshift64* proposed
by S. Vigna (2014). It is extremely simple, has a large enough period for
Stockfish's needs (2^64), requires no warming-up (allowing such code to be
removed), and offers slightly better randomness than MT19937.

Paper: http://xorshift.di.unimi.it/
Reference source code (public domain):
http://xorshift.di.unimi.it/xorshift64star.c
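
The generator itself is tiny; this is the reference algorithm (the
constants are fixed by the method, only the 64-bit seed is free and
must be non-zero):

  uint64_t s;  // PRNG state, seeded once, never zero

  uint64_t rand64() {
      s ^= s >> 12;
      s ^= s << 25;
      s ^= s >> 27;
      return s * 2685821657736338717ULL;
  }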

The patch also simplifies how init_magics() searches for magics:

- Old logic: seed the PRNG always with the same seed,
  then use optimized bit rotations to tailor the RNG sequence per rank.

- New logic: seed the PRNG with an optimized seed per rank.

This has two advantages:
1. Less code and less computation to perform during magics search (no ROTL).
2. More choices for random sequence tuning. The old logic only let us choose
from 4096 bit rotation pairs. With the new one, we can look for the best seeds
among 2^64 values. Indeed, the set of seeds[][] provided in the patch reduces
the effort needed to find the magics:

64-bit SF:
Old logic -> 5,783,789 rand64() calls needed to find the magics
New logic -> 4,420,086 calls

32-bit SF:
Old logic -> 2,175,518 calls
New logic -> 1,895,955 calls

In the 64-bit case, init_magics() takes 25 ms less to complete (Intel Core i5).

Finally, when playing with strength handicap, non-determinism is achieved
by setting the seed of the static RNG only once. Afterwards, there is no need
to skip output values.

The bench only changes because the Zobrist keys are now different (since they
are random numbers straight out of the PRNG).

The RNG seed has been carefully chosen so that the
resulting Zobrist keys are particularly well-behaved:

1. All triplets of XORed keys are unique, implying that it
   would take at least 7 keys to find a 64-bit collision
   (test suggested by ceebo)

2. All pairs of XORed keys are unique modulo 2^32

3. The cardinality of { (key1 ^ key2) >> 48 } is as close
   as possible to the maximum (65536)

Point 2 aims at ensuring a good distribution among the bits
that determine a TT entry's cluster; likewise, point 3 among
the bits that form the TT entry's key16 inside a
cluster.

Details:

     Bitset   card(key1^key2)
     ------   ---------------
RKISS
     key16     64894   = 99.020% of theoretical maximum
     low18    180117   = 99.293%
     low32    305362   = 99.997%

Xorshift64*, old seed
     key16     64918   = 99.057%
     low18    179994   = 99.225%
     low32    305350   = 99.993%

Xorshift64*, new seed
     key16     65027   = 99.223%
     low18    181118   = 99.845%
     low32    305371   = 100.000%

Bench: 9324905

Resolves #148
2014-12-08 08:18:26 +08:00
hxim
fbb53524ef Rename some variables for more clarity.
No functional change.

Resolves #131
2014-12-08 07:53:33 +08:00
hxim
0a1f54975f Fix some comments
No functional change.

Resolves #123
2014-11-19 06:39:17 +08:00
Gary Linscott
bffe32f4fe Fix fen output for castling rights
This is a regression from 428962a

We have to cast to char here; otherwise the compiler
interprets it as an integer and writes a number.
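
A self-contained reproduction of the bug class:

  #include <iostream>

  int main() {
      std::cout << 'A' + 2 << '\n';        // prints 67: promoted to int
      std::cout << char('A' + 2) << '\n';  // prints C: the needed cast
  }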

No functional change

Resolves #122
2014-11-19 06:37:59 +08:00
sf-x
d65c9a3262 Use PHASE_MIDGAME in game_phase()
No functional change

Resolves #117
2014-11-17 07:50:33 +08:00
lucasart
375797d51c Retire CACHE_LINE_ALIGNMENT
Speed tests showed no benefit.

No functional change.

Resolves #97
2014-11-07 14:27:04 -05:00
Marco Costalba
fc0733087a Restore std::dec after std::hex
The code leaks a std::hex manipulator, causing subsequent
sync_cout output to be printed in hexadecimal.
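
A self-contained demonstration of the leak and the fix:

  #include <iostream>

  int main() {
      std::cout << std::hex << 255 << '\n';  // "ff"
      std::cout << 255 << '\n';              // still "ff": hex is sticky
      std::cout << std::dec << 255 << '\n';  // "255" once restored
  }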

Spotted by Lucas

No functional change.
2014-11-02 08:03:52 +01:00