Currently we use the original sorting after a fail low to
re-search at a wider window. This patch instead sorts the
moves according to the last available move scores.
Strangely there is no functional change, but there should be one.
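As a hedged illustration of the mechanism (RootMoveEx and its fields are
made-up stand-ins, not the engine's real RootMove type):

    #include <algorithm>
    #include <vector>

    struct RootMoveEx {           // hypothetical stand-in for the real RootMove
        int move;
        int lastScore;            // score gathered in the most recent pass
    };

    static bool by_last_score(const RootMoveEx& a, const RootMoveEx& b) {
        return a.lastScore > b.lastScore;   // best-scored move first
    }

    // Re-sort by the latest scores instead of restoring the original order
    // before re-searching at the wider window.
    static void sort_by_last_scores(std::vector<RootMoveEx>& rml) {
        std::stable_sort(rml.begin(), rml.end(), by_last_score);
    }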
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Changed FutilityMarginsMatrix dimensions to be a power of two
so that the compiler can produce faster indexing code.
Introduced print_pv_info() to remove some redundant code in
root_search().
The remaining stuff is trivialities and documentation tweaks.
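For illustration only, with made-up sizes: when the inner dimension is a
power of two the row offset becomes a shift instead of a multiply.

    // Hypothetical dimensions, only to show the indexing pattern: with an
    // inner size of 64, &margins[d][m] is base + (d << 6) + m.
    int FutilityMarginsMatrixEx[16][64];

    int futility_margin_ex(int depth, int moveNumber) {
        return FutilityMarginsMatrixEx[depth][moveNumber];
    }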
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Under Windows we use CreateThread() to set up threads and
we pass a pointer to a variable that receives the thread
identifier, but this parameter is optional and we don't
use it, so remove it.
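In Win32 terms the change boils down to passing NULL for the optional
lpThreadId argument; the thread function name below is made up.

    #ifdef _WIN32
    #include <windows.h>

    DWORD WINAPI idle_loop_entry_ex(LPVOID arg);   // hypothetical thread function

    HANDLE launch_thread_ex(LPVOID arg) {
        // The last parameter (lpThreadId) is optional; pass NULL because the
        // returned thread identifier is never used.
        return CreateThread(NULL, 0, idle_loop_entry_ex, arg, 0, NULL);
    }
    #endif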
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It happens that NULL is 0, but its conventional meaning is
a null pointer, so replace it with an explicit 0 integer value.
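A tiny sketch of the distinction, with made-up variable names:

    #include <cstddef>

    int* examplePtrEx   = NULL;   // fine: NULL conventionally names a null pointer
    int  exampleValueEx = 0;      // integers should use a plain 0, not NULL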
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Consider slightly negative captures as good if at low
depth and far from beta.
After 999 games at 1+0
Mod vs Orig +169 =694 -136 +11 ELO
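A hedged sketch of the kind of condition involved; the margin, depth limit
and names are made up, not the values from this patch.

    // Hypothetical thresholds, only to illustrate the idea: a capture losing
    // a little material is still treated as "good" when the remaining depth
    // is low and the static evaluation is far below beta.
    bool is_good_capture_ex(int seeValue, int depth, int staticEval, int beta) {
        const int SlightLossEx  = -50;    // made-up margin, in centipawns
        const int LowDepthEx    =  3;     // made-up depth limit, in plies
        const int FarFromBetaEx = 200;    // made-up distance below beta

        if (seeValue >= 0)
            return true;                  // non-losing captures are always good

        return seeValue > SlightLossEx
            && depth <= LowDepthEx
            && staticEval + FarFromBetaEx < beta;
    }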
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It raises an assert under Windows. It is not clear why, but it
happens that idle_loop() is called with an incorrect threadID and
the triggered assert is:
assert(threadID >= 0 && threadID < MAX_THREADS);
So revert the patch for now, but we should understand why it
fails.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
We can't do it with a full guarantee anyway, because
there is always a possible race between setting the state
to THREAD_SLEEPING and actually sleeping.
So just remove the imperfect code to avoid misunderstandings.
This reflects what we have done in wake_sleeping_threads() in
the previous patch.
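A schematic sketch of the race window, using POSIX primitives and made-up
names rather than the engine's actual code:

    #include <pthread.h>

    enum ThreadStateEx { THREAD_SEARCHING_EX, THREAD_SLEEPING_EX };

    static volatile ThreadStateEx stateEx = THREAD_SEARCHING_EX;
    static pthread_mutex_t sleepMutexEx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  sleepCondEx  = PTHREAD_COND_INITIALIZER;

    void go_to_sleep_ex() {
        stateEx = THREAD_SLEEPING_EX;     // observers already see "sleeping"...
        // <-- another thread can run in this window, before we really block
        pthread_mutex_lock(&sleepMutexEx);
        pthread_cond_wait(&sleepCondEx, &sleepMutexEx);  // ...only here do we sleep
        pthread_mutex_unlock(&sleepMutexEx);
    }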
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Currently there is no guarantee that threads are sleeping
when calling wake_sleeping_threads(), because put_threads_to_sleep()
returns without waiting for the threads to actually sleep.
The assert can be easily triggered by calling put_threads_to_sleep()
and wake_sleeping_threads() in a tight loop.
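A hypothetical reproducer sketch; the two function signatures below are
assumed for illustration:

    void put_threads_to_sleep();     // assumed signatures of the routines
    void wake_sleeping_threads();    // discussed above

    // Hammering the two calls back to back can catch the wake-up while some
    // thread has not actually gone to sleep yet, which trips the assert.
    void sleep_wake_stress_ex(int iterations) {
        for (int i = 0; i < iterations; i++) {
            put_threads_to_sleep();
            wake_sleeping_threads();
        }
    }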
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This option likely has very little impact on playing strength and style,
so I see no need to keep it configurable.
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
search() is used as the "leading star" and the other routines
are modified to follow it.
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It was broken and fixing it would be too messy.
Now this option is only activated in single-thread mode.
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Just to avoid misunderstandings.
The true staticValue is available through the search stack.
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Bug spotted by Jouni Uski and fix suggested by Pablo Vazquez.
Also add a note that we are not always handling the fifty-move rule correctly.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I cannot see a connection between the two.
Also add a FIXME for the illogical behaviour.
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I don't know if enumerating sections is a good idea,
but the code is more readable this way to me.
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I cannot see any reason to do this. Even this is not enough to fix
the theoretical race on Windows, which doesn't seem to cause any
problems in practice anyhow.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Resurrect the extra check for sleeping in the POSIX code.
This is necessary to prevent ugly races between
thread_broadcast and thread_cond_wait.
After a thread has woken up, it marks itself as available.
Another thread must not do this, because of a possible race.
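The underlying pattern is the classic POSIX one: re-check the sleeping flag
in a loop while holding the mutex, so a broadcast cannot slip in between the
check and the wait. A generic sketch, not the engine's actual code:

    #include <pthread.h>

    static pthread_mutex_t waitMutexEx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  waitCondEx  = PTHREAD_COND_INITIALIZER;
    static volatile bool   sleepingEx  = false;

    void wait_until_woken_ex() {
        pthread_mutex_lock(&waitMutexEx);
        sleepingEx = true;
        while (sleepingEx)                                 // extra check: protects
            pthread_cond_wait(&waitCondEx, &waitMutexEx);  // against missed wake-ups
        pthread_mutex_unlock(&waitMutexEx);
        // The woken thread marks itself as available here, nobody else does.
    }

    void wake_all_ex() {
        pthread_mutex_lock(&waitMutexEx);
        sleepingEx = false;               // flip the predicate before broadcasting
        pthread_cond_broadcast(&waitCondEx);
        pthread_mutex_unlock(&waitMutexEx);
    }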
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
So we can use a const value instead of a pointer in
split().
Also pass NULL instead of a faked address of alpha in
case split() is called from a non-PV node.
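A hypothetical signature sketch of the resulting interface; names and types
are made up:

    #include <cstddef>

    // beta travels by const value; the alpha pointer is only meaningful at
    // PV nodes and is NULL elsewhere.
    void split_ex(int* alpha, const int beta, bool pvNode);

    void call_split_ex(int& alpha, int beta, bool pvNode) {
        split_ex(pvNode ? &alpha : NULL, beta, pvNode);   // no faked alpha address
    }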
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
The intrinsic __popcnt64() returns an unsigned __int64, so cast
to an integer and silence the warning.
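The fix is just an explicit narrowing cast around the intrinsic; the wrapper
name is made up:

    #if defined(_MSC_VER) && defined(_WIN64)
    #include <intrin.h>

    // __popcnt64() returns unsigned __int64; the explicit cast documents the
    // intended narrowing and silences the conversion warning.
    inline int count_1s_ex(unsigned __int64 b) {
        return int(__popcnt64(b));
    }
    #endif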
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona says that sp_update_pv() does not cross the split point
boundaries, so there is no risk of corrupting data from another
split point. Also, the race on thread_should_stop() is harmless
because of this.
So revert the patch and go back to a single lock.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona says:
1. We should not be afraid of the "AllThreadsShouldExit" flag,
because when it is set to true we _must_not_ be searching (= all
splits must have been undone).
And if we are not searching, it's impossible that some other thread
could give us work to do. So setting the state to THREAD_AVAILABLE
doesn't do any harm. If you want to add a check for this, you could do
it like this:
    if (threads[threadID].state == THREAD_WORKISWAITING)
    {
+       assert(!AllThreadsShouldExit);
        threads[threadID].state = THREAD_SEARCHING;
2a. If waitSp->cpus == 0, setting the state to THREAD_AVAILABLE does
no harm either, because the helpful master concept dictates that _only_
our own slave can book us. If we don't have any slaves, no one has the
right to book us.
2b. If point (2a) is not correct, then your extra check only adds an extra race:
in SMP code, checking for waitSp->cpus > 0 is not enough. It's possible that
our slave immediately exits and another thread
books us as a slave while our state is still
THREAD_AVAILABLE. So instead of adding an extra level of security we have
just introduced an extra race.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
When we find a cut-off, lock the whole split point chain,
not only the current one, to avoid races in case two threads running
on different split points, where one is an ancestor of the other,
find a beta cut-off at the same time; in this case we want only
one of them to call sp_update_pv().
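A hedged sketch of what locking the whole chain looks like; the SplitPointEx
layout is made up and the real structure, lock type and locking order may differ:

    #include <pthread.h>

    struct SplitPointEx {
        SplitPointEx*   parent;   // ancestor split point, 0 at the root
        pthread_mutex_t lock;     // assumed initialized with pthread_mutex_init()
    };

    void lock_split_point_chain_ex(SplitPointEx* sp) {
        for (SplitPointEx* p = sp; p != 0; p = p->parent)
            pthread_mutex_lock(&p->lock);     // take every lock up to the root
    }

    void unlock_split_point_chain_ex(SplitPointEx* sp) {
        for (SplitPointEx* p = sp; p != 0; p = p->parent)
            pthread_mutex_unlock(&p->lock);
    }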
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>