search() is used as a "guiding star" and the other routines
are modified to follow it.
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
It was broken and fixing it would be too messy.
Now this option is only activated in single-thread mode
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Just to avoid misunderstandings.
The true staticValue is available through the search stack
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Bug spotted by Jouni Uski and fix suggested by Pablo Vazquez
Also add a note that we are not always handling the fifty-move rule correctly
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I cannot see a connection between the two.
Also add a FIXME for illogical behaviour
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I don't know if enumerating sections is a good idea,
but for me the code is more readable this way
No functional change
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
I cannot see any reason to do this. It is not even enough to fix
the theoretical race on Windows, which doesn't seem to cause any
problems in practice anyhow
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Resurrect the extra check for sleeping in the POSIX code.
This is necessary to prevent ugly races between
thread_broadcast and thread_cond_wait.
After a thread has woken up, it marks itself as available.
Another thread must not do this, because of a possible race.
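A minimal sketch of the pattern, assuming pthreads; the flag, lock and
state names here are illustrative, not the actual Stockfish identifiers:

    #include <pthread.h>

    pthread_mutex_t WaitLock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  WaitCond = PTHREAD_COND_INITIALIZER;
    volatile bool   AllThreadsShouldSleep = true;

    enum ThreadState { THREAD_SLEEPING, THREAD_AVAILABLE };

    void idle_wait(volatile ThreadState* state) {
        pthread_mutex_lock(&WaitLock);
        *state = THREAD_SLEEPING;
        while (AllThreadsShouldSleep)                // extra sleeping check guards
            pthread_cond_wait(&WaitCond, &WaitLock); // against the broadcast/wait race
        pthread_mutex_unlock(&WaitLock);

        *state = THREAD_AVAILABLE;  // only the woken thread itself marks this
    }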
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
So we can use a const value instead of a pointer in
split().
Also pass NULL instead of a faked address for alpha when
split() is called from a non-PV node.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
The intrinsic __popcnt64() returns an unsigned __int64; cast it
to an integer and silence the warning.
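A minimal sketch of the cast, assuming MSVC and an illustrative wrapper
name (not necessarily the actual Stockfish helper):

    #include <intrin.h>
    #include <cstdint>

    // __popcnt64() returns an unsigned __int64; the explicit cast keeps
    // the compiler quiet when the result is used as a plain int.
    inline int count_1s(uint64_t b) {
        return int(__popcnt64(b));
    }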
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona says that sp_update_pv() does not cross the split point
boundaries, so there is no risk of corrupting data from another
split point. The race on thread_should_stop() is also harmless
for the same reason.
So revert the patch and go back to a single lock.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona says:
1. We should not be afraid of the "AllThreadsShouldExit" flag,
because when this is set to true we _must_not_ be searching (= all
splits must have been undone).
And if we are not searching, it is impossible that some other thread
could give us work to do. So setting the state to THREAD_AVAILABLE
doesn't do any harm. If you want to add a check for this, you could do
it like this:
  if (threads[threadID].state == THREAD_WORKISWAITING)
  {
+     assert(!AllThreadsShouldExit);
      threads[threadID].state = THREAD_SEARCHING;
2a. If waitSp->cpus == 0, setting the state to THREAD_AVAILABLE does
no harm either, because the helpful master concept dictates that _only_
our own slave can book us. If we don't have any slaves, no one has the
right to book us.
2b. If point (2a) is not correct, then your extra check only adds an extra race:
in SMP code checking for waitSp->cpus > 0 is not enough. It's possible that
our slave immediately exits and another thread
books us as a slave while our state is still
THREAD_AVAILABLE. So instead of adding an extra level of security, we have
just introduced an extra race.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
When we find a cut-off, lock the whole split point chain, not only
the current one, to avoid races when two threads running on different
split points, one an ancestor of the other, find a beta cut-off at
the same time; in that case we want only one of them to call
sp_update_pv().
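A minimal sketch of the chain locking described above; the member and
helper names are assumptions, not the exact Stockfish ones:

    struct Lock {};                       // stand-in for the platform lock type
    void lock_grab(Lock* l);              // assumed locking helpers
    void lock_release(Lock* l);

    struct SplitPoint {
        SplitPoint* parent;               // enclosing (ancestor) split point, or 0
        Lock lock;
    };

    // Grab the lock of the current split point and of every ancestor, so
    // only one thread at a time can act on a beta cut-off along the chain.
    void lock_split_point_chain(SplitPoint* sp) {
        for (SplitPoint* p = sp; p != 0; p = p->parent)
            lock_grab(&p->lock);
    }

    void unlock_split_point_chain(SplitPoint* sp) {
        for (SplitPoint* p = sp; p != 0; p = p->parent)
            lock_release(&p->lock);
    }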
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is a per split-point request, not a per-thread one. When we find
a beta cut-off in the current thread's split point, or in some
ancestor of the current split point, the threads should immediately
stop the search and return to idle_loop().
The check is done by thread_should_stop(), which now looks only
at the split point chain.
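A minimal sketch of the per split-point check, with assumed field names
(the actual struct differs):

    struct SplitPoint {
        SplitPoint* parent;               // ancestor split point, or 0
        volatile bool stopRequest;        // raised when a beta cut-off is found
    };

    // A thread should stop as soon as a stop request is found on its own
    // split point or on any ancestor in the chain.
    bool thread_should_stop(const SplitPoint* sp) {
        for (const SplitPoint* p = sp; p != 0; p = p->parent)
            if (p->stopRequest)
                return true;
        return false;
    }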
No functional change and a good simplification.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is easier to follow and also reduces the places where the state
changes, mainly to idle_loop() and split().
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is because when we are below 4 * OnePly, the null move
will jump directly to qsearch, and if we are below beta,
our opponent is above beta and will get an immediate
stand pat cut-off.
So basically this patch is just optimizing away useless
evaluation calls. dbg_hit_on() runs show that this heuristic
is correct in >99% of cases. The transposition table probably causes
some inaccuracy?
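For reference, this is the generic shape of the stand pat cut-off that
the reasoning above relies on; a simplification with assumed helper
names, not the actual Stockfish qsearch():

    struct Position;                      // position type, details omitted
    int evaluate(const Position& pos);    // static evaluation, assumed elsewhere

    int qsearch(Position& pos, int alpha, int beta) {
        int standPat = evaluate(pos);
        if (standPat >= beta)
            return standPat;              // immediate stand pat cut-off
        if (standPat > alpha)
            alpha = standPat;
        // ... otherwise search captures from here on ...
        return alpha;
    }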
After 1148 games on QUAD
mod-orig: 583 - 565 +5 elo
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
In this case it is dangerous because in split() we reset the flag to
false, but if it was set due to a cut-off higher in the tree we
completely miss that and go on with the full search.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Unlike the other flags, this is not a state flag, i.e. it does
not define a state for the thread, but a request, because
after we raise the 'stopRequest' flag the corresponding thread is
not stopped, but continues to run for a while until it returns
from sp_search() in idle_loop().
It is important that the name reflects this.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Among them, 'stop' and 'printCurrentLineRequest' could have
random values, so reset them to a known state before leaving the
search.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
This is the first nice effect of the previous patch!
Because thread_should_stop() should be declared 'const', we
need to remove the setting of the 'stop' flag to true. That
turns out to be a bug, because thread_should_stop() is called
outside of lock protection while the 'stop' flag is a volatile
shared variable, so it cannot be changed when not under the lock.
Note that this bug fires ONLY when we use more than 2 threads,
so commonly only on a QUAD or OCTAL machine.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
The main aim of this patch is to consolidate all the thread-related stuff
behind a single class interface, so as to avoid messing with global flags
and having thread code scattered among non-thread-related stuff.
Another advantage is that access to the threads' variables is now
more controlled; in particular we can differentiate between
read and write accesses by means of different interfaces, so it
is simpler to understand how a function is related to threads.
Lastly, this rewrite is the base for future code consolidations and
simplifications, which are easier now that we have a single access
point to the threads.
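A minimal sketch of the kind of single access point meant here; the
class and member names are illustrative assumptions, not the exact new
interface:

    struct SplitPoint;                    // details omitted

    class ThreadsManager {
    public:
        // read accesses
        bool thread_should_stop(int threadID) const;
        bool available_thread_exists(int master) const;

        // write accesses funneled through a few members
        void idle_loop(int threadID, SplitPoint* waitSp);
        void wake_sleeping_threads();
        void put_threads_to_sleep();
    };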
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Ensure threads are sleeping when leaving init_threads() and
the newly introduced put_threads_to_sleep().
Also ensure threads are not sleeping when leaving
wake_sleeping_threads().
As a side effect we now leave think() with all the threads
(but the main one) guaranteed to sleep. So when we enter
think() again, after the opponent's next move, we know the
threads must be sleeping.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Wait inside wake_sleeping_threads() for the threads to be
effectively and reliably woken up.
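A minimal sketch of the waiting idea; the flag and field names are
assumptions, not the actual code:

    volatile bool AllThreadsShouldSleep = false;

    struct ThreadInfo { volatile bool sleeping; };
    extern ThreadInfo threads[];          // helper threads, [0] is the main one
    extern int ActiveThreads;

    void wake_sleeping_threads() {
        AllThreadsShouldSleep = false;
        // ... broadcast on the condition variable / set the event here ...
        for (int i = 1; i < ActiveThreads; i++)
            while (threads[i].sleeping) {} // spin until thread i is reliably awake
    }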
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>