
Careful SMP locking - Fix very occasional hangs

Louis Zulli reported that Stockfish suffers from very occasional hangs on his 20-core machine.

Careful SMP debugging revealed that this was caused by a "ghost split point slave": a thread
was marked as a split point slave, but wasn't actually working on it.

The only logical explanation for this was double booking, where due to an SMP race the same
thread is booked for two different split points simultaneously.
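
As a sketch of how such a check-then-act race can double-book a worker, consider two masters testing the same slave's flag with no lock held between the test and the booking. The names below are simplified and hypothetical, not the actual Stockfish code:

    // Hypothetical, simplified sketch of the suspected race; these names
    // are illustrative, not the actual Stockfish code. Both masters can
    // pass the availability test before either booking becomes visible.
    #include <functional>
    #include <thread>

    struct SplitPoint {};

    struct Slave {
        volatile bool searching = false;   // plain volatile, no lock taken
        SplitPoint*   bookedFor = nullptr;
    };

    void try_book(Slave& slave, SplitPoint* sp) {
        if (!slave.searching) {            // check ...
            // <-- a second master can pass the same check right here
            slave.searching = true;        // ... then act: classic TOCTOU race
            slave.bookedFor = sp;          // the later writer silently wins
        }
    }

    int main() {
        Slave slave;
        SplitPoint spA, spB;
        std::thread a(try_book, std::ref(slave), &spA);
        std::thread b(try_book, std::ref(slave), &spB);
        a.join();
        b.join();                          // slave may now be double-booked
    }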

Due to the very intermittent nature of the problem, we can't say exactly how this happens.

The current handling of Thread-specific variables is risky, though. Volatile variables are in
some cases changed without the spinlock being held. In that case the standard gives us no
guarantees about how the updated values are propagated to other threads.
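
A minimal test-and-set spinlock sketch makes the distinction concrete. This is an assumption about the design; the actual Spinlock class in the Stockfish sources may differ in detail. The acquire/release orderings on the atomic are what guarantee that writes made while holding the lock become visible to the next thread that takes it:

    // Minimal test-and-set spinlock sketch (an assumed design; the real
    // Stockfish Spinlock may differ). The acquire/release orderings
    // guarantee that writes made while holding the lock are visible to
    // the next thread that acquires it.
    #include <atomic>

    class Spinlock {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
    public:
        void acquire() {
            while (flag.test_and_set(std::memory_order_acquire))
                ;                                   // spin until the lock is free
        }
        void release() {
            flag.clear(std::memory_order_release);  // publish all prior writes
        }
    };

A plain volatile store, by contrast, only stops the compiler from eliding the access; in the C++ memory model it establishes no happens-before relation between threads, so another core may observe the update arbitrarily late, or reordered with other accesses.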

We resolve the situation by enforcing very strict locking rules (sketched in code after the list):
- Values of key thread variables (splitPointsSize, activeSplitPoint, searching)
can only be changed while the thread-specific spinlock is held.
- Structural changes to splitPoints[] are only allowed while the thread-specific spinlock is held.
- Thread booking decisions (per split point) can only be made while the thread-specific spinlock is held.
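
Applied to the booking decision, the rule looks roughly like this, reusing the hypothetical Slave/SplitPoint names and the Spinlock sketch from above (illustrative only, not the literal patch):

    // Hedged sketch of the strict rule applied to booking. Holding the
    // slave's spinlock across both the test and the update makes the
    // check-then-act sequence atomic with respect to other masters.
    #include <atomic>

    class Spinlock {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
    public:
        void acquire() { while (flag.test_and_set(std::memory_order_acquire)) {} }
        void release() { flag.clear(std::memory_order_release); }
    };

    struct SplitPoint {};

    struct Slave {
        Spinlock    spinlock;              // the thread-specific lock
        bool        searching = false;     // only touched while spinlock is held
        SplitPoint* bookedFor = nullptr;
    };

    bool try_book_locked(Slave& slave, SplitPoint* sp) {
        slave.spinlock.acquire();
        bool booked = !slave.searching;    // test ...
        if (booked) {
            slave.searching = true;        // ... and act under the same lock
            slave.bookedFor = sp;
        }
        slave.spinlock.release();
        return booked;                     // caller learns whether it won the slave
    }

Because the test and the update share one critical section, two masters can no longer both observe the slave as idle: one booking succeeds and the other fails cleanly and can look for a different slave.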

With these changes, hangs no longer occurred during two days of torture testing on Zulli's machine.

We probably pay a slight performance penalty in SMP mode due to the extra locking.

STC (7 threads):
ELO: -1.00 +-2.2 (95%) LOS: 18.4%
Total: 30000 W: 4538 L: 4624 D: 20838

However, stability is worth more than 1-2 ELO points in this case.

No functional change

Resolves #422
Joona Kiiski  2015-09-06 21:36:59 +01:00
parent 3e2591d83c
commit 613dc66c12

2 changed files with 8 additions and 2 deletions


@@ -1622,6 +1622,7 @@ void Thread::idle_loop() {
           else
               assert(false);
 
+          spinlock.acquire();
           assert(searching);
 
           searching = false;
@@ -1633,6 +1634,7 @@ void Thread::idle_loop() {
           // After releasing the lock we can't access any SplitPoint related data
           // in a safe way because it could have been released under our feet by
           // the sp master.
+          spinlock.release();
           sp->spinlock.release();
 
           // Try to late join to another split point if none of its slaves has


@@ -145,6 +145,7 @@ void Thread::split(Position& pos, Stack* ss, Value alpha, Value beta, Value* bes
   SplitPoint& sp = splitPoints[splitPointsSize];
 
   sp.spinlock.acquire(); // No contention here until we don't increment splitPointsSize
+  spinlock.acquire();
 
   sp.master = this;
   sp.parentSplitPoint = activeSplitPoint;
@@ -167,6 +168,7 @@ void Thread::split(Position& pos, Stack* ss, Value alpha, Value beta, Value* bes
   ++splitPointsSize;
   activeSplitPoint = &sp;
   activePosition = nullptr;
+  spinlock.release();
 
   // Try to allocate available threads
   Thread* slave;
@@ -194,6 +196,9 @@ void Thread::split(Position& pos, Stack* ss, Value alpha, Value beta, Value* bes
           Thread::idle_loop(); // Force a call to base class idle_loop()
 
+          sp.spinlock.acquire();
+          spinlock.acquire();
+
           // In the helpful master concept, a master can help only a sub-tree of its
           // split point and because everything is finished here, it's not possible
           // for the master to be booked.
@@ -205,8 +210,6 @@ void Thread::split(Position& pos, Stack* ss, Value alpha, Value beta, Value* bes
   // We have returned from the idle loop, which means that all threads are
   // finished. Note that decreasing splitPointsSize must be done under lock
   // protection to avoid a race with Thread::can_join().
-  sp.spinlock.acquire();
-
   --splitPointsSize;
   activeSplitPoint = sp.parentSplitPoint;
   activePosition = &pos;
@@ -214,6 +217,7 @@ void Thread::split(Position& pos, Stack* ss, Value alpha, Value beta, Value* bes
   *bestMove = sp.bestMove;
   *bestValue = sp.bestValue;
 
+  spinlock.release();
   sp.spinlock.release();
 }