We give a S(21,11) bonus for knight threats on the next move
against the enemy queen. The threats come from squares which are
"not strongly protected" and which may be empty, contain enemy
pieces, or even hold one of our own pieces (N,B,Q,R) at the moment --
hence two-step threats in the latter case, because we will have to
move our piece and *then* attack the enemy queen with the knight.
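For illustration only, here is a minimal standalone sketch of the counting part of the idea, using plain 64-bit bitboards instead of the engine types (the helper and its inputs are assumptions, not the actual evaluate.cpp code):
    #include <bitset>
    #include <cstdint>
    using Bitboard = std::uint64_t;
    // Count the squares from which one of our knights could attack the enemy
    // queen on the next move: 'attackSquares' is the knight-attack pattern
    // around the enemy queen, 'notStronglyProtected' the squares described
    // above. Squares holding one of our own N/B/Q/R still count (two-step case).
    inline int knight_threats_on_queen(Bitboard attackSquares, Bitboard notStronglyProtected) {
        return int(std::bitset<64>(attackSquares & notStronglyProtected).count());
    }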
STC: http://tests.stockfishchess.org/tests/view/5a9e442e0ebc590297cb6162
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 35129 W: 7346 L: 7052 D: 20731
LTC: http://tests.stockfishchess.org/tests/view/5a9e6e620ebc590297cb617f
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 42442 W: 6695 L: 6414 D: 29333
How to continue from there?
• Trying to refine the threat condition ("not strongly protected")
• Trying the two-step idea for bishop or rook threats against the queen
Bench: 6051247
Similar removal of the superposition code trick as in the
"Simplify tropism computation" patch. This simplification
of the space() function will allow us to specify space
masks which can reach into enemy territory.
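As a purely illustrative sketch of the resulting shape of space() (not the actual evaluate.cpp code), the computation boils down to counting safe squares inside a mask that is no longer forced to stay on our own half of the board:
    #include <bitset>
    #include <cstdint>
    using Bitboard = std::uint64_t;
    // Safe squares: inside the space mask, not occupied by our own pawns and
    // not attacked by enemy pawns.
    inline int space_area(Bitboard spaceMask, Bitboard ourPawns, Bitboard attackedByTheirPawns) {
        Bitboard safe = spaceMask & ~ourPawns & ~attackedByTheirPawns;
        return int(std::bitset<64>(safe).count());
    }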
passed STC:
LLR: 3.38 (-2.94,2.94) [-3.00,1.00]
Total: 184630 W: 40581 L: 40758 D: 103291
http://tests.stockfishchess.org/tests/view/5a8433360ebc590297cc80c5
passed LTC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 231799 W: 37647 L: 37858 D: 156294
http://tests.stockfishchess.org/tests/view/5a96a34a0ebc590297cc8cfd
No functional change.
Add a logarithmic term to the optimism computation, increase
the maximal optimism and lower the contempt offset.
This increases the dynamics of the optimism term, giving
a boost for balanced positions without skewing unbalanced
positions too much (this version will, however, enter panic mode
faster than previous master when behind, trying to steer toward
a draw when only slightly behind). This helps, since optimism is
in general a good thing, for instance at LTC, but too much
optimism rapidly contaminates play.
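Purely as an illustration of the shape being described (an offset plus a logarithmic term, saturating at a maximal optimism, and turning negative when behind), one could write something like the following; the input, constants and cap are made up and are not the values used in the patch:
    #include <algorithm>
    #include <cmath>
    #include <cstdlib>
    // Illustrative curve only: fast (logarithmic) growth for nearly balanced
    // positions, saturation at a maximal optimism, sign flip when behind.
    inline int optimism(int scoreCp) {
        int sign = (scoreCp > 0) - (scoreCp < 0);
        double v = 8.0 + 12.0 * std::log(1.0 + std::abs(scoreCp) / 10.0);
        return sign * int(std::min(v, 40.0));
    }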
passed STC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 159343 W: 34489 L: 33588 D: 91266
http://tests.stockfishchess.org/tests/view/5a8db9340ebc590297cc85b6
passed LTC:
LLR: 2.97 (-2.94,2.94) [0.00,5.00]
Total: 47491 W: 7825 L: 7517 D: 32149
http://tests.stockfishchess.org/tests/view/5a9456a80ebc590297cc8a89
It must be mentioned that a version of the PR with contempt 0
did not pass STC [0,5]. The version in the patch, which uses
default contempt 12, was found to be as strong as current master
in various matches against SF7 and SF8, both at STC and LTC.
One drawback may be that it raises the draw rate in self-play
from 56% to 59%, giving Stockfish developers a little less
sensitivity for finding evaluation improvements through self-play
tests on fishtest.
Possible further work:
• tune the values accurately, while keeping in mind the draw-rate issue
• check whether it is possible to remove linear and offset term
• try to simplify the S-shape curve
Bench: 5934644
Use a recursive std::array with variadic template
parameters to get rid of the last redundancy.
The first template parameter T is the base type of
the array, the W parameter is the weight applied to
the bonuses when we update values with the << operator,
the D parameter limits the range of updates (the range is
[-W * D, W * D]), and the last parameters (Size and
Sizes) encode the dimensions of the array.
This allows greater flexibility because we can now tweak
the range [-W * D, W * D] for each table.
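A hedged sketch of the resulting structure (simplified, not a verbatim copy of movepick.h; the alias at the end uses made-up parameters):
    #include <array>
    #include <cstdlib>
    // StatsEntry wraps the base type T so that operator<<() can apply the
    // update rule while keeping the stored value inside [-W * D, W * D].
    template<typename T, int W, int D>
    class StatsEntry {
        T entry = 0;
    public:
        void operator=(const T& v) { entry = v; }
        operator const T&() const { return entry; }
        void operator<<(int bonus) {
            // History-gravity update: pulled toward bonus * W, bounded by W * D.
            entry += bonus * W - entry * std::abs(bonus) / D;
        }
    };
    // Recursive std::array: each Size peels off one dimension, and the base
    // case stores the StatsEntry values themselves.
    template<typename T, int W, int D, int Size, int... Sizes>
    struct Stats : public std::array<Stats<T, W, D, Sizes...>, Size> {};
    template<typename T, int W, int D, int Size>
    struct Stats<T, W, D, Size> : public std::array<StatsEntry<T, W, D>, Size> {};
    // Example: a [2][64 * 64] table of ints with weight 32 and update range
    // [-32 * 324, 32 * 324], updated as h[c][from_to] << bonus.
    using ExampleHistory = Stats<int, 32, 324, 2, 64 * 64>;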
The patch removes more lines than it adds and streamlines
the Stats soup in movepick.h.
Closes PR#1422 and PR#1421
No functional change.
The first depth 2 margin triggers the verification quiescence search.
This qsearch() result has to be better than the second, lower margin,
so we only skip the razoring when the qsearch gives a significant
improvement.
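In rough fragment form the logic looks like the following (names and margins are placeholders, not the actual search.cpp code):
    // The larger depth-2 margin triggers a verifying qsearch; only if that
    // search clears the second, lower margin do we keep searching normally
    // instead of razoring.
    if (depth == 2 * ONE_PLY && eval + RazorMarginHigh <= alpha)
    {
        Value ralpha = alpha - RazorMarginLow;
        Value v = qsearch<NonPV, false>(pos, ss, ralpha, ralpha + 1);
        if (v <= ralpha)        // no significant improvement: razor
            return v;
    }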
Passed STC:
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 32133 W: 7395 L: 7101 D: 17637
http://tests.stockfishchess.org/tests/view/5a93198b0ebc590297cc8942
Passed LTC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 17382 W: 3002 L: 2809 D: 11571
http://tests.stockfishchess.org/tests/view/5a93b18c0ebc590297cc89c2
This Elo-gaining version was further simplified following a suggestion
of Marco Costalba:
STC:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 15553 W: 3505 L: 3371 D: 8677
http://tests.stockfishchess.org/tests/view/5a964be90ebc590297cc8cc4
LTC:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 13253 W: 2270 L: 2137 D: 8846
http://tests.stockfishchess.org/tests/view/5a9658880ebc590297cc8cca
How to continue after this patch?
Reformatting the razoring code (step 7 in search()) to unify the
depth 1 and depth 2 treatments seems quite possible; this could
lead to further simplifications.
Bench: 5765806
In pawn structures like white pawns on f6 and h6 against black pawns
on f7, g6 and h7, the attack on the king is blocked by the attacker's
own pawns, so we decrease the king safety penalty in such cases.
See diagram and discussion in
https://github.com/official-stockfish/Stockfish/pull/1434
A sample position that this patch wants to avoid is the following:
1rr2bk1/3q1p1p/2n1bPpP/pp1pP3/2pP4/P1P1B3/1PBQN1P1/1K3R1R w - - 0 1
White's pawn storm on the kingside was a disaster: it locked the
kingside completely. Therefore all the king tropism bonuses that White
gets on the kingside are useless, and so are the king-adjacent attacks.
Master gives White a static +4.5 advantage, but White cannot win that
game. The patch lowers this evaluation artefact.
STC:
LLR: 2.94 (-2.94,2.94) [0.00,5.00]
Total: 16467 W: 3750 L: 3537 D: 9180
http://tests.stockfishchess.org/tests/view/5a92102d0ebc590297cc87d0
LTC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 64242 W: 11130 L: 10745 D: 42367
http://tests.stockfishchess.org/tests/view/5a923dc80ebc590297cc8806
This version includes reformatting and speed optimization by Alain Savard.
Bench: 5643527
Using a SPSA tuning session to optimize the time management
parameters.
With SPSA tuning it is not always possible to say where the improvements
came from. Maybe some variables changed randomly, or because the result
was not sensitive enough to them. So my explanation of the changes will
not necessarily be correct, but here it is.
• When Joost added, a few months ago, a decrease of thinking time
if the best move has not changed for several plies, one more competing
indicator was introduced for the same purpose, along with the increase
in score and the absence of fail lows at the root. It seems that tuning
put relatively more importance on that new indicator, which allowed
time to be saved.
• Some of this saved time is distributed proportionally between all
moves, and some more time is given to moves where the score dropped
a lot or the best move changed.
• It also looks like SPSA redistributed time from the beginning of the
game to later stages via other changes in the variables - maybe because
contempt makes games last longer, or for whatever reason.
All of this amounts to small tweaks here and there (changes of a few percent).
STC (10+0.1):
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 18970 W: 4268 L: 4029 D: 10673
http://tests.stockfishchess.org/tests/view/5a9291a40ebc590297cc8881
LTC (60+0.6):
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 72027 W: 12263 L: 11878 D: 47886
http://tests.stockfishchess.org/tests/view/5a92d7510ebc590297cc88ef
Additional non-regression tests at other time controls
Sudden death 60s:
LLR: 2.95 (-2.94,2.94) [-4.00,0.00]
Total: 14444 W: 2715 L: 2608 D: 9121
http://tests.stockfishchess.org/tests/view/5a9445850ebc590297cc8a65
40 moves repeating at LTC:
LLR: 2.95 (-2.94,2.94) [-4.00,0.00]
Total: 10309 W: 1880 L: 1759 D: 6670
http://tests.stockfishchess.org/tests/view/5a9566ec0ebc590297cc8be1
This is a functional patch only for time management, but the bench
does not reflect this because it uses fixed depth search, so the number
of nodes does not change during bench.
No functional change.
This is the sequel of the previous patch: we now let the parent node initialize
the stat score to zero once for all grandchildren.
Initialize statScore to zero for the grandchildren of the current position.
So statScore is shared between all grandchildren and only the first grandchild
starts with statScore = 0. Later grandchildren start with the last calculated
statScore of the previous grandchild. This influences the reduction rules in
LMR, which are based on the statScore of the parent position.
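The core of the mechanism is essentially one assignment made by the parent before its moves loop (a sketch using the search stack notation of search.cpp):
    // The node at 'ss' resets the slot two plies ahead once; all of its
    // grandchildren then share that slot, so only the first grandchild really
    // starts from zero and later ones inherit the previous grandchild's value.
    (ss+2)->statScore = 0;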
Test results against the previous patch:
STC:
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 23676 W: 5417 L: 5157 D: 13102
http://tests.stockfishchess.org/tests/view/5a9423a90ebc590297cc8a46
LTC:
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 35485 W: 6168 L: 5898 D: 23419
http://tests.stockfishchess.org/tests/view/5a9435550ebc590297cc8a54
Bench: 5643520
Let the parent node initialize stat score to zero once for all siblings.
Initialize statScore to zero for the children of the current position.
So statScore is shared between sibling positions and only the first sibling
starts with statScore = 0. Later siblings start with the last calculated
statScore of the previous sibling. This influences the reduction rules
in LMR, which are based on the statScore of the parent position.
STC:
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 22683 W: 5202 L: 4946 D: 12535
http://tests.stockfishchess.org/tests/view/5a93315f0ebc590297cc894f
LTC:
LLR: 2.95 (-2.94,2.94) [0.00,4.00]
Total: 48548 W: 8346 L: 8035 D: 32167
http://tests.stockfishchess.org/tests/view/5a933ba90ebc590297cc8962
Bench: 5833683
The current master applies Eval::Tempo even to leaves evaluated
as draw by some of the static evaluation functions of endgame.cpp
(for instance KNN vs K or stalemates in KP vs K). This results in
some lines being reported as +0.07 or -0.07 when the terminal
position has reached such endgames (0.07 being about the value
of a tempo for Stockfish).
This patch does not apply Eval::Tempo to these positions. This leads
to more nodes being evaluated as VALUE_DRAW during search, giving more
opportunities for cut-offs in alpha-beta.
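A minimal sketch of the idea (the real logic lives in evaluate.cpp together with the specialized endgame functions; the Tempo value here is only illustrative):
    constexpr int VALUE_DRAW = 0;
    constexpr int Tempo      = 20;   // roughly 0.07 pawns internally (illustrative)
    // Keep exact draws at 0.00 instead of +/- a tempo.
    inline int with_tempo(int endgameValue) {
        return endgameValue == VALUE_DRAW ? VALUE_DRAW : endgameValue + Tempo;
    }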
STC:
LLR: 2.96 (-2.94,2.94) [0.00,4.00]
Total: 52602 W: 11776 L: 11403 D: 29423
http://tests.stockfishchess.org/tests/view/5a8cb8f60ebc590297cc8546
LTC:
LLR: 2.97 (-2.94,2.94) [0.00,4.00]
Total: 156613 W: 26820 L: 26158 D: 103635
http://tests.stockfishchess.org/tests/view/5a8f452d0ebc590297cc865a
Bench: 4924749
To compute discovered checks or pinned pieces we use some bitwise
operators that are not really needed, because they are already accounted
for at the caller site.
For instance in evaluation we compute:
pos.pinned_pieces(Us) & s
Where pinned_pieces() is:
st->blockersForKing[c] & pieces(c)
So in this case the & operator with pieces(c) is useless,
given the outer '& s'.
There are many places where we can use the naked blockersForKing[]
instead of the full pinned_pieces() or discovered_check_candidates().
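The equivalence can be illustrated with a small standalone check (plain bitboards, not the actual Position code):
    #include <cassert>
    #include <cstdint>
    using Bitboard = std::uint64_t;
    // When the caller's square set 's' already contains only our own pieces,
    // the '& pieces(c)' hidden inside pinned_pieces() is a no-op, so the naked
    // blockersForKing bitboard gives the same result.
    inline Bitboard pinned_in(Bitboard blockersForKing, Bitboard ourPieces, Bitboard s) {
        assert((s & ~ourPieces) == 0);                                // s is a subset of our pieces
        Bitboard viaPinnedPieces = (blockersForKing & ourPieces) & s; // what master computes
        Bitboard viaBlockers     = blockersForKing & s;               // what the patch computes
        assert(viaPinnedPieces == viaBlockers);
        return viaBlockers;
    }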
This patch is simpler than the original and gives around a 1% speed-up for me.
Also tested for speed by mstembera and snicolet (neutral in both cases).
No functional change.
Apply -mdynamic-no-pic in a single place in the Makefile instead of 5 places.
Verified on three different Macs:
- a MacBook from 2013
- a MacBook running MacOS 10.9.5
- an iMac running MacOS 10.13.3
No functional change.
The previous asymmetry measure of the pawn structure only used to
consider the number of pawns on semi-opened files in the position.
With this patch we also increase the measure by the number of passed
pawns for both players.
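A hedged sketch of the new measure (not the exact pawns.cpp expression; semi-open files are given here as one bit per file):
    #include <bitset>
    #include <cstdint>
    using Bitboard = std::uint64_t;
    // Asymmetry = files that are semi-open for exactly one side, plus (new
    // with this patch) the number of passed pawns of both players.
    inline int pawn_asymmetry(unsigned semiopenFilesWhite, unsigned semiopenFilesBlack,
                              Bitboard passedWhite, Bitboard passedBlack) {
        int files  = int(std::bitset<8>(semiopenFilesWhite ^ semiopenFilesBlack).count());
        int passed = int(std::bitset<64>(passedWhite | passedBlack).count());
        return files + passed;
    }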
Many thanks to the community for the nice feedback on the previous
version, with special mentions to Alain Savard and Marco Costalba
for clarity and speed suggestions.
STC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 13146 W: 3038 L: 2840 D: 7268
http://tests.stockfishchess.org/tests/view/5a91dd0c0ebc590297cc877e
LTC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 27776 W: 4771 L: 4536 D: 18469
http://tests.stockfishchess.org/tests/view/5a91fdd50ebc590297cc879b
How to continue after this patch?
Stockfish will now evaluate more positions with passed pawns, so
tuning the passed pawn values may bring Elo. The patch also has
consequences on the initiative term, where we might want to give
different weights to passed pawns and semi-open files (idea by
Stefano Cardanobile).
Bench: 5302866
Move the first killer move out of the capture stage, combining treatment
of the first and second killer moves.
passed STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 55777 W: 12367 L: 12313 D: 31097
http://tests.stockfishchess.org/tests/view/5a88617e0ebc590297cc8351
Similar to an earlier proposal by Günther Demetz, see pull request #1075.
I think it is more robust and readable than master: why hand-unroll the loop
over the killer array and duplicate code?
This version includes review comments from Marco Costalba.
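As a hedged sketch of the intent (with simplified stand-ins for the movepick.cpp types), the two hand-unrolled killer blocks collapse into one small loop:
    #include <array>
    using Move = int;
    constexpr Move MOVE_NONE = 0;
    // Yield the next valid killer, or MOVE_NONE when both are exhausted.
    // 'isLegalQuiet' stands in for the pseudo-legality / non-capture checks.
    template<typename Pred>
    Move next_killer(const std::array<Move, 2>& killers, int& idx, Move ttMove, Pred isLegalQuiet) {
        while (idx < 2) {
            Move m = killers[idx++];
            if (m != MOVE_NONE && m != ttMove && isLegalQuiet(m))
                return m;
        }
        return MOVE_NONE;
    }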
Bench: 5227124
The previous asymmetry measure of the pawn structure only used to
consider the number of pawns on semi-opened files in the position.
With this patch we also increase the measure by the number of passed
pawns for both players.
STC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 13146 W: 3038 L: 2840 D: 7268
http://tests.stockfishchess.org/tests/view/5a91dd0c0ebc590297cc877e
LTC:
LLR: 2.96 (-2.94,2.94) [0.00,5.00]
Total: 27776 W: 4771 L: 4536 D: 18469
http://tests.stockfishchess.org/tests/view/5a91fdd50ebc590297cc879b
How to continue from there: Stockfish will now evaluate more positions
with passed pawns, so tuning the passed pawns values may bring Elo.
The patch also has consequences on the initiative term.
Bench: 5302866
When IID (internal iterative deepening) is invoked, the prior value of ttValue is
not guaranteed to be VALUE_NONE. As such, it is currently possible to enter a state
in which ttValue has a specific value which is inconsistent with tte->bound() and
tte->depth(). Currently, ttValue is only used within the search in a context that
prevents this situation from making a difference (and so this change is non-functional,
but this is not guaranteed to remain the case in the future).
For instance, just changing the tt depth condition in singular extension node to be
tte->depth() >= depth - 4 * ONE_PLY
instead of
tte->depth() >= depth - 3 * ONE_PLY
interacts badly with the absence of ttMove in IID. For the ttMove to become a singular
extension candidate, singularExtensionNode needs to be true. With the current master,
this requires that tte->depth() >= depth - 3 * ONE_PLY. This is not currently possible
if tte comes from IID, since the depth 'd' used for the IID search is always less than
depth - 4 * ONE_PLY for depth >= 8 * ONE_PLY (below depth 8 singularExtensionNode can
never be true anyway). However, with DU-jdto/Stockfish@251281a, this condition can be
met, and it is possible for singularExtensionNode to become true after IID. There are
then two mechanisms by which this patch can affect the search:
• If ttValue was VALUE_NONE prior to IID, the fact that this patch sets ttValue allows
the 'ttValue != VALUE_NONE' condition of singularExtensionNode to be met.
• If ttValue wasn't VALUE_NONE prior to IID, the fact that this patch modifies ttValue's
value causes a different 'rBeta' to be calculated if the singular extension search is
performed.
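In simplified form, the change under discussion makes the IID block re-probe the table so that ttValue stays consistent with tte (a fragment trimmed from the shape of the search.cpp code, not the literal diff):
    // After the internal iterative deepening search, probe the TT again and
    // refresh ttMove *and* ttValue, instead of keeping the pre-IID ttValue.
    tte     = TT.probe(posKey, ttHit);
    ttValue = ttHit ? value_from_tt(tte->value(), ss->ply) : VALUE_NONE;
    ttMove  = ttHit ? tte->move() : MOVE_NONE;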
Tested at STC for non-regression:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 76981 W: 17060 L: 17048 D: 42873
http://tests.stockfishchess.org/tests/view/5a7738b70ebc5902971a9868
No functional change.
The razoring heuristic is quite a drastic pruning technique,
using a depth 0 search at internal nodes of the search tree
to estimate the true value of depth n nodes. This patch limits
this razoring to the case of internal nodes of depth 1.
Author: Jarrod Torriero (DU-jdto)
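In fragment form the restriction looks roughly like this (guards and the exact margin omitted; not the literal search.cpp diff):
    // Razoring now only fires at internal nodes of depth 1, dropping straight
    // into quiescence search when the static eval is hopelessly below alpha.
    if (depth == ONE_PLY && eval + RazorMargin <= alpha)
        return qsearch<NonPV, false>(pos, ss, alpha, alpha + 1);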
STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 8043 W: 1865 L: 1716 D: 4462
http://tests.stockfishchess.org/tests/view/5a90a9290ebc590297cc86c1
LTC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 32890 W: 5577 L: 5476 D: 21837
http://tests.stockfishchess.org/tests/view/5a90c8510ebc590297cc86d5
Opportunities opened by this patch: it would be interesting to
know if it brings Elo to re-introduce razoring or soft razoring
at depth >= 2, maybe using a larger margin to compensate for the
increased pruning effect.
Bench: 5227124
This is one of the most difficult to understand but also
most important and speed critical functions of SF.
This patch rewrites some part of it to hopefully
make it clearer and drop some redundant variables
in the process.
Same speed as master (or even a bit faster).
Thanks to Chris Cain for useful feedback.
No functional change.
Where variable names are explicitly incorrect, I feel morally obligated to at least
suggest an alternative. There are many, but these two are especially egregious.
No functional change.
As far as I can tell, semiopenFiles are set if there is a pawn anywhere on
the file. The removed condition would be true even if the pawns were very
advanced, which doesn't make sense if we're looking for a trapped rook.
It seems the engine fares better with this removed. My guess is that the
condition that mobility is 3 or less already covers this well enough.
This raises the question of whether it is a mobility issue alone... not sure.
Should I do an LTC test?
STC
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 13377 W: 3009 L: 2871 D: 7497
http://tests.stockfishchess.org/tests/view/5a855be40ebc590297cc8166
Passed LTC
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 16288 W: 2813 L: 2685 D: 10790
http://tests.stockfishchess.org/tests/view/5a8575a80ebc590297cc817e
Bench: 5006365
Make contempt dependent on the current score of the root position.
The idea is that we now use a linear formula like the following to decide
on the contempt to use during a search:
contempt = x + y * eval
where x is the base contempt set by the user in the "Contempt" UCI option,
and y * eval is the dynamic part which adapts itself to the estimation of
the evaluation of the root position returned by the search. In this patch,
we use x = 18 centipawns by default, and the y * eval correction can go
from -20 centipawns if the root eval is less than -2.0 pawns, up to +20
centipawns when the root eval is more than 2.0 pawns.
To summarize, the new contempt goes from -0.02 to 0.38 pawns, depending on
whether Stockfish is losing or winning, with an average value of 0.18 pawns by default.
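In centipawn units the described mapping can be sketched as follows (a direct reading of the formula above, not necessarily the exact rounding used in the patch):
    #include <algorithm>
    // contempt = x + y * eval, with x = 18 cp and the correction saturating
    // at +/- 20 cp once the root eval goes beyond +/- 2.0 pawns.
    inline int dynamic_contempt(int baseContempt, int rootEvalCp) {
        int correction = std::max(-20, std::min(20, rootEvalCp / 10));
        return baseContempt + correction;   // -2 cp when losing badly ... +38 cp when winning
    }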
STC:
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 110052 W: 24614 L: 23938 D: 61500
http://tests.stockfishchess.org/tests/view/5a72e6020ebc590f2c86ea20
LTC:
LLR: 2.97 (-2.94,2.94) [0.00,5.00]
Total: 16470 W: 2896 L: 2705 D: 10869
http://tests.stockfishchess.org/tests/view/5a76c5b90ebc5902971a9830
A second match at LTC was organised against the current master:
ELO: 1.45 +-2.9 (95%) LOS: 84.0%
Total: 19369 W: 3350 L: 3269 D: 12750
http://tests.stockfishchess.org/tests/view/5a7acf980ebc5902971a9a2e
Finally, we checked that there is no apparent problem with multithreading,
despite the fact that some threads might have a slightly different contempt
level than the main thread.
Match of this version against master, both using 5 threads, time control 30+0.3:
ELO: 2.18 +-3.2 (95%) LOS: 90.8%
Total: 14840 W: 2502 L: 2409 D: 9929
http://tests.stockfishchess.org/tests/view/5a7bf3e80ebc5902971a9aa2
Include suggestions from Marco Costalba, Aram Tumanian, Ronald de Man, etc.
Bench: 5207156
This patch simplifies the time management code, removing the extra
thinking time for moves with draw PV and increasing thinking time
for all moves proportionally by around 4%.
The last time the time management was carefully tuned was 1.5-2 years
ago. As new patches were added, time management drifted away from its
optimum. This happens because, as the search becomes more precise, the PV
and the score become more stable, there are fewer fail lows, the best move
is picked earlier and there are fewer best move changes. All these factors
enter into time management, and the average time per move decreases with
more and more good patches. For an individual patch such an effect is small
(with some exceptions) and may go either way, but when there are many of
them the effect becomes substantial. In the same way, the benchmark slowly
drifts down on average as more and more patches are added.
So my understanding is that back in October, adding more thinking time for
a draw PV showed positive Elo because time management was not well tuned:
there was more time available, and the think_hard patch applied this additional
time to moves with a draw PV, while simply re-tuning back to the optimum would
have recovered the Elo anyway. It is possible that the absence of contempt also
helped, as SF9 shows fewer 0.0 scores than the October version.
Anyway, it seems to me that the proper place to deal with a draw PV is the search,
and contempt sounds like a much better solution. In time management there is little
additional Elo to be had, and if some code is not helping, like the code removed
here, it is better to discard it. It is simpler to find genuine improvements if the
code is clean.
• Passed STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 20487 W: 4558 L: 4434 D: 11495
http://tests.stockfishchess.org/tests/view/5a7706ec0ebc5902971a9854
• Passed LTC:
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 41960 W: 7145 L: 7058 D: 27757
http://tests.stockfishchess.org/tests/view/5a778c830ebc5902971a9895
• Passed an additional non-regression [-5..0] test at the time control
of 60sec for the game (sudden death) with disabled draw adjudication:
LLR: 2.95 (-2.94,2.94) [-5.00,0.00]
Total: 8438 W: 1675 L: 1586 D: 5177
http://tests.stockfishchess.org/tests/view/5a7c3d8d0ebc5902971a9ac0
• Passed an additional non-regression [-5..0] test at the time control
of 1sec+1sec per move with disabled draw adjudication:
LLR: 2.97 (-2.94,2.94) [-5.00,0.00]
Total: 27664 W: 5575 L: 5574 D: 16515
http://tests.stockfishchess.org/tests/view/5a7c3e820ebc5902971a9ac3
This is a functional change for the time management code.
Bench: 4983414
The 'eval' debugging command in Terminal did not initialize the Eval::Contempt
variable, leading to random output during debugging sessions (normal search
was unaffected by the bug).
Example of a session where the two 'eval' commands should give the same output,
but did not:
./stockfish
position startpos
d
eval
go depth 20
d
eval
The bug is fixed by initializing Eval::Contempt to SCORE_ZERO in Eval::trace.
No functional change.
The current logic in master is to continue returning quiet moves if their
history score is above 0. It appears that this check can be removed, which
is also more logically consistent with the "skipQuiets" semantics used in
search.cpp.
This patch may open new opportunities to gain Elo by changing or
tuning the definition of 'moveCountPruning' in line 830 of search.cpp,
because obeying skipQuiets without checking the history scores makes
the search more sensitive to 'moveCountPruning'.
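A simplified fragment of the intended behaviour (not the literal movepick.cpp diff):
    // In the QUIET stage: when the search asks to skip quiet moves we now
    // always skip them; master would still return a quiet move here if its
    // history score was above zero.
    case QUIET:
        if (!skipQuiets && move != ttMove)
            return move;
        break;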
STC
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 34780 W: 7680 L: 7584 D: 19516
http://tests.stockfishchess.org/tests/view/5a79f8d80ebc5902971a99db
LTC
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 38757 W: 6732 L: 6641 D: 25384
http://tests.stockfishchess.org/tests/view/5a7afebe0ebc5902971a9a46
Bench: 4954595
Enable link-time optimization in the Makefile when compiling with clang.
Also update travis.yml to use clang++-5.0 and llvm-5.0-dev.
No functional change.
For these recaptures, we are only considering the captures that land on
the recapture square (a small portion of all the captures). Therefore,
scoring all of the captures and running pick_best over the whole group
is not necessary.
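Simplified, the recapture stage then becomes a plain scan (a fragment following the spirit of movepick.cpp, not the literal diff):
    case QRECAPTURES:
        while (cur < endMoves)
        {
            Move move = *cur++;
            if (to_sq(move) == recaptureSquare)
                return move;    // among so few candidates, ordering hardly matters
        }
        break;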
STC
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 85583 W: 18978 L: 18983 D: 47622
http://tests.stockfishchess.org/tests/view/5a717faa0ebc590f2c86e9a7
LTC
LLR: 2.96 (-2.94,2.94) [-3.00,1.00]
Total: 20231 W: 3533 L: 3411 D: 13287
http://tests.stockfishchess.org/tests/view/5a73ad330ebc5902971a96ba
Bench: 5023593
The difference between QCAPTURES_1 and QCAPTURES_2 quiescence search stages
boils down to a simple check of depth. The way it's being done now is
unnecessarily complex.
This patch is simpler, clearer, and easier to understand.
Passed SPRT[-3..1] test at STC:
LLR: 2.95 (-2.94,2.94) [-3.00,1.00]
Total: 99755 W: 22158 L: 22192 D: 55405
http://tests.stockfishchess.org/tests/view/5a71f41c0ebc590f2c86e9cb
No functional change.
It seems to be a waste of time to loop through all remaining root moves
after finishing each PV line. This patch skips this loop until we have reached
the last PV line (this is the way it was done in Glaurung and very early
versions of Stockfish).
No functional change in Single PV mode.
MultiPV=3 STC and LTC tests
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 3113 W: 1248 L: 1064 D: 801
LLR: 2.95 (-2.94,2.94) [0.00,5.00]
Total: 2260 W: 848 L: 679 D: 733
Bench: 5023629