So what else could you do? If the sorting of the non-captures is purely random, there seems no justification at all in reducing some more than others.
It will always be true that reducing moves will make you blind to any plans that involve these moves, and especially plans that use a sequence of highly reduced moves. So you'd better not reduce moves that are good, or you will start to overlook a lot.
In the example of your OP the black Queen is obviously in trouble, having no safe moves. So it would be a good plan to trap and then attack it, and Bd2 fits into that plan. Such considerations are totally beyond engines, however. They sacrifice reason for Elo. It is not important to trap the Queen here. Positions where you can successfully trap the Queen do not occur often enough to have any impact...
Reduction Research Question
Moderators: hgm, Dann Corbit, Harvey Williamson
-
hgm
- Posts: 27701
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
-
D Sceviour
- Posts: 570
- Joined: Mon Jul 20, 2015 5:06 pm
Re: Reduction Research Question
hgm wrote: So what else could you do? If the sorting of the non-captures is purely random, there seems no justification at all in reducing some more than others.
It would seem that way. I am trying a new idea called gain score reduction. The idea is first to abandon LMRs completely and ignore all move-count-based reduction. The best move could be at the bottom of the move list, and there is no oracle to know any better in advance. Each non-capturing move is assessed independently as to its potential merit, based on SEE values, PST tables and history tables. The SEE values are similar to the Magic SEE value you mentioned earlier, except that they are computed move by move rather than during the move generation phase. The results so far have not been promising, for as yet unclear reasons. Then again, the results have not been that bad either.
hgm wrote: Positions where you can successfully trap the Queen do not occur often enough to have any impact...
They are in my games.
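For concreteness, here is a minimal sketch of what such a per-move "gain score" and its reduction could look like, assuming the usual SEE, piece-square-table and history machinery is already in place. All names and thresholds below are placeholders for illustration, not code from any existing engine.
Code: Select all
/* Minimal sketch of the gain-score reduction idea above.  see(), pst[][]
 * and history[][][] stand for whatever SEE, piece-square and history
 * infrastructure the engine already has; none of this is real code from
 * an existing program. */
extern int see(int from, int to);          /* static exchange eval of the move */
extern int pst[16][64];                    /* piece-square tables              */
extern int history[2][64][64];             /* from-to history counters         */

/* Merit of a quiet move, judged on its own, never on its place in the list. */
static int gain_score(int piece, int from, int to, int stm)
{
    return see(from, to)
         + pst[piece][to] - pst[piece][from]
         + history[stm][from][to] / 64;
}

/* Map the merit to a reduction; the move counter never enters the formula. */
static int gain_reduction(int score, int depth)
{
    if (score >= 0)  return 0;             /* looks useful: full depth         */
    if (score > -50) return 1;             /* slightly dubious: mild reduction */
    int r = depth / 3;                     /* clearly poor: reduce harder,     */
    return r > 3 ? 3 : r;                  /*   capped at three plies          */
}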
-
D Sceviour
- Posts: 570
- Joined: Mon Jul 20, 2015 5:06 pm
Re: Reduction Research Question
hgm wrote: But the current paradigm for engine development is indeed close to black magic. You just try the various options, and test what works best. The 'official' meaning of LMR is that you re-search unreduced when the move beats alpha, i.e. only moves that fail low should be reduced. PVS is just a technique to increase alpha-beta efficiency through 'internal aspiration', i.e. narrowing the window compared to what it should logically be as a gamble, which pays off when the score is not in the range that you artificially excluded, and requires a re-search otherwise. It is not supposed to cause any cutoffs or reductions that could not have been made by normal alpha-beta and favorable move ordering.
The problem with move-count-based reductions is that they assume the move list is binomially distributed, and that all moves have the same opportunity to fail high or to produce a new PV. That is not true. The analogy is the difference between trying to hit the center of a target and trying to decide how far away to stand from the target in order to hit it. There is a correlation, but the variables are independent. If you stand too far from the target, you will never hit anything, or a hit is too rare and volatile to be trustworthy. Some programs, like RobboLito, avoid reductions and jump immediately to a move-list prune.
To demonstrate this, output the scores from a full width one-ply search on any move list. You will see the scores do not fit a binomial pattern. Take a one-ply sorted move list from Bratko-Kopec #3 (picked at random):
[d]q1rr1k/3bbnnp/p2p1pp1/2pPp3/PpP1P1P1/1P2BNNP/2BQ1PRK/7R b - - 0 1
...
Code: Select all
# Move Value Capture
1 Kg8 15
2 Qc7 2
3 a5 -8
4 Qb7 -8
5 Qd8 -8
6 Rd8 -8
7 Nh5 -10
8 Qb8 -11
9 Rg8 -11
10 h6 -12
11 h5 -15
12 Nf5 -16
13 Nh6 -17
14 Bd8 -20
15 f5 -23
16 Qc6 -24
17 Nd8 -26
18 Bf5 -27
19 Qa8 -29
20 Be6 -32
21 g5 -52
22 Ng5 -59
23 Ba4 -275 1
24 Bg4 -290 1
25 Ne6 -298
26 Bb5 -329
27 Bc6 -331
Currently, I am attempting, with some success, a flat reduction profile rather than a logarithmic one. The first few moves use a full-depth search for captures and killers. Then, for non-capture moves, a reduction is used that depends on the depth of search and PVS status. Move count is not considered. Any comments?
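For reference, a hedged sketch of how such a flat, depth- and PVS-dependent reduction could be wired into a PVS move loop. The names (search(), is_quiet_move()) and the numbers are placeholders, not anyone's actual code; a reduced move that still beats alpha is re-searched at full depth, in line with the LMR convention quoted earlier in the thread.
Code: Select all
/* Hypothetical sketch of a flat reduction profile in a PVS framework.
 * search() and is_quiet_move() stand for the host engine's own search
 * and move classification; make/unmake of the move is omitted. */
extern int search(int alpha, int beta, int depth);
extern int is_quiet_move(int move);

static int flat_reduction(int depth, int is_pv, int quiet)
{
    if (!quiet || depth < 3) return 0;     /* captures, killers, low depth: full */
    int r = 1 + depth / 6;                 /* same for every quiet move --       */
    if (!is_pv) r++;                       /*   the move counter plays no role   */
    return r;
}

/* Searching one late move inside the move loop (the first move is assumed
 * to have been searched already with a full window). */
static int search_late_move(int move, int alpha, int beta, int depth, int is_pv)
{
    int r     = flat_reduction(depth, is_pv, is_quiet_move(move));
    int score = -search(-alpha - 1, -alpha, depth - 1 - r);   /* reduced, zero window */
    if (score > alpha && r > 0)                               /* beat alpha anyway:   */
        score = -search(-alpha - 1, -alpha, depth - 1);       /*   drop the reduction */
    if (score > alpha && score < beta && is_pv)
        score = -search(-beta, -alpha, depth - 1);            /* normal PVS re-search */
    return score;
}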
-
cdani
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Reduction Research Question
In Andscacs I sort depth 0 by the number of nodes analyzed in the previous iteration, depth 1 always by static evaluation (a small win), and depth 2+ with the standard hash/captures/history plus other criteria for quiets, ...
So every move has some reason to be where it is. So, following your example, Ng5 has a slightly greater probability of being weaker than Qd8.
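Expressed as a sketch (with invented helper names, not Andscacs code), the scheme amounts to picking the sort key by distance from the root:
Code: Select all
/* Rough sketch of the ordering scheme described above.  The helpers
 * (subtree_nodes(), eval_after(), standard_order_key()) are invented
 * for illustration only. */
extern long long subtree_nodes(int move);   /* nodes spent on this root move last iteration */
extern int eval_after(int move);            /* static eval of the position after the move   */
extern int standard_order_key(int move);    /* hash move / captures / killers / history     */

static long long order_key(int move, int ply)
{
    if (ply == 0) return subtree_nodes(move);   /* root: previous-iteration node counts */
    if (ply == 1) return eval_after(move);      /* one ply from root: static evaluation */
    return standard_order_key(move);            /* deeper: the usual heuristics         */
}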
Daniel José -
http://www.andscacs.com
-
hgm
- Posts: 27701
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Reduction Research Question
IMO the problem is that the requirements for having a good score and having a small reduction are quite uncorrelated. You want scores to be reliable, i.e. the move should almost certainly be no worse than the score you assign it. OTOH, it pays to do deep searches on moves that have some potential to be good, even if most of the time they will turn out to be disastrous. If they are disastrous you just won't play them, and you have only wasted a little time on their search. But that is what search is for; if we only searched the moves we are going to play, we would be done in 30 nodes. Most of the search is there to exclude alternatives.
So basically you want to reduce passive moves, and exempt active moves from reduction. It would be very helpful if you had some static analysis that would be able to see if a move could serve some purpose. E.g. clearing a pin can be useful. Setting up a discovered attack can be useful.
It would of course be rather time consuming to do such a static analysis. But if done only at high remaining depth, it would be done in so few nodes that this doesn't matter. And it is especially in these nodes that people nowadays apply insanely high reductions, so exempting active moves from that would have a big impact.
Insanely high null-move reductions are also very detrimental to the ability to play good Chess (as opposed to having high Elo). Null moves are really central in human thinking, for formulating long-range plans. You want to be able to figure out if you can attack that immobile trapped or pinned piece with an extra Knight or King so you can gain it when the opponent stays idle, even if it takes 5 turns to get there. If each of these turns were reduced by 10 ply, you'll never get to do that. Moves with the same leaper, or in general moves that were not possible two ply before, should never be allowed to accumulate such an enormous reduction.
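One hedged way to express both suggestions in code. The activity tests (clears_pin(), sets_up_discovery(), was_possible_two_ply_ago()) and the cap value are invented placeholders for whatever static analysis an engine could afford at high remaining depth:
Code: Select all
/* Hypothetical sketch: exempt "active" moves from reduction, and never
 * let related moves accumulate a huge reduction along one line. */
extern int clears_pin(int move);
extern int sets_up_discovery(int move);
extern int was_possible_two_ply_ago(int move);

#define MAX_PATH_REDUCTION 6        /* cap on the total reduction along one line */

static int adjusted_reduction(int move, int base_r, int depth, int path_r)
{
    if (depth >= 8 &&               /* the static analysis is cheap here    */
        (clears_pin(move) || sets_up_discovery(move)))
        return 0;                   /* active move: search at full depth    */
    if (!was_possible_two_ply_ago(move) && base_r > 1)
        base_r = 1;                 /* newly created possibility: go easy   */
    if (path_r + base_r > MAX_PATH_REDUCTION)
        base_r = MAX_PATH_REDUCTION - path_r;
    return base_r > 0 ? base_r : 0;
}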
-
cdani
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Reduction Research Question
In my intuition I see all this as remotely like a cache hierarchy: the slowest levels are like the heavy algorithms that give better moves, and the fastest levels are like the lighter ones; the heavier ones are built on top of the lighter ones.
I see some room for improvement thinking along these lines. For example, some sort of improved SEE can probably be a win if applied at high-depth nodes.
Or, as I said before, at least for Andscacs using the static evaluation (a slow algorithm) is good for the first ply after the root ply (in my earlier message I mistakenly wrote depth, but I meant ply).
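A minimal sketch of that gating idea, with placeholder names (improved_see(), cheap_key()) rather than real Andscacs code: the expensive key is computed only where the remaining depth, and hence the subtree, is large.
Code: Select all
/* Sketch of the "cache hierarchy" intuition above: spend the expensive
 * ordering work only where it pays for itself. */
extern int improved_see(int move);   /* slower, more thorough exchange analysis */
extern int cheap_key(int move);      /* ordinary, fast ordering key             */

static int ordering_key(int move, int depth)
{
    return depth >= 10 ? improved_see(move)    /* few such nodes, big subtrees */
                       : cheap_key(move);      /* everywhere else, stay fast   */
}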
Daniel José -
http://www.andscacs.com
-
D Sceviour
- Posts: 570
- Joined: Mon Jul 20, 2015 5:06 pm
Re: Reduction Research Question
cdani wrote: So, following your example, Ng5 has a slightly greater probability of being weaker than Qd8.
Hello Daniel. The ply-0 approach with node sorts has been used by different programs. Currently, I have abandoned ply-0 node sorts, as they produce no reduction in the average final node count. I am not sure whether that is good logic.
The ply-1 full static evaluation looks very interesting and is something to look at in the future. However, a full static evaluation is inferior to the values produced by an actual one-ply search. The B-K #3 one-ply results shown above are as accurate as one can get, as they derive from a full quiescence evaluation.
For the remaining plies, no SEE score can produce a better result at the horizon than a full quiescence search. The point is that the final move-list values will never be anything better than a flat profile, whether they come from an (almost) random guess, a SEE score, a static evaluation, or a full quiescence search. In fact, the recommended solution for B-K #3 is move 15 in the list, f5. There is no substitute for depth.
The best use for SEE evaluation may be sorting exchanges, but there are very few exchanges in a move list. Once the remaining moves in a move list appear to fail low, what is left besides black-magic guesses? That is the reason for attempting a flat reduction for all moves.
_______________________________________________________
Looking again at the original B-K #3, the correct FEN string reads:
2q1rr1k/3bbnnp/p2p1pp1/2pPp3/PpP1P1P1/1P2BNNP/2BQ1PRK/7R b - -
How could this possibly have been turned into the incorrect position given above as:
q1rr1k/3bbnnp/p2p1pp1/2pPp3/PpP1P1P1/1P2BNNP/2BQ1PRK/7R b - - 0 1
B-K #3 was pasted into WinBoard and then copied from WinBoard back to a text file. The corruption cannot even be explained as a reasonable sequence of moves; it looks as if the back rank has been shifted two squares to the left. Therefore, perhaps there is some kind of bug in WinBoard.
I will have to spend more time double-checking FEN strings. They cannot be taken for granted just because they have been copied from another source.
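A trivial, self-contained check of the kind that would have caught this: every rank in the board field of a FEN must describe exactly eight squares. This is illustrative code written for this thread, not a WinBoard routine, and it does not validate the piece letters themselves.
Code: Select all
#include <ctype.h>
#include <stdio.h>

/* Returns 1 if every rank of the board field describes exactly 8 squares
 * and there are exactly 8 ranks, 0 otherwise. */
static int fen_ranks_ok(const char *fen)
{
    int squares = 0, ranks = 1;
    for (const char *p = fen; *p && *p != ' '; p++) {
        if (*p == '/') {
            if (squares != 8) return 0;
            squares = 0;
            ranks++;
        } else if (isdigit((unsigned char)*p)) {
            squares += *p - '0';      /* a digit counts that many empty squares */
        } else {
            squares++;                /* a piece letter occupies one square     */
        }
    }
    return squares == 8 && ranks == 8;
}

int main(void)
{
    /* The broken and the correct Bratko-Kopec #3 strings from this thread: */
    printf("%d\n", fen_ranks_ok("q1rr1k/3bbnnp/p2p1pp1/2pPp3/PpP1P1P1/1P2BNNP/2BQ1PRK/7R b - - 0 1")); /* 0 */
    printf("%d\n", fen_ranks_ok("2q1rr1k/3bbnnp/p2p1pp1/2pPp3/PpP1P1P1/1P2BNNP/2BQ1PRK/7R b - -"));    /* 1 */
    return 0;
}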
-
cdani
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Reduction Research Question
D Sceviour wrote: For the remaining plies, no SEE score can produce a better result at the horizon than a full quiescence search. The point is that the final move-list values will never be anything better than a flat profile, whether they come from an (almost) random guess, a SEE score, a static evaluation, or a full quiescence search.
I see room for improvement in SEE, for example in some sort of fast evaluation of all captures after the move, not only the captures on the destination square. Of course, this will only be valid at larger-depth plies.
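For illustration only, a sketch of what such a whole-board capture check could look like after the move has been made, using invented helpers (see_capture_on(), opponent_capture_squares()); gating it by remaining depth, as suggested, would keep the cost acceptable.
Code: Select all
/* Sketch: instead of SEE on the destination square only, look at the best
 * capture the opponent then has anywhere on the board.  Both helpers are
 * placeholder names. */
extern int see_capture_on(int sq);             /* SEE gain of the best capture on sq */
extern int opponent_capture_squares(int *sqs); /* fills sqs[] with target squares,
                                                  returns how many there are         */

static int post_move_threat(void)
{
    int sqs[64];
    int n = opponent_capture_squares(sqs);
    int worst = 0;
    for (int i = 0; i < n; i++) {
        int gain = see_capture_on(sqs[i]);     /* from the opponent's point of view */
        if (gain > worst)
            worst = gain;                      /* biggest material threat created   */
    }
    return worst;                              /* 0 if nothing of ours hangs        */
}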
Daniel José -
http://www.andscacs.com