niel5946 wrote: ↑Mon Apr 05, 2021 1:59 pm
Here white has two different captures, namely cxb6 and Nxd7, but neither of these is good IMO, since the former will just open up black's dark-squared bishop, and the latter will help black develop the knight (on top of this, the white knight on e5 is much better than black's light-squared bishop, which really doesn't have any squares to go to). Thus, white would lose the initiative by making either of these moves.
This is where the SEE scaling idea comes in: in a more advanced/difficult position, if we could scale an SEE value by some positional aspects - here the good knight vs. bad bishop, and mobility - moves like the two mentioned would be ordered further down, which would improve move ordering and thus probably playing strength.
OK, I see. This is not what I would consider gaining/losing a tempo. It is merely that the tactics contain positionally good or positionally bad moves, and that a materially equal trade can still be positionally bad because of that. Part of this would already be taken into account if you did not use fixed piece values in SEE, but values from the Piece-Square Table. The recapture with the Knight would then be recognized as gaining a lot of Knight PST points in addition to capturing a Knight, while the (presumably high) PST bonus for the well-developed centralized Knight would just evaporate when it gets captured. The argument of opening the Bishop diagonal is based on mobility, though. You would only pay attention to that if you did complete evaluations during SEE.
But that illustrates the point: basically SEE is just a search restricted to moves to a given square (and even forcing a specific order of those captures). To make it fast you can do it with a material-only evaluation. This is what people usually do, because they use the SEE only as a very approximate heuristic for move ordering. Like for deciding "I'd better first try to capture PxN than QxR, because PxN seems to gain me a Knight for a Pawn, while QxR will lose me a Queen for a Rook, because the latter is protected". Taking positional aspects into account for that comparison is totally unnecessary. And it would be hopelessly expensive, as a full evaluation is easily 1000 times more expensive than getting a piece value from a table.
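For concreteness, here is a minimal sketch of such a material-only SEE as a swap-off calculation. It is not anyone's actual implementation: the piece-value table and the cheapest-attacker-first sequencing of attackerValue[] are assumed to be supplied by the engine. Using PST-based values as described above would just mean feeding in pieceValue[piece] + pst[piece][square] instead of the bare table value.
Code: Select all
/* Sketch of a material-only SEE as a swap-off calculation.
   victimValue is the value of the piece first captured;
   attackerValue[] holds the values of the capturing pieces,
   assumed already sequenced in the order they would capture
   (nAttackers < 32). */
int See(int victimValue, int attackerValue[], int nAttackers)
{
    int gain[32], d;
    gain[0] = victimValue;                        /* first capture wins the victim */
    for(d = 1; d < nAttackers; d++)               /* each recapture wins the piece */
        gain[d] = attackerValue[d-1] - gain[d-1]; /* that just moved in            */
    while(--d > 0)                                /* minimax backward: either side */
        if(-gain[d] < gain[d-1])                  /* may refrain from recapturing  */
            gain[d-1] = -gain[d];
    return gain[0];
}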
About when to do this: my idea was not to _only_ do it in the root, since that would probably have minimal effect. It was to do it for nearly all moves close to the root and then gradually stop doing it further down in the tree. This would improve move ordering in the top of the tree, while keeping decent speed closer to the leaves, where more nodes are reached. I haven't quite thought about when to stop doing it for all SEE scores; perhaps half-way down the tree or something...
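A sketch of what that gating could look like; the half-way threshold and all names here (ply, rootDepth, the two ordering helpers) are just assumptions for illustration:
Code: Select all
/* Hypothetical gating of the expensive ordering: ply is the distance
   from the root, rootDepth the nominal depth of the iteration.
   The half-way threshold is an arbitrary tuning parameter. */
if(ply < rootDepth / 2)
    OrderWithScaledSEE();  /* expensive, positionally scaled scores  */
else
    OrderWithPlainSEE();   /* cheap material-only SEE near the leaves */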
Well, the argument stays the same, except that perhaps the factor 1000 should be replaced by 10. What you only do far from the tips will not impact speed, as it is done in roughly 0% of the nodes.
The restriction that moves are limited to go to a given square has much bigger shortcomings than the neglect of positional eval terms, though. Why would you care that, between two moves that have SEE = +1, one was really +1.01 and the other +0.99, when the one with +1.01 in practice loses you a Queen? It is vastly more beneficial to push moves that lose a Queen to the back of your move list than to get the right order for two moves that gain a Pawn. So if you are prepared to increase the time spent on move ordering by a large factor, I would not do it by calling exact evaluations during SEE, but by doing a QS with a PST-only eval.
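A sketch of such an ordering QS, in the same pseudo-code style as the snippet further down; PstEval() and CAPTURES are assumed engine primitives, with PstEval() returning material plus Piece-Square Table terms only:
Code: Select all
/* Stripped-down quiescence search with PST-only evaluation,
   used purely to produce scores for sorting the move list. */
int OrderingQS(int alpha, int beta)
{
    int score = PstEval();            /* material + PST, no slow terms */
    if(score >= beta) return score;   /* stand pat                     */
    if(score > alpha) alpha = score;
    for(CAPTURES) {                   /* captures only, as in normal QS */
        MakeMove();
        score = -OrderingQS(-beta, -alpha);
        UnMake();
        if(score >= beta) return score;
        if(score > alpha) alpha = score;
    }
    return alpha;
}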
Note that to get info from any kind of search that is useful for move ordering, you have to open the search window enough. Upper bounds are not useful for move sorting. I think Michael Sherwin uses this technique, searching all moves at low depth with a wider window, before searching them to full depth with the minimally required window.
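As a sketch, such a pre-search pass could look like this (preDepth, moveScore[] and SortMovesByScore() are made-up names); the open window is what makes the returned scores exact enough to sort on:
Code: Select all
/* Hypothetical pre-search pass: score every move with a shallow,
   open-window search, sort on those scores, then run the real
   search with the minimally required window. */
for(ALL_MOVES) {
    MakeMove();
    moveScore[i] = -Search(-INF, +INF, preDepth); /* preDepth << depth */
    UnMake();
}
SortMovesByScore();  /* descending on moveScore[] */
for(ALL_MOVES) {
    /* full-depth search with the normal (alpha, beta) window */
}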
The question, however, is how useful it is to order moves close to the root. If they were cut-nodes, you would already have found a cut-move for them in the previous iteration, which they find in the TT. As long as that move works, it would probably be best to keep using it. Only when it fails low do you have to deal with the task of finding a replacement. And often the alternatives have not been searched at all. So rather than just trying those in the static move order, you might want to do IID, starting with an open window in the lowest iteration(s). Normal IID would still aggressively pursue the first move that seems to give a cutoff to full depth, and this might not be the best move at all.
In its most general form this technique would be 'self-deepening IID' with an iteration-dependent window widening. (The opposite of 'aspiration', which is an artificial window narrowing.) The 'self-deepening' here means that in case of a beta cutoff you would only deepen the cut-move, but in case it flunks out before you reach full depth, you won't have forgotten that the other moves still have not been searched in some lower iterations, and you switch back to the last depth where the cut-move pre-empted them. Sort of a deepening loop within a deepening loop:
Code: Select all
Search(depth)
{
  for(iterDepth = 1; iterDepth <= depth; iterDepth++) { // IID
    for(ALL_MOVES) {
      int cutDepth = iterDepth;
      MakeMove();
      do {
        score = -Search(cutDepth-1);
      } while(score >= beta && ++cutDepth <= depth); // deepen cut move until full depth
      UnMake();
      if(score > alpha) ...
    }
  }
}
If instead of beta you would use beta + widening[depth][cutDepth] here, and do a full sort of the moves on score after every iteration, you would get the effect you desire. Presumably widening[d][id] would be 0 for small d, and nearly +INF (suppressing all beta cutoffs) for id=1 and large-enough d.
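As an illustration, a widening table with those properties could be initialized along these lines; the taper for intermediate iterations is purely an assumption:
Code: Select all
/* Hypothetical initialization of widening[d][id]: zero near the
   tips (plain beta), near +INF in the first iteration at large
   depth (suppressing all cutoffs), tapering off in between. */
for(d = 0; d <= MAXDEPTH; d++)
    for(id = 1; id <= d; id++)
        widening[d][id] = (d < 4   ? 0         /* tips: normal cutoffs  */
                        :  id == 1 ? INF       /* first iteration: none */
                        :  2*QUEEN_VALUE/id);  /* arbitrary taper       */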