This is simply not true. From the point of view of one node, what you say is right, but doing this recursively makes it wrong. Take your example of extending normal moves by 1, good moves by 2, and doing nothing with bad moves. Suddenly a 1-ply search becomes infinite, going through all the lines with just normal moves. OTOH, in the same 1-ply search, a bad move at the root only gets searched to 0 ply (straight into qsearch). So the two searches cannot be equivalent. (A toy sketch of this is given after the quotes below.)

bob wrote:
How? You have 2 sets of moves, A and B. How is extending A and leaving B alone any different than leaving A alone and reducing B? All B moves are searched one ply deeper than the A moves. Yes, you will report different search depths, but the final result will be _exactly_ the same.

metax wrote:
Are you sure they will converge to the same score and PV? For example, Program B would have a lower branching factor, wouldn't it? So with increasing time, wouldn't Program B get an advantage slowly?

bob wrote:
That was the point of my comments. You can extend, or reduce, or do nothing, which is exactly the same as extending 2, extending 1 or doing nothing. The depths will differ, but in the same time they will all converge to exactly the same score and PV (hash issues notwithstanding, of course). I don't see how they could behave otherwise, once you think about it a bit. You just have to ignore the "depth" number, since the two will be off by one ply (the one that reduces everything but checks will search one ply deeper in the same time required by the one that extends only checks to reach depth N-1).
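To make the "becomes infinite" point concrete, here is a throwaway toy in C. It is not taken from any engine: it is an abstract tree in which every node has one bad and two normal moves (extension-triggering "good" moves never occur in the toy, since the lines that blow up are exactly the all-normal ones), and it does nothing but the depth bookkeeping a search would do, giving up after a fixed node budget, which stands in for "this search never finishes".

Code:
/* Not from any real engine: an abstract tree in which every node has one
 * "bad" and two "normal" moves.  Extension-triggering "good" moves never
 * occur in this toy; the lines that matter here are the all-normal ones.
 * search() only does the depth bookkeeping (consume one ply, add the
 * class-dependent adjustment back) and gives up once a node budget is
 * spent, which stands in for "this search never finishes". */
#include <stdio.h>

enum { BAD, NORMAL, GOOD };

static const long BUDGET = 1000000;
static long nodes;

/* Returns 1 if every line reached the horizon within the budget, 0 if not. */
static int search(int depth, const int adjust[3])
{
    if (++nodes > BUDGET)
        return 0;                   /* treat as "never terminates" */
    if (depth <= 0)
        return 1;                   /* horizon: a real engine drops into qsearch here */

    const int moves[3] = { BAD, NORMAL, NORMAL };   /* the toy move list */
    for (int i = 0; i < 3; i++)
        if (!search(depth - 1 + adjust[moves[i]], adjust))
            return 0;
    return 1;
}

int main(void)
{
    const int reduce_set[3] = { -1, 0, +1 };   /* reduce bad / leave normal / extend good */
    const int extend_set[3] = {  0, +1, +2 };  /* the same deltas shifted up by one ply   */

    nodes = 0;
    printf("{-1, 0,+1}, 1-ply search: %s (%ld nodes)\n",
           search(1, reduce_set) ? "finished" : "gave up", nodes);

    nodes = 0;
    printf("{ 0,+1,+2}, 1-ply search: %s (%ld nodes)\n",
           search(1, extend_set) ? "finished" : "gave up", nodes);
    return 0;
}

Under {-1, 0, +1} the 1-ply search visits a handful of nodes and stops; under {0, +1, +2} it exhausts the budget, because along an all-normal line depth - 1 + 1 == depth and the horizon is never reached.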
EDIT: And now that I think about it, it's not always equivalent even in the case of one node. Reductions are completely meaningless in a 1-ply search (all moves drop straight into qsearch), and in general reductions of >= N ply are all the same in an N-ply search. So the easy case of {-1, 0, 1} (as the set of depth adjustments for bad, normal, and good moves) is not equivalent to {0, 1, 2} in a 1-ply search, but rather to {1, 1, 2}.
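To spell that out with actual numbers, another throwaway toy (again not from any engine, just the arithmetic): it prints the depth each class of root move really receives in a 1-ply search under the two adjustment sets, once you account for the fact that anything at depth <= 0 is just qsearch.

Code:
/* Prints the effective depth each class of root move gets in a 1-ply search
 * under the two adjustment sets from this thread, clamping at the qsearch
 * horizon (depth <= 0 is qsearch either way). */
#include <stdio.h>

int main(void)
{
    const char *name[3]  = { "bad", "normal", "good" };
    const int set_a[3]   = { -1, 0, +1 };   /* reduce / nothing / extend     */
    const int set_b[3]   = {  0, +1, +2 };  /* nothing / extend 1 / extend 2 */
    const int root_depth = 1;

    for (int c = 0; c < 3; c++) {
        int da = root_depth - 1 + set_a[c];
        int db = root_depth - 1 + set_b[c];
        if (da < 0) da = 0;                 /* a reduction past the horizon    */
        if (db < 0) db = 0;                 /* is swallowed by qsearch anyway  */
        printf("%-6s  {-1,0,+1}: %d ply    {0,+1,+2}: %d ply\n",
               name[c], da, db);
    }
    /* Prints 0/0, 0/1 and 1/2: the left column is {0, 0, 1}, the shape of
     * {1, 1, 2} shifted down a ply, not of {0, 1, 2} shifted down a ply,
     * because the reduction on the bad move vanished into qsearch. */
    return 0;
}

In other words, the bad move gets the same treatment under both schemes at the root of a 1-ply search, so the constant-offset argument already breaks right at the horizon.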