[quote="diep"]Your 3 would of course deserve its own subthread.

[quote]3. What is your opinion of the idea of increasing the null move reduction with increased depth? Stockfish pushes this to the extreme, adding another ply of reduction for each 4 plies of search. I imagine you will think it is a bad idea, but I don't like to assume such things.

Larry[/quote]
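For reference, a minimal sketch of the scheme the question describes - a base null-move reduction that grows by one extra ply for every 4 plies of remaining depth (the numbers are illustrative, not Stockfish's actual formula):

[code]
/* Illustrative only: base R plus one extra ply of reduction per 4 plies
   of remaining depth, as described in the question. Not Stockfish's
   actual formula. */
int depth_dependent_null_R(int depth)
{
    int base_R = 3;              /* assumed base reduction */
    return base_R + depth / 4;   /* e.g. depth 8 -> R = 5, depth 16 -> R = 7 */
}
[/code]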
I look at it maybe from a different viewpoint than you do.
My viewpoint is: "what actually are we doing with nullmove?"
In principle we're making a move; we just replace the move we would normally make with one of the worst possible moves.
In fact we replace our normal search by a refutation search carried out by the opponent, and we do that RECURSIVELY. So we only need to look at THIS position, not at the total search length. The question we need to ask is: how MUCH can we reduce THIS move, assuming it's the worst move in history?
So calculating a mathematical optimum there is very complicated, as we make some heavy assumptions and in fact give the opponent a free move.
It's not possible in the same 'simple manner' as with reductions. This is a totally different type of thing.
Yet in the end we simply replace a move by a search for the opponent.
If we're at search depth D, then the nullmove search gets carried out at depth D - R - 1.
Don't forget that '-1' as well.
In principle, however, we continue our normal search, just at a reduced depth. We do this basically to prove how strong our position 'at least' is.
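As a rough sketch of where that D - R - 1 shows up (the types and helper functions here are assumed placeholders, not actual Diep or Stockfish code):

[code]
/* Sketch only: the engine types and helpers below are assumed to exist
   in some form; this just shows where depth - R - 1 comes from. */
typedef struct Position Position;              /* engine position, defined elsewhere */

int  qsearch(Position *pos, int alpha, int beta);
int  null_move_allowed(const Position *pos);   /* e.g. not in check, enough material */
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);

int search(Position *pos, int depth, int alpha, int beta)
{
    const int R = 3;                           /* assumed fixed null-move reduction */

    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    /* Nullmove: pass the turn and let the opponent run a reduced
       refutation search at depth - R - 1; the "-1" is the ply the
       move itself consumes, R is the extra reduction. */
    if (null_move_allowed(pos)) {
        make_null_move(pos);
        int score = -search(pos, depth - R - 1, -beta, -beta + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;                       /* fail high: position is "at least" this good */
    }

    /* ... normal move loop with the usual reductions goes here ... */
    return alpha;
}
[/code]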
So the first observation of course is that our R is going to be dependent upon how much we reduce in our normal reductions.
If you take more risk there, then it makes sense to do that in nullmove as well.
Not that they are similar at all, but let's put it this way: if we have some crappy way of hashing, say we hash positions to 32 bits, it doesn't make sense to use ECC memory.
Now suppose I assume a reduction of 1 ply for the normal reductions.
We see there that a reduced move gets searched at most at:
D - Red - 1 = D - 1 - 1 = D - 2
With R=2 for nullmove we get to:
D - R - 1 = D - 3
For a given depth we can risk reducing by 100% of that same amount again,
which means that we can double our reduction effort.
So that means D - 4 is OK, which is R = 3 for nullmove.
Of course this is for the main search; what happens in the last few plies in chess engines - no one knows what is best to do there. That also changes every few years. In the 80s they razored. In the early 90s Ed Schroder had optimized it even further and simply forward pruned at even bigger search depths.
Then in the mid 90s, with a tad more computing power, it was only nullmove and no razoring at all, as nullmove by then picked up more tactics as well, and to quote Frans Morsch: "We don't have the system time to do any sophisticated pruning".
So I basically skip the last few plies in the above guessing.
Now if you are doing reductions of 2 plies,
that means effectively D - 2 - 1 = D - 3.
So I would then use R = 5 everywhere for nullmove,
as R + 1 == 6, which is double the 3 of D - 3.
That would mean that you search 1 ply at:
1 = D - R - 1 => D = 2 + R = 2 + 5 = 7 ply
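One way to read that arithmetic: a reduced move costs red + 1 plies of depth, and the nullmove is allowed to cost double that, so R = 2 * red + 1. A tiny sketch of that rule (an interpretation of the reasoning above, not tested engine code):

[code]
/* Null-move R derived from the normal reduction amount, per the
   "double the reduction effort" rule above. */
int null_R_from_reduction(int red)
{
    /* reduced move: searched at D - red - 1, so it costs red + 1 plies;
       nullmove may cost twice that: D - 2*(red + 1) = D - R - 1,
       hence R = 2*red + 1. */
    return 2 * red + 1;   /* red = 1 -> R = 3,  red = 2 -> R = 5 */
}
[/code]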
So I'd then design some sort of super-razoring for the last so many plies, say the last 4 plies, and at ply depths 5..6 I'd do some fast tactics-only nullmove, maybe just a sophisticated qsearch, to pick up some tactics.
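Roughly, such a tiered scheme might be laid out like this (the thresholds and names are only placeholders for the idea, not a tested design):

[code]
/* Placeholder layout of the tiered idea above: heavy forward pruning
   ("super-razoring") in the last few plies, a cheap tactics-only
   nullmove/qsearch band just above that, and R = 5 nullmove elsewhere. */
enum SearchMode { SUPER_RAZOR, TACTICS_ONLY_NULL, FULL_NULL_R5 };

enum SearchMode mode_for_depth(int depth)
{
    if (depth <= 4) return SUPER_RAZOR;        /* "last 4 plies" */
    if (depth <= 6) return TACTICS_ONLY_NULL;  /* ply depths 5..6 */
    return FULL_NULL_R5;                       /* main search, nullmove with R = 5 */
}
[/code]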
From my viewpoint, doing a nullmove with R = 2, 3 or 4 doesn't make much sense if you are going to reduce by a ply or 2 anyway (on top of the normal 1-ply reduction), as that means that just two reduced moves in a row already reduce more than a nullmove does, making nullmove pretty much worthless.
Of course it's unclear what to do in the last few plies.
Maybe it works for you.
Does this answer your question?
What I intend to do with Diep for now is not such a thin search. I intend to use R = 3 everywhere with nullmove and reduce by 1 ply, and to try to make an efficient parallel search on a cluster, which is yet another challenge compared to a shared memory box - if I lose too much to the GCC compiler, to the MPI overhead and to clumsy parallel overhead, then already a 2-socket Sandy Bridge is going to outsearch my 64 power-efficient oldie L5420 Xeon cores.
Then just improve evaluation. My blindfolded guess was that if I can get a ply or 23 and improve evaluation big time, that's enough to win.
Vincent[/quote]
OK, so your answer to my question seems to be that the null reduction should not depend on depth remaining, except on the last few plies (we already do special stuff on those plies). This is very hard to test as it requires testing at deep levels where it is not very practical to get large sample sizes.
You also say that the amount of null reduction should go up with the amount of LMR reduction. But do you mean maximum reduction, average reduction, or what? A typical program might not reduce the first 3 moves, might reduce the next seven by one ply, and might reduce the rest by two plies. So how would you calculate the "proper" null reduction for such a program?
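For concreteness, the "typical program" described here would reduce something like this (purely hypothetical numbers taken from the example above):

[code]
/* The hypothetical LMR scheme from the question: no reduction for the
   first 3 moves, 1 ply for the next seven, 2 plies for the rest. */
int lmr_reduction(int move_number)   /* 1-based move count at this node */
{
    if (move_number <= 3)  return 0;
    if (move_number <= 10) return 1;
    return 2;
}
[/code]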
Larry