<big snip, post is too long to follow>
Uri Blass wrote:
1) Maybe there is a misunderstanding.
I claim that, by the same logic that says you cannot compare moves at different depths, you also cannot compare moves at the same depth.
I did not agree with the logic that you cannot compare moves at different depths.
2) depth 19, move A, score = -1.2; depth 21, move B, score = +0.7
I prefer move B because the score is better.
The point is this. If you prefer B, what did you get from searching A and spending all that effort on it? In a normal alpha/beta search, you would have spent almost no time on A if B is better, which means you spend the effort on the move that counts, using _all_ processors.
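Hyatt's point, that alpha/beta refutes inferior root moves almost for free once a good bound exists, can be seen in a toy negamax sketch. This is my own illustration with a made-up two-move, two-reply tree, not Crafty's actual search:

```python
INF = float("inf")

def alphabeta(node, alpha, beta, stats):
    """Toy negamax alpha/beta: interior nodes are lists, leaves are numbers."""
    stats[0] += 1                # count every node visited
    if not isinstance(node, list):
        return node              # leaf score, from the side to move here
    best = -INF
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha, stats))
        alpha = max(alpha, best)
        if alpha >= beta:
            break                # refutation found; skip remaining replies
    return best

# Two root moves: B is good (+0.7 for us), A is bad (-1.2).
# Leaf values are scores from the opponent's point of view.
root_moves = [("B", [0.7, 0.9]), ("A", [-1.2, 0.3])]

alpha, results = -INF, {}
for name, subtree in root_moves:
    stats = [0]
    score = -alphabeta(subtree, -INF, -alpha, stats)
    alpha = max(alpha, score)
    results[name] = (score, stats[0])
print(results)   # B costs 3 node visits; A is cut off after only 2
```

Because B has already established alpha = +0.7, A's first bad reply is enough to refute it without looking at the rest of its subtree. With deeper trees the gap grows, and that shared bound is exactly the information an unsynchronized split at the root throws away.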
Of course, if I could search move A to depth 21 that would be better information, but I assume I have no time for it.
There are cases where the score for move A is going to be +0.8 at depth 21, but I cannot know that.
I can also ask you what you prefer:
depth 19, move A, score = -1.2; depth 19, move B, score = +0.7. If you say move B, I can show you cases where the picture is different if you search both to depth 21.
It is going to prove nothing.
3) I did not claim that unsynchronized splitting at the root is good relative to other parallel search techniques.
If you get an effective speed improvement of 2.5 by splitting at the root with 100 processors (so you can use one processor for every root move), and an effective speed improvement of 50 with today's parallel search on 100 processors, then of course today's parallel search is better.
First, you are not going to get 2.5. Again, there is some very detailed math in a paper I wrote, vetted by people who know this kind of stuff inside and out. In this particular approach, the term "speedup" is almost meaningless, since there is no sensible way to measure the speedup of an unsynchronized search. Perhaps you could in test positions where there is _one_ solution that takes a pretty deep search to find. And in that case, this approach is simply going to suck. And suck very badly. Compared to even the most elementary parallel search approach.
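One crude way to see why the number cannot be large is an Amdahl-style bound. This is my own toy model, not the math from the paper Hyatt mentions: with good move ordering, a serial alpha/beta search spends most of its time on the best root move, and splitting at the root only parallelizes the cheap refutations of the other moves.

```python
def amdahl_bound(serial_fraction, n_procs):
    """Speedup upper bound when `serial_fraction` of the work cannot be split."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# If the best root move alone is ~50% of the serial work (an assumed,
# illustrative figure, not a measurement), even 100 processors stay under 2x:
bound = amdahl_bound(0.5, 100)
print(round(bound, 2))   # about 1.98
```

Under this toy model, no processor count ever pushes the speedup past 1/serial_fraction, which is why adding more CPUs at the root gives rapidly diminishing returns.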
My only claim is that it is not obvious that you cannot get an effective speed improvement of more than 2 by unsynchronized splitting at the root, and that you cannot trust the Cray Blitz results when modern programs use a different search and clearly search deeper.
Here's the test. Find a group of positions that have a single key move, one which requires a fairly deep search to find. Run the program on one CPU to see how long it takes to find that move. Then run it on N CPUs (where N is large enough to search each root move on a different processor) and again measure the time to find that move.
It is _not_ going to average 2.5x faster. Or 2.0x faster. Or even 1.5x faster. Because of the basic math of alpha/beta. Lukas has continually refused to run "problems" on his cluster, saying something like "the cluster is for playing games, not solving problems..." Which tells me it does poorly on these kinds of tests. And yet nearly every chess position is just another problem position where we need to find the best move...
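The measurement Hyatt describes reduces to a time-to-solution ratio per position, averaged over the test set. A minimal harness might look like this; the timings are invented placeholders, not real data:

```python
# Seconds to find the key move on each test position, 1 CPU vs. N CPUs.
# These numbers are made up purely for illustration.
t_1cpu = [40.0, 90.0, 12.0]
t_ncpu = [22.0, 60.0, 8.0]

speedups = [a / b for a, b in zip(t_1cpu, t_ncpu)]
avg_speedup = sum(speedups) / len(speedups)
print(round(avg_speedup, 2))   # about 1.61 for these made-up timings
```

The point of using single-solution problem positions is that "found the key move" gives an unambiguous stopping condition, which an unsynchronized search otherwise lacks.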
Maybe comparing odd and even plies caused problems with Cray Blitz, but today, when programs reach greater depths with all the reductions and extensions, you often get odd and even plies even when you search to the same depth, and I do not understand your comment about having two different searches
for Crafty without check extensions.
My claim was that comparing depth A with depth A for default Crafty is equivalent to comparing depth A with depth A+1 for a modified Crafty with no check extensions.
Here is the point. For a single iteration in Crafty, everything is searched the same way, same extensions, same reductions, same null-move searches, same evaluation, etc. Every move searched at the root gets extended and reduced in different ways. I agree with that. But the same rules apply to all moves. If I suddenly just search one of the moves to an extra ply, first I burn a significant amount of extra time doing so, and then the question is do I get anything back as a result. The answer is nothing useful within the alpha/beta framework. So even though every move is likely going to reach many different depths at different nodes, all are subject to the same strategy and overall things are reasonably equal in terms of what is done, how deep the branches go, etc.
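The "significant amount of extra time" for one extra ply follows from the effective branching factor: each additional ply multiplies an iteration's cost by roughly that factor. A back-of-envelope sketch, where the branching factor of 2 is an assumed round number rather than a Crafty measurement:

```python
def iteration_cost(branching_factor, depth):
    """Rough node-count model: search cost grows geometrically with depth."""
    return branching_factor ** depth

# Deepening one root move from depth 20 to 21 roughly doubles its cost...
extra = iteration_cost(2.0, 21) / iteration_cost(2.0, 20)
print(extra)   # 2.0
```

...and, as the text argues, alpha/beta gives nothing usable back for that doubled effort, since the deeper score cannot be compared against the bound from the shallower moves.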
Saying that comparing depth A and depth A+1 does not make sense for default Crafty, while saying that the same idea is logical for modified Crafty,
does not seem logical to me.
Uri