bob wrote:
The problem is, you are leaving out a _lot_ of "middle-work". Where does that large set of independent positions come from? If you feed them to the program (say you take 300 WAC positions and say "gimme the results for these"), then I agree, it will be faster to search them all to the same depth, independently. But if you are analyzing a _game_, how do you create those positions without first doing analysis to discover the interesting positions, and then analyzing those to discover more interesting positions, etc.? That is not going to scale perfectly. It is going to scale poorly.
OK, we first agree on the obvious: if the positions are unrelated to each other, then we agree.
Now let's come to the situation of positions from the same game.
Let's say we have a position and take the 4 best moves (and also suppose the resulting lines are not highly transpositional).
The task is the following:
"For each of the 4 best moves, autoplay the line forward for 20 plies."
(Autoplay with a fixed search depth of 15 for every move played.)
Now you could do this in two ways:
a) One 4-core engine works through the four variations one after another.
b) Four 1-core engines each autoplay one of the variations, in parallel.
Which is faster? The 1-core engines _do_ get the benefit of the hash as their own line is extended move by move.
On the other hand, when the MP engine finishes the first task (autoplaying the first of the 4 candidate moves) and comes to extend the second, its hash is worthless (I have never seen hash entries help across more than 4-5 plies).
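For concreteness, here is a minimal sketch of the two schedules as driver scripts. I use python-chess purely for illustration; the library, the "stockfish" binary path, and the "Threads" UCI option are my own assumptions, not anything from the discussion above. Schedule (a) keeps one 4-core engine (and therefore one hash) and walks the candidates sequentially; schedule (b) gives each candidate its own 1-core engine process, so each one keeps its hash for the whole length of its line.

```python
import chess
import chess.engine
from concurrent.futures import ProcessPoolExecutor

ENGINE = "stockfish"   # hypothetical path to some UCI engine binary
DEPTH = 15             # fixed search depth for every autoplayed move
PLIES = 20             # length of each autoplayed line

def autoplay(eng, start_fen, first_move_uci):
    """Push the candidate move, then autoplay PLIES further moves at fixed DEPTH."""
    board = chess.Board(start_fen)
    board.push(chess.Move.from_uci(first_move_uci))
    for _ in range(PLIES):
        if board.is_game_over():
            break
        result = eng.play(board, chess.engine.Limit(depth=DEPTH))
        board.push(result.move)
    return [m.uci() for m in board.move_stack]

def schedule_a(start_fen, candidates):
    """(a) One 4-core engine (one shared hash) handles the candidates one after another."""
    eng = chess.engine.SimpleEngine.popen_uci(ENGINE)
    eng.configure({"Threads": 4})   # assumes the engine exposes a "Threads" option
    try:
        return [autoplay(eng, start_fen, mv) for mv in candidates]
    finally:
        eng.quit()

def _one_core_worker(args):
    """Worker for (b): its own 1-core engine, so its own hash for the whole line."""
    start_fen, mv = args
    eng = chess.engine.SimpleEngine.popen_uci(ENGINE)
    eng.configure({"Threads": 1})
    try:
        return autoplay(eng, start_fen, mv)
    finally:
        eng.quit()

def schedule_b(start_fen, candidates):
    """(b) Four 1-core engines run in parallel, one candidate line each."""
    with ProcessPoolExecutor(max_workers=len(candidates)) as pool:
        return list(pool.map(_one_core_worker, [(start_fen, mv) for mv in candidates]))

if __name__ == "__main__":
    # e.g. four candidate moves from the starting position
    lines = schedule_b(chess.STARTING_FEN, ["e2e4", "d2d4", "g1f3", "c2c4"])
    for line in lines:
        print(" ".join(line))
```

Either driver does exactly the same per-move searches; the only difference between (a) and (b) is how the hash is kept and shared across the 20-ply lines, which is exactly the point being argued.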