Milos wrote:
bob wrote: Completely beyond hopeless...  Once I see a forced mate, or a forced draw, additional plies produce zero increase.  Backing up just one ply is not going to produce a player that is measurably weaker than the perfect player.  While back at depth 1-2-3-4-5 each additional ply produces a significant gain.  Diminishing returns is a fact, not a theory.
Probably you are not making even the smallest effort to understand what I am writing. 
Diminishing return in terms of depth, defined as (delta)Elo/(delta)depth, certainly exists. But it is a completely useless research topic because it doesn't give us any useful information (we already know the answer intuitively).
 
OK, do we agree that if we go from depth D to depth D+1 we get more (Elo) than when going from depth G to depth G+1, so long as G > D?  Because I believe that to be an absolute fact.
Given that, let's move on.  We have two programs A and B.  My program is A and can search to depth 12.  It is new and doesn't do all the bells and whistles.  Your program is B and typically searches to depth 24.   We both start to work on a parallel search at the same time.  After 3 months, we compare notes and I produce a gain of +70 Elo, while you produce a gain of +35.  Is my SMP more efficient?  That is, if I take my SMP search and graft it perfectly into your program, will I get +70?  
Unlikely.  At depths around 12, one additional ply is worth roughly 70 Elo.  At depth 24, by the law of diminishing returns, it takes more than a ply to gain that same 70 Elo.  So comparing the Elo gains of the two programs does not say a thing about the efficiency of the parallel implementations.  Which is _exactly_ what I have been saying from the get-go.  And that is _exactly_ why every parallel algorithm textbook or paper discusses speedup as _the_ measure of parallel algorithm efficiency.  Not abstract things like Elo gain or reduced error margins, just pure speedup: how much faster does the same algorithm run using N processors than when run using 1 processor?
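To be concrete about what the textbooks measure, here is a minimal sketch of speedup and parallel efficiency.  The timing numbers are made up purely for illustration, not measurements from any real engine:

```python
# Hypothetical timings (seconds to reach a fixed search depth).
# These numbers are invented for illustration only.

def speedup(t_serial, t_parallel):
    """Classic parallel speedup: time on 1 processor / time on N processors."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Speedup normalized by processor count; 1.0 would be perfect scaling."""
    return speedup(t_serial, t_parallel) / n_procs

t1, t8 = 640.0, 112.0                 # assumed: 1-cpu time, 8-cpu time
print(speedup(t1, t8))                # the number textbooks compare
print(efficiency(t1, t8, 8))          # fraction of perfect 8x scaling
```

Note that nothing about the engine's Elo appears anywhere: the same algorithm, timed on 1 and on N processors, is the whole measurement.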
Why we have to have this nonsensical kind of discussion is beyond me.  I teach parallel programming.  I have probably every textbook written on the subject in my office.  I have actually read them, and I know what the authors are doing, saying, and measuring.
Is that so hard to grasp?
I'm not interested, when discussing parallel search, in comparing both the parallel algorithm _and_ the underlying chess engine's skill level.  I only care about the parallel search implementation.  Measuring Elo gain gives a number one can certainly draw conclusions from, but not a number I can compare against another program's number to decide which parallel implementation is better...
The real deal is the diminishing return defined as (delta)Elo/(delta)speedup. That one is the interesting one, and so far we do not know the answer.
So to repeat what I wrote many posts back (and what you obviously failed to read): if a speedup from 1x to 2x brings 50 Elo, will a speedup from 512x to 1024x also bring 50 Elo, or will it bring less (or maybe even more)?
Definitely less.  Again, diminishing return is a fact, not a conjecture.  How much less is both debatable and irrelevant.  But less, for certain.
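As a toy illustration of "less, for certain" (the starting gain and the decay rate below are pure assumptions, not measured data), suppose each hardware doubling is worth some fixed fraction of the previous doubling's Elo gain:

```python
# Toy model of diminishing returns per hardware doubling.
# g0 and decay are assumptions chosen for illustration, not measurements.
g0, decay = 70.0, 0.9    # assumed: 1x->2x worth 70 Elo, each later doubling 90% of previous

def gain_for_doubling(k):
    """Elo gained going from 2**k to 2**(k+1) speed, under the toy model."""
    return g0 * decay ** k

print(gain_for_doubling(0))    # the 1x -> 2x doubling
print(gain_for_doubling(9))    # the 512x -> 1024x doubling: clearly smaller
```

The exact shape of the decay is debatable, as the text says; the point of the sketch is only that each later doubling contributes less than the earlier ones.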
The fact is that the EBF shrinks as you progress deeper and deeper (due to more selectivity in the search at higher depths - the recursive nature of some prunings, etc. - and due to fewer pieces on the board), so you actually gain more plies for the same speedup. The plies you gain have less value than before (in terms of Elo), but there are more of them. And the question is which effect is dominant (if any). And we don't know the answer.
Actually, some of us do...  You get the same Elo gain by doubling the speed of the hardware naturally, or with a parallel search that reaches the same depth in 1/2 the time.  Because both will reach exactly the same depth, at exactly the same time, and there's no difference.
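The "more plies per doubling at lower EBF" arithmetic from the quoted paragraph is easy to check: if search time grows like EBF raised to the depth, then a speed factor S buys log(S)/log(EBF) extra plies.  A minimal sketch (the EBF values are assumed round numbers, not measurements from any engine):

```python
import math

def extra_plies(speed_factor, ebf):
    """Plies gained from a given speed factor, assuming time ~ ebf**depth,
    so depth gained = log(speed_factor) / log(ebf)."""
    return math.log(speed_factor) / math.log(ebf)

print(extra_plies(2.0, 2.0))   # doubling at EBF 2.0: exactly 1 extra ply
print(extra_plies(2.0, 1.5))   # doubling at EBF 1.5: ~1.7 extra plies
```

This shows the first half of the quoted argument (lower EBF means more plies per doubling); what each of those deeper plies is worth in Elo is the part the two posters dispute.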