hristo wrote:
hgm wrote:
bob wrote:
Let me rephrase my comment. "What exactly are you talking about?"
Well, read back and you will know...
In short: you are claiming that 1/x is always smaller than 10/y because 1 < 10, no matter what x and y are ("regardless of investment"). That says much for your abilities in arithmetic.
H.G.,
this was not Bob's argument. Even if the above could be inferred from what he wrote, it was certainly not the central part of his argument.
(I must admit that sometimes you guys argue about strange things -- even though both of you are certifiably smart.)
For instance, you said: "As the table is so small that it always resides in the L1 cache, this must be faster than the linear-array method." This presumes that the linear array doesn't fit in the cache, or that the search would have to hit L2 or main memory. The evidence Bob presented shows that L1 is sufficient in his case -- scanning only a single entry, on average, fits in L1 just fine.
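To make the point concrete, here is a minimal sketch of the list method being discussed. This is an illustration, not anyone's actual engine code: the names (`rep_list`, `rep_count`, `last_irreversible`) are invented. The engine keeps the Zobrist keys of positions on the game path in a linear array; only positions since the last irreversible move (capture or pawn push) can repeat, so the backward scan usually touches just a handful of 8-byte entries -- a few cache lines at most, which is why it tends to stay in L1.

```c
#include <stdint.h>

#define MAX_PLY 1024

static uint64_t rep_list[MAX_PLY]; /* hash keys of positions on the game path */
static int      rep_count;         /* number of stored keys                    */
static int      last_irreversible; /* index of the first position that can still repeat */

/* Returns nonzero if 'key' already occurred since the last irreversible move.
 * The same side to move recurs every 2 plies, so the scan steps by 2. */
int is_repetition(uint64_t key)
{
    for (int i = rep_count - 2; i >= last_irreversible; i -= 2)
        if (rep_list[i] == key)
            return 1;
    return 0;
}
```

When the 50-move counter is low (the common case), `rep_count - last_irreversible` is small and the loop exits after an entry or two, which matches Bob's "single entry, on average" observation.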
Both of you started this discussion in a rather cordial tone, and then each of you zeroed in on a particular aspect of what the other side was saying that could be argued against.
For the purpose of chess programming, however, I believe that your approach (if it is, on average, a bit faster than the list method) might yield better chess-engine results for single-CPU implementations. Based on pure speculation, having 0.1% extra time for eval, search, etc. would be more beneficial than handling the fringe cases of the 50-move rule.
Besides, your implementation seems rather cute, just like you.
In the case of SMP, it is not immediately obvious that your approach would be better. I'm sorry, but in this case I dare not speculate. I have spent way too much time debugging threads -- so, "show me the money/profiler output"!
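For readers following along, here is a hedged sketch of the kind of small-table scheme H.G. is describing -- a table small enough to live permanently in L1, indexed by the low bits of the position's Zobrist key, with counts maintained incrementally in make/unmake so the check itself is O(1). The names and sizes below are assumptions for illustration, not his actual implementation; note also that different positions can collide on the same slot, so a real engine would have to verify a hit (which is one of the fringe-case costs mentioned above).

```c
#include <stdint.h>

#define REP_BITS 9
#define REP_SIZE (1 << REP_BITS)   /* 512 one-byte entries: fits easily in L1 */

static unsigned char rep_table[REP_SIZE];

static unsigned rep_index(uint64_t key) { return (unsigned)(key & (REP_SIZE - 1)); }

void rep_push(uint64_t key) { rep_table[rep_index(key)]++; } /* call in MakeMove   */
void rep_pop (uint64_t key) { rep_table[rep_index(key)]--; } /* call in UnmakeMove */

/* Nonzero if this position (or one colliding with it) is already on the
 * game path.  Check BEFORE pushing the current position's key. */
int rep_seen(uint64_t key) { return rep_table[rep_index(key)] > 0; }
```

The SMP concern is visible right in this sketch: the table is shared mutable state, so under multiple search threads the increments and decrements would need per-thread tables or synchronization, which is exactly why "show me the profiler output" is the right attitude.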
hgm wrote:
As for the relative speed: count instructions, and you will know. Or is counting too difficult too?
Even if your code is smaller and executes a bit faster, that is not sufficient to estimate its 'cost' in execution time. For instance, if your code must be called more often (than the simple list search), and from different portions of the evaluation or of overall program execution, it is possible that _it_ [your code] will cause extra cache misses ... so just counting instructions is (quite often) not enough.
(BTW, why the vitriolic remark?)
Bob,
I agree that optimization must be considered from the top ("look into the highest time-consuming functions and algorithms first"). However, it is perfectly reasonable to consider optimizations at other levels (including the bottom), so long as such optimizations can be regression-tested and are opaque with respect to the rest of the application. Always optimizing at the top can unnecessarily restrict one from gathering low-hanging fruit elsewhere.
Regards to both of you,
Hristo
My point was simply that if the goal is to make the program measurably faster, then the place to start is at the top. I clearly showed that in my case, 0.51% over a pretty long game that ended by the 50-move rule was not the place to start. And, in fact, I questioned whether the hash approach was any faster at all, giving that 0.51% as "the target to beat". If hashing is no faster, then the discussion was pointless to start with. If it is faster, it still is not worth the effort of rewriting repetition detection until all the other things have been addressed. This is not exactly low-hanging fruit. It is more a "low-hanging molecule" that is not going to offer much for the effort of picking it off...
One certainly doesn't have to make his first programming effort on the procedure at the top of the profile. I often look at the top group, figure out where I might get the best speedup the quickest, and start there. For example, the work on rotated bitboards years ago was not on the code that used the most time. But by that same reasoning, I am certainly not going to go to the bottom of the heap and try to speed up a 0.51% module when (a) it is not clear that the proposed improvement is any faster, and (b) the proposed "improvement" might actually hurt later when it is "SMP time"...
hgm wrote:
I think I explained everything crystal clear, and if it is still beyond you, you will simply have to live with that...