mwyoung wrote:
    You know me here, I question everything when it comes to chess assumptions. It is my time to waste.
    geots wrote:
        mwyoung wrote:
            I don't know if you are correct or not, since no one that I know of has tested them at longer time controls yet. I will let the results speak for themselves.
            geots wrote:
                mwyoung wrote:
                    PS. I found the development versions of Stockfish are also weaker at short time controls, but getting better. My games are at longer time controls, where the development versions of Stockfish were doing much better than Stockfish 3 against Houdini 3. That is what got me started testing development versions of Stockfish at long time controls; I was curious.
                They are better, and they are strong - but they are no match for Houdini 3. I am sorry - but such is life.
            That is why, as chess engine testers, we test and share our results. I would love to see more data points on this other than my own, but most likely that will not happen until an official release of Stockfish 4, or 3.5, or whatever the Stockfish team calls the next official version.
        Let me explain something to you. Anytime someone says such and such is better or worse at this control or that control, that is another way of saying the engine in question needs a lot of work. Because the "studs" don't give a shit. They will play you at midnight in a cornfield with extension cords. At any control. To the best, all that is irrelevant bullshit.
    Let's see what the results tell us. The conclusions will be made by Bayeselo, not by me.
Look, this is my last thread, Mark. I have tried to help you a bit, but you are not interested, so I have better shit to do. Just let me close by saying no seasoned tester would ever post results the way you did: unless you list all the conditions - GUI, opening positions or generic book (and if a book, the move limit), tablebases or not, how many cores, what hash - your results are totally useless bullshit. Your problem is you are learning but want to be treated like a pro. And that will turn people off quick.
Bye
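(Editor's note: the point about listing every test condition can be made concrete. Below is a minimal sketch, not anyone's actual tool - the field names are invented for illustration - of a helper that refuses to format a results post unless every condition geots lists travels with it.)

```python
# Hypothetical sketch: a checklist formatter for posting engine-match results.
# The field names below are my own invention; the idea is simply that the
# conditions listed in the thread (GUI, book, move limit, TB, cores, hash,
# time control, exact engine versions) must accompany any posted score.

REQUIRED_FIELDS = [
    "engines",        # exact versions, e.g. "Stockfish dev 130501 vs Houdini 3"
    "time_control",   # e.g. "40 moves in 40 minutes, repeating"
    "gui",            # the GUI or match runner used
    "openings",       # opening positions or generic book; if a book, the move limit
    "tablebases",     # "none" is an answer too
    "cores",          # cores per engine
    "hash_mb",        # hash size per engine, in MB
]

def format_conditions(conditions: dict) -> str:
    """Return a report header, or raise if any required condition is missing."""
    missing = [f for f in REQUIRED_FIELDS if f not in conditions]
    if missing:
        raise ValueError(f"results not reproducible without: {', '.join(missing)}")
    return "\n".join(f"{field}: {conditions[field]}" for field in REQUIRED_FIELDS)
```

A post that starts with the output of `format_conditions` can be re-run and checked by anyone; one that omits a field cannot, which is the whole complaint above.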

