### Re: asmFish

Posted:

**Sun May 05, 2019 12:56 am**

For those who are interested...

I'm stopping the Stockfish (SF) vs. asmFish match after one week, at 100 games. The score stands at 15:6 in favor of SF after 99 games (game 100 is 40 moves in and equal). That's nearly a 3:1 margin of victory! Here, quality mattered more than calculating speed, in a match practically designed for asmFish's superior speed.

SF's score could have been higher, as SF played weakly in a couple of endgames and failed to convert wins. I only use 5-man tablebases, and the engines otherwise lacked the technique to convert wins with more than five pieces on the board. Also, a couple of times SF incorrectly simplified a dominant middlegame into an endgame of R + RP + BP versus R. It evaluated the two-pawn advantage as winning, not knowing that this particular endgame is almost always drawn. Perhaps this missing endgame knowledge is something that should be added to SF's code.

For some, 100 games is too few for the result to have statistical value. I disagree, as these games ran nearly 3 hours each, so the engines had sufficient time per move to demonstrate their qualities. Again, this match was meant to be more of a tactics test than an overall game test.
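As a rough back-of-envelope (my own sketch, not from the match report), the 15:6 score with 78 draws can be converted into an approximate Elo difference using the standard logistic Elo model. This doesn't settle the sample-size debate, but it shows what the raw score implies:

```python
import math

def elo_diff(wins, losses, draws):
    """Elo difference implied by a match score, via the standard
    logistic model: score = 1 / (1 + 10^(-diff/400))."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games  # fractional score for the winner
    return -400 * math.log10(1 / score - 1)

# SF's result after 99 games: 15 wins, 6 losses, 78 draws
print(round(elo_diff(15, 6, 78), 1))  # about +31.7 Elo for SF
```

With only 21 decisive games out of 99, the error bars on that figure are wide, which is exactly the objection the statistics-minded readers would raise.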

I'll audit each game when game #100 is completed.

All the best,

-Steve-
