10 days ago Vincent Lejeune posted his Hard Test Suite 2008 here:
http://www.talkchess.com/forum/viewtopi ... 62&t=22402
This includes more than 80 different positions to solve, covering tactical motifs like king's attacks, positional decisions, fortresses, endgames and so on.
I ran the test on a Core 2 Quad E6600 (5-men tablebases) with 4 GB RAM, giving each engine a maximum of 600 seconds per position and 1024 MB of RAM.
Not surprisingly for us, there is no correlation between good performance in position solving and how engines perform in tournaments and matches. But it might be interesting for those doing a lot of analysis. Here are the results for some engines, sorted from best to worst:
Deep Fritz 10.1
Result: 55 out of 81 = 67.9%. Average time = 80.20s / 17.43
Zappa Mexico II
Result: 51 out of 81 = 62.9%. Average time = 78.91s / 15.37
Bright 0.3d
Result: 49 out of 81 = 60.4%. Average time = 64.52s / 16.85
Hiarcs 12
Result: 47 out of 81 = 58.0%. Average time = 75.42s / 17.14
Rybka 2.3.2a x64
Result: 44 out of 81 = 54.3%. Average time = 63.74s / 19.27
Naum 3.1 x64
Result: 40 out of 81 = 49.3%. Average time = 88.80s / 18.10
Glaurung 2.1 x64
Result: 36 out of 81 = 44.4%. Average time = 117.82s / 17.08
Fruit 2.4 Beta A
Result: 34 out of 81 = 41.9%. Average time = 124.00s / 15.70
Loop M1P
Result: 33 out of 81 = 40.7%. Average time = 121.05s / 16.03
Deep Shredder 11 x64
Result: 28 out of 81 = 34.5%. Average time = 120.84s / 14.46
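As a quick sanity check, the quoted solve rates can be reproduced in a few lines of Python. The figures in the post appear to be truncated to one decimal place rather than rounded (e.g. 51/81 = 62.96%, quoted as 62.9%). The `solve_rate` helper below is just my own name for illustration; the solved counts are copied from the results above.

```python
import math

TOTAL = 81  # positions in the Hard Test Suite 2008

def solve_rate(solved: int, total: int = TOTAL) -> float:
    """Solve rate in percent, truncated (not rounded) to one decimal,
    which reproduces every figure quoted in the post."""
    return math.floor(1000 * solved / total) / 10

# Solved counts taken directly from the results list above.
results = {
    "Deep Fritz 10.1": 55,
    "Zappa Mexico II": 51,
    "Bright 0.3d": 49,
    "Hiarcs 12": 47,
    "Rybka 2.3.2a x64": 44,
    "Naum 3.1 x64": 40,
    "Glaurung 2.1 x64": 36,
    "Fruit 2.4 Beta A": 34,
    "Loop M1P": 33,
    "Deep Shredder 11 x64": 28,
}

for engine, solved in results.items():
    print(f"{engine:22} {solved}/{TOTAL} = {solve_rate(solved)}%")
```

Running this prints the same percentages as the list above, confirming the numbers are internally consistent with 81 positions.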
More details here (scroll down for individual results on each position):
http://cegt.foren-city.de/topic,62,-har ... sults.html