er, well ...

bob wrote:
OK, I can do that. But it needs to be precisely specified what I am measuring.

chrisw wrote:
Well, I have an idea I tested, which is why I ask.

bob wrote:
I don't have any real data I can find, but a few years ago I changed the way I sort them.

chrisw wrote:
does anyone sort moves at ply one? presumably .....
what's your hit rate for finding the later bestmove? (in an unsorted list, a later bestmove should, on average, be found midway through the list)
anyone with better results than that? where, on average, is a bestmove found in your list? First 25%? First 30%?
Chris
I originally did a static eval of each move, and used SEE to determine whether the move appeared to lose material or not. I used this in Cray Blitz for years for the first sort for a brand new search.
A few years back, I replaced the SEE and eval with a simple call to Quiesce() after making each move, and used that score to sort them. I called Quiesce() with an open window to get a real score for each root move.
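A minimal sketch of that initial sort in C. The names here (`RootMove`, `sort_root_moves_initial`) are hypothetical, and the Quiesce() call is assumed to have already been made with an open window for each move; Crafty's actual data structures differ.

```c
#include <stdlib.h>

/* Hypothetical root-move record: the move plus the score used for sorting. */
typedef struct {
    int move;   /* encoded move (placeholder representation) */
    int score;  /* open-window Quiesce() score obtained after making the move */
} RootMove;

/* Comparator: highest quiescence score first. */
static int cmp_by_score(const void *a, const void *b) {
    const RootMove *ma = (const RootMove *)a;
    const RootMove *mb = (const RootMove *)b;
    return mb->score - ma->score;
}

/* Order root moves for a brand-new search: each entry was already scored
 * by calling Quiesce() with an open window after making the move. */
void sort_root_moves_initial(RootMove *moves, int n) {
    qsort(moves, n, sizeof(RootMove), cmp_by_score);
}
```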
That's the sort for starting off. Once I do the first iteration, I then use the node count for the subtree produced by each root move to order them beyond that point. In general the best move from the previous iteration will have a node count _way_ bigger than any other move, unless there were "changes of mind" during that iteration where any one of the best moves could have the biggest count. To solve this, I always force the best move from ply-1 to the top of the list, and sort the rest based on their node counts.
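The later-iteration ordering might look like the following sketch: force the previous iteration's best move to the front, then sort the rest by descending subtree node count. All names are hypothetical, not Crafty's actual code.

```c
#include <stdlib.h>

typedef struct {
    int move;                  /* encoded move (placeholder) */
    unsigned long long nodes;  /* subtree size from the previous iteration */
} RootMove;

/* Comparator: larger subtree first. */
static int cmp_by_nodes(const void *a, const void *b) {
    const RootMove *ma = (const RootMove *)a;
    const RootMove *mb = (const RootMove *)b;
    if (mb->nodes > ma->nodes) return 1;
    if (mb->nodes < ma->nodes) return -1;
    return 0;
}

/* After an iteration completes: pin the best move from ply-1 to the top
 * of the list, then order the remainder by their node counts. */
void sort_root_moves_next(RootMove *moves, int n, int best_move) {
    int i;
    for (i = 0; i < n; i++) {
        if (moves[i].move == best_move) {
            RootMove tmp = moves[i];
            moves[i] = moves[0];
            moves[0] = tmp;
            break;
        }
    }
    if (n > 1)
        qsort(moves + 1, n - 1, sizeof(RootMove), cmp_by_nodes);
}
```

Pinning the best move first handles the "changes of mind" case described above, where several moves may carry inflated node counts from the same iteration.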
We started doing this in Cray Blitz at the suggestion of Harry Nelson, who was a fanatic about solving problem positions. He noticed that in a position where there is a deep winning combination on a move that looks bad at first, each iteration the size of the subtree for that move grows with respect to the subtrees of other moves, because it gets harder and harder to prove the move is bad, until finally that move becomes best. The idea is to move those moves that are a bit harder to search closer to the front of the list to get to them sooner, rather than later (or never, if the iteration times out before they are reached.)
I do this today in Crafty, and several others have tried it and use it as well. Whether there are better approaches remains to be seen, but this does work pretty well...
would you be able to come up with some stats, say on a test suite of a zillion positions ...
how far down the move list is the bestmove (say anything found after 60 secs or so) - then average it over the N positions. express it as a percentage (with no sort presumably giving a result of 50%)
clearly if one can push the likely candidate bestmoves way up the front of the move list, this is equivalent to some search speed improvement (it's faster to find a new best if it's 3rd in the move list than if it's 30th)
option 1:
For each iteration, I simply count how many times the best move was first, or second or ... down to how many times was the best move last.
option 2:
Only count the final iteration stats, since those reflect what happened on the iteration that actually produced the move played...
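Either option reduces to the same tally: a histogram of where the eventual best move sat in the sorted list. A sketch, with hypothetical names (option 1 would call `record_best_rank` every iteration, option 2 only on the final one):

```c
#define MAX_MOVES 256

/* rank_count[r] = number of times the move that became best was found
 * at position r (0-based) in the sorted root-move list. */
unsigned long rank_count[MAX_MOVES];

/* Tally the list position of the best move for one iteration/position. */
void record_best_rank(const int *move_list, int n, int best_move) {
    int i;
    for (i = 0; i < n && i < MAX_MOVES; i++) {
        if (move_list[i] == best_move) {
            rank_count[i]++;
            return;
        }
    }
}
```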
It would be easy to do either way although option 1 would be the easiest to do for me. I could then run any number of test positions you choose and report back on the results.
One _important_ note however, is that root-move ordering for normal games is not the same as root-move ordering for problem positions. Typically a test position has some surprising move (commonly a sacrifice) that will look bad until the search goes deep enough to expose the "punch line"... Something that moves those surprising moves up will help tactical test positions but hurt normal chess. One example is trying all captures first, regardless of material gain or loss. That will help with tactical positions that have a sacrifice as a starting point, but will make the tree larger for normal positions.
So, tell me how to proceed and I will fire it up...

let's make some rules ...
run N test positions, or games positions or whatever you've got (I think I had about 2000+ test positions at the time)
junk anything that happens say for first 10 secs
then record, whenever you get a new bestmove, where it was in the movelist as a percentage (e.g. 5th out of 20 would be 25%), let's say for the next 50 secs.
average your percentage over N
for 2000 test positions that'll take 1.5 days or so
if times have changed and 10 secs -> 60 secs is silly with quad sx1000 turbo parallel or whatever the tech is nowadays, reduce time
anyway, you just need a consistent base for others to compare
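The metric in the rules above can be sketched as a one-liner; the function name is mine, not from the thread. Averaged over N positions, an unsorted list should come out near 50%, and a good sort well below that.

```c
/* Position of a new bestmove in the list, as a percentage of list length:
 * e.g. 5th out of 20 -> 25.0. Lower averages mean a better root sort;
 * a random ordering should average about 50.0. */
double best_move_percentile(int rank_one_based, int list_length) {
    return 100.0 * (double)rank_one_based / (double)list_length;
}
```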
IIRC the technique I used (which was quite brutal and could have been refined mucho) was getting a 25% score (i.e. candidates were in the first 25% of the move list on average). if you and others do worse than that with your current move sort, then we have an improvement - I'll then tell you what it is
Chris