My very first attempt at producing a working chess program may contain a clue. It was not a completely legal program: it could not castle, capture en passant, or even promote, and it had no evaluation other than material. However, it was slightly more than a pure material searcher. It chose between otherwise equal moves based on a single statistic: it counted the beta cutoffs (+1 for the computer's side, -1 for the opponent's) that occurred under each root move, and it played the move that generated the best such score. Captures were only played if they were materially better. The search algorithm was simple fixed-depth alpha-beta and nothing else. The surprising thing is that the program could hold its own against Chessmaster 5000 (the current Chessmaster at that time), and once, playing the black pieces, it was even winning toward the endgame when it encountered a position it could not handle legally and had to forfeit.

bob wrote:
I have said many times, we are losing SO MUCH information it is not funny. We search billions of nodes to get a couple of dozen moves for the PV. There's a wealth of information hidden in the tree. Singular extensions was one idea that came from this sort of "data mining": let the tree tell you which moves are more important. Ditto for which moves to reduce or not reduce, etc.

matthewlai wrote:
I have been thinking about something like that for a long time. I will probably try it some time in the next few months, but using a neural network instead, which should be more flexible and should train much faster.

sje wrote:
Idea #8430: Optimizing move ordering, very slowly
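The beta-cutoff tiebreaker described above can be sketched roughly as follows. This is a minimal illustration on a toy game tree, not a real engine: the `Node`, `alphabeta`, and `pick_move` names are hypothetical stand-ins for a chess move generator and search.

```python
# Sketch of the root-move beta-cutoff tiebreaker (toy tree, not real chess).
# A cutoff in a maximizing (computer-to-move) node counts +1 for the root
# move being searched; a cutoff in a minimizing (opponent) node counts -1.

class Node:
    def __init__(self, score=0, children=None):
        self.score = score            # static material score, White's view
        self.children = children or []

def alphabeta(node, depth, alpha, beta, maximizing, stats, root_move):
    """Plain fixed-depth alpha-beta that tallies cutoffs per root move."""
    if depth == 0 or not node.children:
        return node.score
    if maximizing:
        best = float("-inf")
        for child in node.children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, stats, root_move))
            alpha = max(alpha, best)
            if alpha >= beta:          # cutoff for the computer: +1
                stats[root_move] += 1
                break
        return best
    else:
        best = float("inf")
        for child in node.children:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, stats, root_move))
            beta = min(beta, best)
            if alpha >= beta:          # cutoff for the opponent: -1
                stats[root_move] -= 1
                break
        return best

def pick_move(root, depth):
    """Material score decides first; the cutoff tally breaks ties."""
    results, stats = [], {}
    for i, child in enumerate(root.children):
        stats[i] = 0
        score = alphabeta(child, depth - 1, float("-inf"), float("inf"),
                          False, stats, i)
        results.append((score, stats[i], i))
    results.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return results[0][2]
```

With two root moves of equal minimax value, the one whose subtree produced more computer-side cutoffs is preferred, which is the "best score" selection rule described above.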
An experiment:
Step 1: Collect a small set of test positions, preferably ones where a traditional search changes its PV prediction several times. Run each of these through a program to a fixed depth with A/B pruning disabled, relying instead on a full minimax search at every node. During each search, record the moves at each searched node along with each move's backed-up minimax evaluation. Much patience and fat disks will be necessary.
Step 2: Again, run each position through the program at a fixed depth with A/B turned on this time. Have each search access the stored search results to supply "perfect" move ordering. Record the total searched node counts to get a baseline figure, the best which can be had with all else held equal.
Step 3: Repeatedly run each position through the program again at the same fixed depth with A/B turned on while tinkering with the move ordering. Compare the node counts with the baseline (best) figures. Iteratively modify the move ordering code by examining those nodes where the ordering was very different from the baseline ordering. Perhaps an automated tuning algorithm could help with this.
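The three steps above can be condensed into a sketch on a toy tree. This is illustrative only: `Node`, `minimax`, and `alphabeta` are hypothetical stand-ins, and "perfect" ordering simply means sorting children by the minimax values recorded in Step 1.

```python
# Steps 1-3 in miniature: record minimax values, measure the node count
# of alpha-beta under "perfect" ordering, then compare a heuristic
# ordering (here: static score) against that baseline.
import math

class Node:
    def __init__(self, score=0, children=None):
        self.score = score
        self.children = children or []
        self.minimax = None           # Step 1 stores the backed-up value here

def minimax(node, maximizing):
    """Step 1: full minimax with no pruning, recording every node's value."""
    if not node.children:
        node.minimax = node.score
        return node.score
    values = [minimax(c, not maximizing) for c in node.children]
    node.minimax = max(values) if maximizing else min(values)
    return node.minimax

def alphabeta(node, alpha, beta, maximizing, order_key, counter):
    """Steps 2/3: alpha-beta with a pluggable ordering key;
    counter[0] accumulates the searched-node count."""
    counter[0] += 1
    if not node.children:
        return node.score
    children = sorted(node.children, key=order_key, reverse=maximizing)
    if maximizing:
        best = -math.inf
        for c in children:
            best = max(best, alphabeta(c, alpha, beta, False, order_key, counter))
            alpha = max(alpha, best)
            if alpha >= beta:
                break
        return best
    else:
        best = math.inf
        for c in children:
            best = min(best, alphabeta(c, alpha, beta, True, order_key, counter))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

def node_counts(root):
    minimax(root, True)                                    # Step 1
    perfect = [0]
    alphabeta(root, -math.inf, math.inf, True,
              lambda n: n.minimax, perfect)                # Step 2: baseline
    heuristic = [0]
    alphabeta(root, -math.inf, math.inf, True,
              lambda n: n.score, heuristic)                # Step 3: candidate
    return perfect[0], heuristic[0]
```

The gap between the two counts is the tuning signal Step 3 describes: the closer a candidate ordering gets to the baseline count, the closer it is to perfect.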
I personally think there is a lot to be had there, but it is not easy since the tree is so incredibly large today. Can't see the forest for all the damned trees.
This simple program played very differently from CM5000. Its play was surreal and beautiful in a strange way. It moved its center pawns first, avoided weak pawns, and loved forming pawn chains. It also developed its pieces to attacking squares and would punish a careless human. I ran this on a 50 MHz 80386, so it could only search a few plies deep. I think it is time for this to be looked at again.

