Re: Hamsters randomizer in action
Posted: Tue Jun 05, 2007 3:13 pm
OK, that isn't disastrously bad. I was thinking more of trees of 100M nodes. After all, 2M nodes probably represents just half a second of search.
(I am never sure how people count nodes; Joker does about 1.5Mnps, but it does not count nodes that are satisfied from the hash table. And I do suppose you would want to write information for such nodes.)
I guess I would still prefer to simply redo the search a number of times. For a search of the size you mention this would make the requested information appear almost instantly, in a time negligible compared to what you need to study it. By the time the search is so big that this becomes cumbersome, writing it to disk would take many minutes. Plus, for each piece of information you want to dig out of the search, you would have to read it all back as well.
In the early days of debugging uMax I had actually built an interactive tree walker into the search: upon entry of any node below a certain (globally set) level, you would get into a menu where you could tell it what to do from this node. This then set a variable local to the node that controlled all debugging print statements, e.g. to give an overview of the moves and their search scores, to go to the node one of the moves led to, to go back to the parent node, to go to the next IID iteration, etc. It didn't require much code.
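Something like this, roughly (a toy sketch in C, not the actual uMax code: the three-move game tree, the fake leaf evaluation, the scripted command stream standing in for keyboard input, and all names here are made up for illustration):

```c
#include <stdio.h>

#define MAXMOVES 3

static int walk_depth = 1;          /* menu pops up in nodes above this ply    */
static const char *script = "l n l q"; /* scripted commands instead of stdin:  */
                                    /* 'l' = list moves+scores, 'n' = no-op,   */
                                    /* 'q' = quit the menu                     */

static int next_cmd(void)
{
    while (*script == ' ') script++;
    return *script ? *script++ : 'q';
}

/* Toy negamax over a uniform tree; 'path' encodes the moves played so far. */
static int search(int ply, int depth, int path)
{
    int best = -1000, score[MAXMOVES], m;

    if (depth == 0)                              /* fake leaf evaluation */
        return (path * 37 % 100) - 50;

    for (m = 0; m < MAXMOVES; m++) {
        score[m] = -search(ply + 1, depth - 1, path * MAXMOVES + m);
        if (score[m] > best) best = score[m];
    }

    if (ply < walk_depth) {       /* interactive part: enter the node's menu */
        int trace, cmd;
        while ((cmd = next_cmd()) != 'q') {
            trace = (cmd == 'l'); /* variable local to the node that gates   */
                                  /* the debugging print statements          */
            if (trace)
                for (m = 0; m < MAXMOVES; m++)
                    printf("ply %d move %d score %d\n", ply, m, score[m]);
        }
    }
    return best;
}
```

In the real thing the commands would of course come from the keyboard (and include "descend into move N", "back to parent", "next IID iteration"), but the mechanism is the same: a node-local flag, set from the menu, switches the prints on and off.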
You could not entirely avoid having to re-run a search, though: at some point you would arrive at hash hits with a suspicious score, and you would want to examine the search that filled those hash entries. And that of course had happened at a point you had already passed.