
Re: Hamsters randomizer in action

Posted: Tue Jun 05, 2007 3:13 pm
by hgm
OK, that isn't disastrously bad. I was thinking more of trees of 100M nodes. After all, 2M nodes probably represents just half a second of search.

(I am never sure how people count nodes; Joker does about 1.5Mnps, but it does not count nodes that are satisfied from the hash table. And I do suppose you would want to write information for such nodes.)

I guess I would still prefer to simply redo the search a number of times. For a search of the size you mention, the requested information would appear almost instantly, in a time negligible compared to what you need to study it. By the time the search is so big that this becomes cumbersome, writing it to disk would take many minutes. Plus, for each piece of information you want to dig out of the search, you would have to read it back in as well.

In the early days of debugging uMax I had really built an interactive tree walker into the search: upon entry of any node below a certain (globally set) level, you would get into a menu where you could tell what to do from this node. This then set a variable local to the node that controlled all debugging print statements, e.g. to give an overview of the moves and their search scores, to go to the node one of the moves led to, to go back to the parent node, to go to the next IID iteration, etc. It didn't require much code.
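
A rough sketch of what such a hook could look like; all the names here are invented for illustration and are not uMax's actual code:

Code:

// Minimal sketch of an interactive tree-walker hook (invented names, not
// uMax's actual code).  The menu pops up on entry of every node at or above
// a globally set ply and returns a per-node trace flag that the rest of the
// node's code checks before printing anything.
#include <cstdio>

static int g_walkPly = 0;          // menu appears for ply <= g_walkPly

static bool TreeWalkerMenu(int ply)
{
    bool trace = false;
    if (ply > g_walkPly)
        return trace;              // below the watched region: search silently

    for (;;) {
        std::printf("ply %d> (m)oves, (t)race this node, (d)eeper, (q)uit walker: ", ply);
        int c = std::getchar();
        for (int d; c != '\n' && (d = std::getchar()) != '\n' && d != EOF; ) {}  // eat rest of line
        switch (c) {
        case 'm': std::printf("(move list and scores of this node go here)\n"); break;
        case 't': trace = true;        return trace;   // enable this node's debug prints
        case 'd': g_walkPly = ply + 1; return trace;   // let the menu follow the search one ply deeper
        case 'q': g_walkPly = -1;      return trace;   // switch the walker off
        default:  break;                               // unknown key: prompt again
        }
    }
}

// Inside the search, roughly:
//   int Search(int ply, int alpha, int beta, int depth) {
//       bool trace = TreeWalkerMenu(ply);
//       ...
//       if (trace) std::printf("%s -> %d\n", MoveName(m), score);
//       ...
//   }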

You could not entirely prevent having to re-run a search, though: at some point you arrived at hash hits with a suspicious score, and you would want to examine the search that filled those hash entries. And that, of course, had happened at a point you had already passed.

Re: Hamsters randomizer in action

Posted: Tue Jun 05, 2007 3:27 pm
by Uri Blass
hgm wrote:OK, that isn't disastrously bad. I was thinking more of trees of 100M nodes. After all, 2M nodes probably represents just half a second of search.

(I am never sure how people count nodes; Joker does about 1.5Mnps, but it does not count nodes that are satisfied from the hash table. And I do suppose you would want to write information for such nodes.)
I simply increase the node counter by 1 every time that I make a move.

Uri

Re: Hamsters randomizer in action

Posted: Tue Jun 05, 2007 3:46 pm
by hgm
But do you make the move if it leads to a node that will give you a pruning hash hit? Joker doesn't, as it probes the hash before making (normal) moves.
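
The difference comes down to where the counter sits relative to that probe. A rough sketch, with invented names and stubbed-out engine calls rather than Joker's or Movei's actual source:

Code:

// Where the node counter sits in the two counting schemes (invented names,
// stub declarations standing in for the real engine machinery).
#include <cstdint>

uint64_t nodes = 0;                               // what gets reported as "nodes"

struct Move { int data; };
void MakeMove(Move m);
void UnmakeMove(Move m);
int  Search(int alpha, int beta, int depth);
uint64_t KeyAfter(Move m);                        // hash key of the child position
bool HashCutoff(uint64_t key, int depth, int alpha, int beta, int *score);

// Movei-style counting: every made move is a node.
int SearchChildMovei(Move m, int alpha, int beta, int depth)
{
    MakeMove(m);
    nodes++;                                      // counted unconditionally
    int score = -Search(-beta, -alpha, depth - 1);
    UnmakeMove(m);
    return score;
}

// Joker-style counting: the child's hash entry is probed before the move is
// made; a hash cutoff means the move is never made and never counted.
int SearchChildJoker(Move m, int alpha, int beta, int depth)
{
    int score;
    if (HashCutoff(KeyAfter(m), depth - 1, -beta, -alpha, &score))
        return -score;                            // pruned from the table: not a node
    MakeMove(m);
    nodes++;
    score = -Search(-beta, -alpha, depth - 1);
    UnmakeMove(m);
    return score;
}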

Re: Hamsters randomizer in action

Posted: Wed Jun 06, 2007 12:23 am
by Ron Murawski
Alessandro Scotti wrote:After the comments from Michael and Ron I've managed to spend some time with Hamsters. As a result, my wife won't talk to me.
Hi Alessandro,

Hoo boy. My wife's not thrilled with my chess programming involvement either! Welcome to the doghouse! ;)

In any case, my comments about Hamsters not being a steady-enough opponent for my own testing purposes must be taken with a grain of salt. I have only found 5 programs that are steady enough; all the others (about 20 of them) have failed the "steadiness test". I'm very worried that if I make an improvement it will knock one of the existing opponents out of the mix. Where will I find another engine to replace it? My recollection is that it took me about 10,000 games to find 5 opponents. Since that time I've played 500 games against each of two other opponents and neither one "qualified".

Ron

Re: Hamsters randomizer in action

Posted: Wed Jun 06, 2007 12:33 am
by Uri Blass
hgm wrote:But do you make the move if it leads to a node that will give you a pruning hash hit? Joker doesn't, as it probes the hash before making (normal) moves.
Movei today does not use hash for pruning.
I will try to change it in the next month.

Re: Hamsters randomizer in action

Posted: Thu Jun 07, 2007 12:32 am
by Alessandro Scotti
hgm wrote:OK, that isn't disastrously bad. I was thinking more of trees of 100M nodes. After all, 2M nodes probably represents just half a second of search.
I had the first full version working one minute ago and got the first surprise... the Mac Mini dumped 6+ million nodes with practically no noticeable slowdown, producing a 143 MB file!
I saved only the trees at depths 28 and 29 because that's what's giving me trouble, but at this point it seems 100 million nodes is quite feasible after all.
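
For scale, 143 MB over roughly 6.5 million nodes works out to a little over 20 bytes per node, which is about the size of a compact per-node record. One purely hypothetical layout that adds up to that size (this is not Hamsters' actual file format):

Code:

// Hypothetical per-node record for a tree dump of roughly this size
// (invented layout, not Hamsters' actual format).
#include <cstdint>
#include <cstdio>

#pragma pack(push, 1)
struct NodeRecord {
    uint32_t parent;        // index of the parent record, ~0u for the root
    uint16_t move;          // move that led to this node, packed from/to/promotion
    int16_t  alpha, beta;   // window at entry
    int16_t  score;         // value returned
    uint8_t  depth;         // remaining depth
    uint8_t  flags;         // fail-high / fail-low / exact, hash hit, etc.
    uint64_t key;           // hash key, for cross-checking table entries
};                          // 22 bytes packed
#pragma pack(pop)

// Appending one record per visited node costs little as long as the
// writes go through stdio's buffer.
void DumpNode(std::FILE *f, const NodeRecord &r)
{
    std::fwrite(&r, sizeof r, 1, f);
}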

Re: Hamsters randomizer in action

Posted: Thu Jun 07, 2007 2:01 am
by Alessandro Scotti
This behavior:

Code:

28/46 0:03.45 0 3582109 Kb2 Kh8 Kc3 Kg7 Kd4 Kf6 Ke3 Kg5 Kd4 Kf6 
28/46 0:04.22 190 4472111 Kb1 Kg7 Kc1 Kh7 Kc2 Kg7 Kc3 Kh6 Kd4 Kg6 e5 dxe5+ Kxe5 Kf7 Kf5 Ke7 Kxg4 Kd6 Kf5 Ke7 g4... 
29/46 0:04.99 0 5317284 Kb1 Kg7 Kc1 Kh7 Kc2 Kg7 Kc3 Kf7 Kd4 Kf6 Ke3 Kg5 Kd4 Kf6 
30/48 0:06.27 240 6471878 Kb1 Kg7 Kc1 Kh7 Kc2 Kg7 Kc3 Kh7 Kd4 Kh6 e5 Kg6 Ke4 dxe5 Kxe5 Kf7 Kf5 Ke7 Kxg4 Kf6 Kf4 d6 g4... 
31/48 0:07.91 255 8076986 Kb1 Kg7 Kc1 Kh7 Kc2 Kg6 Kd3 Kf6 Kd4 Kg6 e5 Kf7 exd6 Kf6 Ke4 Kg6 Kf4 Kf7 Kxg4 Kf6 Kh5 Kf7 g4... 
is due to the search at depth 29 hitting a 0 stored in the hash table, most probably caused by a path-dependent draw.
I've tried a very quick fix: don't store 0 in the hash and make eval() never return 0... silly, but worth a try IMO. So far I couldn't reproduce the problem with any setup, though of course I need to test much more.
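
A rough sketch of that kind of workaround, with invented names rather than Hamsters' actual code:

Code:

// "Never let 0 into the table" workaround (invented names, not Hamsters'
// actual code).  A score of exactly 0 is nudged to 1, so a path-dependent
// draw score can never come back out of the hash as if it were a property
// of the position alone.
inline int NudgeZero(int score)
{
    return score == 0 ? 1 : score;   // one centipawn off: crude, but detectable
}

// In the evaluation:
//   int Eval(const Position &pos) { ...; return NudgeZero(score); }
//
// And when storing:
//   void StoreHash(uint64_t key, int score, int depth, int flags) {
//       if (score == 0) return;     // or: score = NudgeZero(score);
//       ...
//   }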

Re: Hamsters randomizer in action

Posted: Thu Jun 07, 2007 10:13 am
by Pradu
Alessandro Scotti wrote:is due to the search at depth 29 hitting a 0 stored in the hash table, most probably caused by a path dependent draw.
Did you try always replace if hashkey==entry->hashkey? Doing this fixed most path-dependent hashtable problems in Buzz.
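
Roughly, with invented names rather than Buzz's actual source (the depth-preferred fallback for a different position is just one possible choice), the store would look like this:

Code:

// Replacement rule described above, sketched with invented names: if the
// slot already holds the same position, overwrite it unconditionally, so a
// stale path-dependent score cannot survive a later search of that position.
#include <cstdint>

struct HashEntry { uint64_t key; int16_t score; int8_t depth; uint8_t flags; uint16_t move; };

void StoreHash(HashEntry *table, uint64_t mask, uint64_t key,
               int score, int depth, int flags, int move)
{
    HashEntry *entry = &table[key & mask];

    // Same position: always replace with the newest information.
    // Different position: here, keep the deeper of the two entries.
    if (entry->key != key && entry->depth > depth)
        return;

    entry->key   = key;
    entry->score = (int16_t)score;
    entry->depth = (int8_t)depth;
    entry->flags = (uint8_t)flags;
    entry->move  = (uint16_t)move;
}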

Re: Hamsters randomizer in action

Posted: Thu Jun 07, 2007 8:01 pm
by Alessandro Scotti
Pradu wrote:Did you try always replace if hashkey==entry->hashkey? Doing this fixed most path-dependent hashtable problems in Buzz.
Short of bugs, that's what I do too!