Well, from a time perspective, changing the PV at the root happens less often as the search progresses, simply because each ply takes longer to search. From a ply perspective, though, I think in positions with multiple reasonable alternatives it's just as likely to flip-flop on ply 15 as it is on ply 8. For instance, at the initial position, mine will switch between e4 and d4 every couple of plies.
I think in your test you'd get pretty consistent results, assuming you're clearing the hash tables. In games, though, the hash tables are a fairly large source of search instability and inconsistent results, especially with pondering.
fast vs slow games in testing
Moderator: Ras
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: fast vs slow games in testing
krazyken wrote:
> I would expect that longer time controls reduce the variability you would get in results (especially if you are avoiding randomization provided by the opening book).

Gian-Carlo Pascutto wrote:
> Why?

krazyken wrote:
> How many sources of randomness do you have in your chess program? Isn't the main reason you switch moves that a deeper search finds something better? As your search goes deeper you approach a point of diminishing returns, so you become less likely to change moves, I would think.

bob wrote:
> Every move you make is the result of a tree that can vary somewhat in size due to timing variables. I've tried the same position played 100 times at fast and at slow time controls, and both exhibit extreme variance, not the same result over and over...

krazyken wrote:
> I guess I'm wrong then. Although I'm not sure of your definition of fast and slow. I'm going from my experience of analyzing games with engine assistance; usually the engine will lock on a move and stick with it after an amount of time, I'd say somewhere around a minute a move on fast hardware.

I have tried games from 10s+0.1s to 1m+1s, to 5m+5s, all the way to 60m+60s, and I've seen this same variability. In fact, if you look at the "go deep" papers from Monty, Heinz, and even older ones from Thompson and then Berliner, you will find that programs are still changing their minds at high depths at a rate of 16%-18% or so, meaning that on the last iteration you do, you have roughly a one-in-six chance of changing your mind. A few extra nodes in each search add up and produce a different move, and away you go.
Edit: I'd love to see the results if you have time to run the test again. Something like: pick a set of quiet positions, all with several possible good moves; run each position for 1 second 1000 times, recording the moves picked and the number of times each was picked; then repeat for 5, 10, 30, 60, and 120 seconds to see if there is convergence.
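The tallying half of that experiment is easy to sketch. This is a minimal, hypothetical illustration: the engine-driving loop (sending the same position to a UCI engine N times per time control) is omitted, and the move strings below are invented sample data, not real results:

```python
from collections import Counter

def move_distribution(moves):
    """Tally which move was picked how often across repeated runs of the
    same position, and report the modal move and its share of all runs.
    If the modal share climbs toward 1.0 as time per run grows, the
    search is converging; if it plateaus, the instability persists."""
    counts = Counter(moves)
    top_move, top_count = counts.most_common(1)[0]
    return counts, top_move, top_count / len(moves)

# Invented results of ten 1-second runs from one quiet position:
runs_1s = ["e2e4", "d2d4", "e2e4", "e2e4", "d2d4",
           "e2e4", "c2c4", "e2e4", "d2d4", "e2e4"]
counts, best, share = move_distribution(runs_1s)
print(best, share)  # here: e2e4 picked in 6 of 10 runs
```

Repeating this per time control (1s, 5s, ..., 120s) and comparing the modal shares gives exactly the convergence picture the test is after.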