Perft(13) betting pool


Daniel Shawul
Posts: 3603
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Summary of Monte Carlo results for perft(12) so far

Here is what I don't get. For now let's forget the second sequence...
Using one sequence of random numbers, we generate sub-tree size estimates. If I then determine weights from those estimates and proportion my visits to nodes according to tree size, the estimate becomes biased (practically guaranteed). If I instead use results from another, independent sequence, it stays unbiased. Note that in _both_ cases I didn't use the results in the perft computation (i.e. add them in any way).
For Peter2 it is the same thing, except that he uses the same sequence to compute his weights, which are dynamically updated. I now know he doesn't add the perft values but just multiplies the current estimate from a single simulation by a weight. I believe some amount of work is lost by deciding not to use all the previous simulations directly. But he still uses them in his weights, yet his method is unbiased. Mine becomes biased under the same condition, so this really confuses me...

Daniel Shawul

Re: Summary of Monte Carlo results for perft(12) so far

I am planning to use variance reduction by "antithetic" variates: if I select a move from a random number U, then I select the other move from 1 - U. This works if the moves are ordered. So with the tapering method I will use this to gain some more reduction in variance. If I select N moves, then N/2 random numbers are generated and the rest take the 1 - U values. Right now the 2 moves are selected randomly, so they may happen to be the same move..
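A minimal sketch of this antithetic pairing, assuming an ordered move list (the helper name, the uniform-to-index mapping, and the move labels are illustrative assumptions, not the actual implementation):

```python
import random

def pick_antithetic_pairs(moves, n_pairs):
    """Select 2*n_pairs moves using antithetic variates.

    For each uniform draw U, one move is chosen with U and its partner
    with 1 - U.  If the move list is ordered by estimated sub-tree size,
    U and 1 - U land on opposite ends of the list, so the two sub-tree
    estimates are negatively correlated and their average has lower
    variance than two independent picks.
    """
    n = len(moves)
    picks = []
    for _ in range(n_pairs):
        u = random.random()
        i = int(u * n)                    # move selected from U
        j = min(int((1 - u) * n), n - 1)  # partner selected from 1 - U
        picks.append(moves[i])
        picks.append(moves[j])
    return picks

# Hypothetical move list, assumed already ordered by estimated sub-tree size.
moves = ["Nf3", "e4", "d4", "c4", "g3"]
pairs = pick_antithetic_pairs(moves, 2)
print(pairs)
```

With this pairing the partner always comes from the mirrored position in the ordering, so the two picks can coincide only near the middle of the list, addressing the duplicate-move problem mentioned above.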

Michel
Posts: 2005
Joined: Sun Sep 28, 2008 11:50 pm

Re: Summary of Monte Carlo results for perft(12) so far

I believe some amount of work is lost by deciding not to use all the previous simulations directly. But he still uses them in his weights while his method being unbiased. Mine will become biased for the same condition so this really confuses me...
No, I think it does not need to be biased. But I guess your procedure is as follows:

(*) Generate proportions.
(*) Update your previous estimates for the leaf nodes using these proportions.

This is wrong. In my post I explain the correct procedure. It is basically Peter2 but replacing weights by proportions.

Michel

Re: Summary of Monte Carlo results for perft(12) so far

BTW, here is a nice proof that optimal proportions are better than optimal weights.

The variance of Peter style estimation for a sum x1+...+xn using optimal weights is

( sum_i sqrt(mi^2+sdi^2) )^2-(sum_i mi)^2

The variance for optimal proportioning of N estimates is

(sum_i sdi)^2/N.

To normalize the second variance we have to multiply it by N. So we have to prove

(sum_i sdi)^2 <= (sum_i sqrt(mi^2+sdi^2))^2 - (sum_i mi)^2

or equivalently

(sum_i sdi)^2 + (sum_i mi)^2 <= (sum_i sqrt(mi^2+sdi^2))^2

Taking the square roots on both sides this is just the triangle inequality
for the sum of the 2-vectors (mi,sdi).
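The inequality can be sanity-checked numerically; the means mi and standard deviations sdi below are arbitrary random values, not data from the thread:

```python
import math
import random

random.seed(1)

# Arbitrary means and standard deviations for n leaf estimates.
n = 8
m = [random.uniform(0.0, 100.0) for _ in range(n)]
sd = [random.uniform(0.0, 50.0) for _ in range(n)]

# Left side:  (sum_i sdi)^2 + (sum_i mi)^2  -- squared length of the sum vector
lhs = sum(sd) ** 2 + sum(m) ** 2

# Right side: (sum_i sqrt(mi^2 + sdi^2))^2 -- square of the sum of lengths
rhs = sum(math.sqrt(mi ** 2 + sdi ** 2) for mi, sdi in zip(m, sd)) ** 2

# Triangle inequality |sum_i v_i| <= sum_i |v_i| for the 2-vectors v_i = (mi, sdi)
print(lhs <= rhs)  # → True
```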

Daniel Shawul

Re: Summary of Monte Carlo results for perft(12) so far

My method is not like that. What I call proportions do not multiply the perft count, unlike Peter2. For instance, say I have a tree at the root with the following sub-tree sizes:
200 125 100 50 25
If I am given 100 cycles (visits) to spend, then I would proportion the visits as 50 31.25 25 12.5 6.25. All this does is give the bigger node more visits, thereby reducing its variance. The mean perft estimate is just:
Perft = X1 + X2 + X3 + X4 + X5
If I do this innocent-looking procedure with one sequence of random numbers, it becomes biased.

While for the Peter2 method the following happens:
Perft = (w1 * X1 + w2 * X2 + ... + w5 * X5) / (w1 + w2 + ... + w5).
Note that the weights wi are determined from the same sequence of random numbers that we used to produce our Xi estimates.
At any one cycle, a single estimate of the perft of a move is multiplied by its weight to estimate the perft of the whole tree. I update the perft of child nodes to get the parent estimate, but I think both will be the same in the end. Now, for the same sequence of random numbers, Peter's is unbiased and mine is biased. I am not saying Peter2 is biased, just that I don't understand why mine is (when I use the same sequence) when I seem to do more or less the same thing. Can you be more specific and point to the thing I am missing?
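A toy sketch of the two combination rules as described in this post. The sub-tree sizes, visit budget, weights, and per-move estimates Xi are made-up illustrations, and the weighted formula is Daniel's reading of Peter2, which Michel disputes:

```python
# Hypothetical sub-tree sizes and per-move perft estimates.
sizes = [200, 125, 100, 50, 25]
X = [210.0, 120.0, 95.0, 55.0, 20.0]

# Proportional visits: bigger sub-trees get more of the visit budget,
# which only reduces their variance; the estimate itself is a plain sum.
cycles = 100
visits = [cycles * s / sum(sizes) for s in sizes]
perft_plain_sum = sum(X)

# Weighted combination: each Xi is multiplied by its weight wi and the
# total is normalised by the sum of the weights.
w = [s / sum(sizes) for s in sizes]
perft_weighted = sum(wi * xi for wi, xi in zip(w, X)) / sum(w)

print(visits)
print(perft_plain_sum)           # → 500.0
print(round(perft_weighted, 2))  # → 139.5
```

The point of the contrast: in the first rule the randomness only decides where the visits go, while in the second the same randomness also enters the estimator through the weights.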

Edit: BTW, it becomes unbiased even with one sequence if I do it only at the root! So the tree definitely has a say in it. Do you remember my first run with UCT at the root? Those were unbiased even though they were run with one sequence of random numbers.

Daniel Shawul

Re: Summary of Monte Carlo results for perft(12) so far

5-ply result:

Code:

```
3-ply: 6.284980e+016 +- 7.337394e+012  6.292152e+016 +- 3.931730e+013 500232079 cost 1.64e17
4-ply: 6.284581e+016 +- 5.480793e+012  6.272345e+016 +- 3.058148e+013 500916944 cost 1.23e17
5-ply: 6.284583e+016 +- 3.867508e+012  6.107216e+016 +- 2.145204e+013 501118477 cost 8.63e16
```
Note that the error of the actual mean estimates is getting worse even though the variance is significantly reduced. Maybe it is not worth it to add a selective layer, as there seems to be a relation between mean and variance estimation. The previous results were very good, I think, for the mean estimation. I am not going to run a 6-ply simulation as that takes way more time than I can afford.
Last edited by Daniel Shawul on Tue Aug 16, 2011 5:27 pm, edited 1 time in total.

Daniel Shawul

Re: Summary of Monte Carlo results for perft(12) so far

@Michel,
Regarding your deleted post, here is what I thought was interesting. Look at my italicized comment.

> Anyway I don't use the perft data so we agree it is unbiased.
> This is completely wrong. Peter's method is not at all like that.
Ok, the formula was wrong because I assumed the Xi's are estimates of the perft of the parent node. If they are for the child nodes, as was the case when I wrote my formula, then it would be:

P = 1/w1 * (w1 * X1) + ... + 1/wn * (wn * Xn)
So this becomes the same as mine in the end:
P = X1 + X2 + ... + Xn
So the difference is that his estimates come from the _final_ cycle multiplied by weights. The intermediate results were never used, so it is unbiased, but its mean estimate could be very poor??
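The cancellation claimed here is easy to check numerically; the weights and child estimates below are arbitrary illustration values (powers of two are used so the check is exact in floating point):

```python
# P = 1/w1 * (w1 * X1) + ... + 1/wn * (wn * Xn) collapses to a plain sum.
w = [0.5, 0.25, 0.125]
X = [120.0, 80.0, 40.0]

p_weighted = sum((1.0 / wi) * (wi * xi) for wi, xi in zip(w, X))
p_plain = sum(X)

print(p_weighted == p_plain)  # → True
```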

Edit: Oops, that may not be the case, since the weights are from the same sequence, aren't they? I am sure of one thing: I am as confused as hell about this issue. I need to get a piece of paper and think it through a bit.

Michel

Re: Summary of Monte Carlo results for perft(12) so far

> My method is not like that. What I call proportions do not multiply the perft count, unlike Peter2. For instance, say I have a tree at the root with the following sub-tree sizes:
> 200 125 100 50 25
> If I am given 100 cycles (visits) to spend, then I would proportion the visits as 50 31.25 25 12.5 6.25. All this does is give the bigger node more visits, thereby reducing its variance. The mean perft estimate is just:
> Perft = X1 + X2 + X3 + X4 + X5
> If I do this innocent-looking procedure with one sequence of random numbers, it becomes biased.
I think this cannot be. Unless of course you reuse the data 200 125 100 50 25.
After having generated the proportions from it you should discard this data
(but you can keep an estimate generated from it, please read my post).
> While for the Peter2 method the following happens:
> Perft = (w1 * X1 + w2 * X2 + ... + w5 * X5) / (w1 + w2 + ... + w5).
This is completely wrong.

EDIT: You seem to implement proportions differently from what I thought.
If they are really implemented as visits according to some probability distribution
then I think they are equivalent to weights (and in particular they have no advantage
over weights).

Daniel Shawul

Re: Summary of Monte Carlo results for perft(12) so far

I think what is happening is the following. We both do basically the same thing, but Peter collects perft estimates ONLY at the root. So he reuses them there, while elsewhere only the weights are adjusted. He does use the same sequence, but as I have already mentioned, if you do it only at the root it is unbiased, at least practically. So I think that summing inside the tree is the problem. This may be the culprit, because for a ply-3 tree the second sequence had a less pronounced bias than I ever saw with deeper searches. And of course for ply-0 the second sequence is as good as the first. What do you think?

Michel

Re: Summary of Monte Carlo results for perft(12) so far

I think I have adequately explained it now. Unless you refute my arguments, I think it is time to stop this discussion.

I apologize for this. I make every attempt possible to understand what you write (which is at times very garbled), but you make _no attempt whatsoever_ to even read what I write.
In this way it is impossible to have a meaningful exchange of ideas.