Discussion of chess software programming and technical issues.
Moderators: hgm, Harvey Williamson, bob

Daniel Shawul
 Posts: 3498
 Joined: Tue Mar 14, 2006 10:34 am
 Location: Ethiopia

by Daniel Shawul » Mon Jul 25, 2011 1:54 am
Michel wrote:It blunders because it gives the variance more priority than the mean.
No it does not. It is first and foremost an unbiased estimator. So the mean
has priority over anything else.
After that you can try to reduce the variance. With perfect
information you can (unsurprisingly) bring the variance down to zero.
With imperfect information you can still reduce the variance compared to uniform selection.
What do you mean? You need an estimate of the perft, 363. So your result is whatever you estimated it to be in the first place. So how do you plan to get the 363 in that example?
Say you estimated it as 350 first: how do you intend to converge to 363 if it is unbiased? How do you intend to correct it?
It is garbage in, garbage out.

Michel
 Posts: 1965
 Joined: Sun Sep 28, 2008 11:50 pm
by Michel » Mon Jul 25, 2011 1:58 am
What do you mean? You need an estimate of the perft, 363.
Sigh. Where do you get this from?

Daniel Shawul
 Posts: 3498
 Joined: Tue Mar 14, 2006 10:34 am
 Location: Ethiopia

by Daniel Shawul » Mon Jul 25, 2011 2:00 am
Michel wrote:What do you mean? You need an estimate of the perft, 363.
Sigh. Where do you get this from?
I hate to repeat myself, but since you are asking for it, here is your own example. Ok, I will finish it for you:
Code: Select all
Mean = 150/363 * (363/150 * 150)
+ 100/363 * (363/100 * 100)
+ 75/363 * (363/75 * 75)
+ 25/363 * (363/25 * 25)
+ 1/363 * (363/1 * 1)
= 150/363 * (363)
+ 100/363 * (363)
+ 75/363 * (363)
+ 25/363 * (363)
+ 1/363 * (363)
= 363
So clearly, whichever move gets selected, 363 is the perft estimate?? This is typical
circular reasoning: why would I go to the trouble of calculating all this if I knew the _unbiased_ perft value anyway??
I think it is clear the estimate will be biased, or you are a magician to get 363 right. If we estimated it as 300, it means we are biased by 63, even though a trillion games are played.
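The bookkeeping above can be checked mechanically. A minimal sketch (hypothetical: it takes the five quoted subtree sizes, uses their actual sum as the total, and sets p_i proportional to size with w_i = 1/p_i), showing that every sampled move hands back the same number, namely the total already encoded in the probabilities:

```python
# Check of the hand calculation above (hypothetical subtree sizes).
# With p_i proportional to subtree size and w_i = 1/p_i, every
# sampled move yields the same estimate: the total already encoded
# in the probabilities (here taken to be the sum of the listed sizes).
x = [150, 100, 75, 25, 1]                 # quoted subtree sizes
N = sum(x)                                # total encoded in the p_i

p = [xi / N for xi in x]                  # proportional probabilities
w = [1 / pi for pi in p]                  # importance weights w_i = 1/p_i

estimates = [wi * xi for wi, xi in zip(w, x)]        # per-move estimates
mean = sum(pi * e for pi, e in zip(p, estimates))    # expectation

# Every entry of `estimates` equals N, and so does `mean`: the
# estimator returns whatever total the p_i assume.
```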
Last edited by Daniel Shawul on Mon Jul 25, 2011 2:02 am, edited 1 time in total.

Michel
 Posts: 1965
 Joined: Sun Sep 28, 2008 11:50 pm
by Michel » Mon Jul 25, 2011 2:02 am
I already answered that post.

Daniel Shawul
 Posts: 3498
 Joined: Tue Mar 14, 2006 10:34 am
 Location: Ethiopia

by Daniel Shawul » Mon Jul 25, 2011 2:04 am
Michel wrote:I already answered that post.
No, you did not. How do you get 363? If you can't get it right, it means it is wrong. So it blunders. Please do care to repeat, since I don't see you addressing the elephant in the room. Your debating antics are not going to wear me down, so you might as well forget it and address the issue.

Michel
 Posts: 1965
 Joined: Sun Sep 28, 2008 11:50 pm
by Michel » Mon Jul 25, 2011 2:08 am
Your debating antics are not going to wear me down, so you might as well forget it and address the issue.
I did address the issue, but you don't want to read it. So there is no point in discussing further. Sorry.

Daniel Shawul
 Posts: 3498
 Joined: Tue Mar 14, 2006 10:34 am
 Location: Ethiopia

by Daniel Shawul » Mon Jul 25, 2011 2:12 am
Whatever. If you can't say whether I can get 363 right the first time or the second time, never mind. Of course we can all see that a magician is needed here!! All you achieved is unnecessarily lengthening the discussion in order to evade the point. Good for you.

hgm
 Posts: 22572
 Joined: Fri Mar 10, 2006 9:06 am
 Location: Amsterdam
 Full name: H G Muller

by hgm » Mon Jul 25, 2011 6:23 am
I think we have to step back and examine our goals, because this is not getting anywhere. Note that Michel and I agree 100%, and that we actually have a hard mathematical proof of how the p_i and w_i should be chosen to get the estimate with the correct mean and the smallest possible variance. (Namely p_i ~ sqrt(x_i^2 + s_i^2) and w_i = 1/p_i.)
Everything hinges on this question:
If the variance is not equal to the (average) error, why do you want it to be small?
Before you answer that, progress will not be possible.
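The mean/variance separation hgm insists on can be illustrated numerically. A sketch (hypothetical: it reuses the five subtree sizes from the earlier example and takes their sum as the true perft) comparing uniform selection against proportional selection, both with w_i = 1/p_i; both are unbiased, only the spread differs:

```python
import random

# Hypothetical subtree sizes; the true perft is taken to be their sum.
x = [150, 100, 75, 25, 1]
true_total = sum(x)

def mc_perft(p, n, rng):
    """Average of n one-move samples, each weighted by w_i = 1/p_i."""
    idx = range(len(x))
    return sum(x[i] / p[i] for i in rng.choices(idx, weights=p, k=n)) / n

rng = random.Random(42)
uniform = [1 / len(x)] * len(x)               # 'no assumptions'
proportional = [xi / true_total for xi in x]  # perfect information

est_u = mc_perft(uniform, 100_000, rng)
est_p = mc_perft(proportional, 100_000, rng)
# Both means converge to true_total (unbiased). The proportional
# scheme returns exactly true_total on every sample (zero variance);
# the uniform scheme scatters around it and converges more slowly.
```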

Daniel Shawul
 Posts: 3498
 Joined: Tue Mar 14, 2006 10:34 am
 Location: Ethiopia

by Daniel Shawul » Mon Jul 25, 2011 9:01 am
Ok, here is my question. I was under the impression that the estimates were going to be biased and the goal was to get some kind of result asap, even though I clearly objected to it at first. That is why I proposed my method. But you crossed it out since it gives wrong results at infinite simulations. That is ok, but your method does tend to give bad means too, just to lower its variance, because of assumptions it makes about the tree. Ad infinitum, the variance of all methods will be zero, so surely we are comparing at a fixed number of games.
What good is that? It is a fair question, because at any time t when you stop the iteration, the reliability of our mean has dropped to get a result with smaller variance.
That is why I don't say your method is biased or unbiased, but it can really blunder, because it assumes a whole lot about the tree and will have difficulty recovering from extreme cases. Its estimates of the mean are going to be bad because of too many assumptions, made solely to get the variance down. You are assuming the heuristic is going to be perfect, but it won't be, and when it fails its estimate will be worse than a biased estimator like mine.

hgm
 Posts: 22572
 Joined: Fri Mar 10, 2006 9:06 am
 Location: Amsterdam
 Full name: H G Muller

by hgm » Mon Jul 25, 2011 9:30 am
Daniel Shawul wrote:... but your method does tend to give bad means too, just to lower its variance, because of assumptions it makes about the tree.
Not true. The method Michel and I propose will not give "bad means". Their mean is exactly equal to the true size of the tree, no matter how wrong the assumptions were that went into deriving the p_i. Wrong assumptions just mean that you converge to that correct mean more slowly than with correct assumptions. But you cannot avoid making assumptions. Making 'no assumptions' about the relative subtree sizes is equivalent to assuming that they are on average of equal size (for which the optimum would be homogeneous sampling). That assumption can be just as wrong as any other, and in particular with LMR'ed trees it is a very inferior assumption.
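The "converge more slowly, but to the correct mean" claim is easy to check. A sketch (hypothetical: same five subtree sizes as before, with a deliberately terrible heuristic that samples small subtrees most often, i.e. p_i inversely proportional to size) showing the mean still lands near the true total as long as w_i = 1/p_i:

```python
import random

# Hypothetical subtree sizes; the true total is taken to be their sum.
x = [150, 100, 75, 25, 1]
true_total = sum(x)

# A badly wrong heuristic: probability inversely proportional to
# subtree size, so the tiny subtree is sampled most often.
inv = [1 / xi for xi in x]
p = [v / sum(inv) for v in inv]

rng = random.Random(7)
n = 100_000
samples = [x[i] / p[i]                      # still weighted by 1/p_i
           for i in rng.choices(range(len(x)), weights=p, k=n)]
mean = sum(samples) / n

# `mean` still hovers near true_total: the wrong heuristic does not
# bias the estimator, it only inflates the per-sample spread, so far
# more games are needed for the same accuracy as a good heuristic.
```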
At ad infinitum the variance of all methods will be zero, so surely we are comparing at fixed number of games.
The variance will be zero, but not necessarily the error. You are proposing methods (with p_i*w_i != 1) that in general will give the wrong result even after infinite sampling.
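The p_i*w_i != 1 failure mode can be made concrete with a two-line expectation calculation (hypothetical: same subtree sizes, uniform selection, with w_i = n-1 standing in for any weights that do not invert the probabilities):

```python
# Expected value of the one-move estimator sum_i p_i * (w_i * x_i)
# under uniform selection (hypothetical subtree sizes).
x = [150, 100, 75, 25, 1]
true_total = sum(x)
n = len(x)
p = [1 / n] * n

# Weights that exactly invert the probabilities (p_i * w_i == 1):
# the expectation equals true_total, i.e. unbiased.
mean_ok = sum(pi * (1 / pi) * xi for pi, xi in zip(p, x))

# Mismatched weights (p_i * w_i != 1), e.g. w_i = n - 1: the
# expectation itself is wrong, and infinite sampling converges
# to this wrong value ((n-1)/n * true_total), never to true_total.
mean_bad = sum(pi * (n - 1) * xi for pi, xi in zip(p, x))
```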
What good is that? It is a fair question, because at any time t when you stop the iteration, the reliability of our mean has dropped to get a result with smaller variance.
That is why I don't say your method is biased or unbiased, but it can really blunder, because it assumes a whole lot about the tree and will have difficulty recovering from extreme cases. Its estimates of the mean are going to be bad because of too many assumptions, made solely to get the variance down. You are assuming the heuristic is going to be perfect, but it won't be, and when it fails its estimate will be worse than a biased estimator like mine.