Perft(13) betting pool

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Perft(13) betting pool

Post by Daniel Shawul »

Not true. The method Michel and I propose will not give "bad means". Their mean is exactly equal to the true size of the tree, no matter how wrong the assumptions were that went into deriving the p_i. Wrong assumptions just mean that you converge to that correct mean more slowly than with correct assumptions. But you cannot avoid making assumptions. Making 'no assumptions' on the relative sub-tree sizes is equivalent to the assumption that they are on average of equal size (for which the optimum would be homogeneous sampling). That assumption can be just as wrong as any other, and in particular with LMR'ed trees it is a very inferior assumption.
No, the point is that when you do random move picking, the probability you assigned to the moves is only respected at an _infinite_ number of games. Until then the estimate will hinge on how representative the picked move is in my case (since I use a constant multiplier), but in your case it requires perfect estimation of the multipliers, so it can give bad results.

This is clear when playing one game: I have to pick the average move, and you have to get each proportion right. My heuristic is much easier to apply than yours because it doesn't ask for too much. If we play only 100 games on 5 moves, we cannot assume each move gets 20; that only happens with an infinite number of games. For a low number of games you must have the correct multiplier all the time. This is what I am objecting to. You can't make such a good estimate at low depths without a very, very good heuristic... A simple example is when you make a terrible mistake in proportioning: say the sub-tree sizes were 100 25 25 100 but you use weights that assume an LMR kind of node distribution; clearly that will give bad results. Even slightly wrong relative proportions will give bad results. So when you stop the simulation abruptly, your estimate can be very bad. But if I pick average (representative) moves more often, I get good estimates even at a very low number of games.
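The disagreement can be made concrete with a short sketch. The subtree sizes 100 25 25 100 are the hypothetical numbers from the post; `middle_pick_estimate` is my own naming for the "representative move" idea, not anyone's actual implementation:

```python
import random

# Hypothetical subtree sizes under 4 root moves (numbers from the post).
subtrees = [100, 25, 25, 100]
true_total = sum(subtrees)  # 250

def uniform_estimate():
    # Unbiased: pick a move uniformly, multiply its subtree by the move count.
    return len(subtrees) * random.choice(subtrees)

def middle_pick_estimate():
    # "Representative move" heuristic: always take a middle move.
    # Cheap and low-variance, but biased unless the middle move really
    # is representative of the average subtree size.
    return len(subtrees) * subtrees[len(subtrees) // 2]

random.seed(1)
n = 100_000
uniform_mean = sum(uniform_estimate() for _ in range(n)) / n
print(true_total, round(uniform_mean), middle_pick_estimate())
```

With these (deliberately adversarial) sizes the uniform mean converges to 250 while the middle pick returns 100 every time: the middle move is not representative here, which is exactly the failure mode being argued about.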
Uri Blass
Posts: 10267
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Perft(13) betting pool

Post by Uri Blass »

Daniel Shawul wrote:
All estimates are unbiased.

My estimate is not the best because of its bigger standard deviation, but it basically uses variables that take either some upper bound for perft or 0.

Let U denote the upper bound for perft(n) based on multiplying upper bounds on perft(1):

U = U(1)*U(2)*...*U(n), where U(1) = U(2) = 20.

Let P denote the probability that a sequence is translated to a legal game when we estimate perft(n) from a random sequence 1 <= X(i) <= U(i) for 1 <= i <= n.

Every random game can be translated to a different sequence X(i), but not every sequence X(i) can be translated to a legal game.

perft(n) = U*P, because the number of legal games is exactly the number of sequences that are translated to legal games.

Our estimate based on a random sequence X(i) is 0 in case the game is illegal and U in case the game is legal.

The probability that the game is legal is P, so the expected value of our estimate is exactly U*P, which is perft(n); averaging many estimates converges to this expected value.
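This scheme can be sketched in a few lines of Python (a hypothetical toy tree, not real chess; the bounds U(i) are deliberately loose to show the 0-or-U behaviour):

```python
import random

# A node is a list of child nodes; perft(n) counts paths of length n.
def perft(node, depth):
    if depth == 0:
        return 1
    return sum(perft(child, depth - 1) for child in node)

def one_estimate(node, depth, bounds):
    # bounds[i] is U(i+1), an upper bound on the move count at ply i+1.
    # Draw X(i) uniformly in 1..U(i); the sequence is legal only if every
    # X(i) indexes an existing move. Return U = prod(U(i)) if legal, else 0.
    U = 1
    for b in bounds[:depth]:
        U *= b
    for i in range(depth):
        x = random.randrange(bounds[i])
        if x >= len(node):
            return 0
        node = node[x]
    return U

# Hypothetical toy tree: 3 root moves with 2, 1 and 0 replies respectively.
leaf = []
tree = [[leaf, leaf], [leaf], []]
bounds = [4, 3]  # U(1) = 4, U(2) = 3

random.seed(2)
n = 200_000
mean = sum(one_estimate(tree, 2, bounds) for _ in range(n)) / n
print(perft(tree, 2), round(mean, 2))  # the mean converges to perft(2) = 3
```

Each individual estimate is either 0 or U = 12; only the average is informative, which is the large-standard-deviation trade-off acknowledged above.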
I thought about your method a little more and I came to the conclusion that it is biased in its current state. With a small modification, keeping counts of the legal games that finished at each ply and multiplying and adding by that ply's upper-bound perft value, it can become unbiased, in which case it becomes almost the same as Peter's method. Right now there is an assumption that all games reach depth 20 for perft(20).
The reward given to each node at depth 13 is U(1) * U(2) * ... * U(13), but for those games that finish at depth 10 it should be U(1) * U(2) * ... * U(10), so we need a separate count for each ply. The number of early mates is probably very low for perft(13) from the initial position, but technically the method is biased.

eg.
1st ply: 10 moves, with the first 2 having branches while the remaining 8 moves have no sub-trees (an extreme case just to make my point)
2nd ply: first move -> 2 branches
second move -> 3 branches
3rd ply: 2, 3, 3, 3 and 10 replies respectively for each 2nd-ply move

U(1) = 10
U(2) = 3
U(3) = 10

Reward for legal games: R = U(1) * U(2) * U(3) = 10 * 3 * 10 = 300.
My argument is that the reward for the moves which did not reach ply 3 should have been R1 = 10, with a separate legal-game count too.

I noticed that your method gives the same probability to all moves at the same ply, namely 1 / U(i). So all ply-1 moves have 1/10, ply-2 moves 1/3, and ply-3 moves 1/10, and each path has the same probability of 1/300 of being legal.

Total = 300 * (1/300) * (2 + 3 + 3 + 3 + 10) + 300 * (1/10) * 8
      = 21 + 240 = 261 !!
That 240 is due to those 8 root moves which do not reach ply 3. The actual perft, including all leaf nodes, is 21 + 8 = 29.
A correct estimator would do
Total = 21 + 10 * (1/10) * 8 = 21 + 8 = 29
So it is better in this case not to count games which do not reach the last ply, which gives an estimate of 21.
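The bookkeeping in this example can be checked mechanically; a sketch using the toy numbers above (`Fraction` just keeps the arithmetic exact):

```python
from fractions import Fraction

# Upper bounds from the example: U(1)=10, U(2)=3, U(3)=10, so R = 300.
R = 10 * 3 * 10
ply3_leaves = [2, 3, 3, 3, 10]   # replies under the five 2nd-ply moves
short_games = 8                  # root moves with no sub-tree

perft3 = sum(ply3_leaves)        # 21

# Rewarding short games with the full R (the biased bookkeeping):
biased_total = R * Fraction(1, R) * sum(ply3_leaves) \
             + R * Fraction(1, 10) * short_games

# Rewarding short games with their own ply's bound, R1 = 10:
corrected = perft3 + 10 * Fraction(1, 10) * short_games

print(perft3, biased_total, corrected)  # 21 261 29
```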

So with the correction it will start returning 1 (i.e. reward * probability) for each legal move at any ply and will become exactly like Peter's method. Maybe that will get us a perft estimate of 1.981e18. If intermediate games are counted as legal it gives an upper bound; if not, it amounts to pruning that part of the tree.

@Sven, can you please modify Knockout and get us results, if you don't mind?
I had no time to read all this thread but
some notes:
1) This old post is simply wrong.
In your example perft(3) = 21, not 29.
The fact that there are games of fewer than 3 plies is not relevant;
perft(3) is the number of games of exactly 3 plies.

2) I agree with Michel and HGM that their estimate is unbiased, and I see no reason for a biased estimate: if you do not know how much it is biased you cannot trust it, and if you do know, there is no reason not to correct it to be unbiased.
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Perft(13) betting pool

Post by Daniel Shawul »

Uri Blass wrote:
Daniel Shawul wrote:
All estimates are unbiased.

My estimate is not the best because of its bigger standard deviation, but it basically uses variables that take either some upper bound for perft or 0.

Let U denote the upper bound for perft(n) based on multiplying upper bounds on perft(1):

U = U(1)*U(2)*...*U(n), where U(1) = U(2) = 20.

Let P denote the probability that a sequence is translated to a legal game when we estimate perft(n) from a random sequence 1 <= X(i) <= U(i) for 1 <= i <= n.

Every random game can be translated to a different sequence X(i), but not every sequence X(i) can be translated to a legal game.

perft(n) = U*P, because the number of legal games is exactly the number of sequences that are translated to legal games.

Our estimate based on a random sequence X(i) is 0 in case the game is illegal and U in case the game is legal.

The probability that the game is legal is P, so the expected value of our estimate is exactly U*P, which is perft(n); averaging many estimates converges to this expected value.
I thought about your method a little more and I came to the conclusion that it is biased in its current state. With a small modification, keeping counts of the legal games that finished at each ply and multiplying and adding by that ply's upper-bound perft value, it can become unbiased, in which case it becomes almost the same as Peter's method. Right now there is an assumption that all games reach depth 20 for perft(20).
The reward given to each node at depth 13 is U(1) * U(2) * ... * U(13), but for those games that finish at depth 10 it should be U(1) * U(2) * ... * U(10), so we need a separate count for each ply. The number of early mates is probably very low for perft(13) from the initial position, but technically the method is biased.

eg.
1st ply: 10 moves, with the first 2 having branches while the remaining 8 moves have no sub-trees (an extreme case just to make my point)
2nd ply: first move -> 2 branches
second move -> 3 branches
3rd ply: 2, 3, 3, 3 and 10 replies respectively for each 2nd-ply move

U(1) = 10
U(2) = 3
U(3) = 10

Reward for legal games: R = U(1) * U(2) * U(3) = 10 * 3 * 10 = 300.
My argument is that the reward for the moves which did not reach ply 3 should have been R1 = 10, with a separate legal-game count too.

I noticed that your method gives the same probability to all moves at the same ply, namely 1 / U(i). So all ply-1 moves have 1/10, ply-2 moves 1/3, and ply-3 moves 1/10, and each path has the same probability of 1/300 of being legal.

Total = 300 * (1/300) * (2 + 3 + 3 + 3 + 10) + 300 * (1/10) * 8
      = 21 + 240 = 261 !!
That 240 is due to those 8 root moves which do not reach ply 3. The actual perft, including all leaf nodes, is 21 + 8 = 29.
A correct estimator would do
Total = 21 + 10 * (1/10) * 8 = 21 + 8 = 29
So it is better in this case not to count games which do not reach the last ply, which gives an estimate of 21.

So with the correction it will start returning 1 (i.e. reward * probability) for each legal move at any ply and will become exactly like Peter's method. Maybe that will get us a perft estimate of 1.981e18. If intermediate games are counted as legal it gives an upper bound; if not, it amounts to pruning that part of the tree.

@Sven, can you please modify Knockout and get us results, if you don't mind?
I had no time to read all this thread but
some notes:
1) This old post is simply wrong.
In your example perft(3) = 21, not 29.
The fact that there are games of fewer than 3 plies is not relevant;
perft(3) is the number of games of exactly 3 plies.
If you define perft like that, then you should not count games which end before ply 3 is reached. Is that what you did when you reported your perft(13)?
Nothing is wrong with my post. I clearly said that you shouldn't count shorter games, to get a perft estimate of 21.
With your higher estimate of about 1.997e18, what else could be wrong?
That 240 is due to those 8 root moves which do not reach ply 3. The actual perft, including all leaf nodes, is 21 + 8 = 29.
A correct estimator would do
Total = 21 + 10 * (1/10) * 8 = 21 + 8 = 29
So it is better in this case not to count games which do not reach the last ply, which gives an estimate of 21.
Finish reading the whole thing.
Uri Blass
Posts: 10267
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Perft(13) betting pool

Post by Uri Blass »

Daniel Shawul wrote:
Uri Blass wrote:
Daniel Shawul wrote:
All estimates are unbiased.

My estimate is not the best because of its bigger standard deviation, but it basically uses variables that take either some upper bound for perft or 0.

Let U denote the upper bound for perft(n) based on multiplying upper bounds on perft(1):

U = U(1)*U(2)*...*U(n), where U(1) = U(2) = 20.

Let P denote the probability that a sequence is translated to a legal game when we estimate perft(n) from a random sequence 1 <= X(i) <= U(i) for 1 <= i <= n.

Every random game can be translated to a different sequence X(i), but not every sequence X(i) can be translated to a legal game.

perft(n) = U*P, because the number of legal games is exactly the number of sequences that are translated to legal games.

Our estimate based on a random sequence X(i) is 0 in case the game is illegal and U in case the game is legal.

The probability that the game is legal is P, so the expected value of our estimate is exactly U*P, which is perft(n); averaging many estimates converges to this expected value.
I thought about your method a little more and I came to the conclusion that it is biased in its current state. With a small modification, keeping counts of the legal games that finished at each ply and multiplying and adding by that ply's upper-bound perft value, it can become unbiased, in which case it becomes almost the same as Peter's method. Right now there is an assumption that all games reach depth 20 for perft(20).
The reward given to each node at depth 13 is U(1) * U(2) * ... * U(13), but for those games that finish at depth 10 it should be U(1) * U(2) * ... * U(10), so we need a separate count for each ply. The number of early mates is probably very low for perft(13) from the initial position, but technically the method is biased.

eg.
1st ply: 10 moves, with the first 2 having branches while the remaining 8 moves have no sub-trees (an extreme case just to make my point)
2nd ply: first move -> 2 branches
second move -> 3 branches
3rd ply: 2, 3, 3, 3 and 10 replies respectively for each 2nd-ply move

U(1) = 10
U(2) = 3
U(3) = 10

Reward for legal games: R = U(1) * U(2) * U(3) = 10 * 3 * 10 = 300.
My argument is that the reward for the moves which did not reach ply 3 should have been R1 = 10, with a separate legal-game count too.

I noticed that your method gives the same probability to all moves at the same ply, namely 1 / U(i). So all ply-1 moves have 1/10, ply-2 moves 1/3, and ply-3 moves 1/10, and each path has the same probability of 1/300 of being legal.

Total = 300 * (1/300) * (2 + 3 + 3 + 3 + 10) + 300 * (1/10) * 8
      = 21 + 240 = 261 !!
That 240 is due to those 8 root moves which do not reach ply 3. The actual perft, including all leaf nodes, is 21 + 8 = 29.
A correct estimator would do
Total = 21 + 10 * (1/10) * 8 = 21 + 8 = 29
So it is better in this case not to count games which do not reach the last ply, which gives an estimate of 21.

So with the correction it will start returning 1 (i.e. reward * probability) for each legal move at any ply and will become exactly like Peter's method. Maybe that will get us a perft estimate of 1.981e18. If intermediate games are counted as legal it gives an upper bound; if not, it amounts to pruning that part of the tree.

@Sven, can you please modify Knockout and get us results, if you don't mind?
I had no time to read all this thread but
some notes:
1) This old post is simply wrong.
In your example perft(3) = 21, not 29.
The fact that there are games of fewer than 3 plies is not relevant;
perft(3) is the number of games of exactly 3 plies.
If you define perft like that, then you should not count games which end before ply 3 is reached. Is that what you did when you reported your perft(13)?
Nothing is wrong with my post. I clearly said that you shouldn't count shorter games, to get a perft estimate of 21.
With your higher estimate of about 1.997e18, what else could be wrong?
That 240 is due to those 8 root moves which do not reach ply 3. The actual perft, including all leaf nodes, is 21 + 8 = 29.
A correct estimator would do
Total = 21 + 10 * (1/10) * 8 = 21 + 8 = 29
So it is better in this case not to count games which do not reach the last ply, which gives an estimate of 21.
Finish reading the whole thing.
I did not mean to count games which do not reach the last ply, because they are not translated to a long sequence but only to a short one, and I choose a random long sequence with my inefficient method.

Maybe there was a misunderstanding about my method, but I certainly meant that a sequence is translated to a legal game only if all the numbers in it can be translated to legal moves (including the last number).
User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Perft(13) betting pool

Post by hgm »

Daniel Shawul wrote: No, the point is that when you do random move picking, the probability you assigned to the moves is only respected at an _infinite_ number of games. Until then the estimate will hinge on how representative the picked move is in my case (since I use a constant multiplier), but in your case it requires perfect estimation of the multipliers, so it can give bad results.
Again, not true. In my / Michel's method there is no estimation of multipliers at all. They are _calculated_ as 1/p_i, and I know exactly which p_i I am using to select the moves. The only thing I do not know is the x_i, so I do not know whether the p_i I am using are the optimal p_i.

In your method, OTOH, you depend on representative moves being picked, while you biased the picking against that, so that the more often you pick, the less likely it is that the picking will be representative. Only by having detailed advance knowledge of the sub-tree sizes can you tweak things such that the non-representative picking happens to have the same mean as the value you are after.
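The cancellation argument, that the 1/p_i multipliers undo whatever p_i you choose, can be seen in a few lines (a toy one-ply example with made-up subtree sizes, not from any real position):

```python
import random

# Hypothetical one-ply subtree sizes x_i; the true total is sum(x) = 100.
x = [60, 30, 5, 5]
# Deliberately poor selection probabilities (far from proportional to x).
p = [0.7, 0.1, 0.1, 0.1]

def estimate():
    # Pick move i with probability p[i], weight its subtree by 1/p[i].
    i = random.choices(range(len(x)), weights=p)[0]
    return x[i] / p[i]

random.seed(3)
n = 200_000
mean = sum(estimate() for _ in range(n)) / n
print(sum(x), round(mean, 1))  # the mean converges to 100 despite the bad p
```

Algebraically, E[estimate] = sum_i p_i * (x_i / p_i) = sum_i x_i for any p with nonzero entries; a bad p only inflates the variance, never shifts the mean.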
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Perft(13) betting pool

Post by Daniel Shawul »

hgm wrote:
Daniel Shawul wrote: No, the point is that when you do random move picking, the probability you assigned to the moves is only respected at an _infinite_ number of games. Until then the estimate will hinge on how representative the picked move is in my case (since I use a constant multiplier), but in your case it requires perfect estimation of the multipliers, so it can give bad results.
Again, not true. In my / Michel's method there is no estimation of multipliers at all. They are _calculated_ as 1/p_i, and I know exactly which p_i I am using to select the moves. The only thing I do not know is the x_i, so I do not know whether the p_i I am using are the optimal p_i.
In your method, OTOH, you depend on representative moves being picked, while you biased the picking against that, so that the more often you pick, the less likely it is that the picking will be representative. Only by having detailed advance knowledge of the sub-tree sizes can you tweak things such that the non-representative picking happens to have the same mean as the value you are after.
In a Monte Carlo simulation you probably never revisit the same node again unless you do a lot of them. The multipliers depend on the probability, which depends on sub-tree sizes. Your weights give more priority to the variance than to the mean at a fixed number of iterations. I prefer a simplistic heuristic like picking the middle move (as in the LMR example) and multiplying by the number of legal moves. If we do a single MC game, I would pick the middle move at each ply and multiply by the number of legal moves, which is a simple heuristic. But you have to determine weights appropriate for each ply depending on sub-tree size.

In the LMR case I don't tweak anything; I just pick moves in the middle more often. I don't try to estimate how large their node counts will be, unlike in your case. I know this will give me a biased estimate in the long run, but with a fixed number of games it can perform better.
In your case the heuristic has to be very good to estimate the mean with a fixed number of games, because attention is shifted to reducing variance by allocating more weight to the nodes with bigger sub-trees.

Also, when we do LMR perft the tree is tapered to the right, like an inverted triangle. If you pick the first move and multiply by a certain weight, you are _assuming_ this pattern is going to repeat itself along the whole width of the tree within the same ply, which is clearly wrong. But my method tends to pick game paths towards the middle of the tree, from top to bottom.
User avatar
Ajedrecista
Posts: 1966
Joined: Wed Jul 13, 2011 9:04 pm
Location: Madrid, Spain.

Re: Perft(13) betting pool

Post by Ajedrecista »

Hi again:

I will use some numbers that were posted here:

http://talkchess.com/forum/viewtopic.ph ... 31&t=39678

Sorry for insisting so much on my polynomial extrapolation. I have done a linear extrapolation (instead of quadratic) for the Perft(11) estimate, to see whether increasing (or decreasing) the degree of [alpha(n)]* and [beta(n)]* affects the range width and relative error much; in fact, it seems that I was right this time (almost, because there is an issue at the end of the post). I have also done linear and quadratic extrapolations for Perft(13), and the range width is reduced when the degree of the adjusting polynomial is increased (as I expected). I can say nothing yet about the relative error of the Perft(13) estimate because Perft(13) is still unknown. Here are the numbers (rounded to the closest integer):

Perft(11) estimate using linear extrapolation (remember that n = 2x + 3):

[Alpha(x)]* = 2.2591 - 0.4519x
[Beta(x)]* = 2.1489 - 0.4189x

[Alpha(n = 11)]* = 0.4515 (the true one is more or less 0.265031).
[Beta(n = 11)]* = 0.4733 (the true one is more or less 0.262037).

{Perft(11), [alpha(n = 11)]*} ~ 2,106,537,035,152,254 ~ [true Perft(11)] + 0.4236%
{Perft(11), [beta(n = 11)]*} ~ 2,107,833,647,107,925 ~ [true Perft(11)] + 0.4854%

Taking the arithmetic average of two values above:

[Perft(11), linear extrapolation]* ~ 2,107,185,341,130,090 ± 0.0308%

The central value of this interval is more or less +0.4545% above Perft(11) = 2,097,651,003,696,806. As everybody can see, both the range width and the relative error are greater than with quadratic extrapolation:

Range width: |±0.0308%| > |±0.0187%|
Relative error: |+0.4545%| > |+0.145%|

So it seems that increasing the degree of the adjusting polynomial leads to less range width and less relative error; for Perft(13) I have found that the range width is also reduced when increasing the degree of the polynomial:

[Alpha(x) linear]* = 2.3522 - 0.5078x
[Beta(x) linear]* = 2.2545 - 0.4823x

[Alpha(n = 13) linear]* = - 0.1868 (the true one is still unknown).
[Beta(n = 13) linear]* = - 0.157 (the true one is still unknown).

{Perft(13), [alpha(n = 13) linear]*} ~ 1,992,015,814,368,140,589
{Perft(13), [beta(n = 13) linear]*} ~ 1,992,883,788,621,946,743

Taking the arithmetic average of two values above:

[Perft(13), linear extrapolation]* ~ 1,992,449,801,495,043,666 ± 0.0218%

===============================================

[Alpha(x) quadratic]* = 2.0908 - 0.2464x - 0.0523x²
[Beta(x) quadratic]* = 1.9516 - 0.1793x - 0.0606x²

[Alpha(n = 13) quadratic]* = - 0.4487 (the true one is still unknown).
[Beta(n = 13) quadratic]* = - 0.4599 (the true one is still unknown).

{Perft(13), [alpha(n = 13) quadratic]*} ~ 1,984,069,630,911,749,172
{Perft(13), [beta(n = 13) quadratic]*} ~ 1,983,623,854,355,425,180

Taking the arithmetic average of two values above:

[Perft(13), quadratic extrapolation]* ~ 1,983,846,742,633,587,176 ± 0.0112%

===============================================

Range width: |±0.0218%| > |±0.0112%|

But when a cubic function is used to fit the data, the range width is |±0.013%| > |±0.0112%|. This last issue is what I did not expect... so I cannot conclude anything. Anyway, less than |±0.03%| in range width seems pretty accurate to me, and I think that relative errors below |0.3%| are quite good. I can only wait and see.
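For anyone wanting to reproduce the extrapolated exponents, evaluating the posted polynomials at x = (n - 3)/2 (from n = 2x + 3) recovers the values above; the coefficients are taken verbatim from the post:

```python
# Fitted coefficients as posted; x = (n - 3) / 2 since n = 2x + 3.
def alpha_linear(x): return 2.3522 - 0.5078 * x
def beta_linear(x):  return 2.2545 - 0.4823 * x
def alpha_quad(x):   return 2.0908 - 0.2464 * x - 0.0523 * x**2
def beta_quad(x):    return 1.9516 - 0.1793 * x - 0.0606 * x**2

x13 = (13 - 3) / 2  # x = 5 for n = 13
print(round(alpha_linear(x13), 4), round(beta_linear(x13), 4))  # -0.1868 -0.157
print(round(alpha_quad(x13), 4), round(beta_quad(x13), 4))      # -0.4487 -0.4599
```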

Good luck with your methods and estimators (biased, unbiased, etc.). It seems like great stuff to me (seeing how people implement such methods and obtain such good results!).

Regards from Spain.

Ajedrecista.
Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Perft(13) betting pool

Post by Michel »

In your case the heuristic has to be very good to estimate the mean with a fixed number of games, because attention is shifted to reducing variance by allocating more weight to the nodes with bigger sub-trees.


You keep on repeating these same falsehoods over and over again. What
is wrong with you?

The estimator is unbiased. So contrary to what you have now claimed
a zillion times: THE MEAN TAKES PRECEDENCE OVER THE VARIANCE.

And why do you insist that one needs precise subtree estimates?
We never said that. As my (1/4 1/4 1/8 1/8 1/8 1/8) example shows. THE
HEURISTIC DOES NOT HAVE TO BE VERY GOOD TO GIVE A REDUCTION
OF VARIANCE (without sacrificing unbiasedness).

Sorry for shouting. But you do not seem to read what people write.

It is only natural that a good MC estimator shifts attention to the bigger
subtrees. These require the most work to approximate their perft
value. So more resources should be devoted to them. But the moves
corresponding to small subtrees get a bigger multiplier. So they
are still accounted for correctly.
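The claim that even coarse weights cut the variance without touching the mean can be checked analytically on a toy ply. The sizes are made up; the weight shape is borrowed from the (1/4 1/4 1/8 1/8 1/8 1/8) example:

```python
# Hypothetical one-ply subtree sizes: two big moves, four small ones.
x = [40, 40, 5, 5, 5, 5]                 # true total = 100
uniform = [1 / 6] * 6
rough = [1/4, 1/4, 1/8, 1/8, 1/8, 1/8]   # coarse weights, as in the example

def mean_and_var(p):
    # One-sample estimator x_i / p_i with move i drawn with probability p_i.
    mean = sum(pi * (xi / pi) for pi, xi in zip(p, x))        # = sum(x) always
    second = sum(pi * (xi / pi) ** 2 for pi, xi in zip(p, x))
    return mean, second - mean ** 2

print(mean_and_var(uniform))  # mean 100, variance 9800 (up to rounding)
print(mean_and_var(rough))    # mean 100, variance 3600 (up to rounding)
```

The mean is sum(x) under either weighting, so unbiasedness is not sacrificed, while the rough weights cut the per-sample variance by almost a factor of three.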
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Perft(13) betting pool

Post by Daniel Shawul »

You keep on repeating these same falsehoods over and over again. What
is wrong with you?
I didn't say your method is biased, but that it can blunder if you stop it in the middle. You do realize you need very good estimates of the proportions and of the tree sizes below them to get good estimates, right? That is why I have been asking you to tell me how you get 363. Mine, on the other hand, is biased but can give good estimates at a low number of iterations and doesn't need a good heuristic: just sample moves in the middle more often; it doesn't require you to estimate proportions of tree size or anything. That was what I considered first, but you failed to mention that you needed an unbiased method when I was talking about biased methods and their advantages.
In fact, if this were an MC simulation on scores, then multiplying the score by a multiplier would be a ridiculous idea. All other MC methods I know of become automatically biased when the move selection becomes biased. That was what I had in mind before I knew what you needed.
The estimator is unbiased. So contrary to what you have now claimed
a zillion of times: THE MEAN TAKES PRECEDENCE OVER THE VARIANCE.
No, that is the wrong conclusion. If you are saying that the mean indeed takes precedence, are those the optimal weights to get good means even at a low number of iterations? Biased methods can reach an estimate of the mean faster. What you did is make the estimation of the mean poorer in order to get close to zero variance, which btw will become zero anyway at an infinite number of iterations. The fact that it is unbiased doesn't make it a good estimator when you stop before reaching infinity. I already showed that after one game mine does not need such a strong heuristic to get good estimates.
The variance becomes zero anyway at infinity. Do you realize we are talking about a sampling distribution here, and as such variance = sigma^2 / N? What is the point of trying to reduce sigma to zero from the get-go if you sacrifice the mean estimate? My biased estimate converges faster, alas to a biased value, which makes it a good candidate when there is a time constraint. My method also cuts down variance, just not as much as yours.
And why do you insist that one needs precise subtree estimates?
We never said that. As my (1/4 1/4 1/8 1/8 1/8 1/8) example shows. THE
HEURISTIC DOES NOT HAVE TO BE VERY GOOD TO GIVE A REDUCTION
OF VARIANCE (without sacrificing unbiasedness).
Sorry for shouting. But you do not seem to read what people write.
OK, maybe I was a little aggressive towards you and HG yesterday; sorry for that. But my point was: if you are so bent on crossing my method out because it is biased (i.e. its mean estimate is off even ad infinitum), then your method should give better estimates of the mean too. What is the point if its estimate is poor (short of infinity) compared to another estimate which gives relatively low errors even at a low number of iterations?
All unbiased methods require correct estimation of the proportions of sub-tree sizes at a fixed number of iterations. Consider y = x and y = x^2, with (0,0) and (1,1) as the start and end, so both are unbiased estimates of (1,1). Which do you prefer, y = x or y = x^2? How about the line y = x + 0.0001? It tends to give good estimates even though it is biased in the end. I know you don't like it, but you see it could be faster, always giving better estimates than y = x and y = x^2. I am not saying these curves are representative, in fact probably far from it, but they demonstrate what I am questioning.
It is only natural that a good MC estimator shifts attention to the bigger
subtrees. These require the most work to approximate their perft
value. So more resources should be devoted to them. But the moves
corresponding to small subtrees get a bigger multiplier. So they
are still accounted for correctly.
Yes, but your multipliers have to be perfect for good estimation at a fixed number of iterations. Mine don't need to be, and yet my method improves on the default method which takes equal weights (and which, btw, is unbiased with a constant multiplier), i.e. it gives good estimates at a lower number of iterations.
I want to estimate the mean faster at low cost; you want to reduce the variance while giving poor estimates of the mean. The fact that it equals the correct value in the limit is no justification when there is a time constraint.
User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Perft(13) betting pool

Post by hgm »

All this talk leads nowhere. Let us consider a real example. Suppose I have an LMR tree where each node has 10 moves, and the first 2 are searched at full depth, and the remaining 8 are reduced 1 ply. How would you take the p_i and w_i?