Perft(13) betting pool

Discussion of chess software programming and technical issues.


petero2
Posts: 683
Joined: Mon Apr 19, 2010 7:07 pm
Location: Sweden
Full name: Peter Osterlund

Re: Perft(13) betting pool

Post by petero2 »

Michel wrote:(3) Peter also uses a "fullDepth" search. Perhaps this is a waste of
resources? It is more or less equivalent to a uniform MC search
for fullDepth plies, and we are trying to get rid of uniform searches.
Actually, there is no full depth part in the test I ran. If you look at the code I posted you can see that I call perfTDynWeight(13, 0), so the number of fullDepth plies is 0. I left the fullDepth code in there in case I wanted to test it later, but as you say, this is probably not a good idea.
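For reference, this is roughly the shape of a single walk (a minimal C++ sketch, not the code I posted; the engine interface and the moveWeight lookup are assumptions here, and calling it with fullDepth = 0 corresponds to perfTDynWeight(13, 0)):

Code: Select all

#include <cstdint>
#include <random>
#include <vector>

struct Position;                                 // engine-specific, assumed
std::vector<int> genMoves(Position&);            // legal moves, assumed
void makeMove(Position&, int);
void unmakeMove(Position&, int);
double moveWeight(const Position&, int move);    // e.g. a hash table lookup

static std::mt19937_64 rng(12345);

// One walk returning an unbiased estimate of perft(depth): full-width
// expansion for the first fullDepth plies, then one weighted random step
// per ply, scaling the result by the inverse selection probability.
double perftWalk(Position& pos, int depth, int fullDepth) {
    if (depth == 0)
        return 1.0;
    std::vector<int> moves = genMoves(pos);
    if (moves.empty())
        return 0.0;
    if (fullDepth > 0) {                         // unused when fullDepth = 0
        double sum = 0.0;
        for (int m : moves) {
            makeMove(pos, m);
            sum += perftWalk(pos, depth - 1, fullDepth - 1);
            unmakeMove(pos, m);
        }
        return sum;
    }
    std::vector<double> w(moves.size());
    double wSum = 0.0;
    for (size_t i = 0; i < moves.size(); i++)    // one weight probe per move
        wSum += (w[i] = moveWeight(pos, moves[i]));
    double r = std::uniform_real_distribution<double>(0.0, wSum)(rng);
    size_t i = 0;                                // roulette-wheel selection
    while (i + 1 < moves.size() && (r -= w[i]) > 0)
        i++;
    makeMove(pos, moves[i]);
    double est = perftWalk(pos, depth - 1, 0) * (wSum / w[i]);
    unmakeMove(pos, moves[i]);
    return est;
}

The estimate is unbiased for any positive weights; the weights only affect the variance.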
Michel
Posts: 2271
Joined: Mon Sep 29, 2008 1:50 am

Re: Perft(13) betting pool

Post by Michel »

If you look at the code I posted you can see that I call perfTDynWeight(13, 0), so the number of fullDepth plies is 0
Ok sorry. I don't have much time today so I only read your description
(which was quite clear) and the function headers.

Do you have an idea of the CPU implications of using the hash table?
4,000,000 entries seems very large. I think that Daniel's intuition
that one should concentrate on positions near the root might be
correct.
petero2
Posts: 683
Joined: Mon Apr 19, 2010 7:07 pm
Location: Sweden
Full name: Peter Osterlund

Re: Perft(13) betting pool

Post by petero2 »

Michel wrote:
If you look at the code I posted you can see that I call perfTDynWeight(13, 0), so the number of fullDepth plies is 0
Ok sorry. I don't have much time today so I only read your description
(which was quite clear) and the function headers.

Do you have an idea of the CPU implications of using the hash table?
4,000,000 entries seems very large. I think that Daniel's intuition
that one should concentrate on positions near the root might be
correct.
I haven't measured the CPU overhead, but I agree with Daniel that it will be a huge overhead. We are talking about 30 * depth main memory accesses for each random walk, compared to 0 accesses without hash table, assuming your move generator fits in L2 cache.
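To put rough numbers on it (both figures are round assumptions of mine: about 30 legal moves per position, about 100 ns per uncached main-memory access):

\[
30 \times 13 \approx 390 \ \text{probes per walk}, \qquad 390 \times 100\,\mathrm{ns} \approx 39\,\mu\mathrm{s}
\]

of pure memory latency per walk, on top of the move generation itself.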

This was just an experiment to see how far you could get with non-uniform sampling. For the record, I ran a longer test over night and with a bigger hash table. The standard deviation for the average-of-1000 walks seems to have stabilized at about 3.75e16, compared to 1.05e17 for uniform sampling.

I think the UCT-like approach Daniel is using has more promise.
Michel
Posts: 2271
Joined: Mon Sep 29, 2008 1:50 am

Re: Perft(13) betting pool

Post by Michel »

This was just an experiment to see how far you could get with non-uniform sampling. For the record, I ran a longer test over night and with a bigger hash table. The standard deviation for the average-of-1000 walks seems to have stabilized at about 3.75e16, compared to 1.05e17 for uniform sampling.
Yes thanks for the experiment!

I agree that the point was to see how much better non-uniform
sampling can be, all performance issues aside.

Your SD reduction seems to be 2.8.
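In terms of the number of walks that factor squares, since the standard
error of an average of n independent walks scales as sigma/sqrt(n):

\[
\left(\frac{1.05\times 10^{17}}{3.75\times 10^{16}}\right)^{2} = 2.8^{2} \approx 7.8,
\]

so roughly 7.8 times fewer walks for the same confidence interval, before
counting the hash probe overhead.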

I am curious to see if the node expansion approach can do better.
Theoretically I think it cannot. But who knows.

EDIT:

Just to be absolutely sure (I know the question will sound stupid):
I assume you keep the hash table between the 1000-walk runs, so that
the later runs use a hash table with "ideal" weights?
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Perft(13) betting pool

Post by Daniel Shawul »

petero2 wrote:
Michel wrote:
If you look at the code I posted you can see that I call perfTDynWeight(13, 0), so the number of fullDepth plies is 0
Ok sorry. I don't have much time today so I only read your description
(which was quite clear) and the function headers.

Do you have an idea of the CPU implications of using the hash table?
4,000,000 entries seems very large. I think that Daniel's intuition
that one should concentrate on positions near the root might be
correct.
I haven't measured the CPU overhead, but I agree with Daniel that it will be a huge overhead. We are talking about 30 * depth main memory accesses for each random walk, compared to 0 accesses without hash table, assuming your move generator fits in L2 cache.

This was just an experiment to see how far you could get with non-uniform sampling. For the record, I ran a longer test over night and with a bigger hash table. The standard deviation for the average-of-1000 walks seems to have stabilized at about 3.75e16, compared to 1.05e17 for uniform sampling.

I think the UCT-like approach Daniel is using has more promise.
There is a new PhD thesis, out yesterday, which has a discussion of UCT in section 2.3: http://www.grappa.univ-lille3.fr/~coulo ... Thesis.pdf
The UCB1 formula itself goes back to 1995, while its application to tree search happened in 2006.

Luckily I already have all the code implemented for Nebiyu (checkers, chess, what have you), so all I needed to do was modify the weights and other details specifically for perft. But I still have some doubts about back-propagation and other things. If you want to implement UCT, maybe you could try an implementation with hash tables so that we can compare.
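To fix ideas, the selection rule I mean is just textbook UCB1; a sketch only (the node bookkeeping is assumed, and the right exploration scaling for a sum tree, rather than a minimax one, is exactly one of my doubts):

Code: Select all

#include <cmath>
#include <cstdint>
#include <vector>

struct Child {
    double meanEst = 0.0;   // running mean of walk estimates through this move
    uint64_t visits = 0;    // number of walks routed through this move
};

// UCB1 child selection: unvisited children first, then the child
// maximizing mean + C * sqrt(ln(parentVisits) / childVisits).
size_t selectChildUCB1(const std::vector<Child>& children,
                       uint64_t parentVisits, double C) {
    size_t best = 0;
    double bestScore = -1.0;
    for (size_t i = 0; i < children.size(); i++) {
        if (children[i].visits == 0)
            return i;                     // sample every child at least once
        double ucb = children[i].meanEst +
                     C * std::sqrt(std::log((double)parentVisits) /
                                   (double)children[i].visits);
        if (ucb > bestScore) {
            bestScore = ucb;
            best = i;
        }
    }
    return best;
}

For perft, C would have to be scaled to the magnitude of the subtree estimates, and the back propagation is where my doubts are.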

I would like to work on it slowly, digesting everything. There will not be quick updates from me, due to other engagements as well. It seems I have spent way too much time on chess this week :)

cheers
Michel
Posts: 2271
Joined: Mon Sep 29, 2008 1:50 am

Re: Perft(13) betting pool

Post by Michel »

I would like to work on it slowly, digesting everything. There will not be quick updates from me, due to other engagements as well. It seems I have spent way too much time on chess this week :)
Well ultimately the discussion has clarified a lot of things! MC perft is really
a nice "toy" problem for Monte Carlo search. The sum function is much
easier to understand than alternating min-max.
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Perft(13) betting pool

Post by Daniel Shawul »

Michel wrote:
I would like to work on it slowly, digesting everything. There will not be quick updates from me, due to other engagements as well. It seems I have spent way too much time on chess this week :)
Well ultimately the discussion has clarified a lot of things! MC perft is really
a nice "toy" problem for Monte Carlo search. The sum function is much
easier to understand than alternating min-max.
Indeed, this is the best technical discussion I have seen in CCC for some time (despite some heated exchanges, which never seem to involve me ;) ). I hope perft(13) doesn't get computed before we run out of ideas.

Even though most of the methods were already in use in some form for tree search, their application to perft came as something of a surprise, at least to me. And all the contributions made by the discussants were equally exciting, from hocus-pocus to MC. I have 7 perfts now using ideas from this discussion, and I am not sure I have them all. First there was one in util.c, but later I was forced to move it into search.c because it needed a lot of search infrastructure.
The sum function is much
easier to understand than alternating min-max.
Hmm, for me this caused a lot of confusion, since I was accustomed to min-max scoring. Everyone else seems to see how it works easily, but I had to draw trees and work through all that stuff with probabilities...
Rein Halbersma
Posts: 741
Joined: Tue May 22, 2007 11:13 am

Re: Perft(13) betting pool

Post by Rein Halbersma »

Michel wrote:
I would like to work on it slowly, digesting everything. There will not be quick updates from me, due to other engagements as well. It seems I have spent way too much time on chess this week :)
Well ultimately the discussion has clarified a lot of things! MC perft is really
a nice "toy" problem for Monte Carlo search. The sum function is much
easier to understand than alternating min-max.
That's why perft is also a good debugging tool! The sum-sum rule does not hide mistakes in the way min-max and min-sum (e.g. proof-number search) do.
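For reference, the full-width count everyone is estimating, as a minimal sketch (the move handling is assumed to come from the engine):

Code: Select all

#include <cstdint>
#include <vector>

struct Position;                          // engine-specific, assumed
std::vector<int> genMoves(Position&);
void makeMove(Position&, int);
void unmakeMove(Position&, int);

// Full-width perft. Any move generator bug changes the count, which is
// why comparing against published values ply by ply is such a good
// debugging tool: the sum-sum rule propagates every mistake to the root.
uint64_t perft(Position& pos, int depth) {
    if (depth == 0)
        return 1;
    uint64_t nodes = 0;
    for (int m : genMoves(pos)) {
        makeMove(pos, m);
        nodes += perft(pos, depth - 1);
        unmakeMove(pos, m);
    }
    return nodes;
}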
Rein Halbersma
Posts: 741
Joined: Tue May 22, 2007 11:13 am

Re: Perft(13) betting pool

Post by Rein Halbersma »

Michel wrote: The optimal weight p_i for move i is proportional to sqrt(x_i^2 + s_i^2),
where x_i is the size of the resulting subtree and s_i^2 is the variance
of the estimator measuring that subtree.

This is a mathematical fact. No assumptions are necessary for
deriving this formula.
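Let me first spell out the derivation as I understand it, so we agree on what is being claimed (my notation: with probability p_i the walk picks move i and returns Y_i / p_i, where E[Y_i] = x_i and Var[Y_i] = s_i^2, so the one-walk estimator X is unbiased for N = sum_i x_i):

\[
\operatorname{Var}(X) \;=\; \sum_i p_i\,\frac{x_i^2 + s_i^2}{p_i^2} \;-\; N^2
\;=\; \sum_i \frac{x_i^2 + s_i^2}{p_i} \;-\; N^2,
\qquad \sum_i p_i = 1,
\]

and minimizing with a Lagrange multiplier gives (x_i^2 + s_i^2)/p_i^2 = lambda for all i, i.e. p_i proportional to sqrt(x_i^2 + s_i^2), with nothing assumed beyond finite variances.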
Relating to your observation that the sum-rule makes reasoning about MC-perft easier than the min-max rules in MC-search, I have a question.

Is it really true that you don't have to make any assumptions to derive the optimal weights? Not even some central limit theorem kind of thing? After all, the CLT is also a kind of sum-rule for independent stochastic variables. And, if that is somehow relevant, could extreme value theory (a kind of min/max rule for independent stochastic variables) be used to infer optimal weights for min-max MC-trees? Just speculation, but it would be nice, no?
Ajedrecista
Posts: 1966
Joined: Wed Jul 13, 2011 9:04 pm
Location: Madrid, Spain.

Re: Perft(13) betting pool

Post by Ajedrecista »

Hello to everyone:

I redid by hand (using only the Windows calculator) all the math involved in the cubic polynomial fit of my method, because I was afraid that Excel's approximations had hurt my estimate a little. In fact, the range width has narrowed a little (from ~ |±0.013%| to ~ |±0.01022%|), so the range width of the cubic fit is now less than the range width of the quadratic fit (as I expected at first).

It is possible that Excel's approximations also hurt my quadratic fit, but I do not know how to redo that case by hand ( :? ). Anyway, I think the range width really is smaller for the cubic fit than for the quadratic one (I thought so a few days ago). I have omitted all the decimal digits (circa 30) because it is nonsense to post such an amount of digits. Here are just the results (rounded to the nearest integer):

{Perft(13), [alpha(13) cubic]*} ~ 1,980,265,717,471,381,929
{Perft(13), [beta(13) cubic]*} ~ 1,980,670,474,119,967,566

Taking the arithmetic average of the two values above:

=======================================================
[Perft(13), cubic fit] ~ 1,980,468,095,795,674,748 ± 0.010219756%
[Perft(13), cubic fit] ~ (1.9804681 ± 0.0002024)e+18
=======================================================
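(As a consistency check of the two lines above, half the difference between the alpha and beta values is

\[
\tfrac{1}{2}\,(1{,}980{,}670{,}474{,}119{,}967{,}566 - 1{,}980{,}265{,}717{,}471{,}381{,}929) \approx 2.024\times 10^{14} = 0.0002024\mathrm{e}{+18},
\]

which is ~0.01022% of the central value, matching the quoted width.)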

There are some differences between these values and the ones obtained with Excel's help, which are (rounded to the nearest integer):

{Perft(13), [alpha(13) cubic with Excel]*} ~ 1,980,207,282,099,230,549
{Perft(13), [beta(13) cubic with Excel]*} ~ 1,980,722,673,233,296,107

[Perft(13), cubic fit with Excel] ~ 1,980,464,977,666,263,328 ± 0.01301187%
[Perft(13), cubic fit with Excel] ~ (1.980465 ± 0.0002575)e+18

So I managed to reduce the range width of my interval a little.

---------------------------------------------------------------------------

I am reading the new posts in this thread and I like Shawul's latest method very much (dividing and giving nice approaches). I do not know if the number of games for each move (h2h3, h2h4, ...) is chosen randomly or follows a pattern. What I suggest (I do not know if it is even possible) is that the percentage of games played for each move (h2h3, h2h4, ...) should be around the same as in the Perft(11) case (and not Perft(12), to take care of the 'odd-even effect', just in case). The numbers are here:

http://www.albert.nu/programs/dperft/perft11.txt

I think the percentage for each first move played (h2h3, h2h4, ...) would be similar between Perft(11) and Perft(13), so estimates with this method would be even better (with a huge number of games); but this is only a thought, nothing seriously proved, of course.
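A minimal sketch of the allocation I mean (the names are mine; weights[i] would be root move i's share of Perft(11) taken from the table above):

Code: Select all

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Allocate totalGames walks among the root moves in proportion to prior
// weights (e.g. each move's Perft(11) subtree count). Largest-remainder
// rounding keeps the total exact.
std::vector<uint64_t> allocateGames(const std::vector<double>& weights,
                                    uint64_t totalGames) {
    double wSum = 0.0;
    for (double w : weights)
        wSum += w;
    std::vector<uint64_t> games(weights.size());
    std::vector<std::pair<double, size_t>> rem;
    uint64_t assigned = 0;
    for (size_t i = 0; i < weights.size(); i++) {
        double exact = totalGames * (weights[i] / wSum);
        games[i] = (uint64_t)exact;          // integer part
        assigned += games[i];
        rem.push_back({exact - games[i], i});
    }
    std::sort(rem.rbegin(), rem.rend());     // largest remainders first
    for (uint64_t k = 0; k < totalGames - assigned; k++)
        games[rem[k].second]++;
    return games;
}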

Going a little off-topic (sorry), I took a look at Reinhard's inverse interpolation and I like the idea very much. I was doing an informal estimate of Perft(25) for the game of checkers (English draughts), and the best interval I was able to get was:

160,778,351,488,836,793 < Perft(25) < 169,385,247,079,213,634

The upper bound was calculated in this way: log[Perft(25)]/25 < log[Perft(24)]/24

The lower bound was calculated as follows: {24·log[Perft(25)]}/{25·log[Perft(24)]} > {23·log[Perft(24)]}/{24·log[Perft(23)]}
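In code form, solving the two inequalities above for Perft(25) (p23 and p24 are the published checkers values from the table linked below; I have not hard-coded them here, and double precision is plenty for a ~0.5% interval):

Code: Select all

#include <cmath>
#include <cstdio>

// Bounds for Perft(25) from the two inequalities above.
void perft25Bounds(double p23, double p24) {
    double l23 = std::log(p23), l24 = std::log(p24);
    // Upper: log[Perft(25)]/25 < log[Perft(24)]/24
    //   =>  Perft(25) < exp(25/24 * log[Perft(24)])
    double upper = std::exp(25.0 / 24.0 * l24);
    // Lower: {24*log[P25]}/{25*log[P24]} > {23*log[P24]}/{24*log[P23]}
    //   =>  Perft(25) > exp((25*23)/(24*24) * log[P24]^2 / log[P23])
    double lower = std::exp((25.0 * 23.0) / (24.0 * 24.0) * l24 * l24 / l23);
    std::printf("%.7e < Perft(25) < %.7e\n", lower, upper);
}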

For the lower bound calculation I used the same ratio as in my Perft(13) estimate for the game of chess. I tried other ratios but they gave worse results. My guess (seeing the trends) was that the true Perft(25) for checkers is closer to the lower bound than to the upper one. As I did not know what to do to improve this interval, I used the inverse interpolation that Mr. Reinhard proposed a few days ago; I obtained the following value (thanks, Reinhard):

ln[Perft(25)] ~ 39.6236719
Perft(25) ~ 1.6156306e+17

It is closer to the lower bound than to the upper one (just as I expected). So, taking this estimate obtained by inverse interpolation as the central value of the interval, and keeping the same lower bound as before:


1.6077835e+17 < Perft(25) < 1.6234776e+17

Perft(25) ~ 1.6156306e+17 ± 0.4857%


All known Perft(n) values for the game of checkers can be found here:

http://www.aartbik.com/MISC/checkers.html

---------------------------------------------------------------------------

One last thing: Mr. Edwards is not updating his results on https://public.me.com/chessnotation ... does anyone know if something has happened to him? It is strange that the one who started this topic does not post anymore. And does anyone know if Paul Byrne will finally compute the Perft(13) value in less than a month, as he said he would?

Thanks in advance.

Regards from Spain.

Ajedrecista.