Perft(13) betting pool

Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Summary of Monte Carlo results for perft(12) so far

Post by Daniel Shawul »

To give you a perspective on what is happening, look at this result for a depth-limited version with depth=4. The expansion finishes within a couple of iterations, maybe even within the first 1M. This is only the 19th iteration out of 100 and it already beats the best result I ever had; it is on its way to a variance of 1.1e12. That is for a perft computed 10x less often, from which the ratio of visits to each child is determined. You would expect that to increase the variance, since I was consuming 2x the amount of nodes, but the opposite happened.
Either the variance back-propagation has problems or this method significantly reduces the variance. I am badly in need of confirmation.

I will redo the experiment starting from a fully expanded depth=4 tree to see if the _little_ amount of splitting done here is the problem, but I doubt it.

Oh, by the way, using ply < 5 is like not using it at all with a 100M simulation limit; I am getting almost the same result.

Code: Select all

------------ 19 --------------
0 h2h3   1.789764e+015 +- 1.545989e+012 1.799388e+015 +- 6.372870e+011    562047      8457
0 h2h4   2.597051e+015 +- 1.838673e+012 2.627847e+015 +- 7.750630e+011    795004      9329
0 g2g3   2.474903e+015 +- 1.788144e+012 2.499154e+015 +- 7.428794e+011    751911      9345
0 g2g4   2.195935e+015 +- 1.655045e+012 2.209957e+015 +- 6.872392e+011    644141      9328
0 f2f3   1.531590e+015 +- 1.383416e+012 1.544761e+015 +- 5.679566e+011    450055      8457
0 f2f4   2.027310e+015 +- 1.643477e+012 2.079814e+015 +- 6.873335e+011    635168      8929
0 e2e3   7.137330e+015 +- 3.074284e+012 7.131212e+015 +- 1.238727e+012   2222535     13134
0 e2e4   7.243093e+015 +- 3.081155e+012 7.269241e+015 +- 1.238179e+012   2232483     13160
0 d2d3   4.566226e+015 +- 2.292394e+012 4.586694e+015 +- 9.377974e+011   1235774     11959
0 d2d4   6.307939e+015 +- 2.841795e+012 6.347772e+015 +- 1.149333e+012   1899092     12435
0 c2c3   2.734648e+015 +- 1.902546e+012 2.737344e+015 +- 7.814243e+011    851199      9272
0 c2c4   3.099784e+015 +- 1.997836e+012 3.114230e+015 +- 8.166066e+011    938599      9744
0 b2b3   2.381318e+015 +- 1.765490e+012 2.431944e+015 +- 7.361962e+011    732980      9345
0 b2b4   2.388839e+015 +- 1.767280e+012 2.395950e+015 +- 7.280229e+011    734465      9332
0 a2a3   1.801649e+015 +- 1.556979e+012 1.826304e+015 +- 6.556620e+011    570068      8457
0 a2a4   2.552336e+015 +- 1.824559e+012 2.564241e+015 +- 7.459759e+011    782845      9329
0 g1h3   2.109354e+015 +- 1.653995e+012 2.134619e+015 +- 6.822970e+011    643323      8881
0 g1f3   2.681181e+015 +- 1.811460e+012 2.702902e+015 +- 7.531710e+011    771646      9748
0 b1c3   2.703905e+015 +- 1.936624e+012 2.726999e+015 +- 7.965778e+011    881964      9755
0 b1a3   2.078523e+015 +- 1.681251e+012 2.108154e+015 +- 6.982210e+011    664701      8885
Perft 6.240268e+016 +- 8.988691e+012  6.283853e+016 +- 3.685740e+012
wasted 0.00
6.240545e+016 +- 8.966540e+012  6.283846e+016 +- 3.676064e+012
6.240805e+016 +- 8.944016e+012  6.283841e+016 +- 3.666372e+012
6.241133e+016 +- 8.922104e+012  6.283838e+016 +- 3.656735e+012
6.241562e+016 +- 8.900580e+012  6.283823e+016 +- 3.647164e+012
6.241830e+016 +- 8.878802e+012  6.283805e+016 +- 3.637569e+012
6.242108e+016 +- 8.856980e+012  6.283855e+016 +- 3.628065e+012
Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Summary of Monte Carlo results for perft(12) so far

Post by Michel »

Either the variance back-propagation has problems or this method significantly reduces the variance.
Is there a moveGen count somewhere in your tables? Without this it is impossible to compare results.

As to the correctness of the standard deviation: I would suggest simply repeating the test a number of times and comparing with the empirical standard deviation (and checking the z value to make sure the estimator is unbiased).

EDIT

I did a random check with your last result:
6.283855e+016 +- 3.628065e+012
This has a z value of 4.5. So surely something is wrong, either with the way you compute the standard deviation or with the bias of your estimator.
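(As a minimal sketch of this z-value check: it just compares the reported estimate with the known exact value perft(12) = 62,854,969,236,701,747 and divides by the reported standard deviation; the class name is only for illustration.)

Code: Select all

public class ZValueCheck {
    public static void main(String[] args) {
        // Known exact perft(12) from the initial position, used as reference.
        double truePerft12 = 62854969236701747.0;

        // Estimate and reported standard deviation from the quoted run.
        double estimate = 6.283855e16;
        double sd       = 3.628065e12;

        // z value: how many reported standard deviations the estimate is off.
        double z = (estimate - truePerft12) / sd;
        System.out.printf("z = %.2f%n", z);   // prints about -4.5
    }
}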
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Summary of Monte Carlo results for perft(12) so far

Post by Daniel Shawul »

OK, I will add a counter for the number of move generator calls. If I had to guess, it would be 100M * (12 - 4 + 1) = 900M moveGen calls, with full width fw = 4.
The visit proportions don't seem suspicious, do they?

The above result continues all the way to the 80th iteration with the mean still at 6.284, so it is no wonder the variance is lower; the mean just did not change that much. But it also means it is biased. Why, though? It seems I have to update the weights frequently to get an unbiased estimate. Peter could experiment with the same thing, I think; it would be interesting if we observe the same phenomenon.
Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Summary of Monte Carlo results for perft(12) so far

Post by Michel »

Peter could experiment with the same thing, I think.
But why? Peter's estimator is unbiased by _design_. We have been through this a hundred times.

If you cannot _prove_ your estimator is unbiased then it is _very likely biased_ and you should rethink the design.
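(For reference, the textbook argument for why a weighted random walk can be unbiased by design; this is a sketch of the general importance-sampling identity, not of any particular program in this thread. Suppose a node has children with true subtree counts P_1..P_k, child i is selected with probability p_i, the p_i may have been tuned on earlier walks but not on the current one, and the same p_i is used for re-weighting:)

Code: Select all

E[ \hat{P}_i / p_i ] = \sum_{i=1}^{k} p_i \cdot \frac{E[\hat{P}_i]}{p_i}
                     = \sum_{i=1}^{k} P_i = \text{perft of the node}

(provided each child estimate \hat{P}_i is itself unbiased. If the probability used in the division is not exactly the one that was used to select the child, for example a weight that was updated afterwards, the cancellation fails and bias can creep in.)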
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Summary of Monte Carlo results for perft(12) so far

Post by Daniel Shawul »

The problem is that you guys are stuck on competition and forget that what I am talking about applies to you too! Peter's design is UCT as far as I am concerned: he _splits_, uses _dynamic_ adjustments, has observed the same problem before, etc. And you still didn't answer my question about why it would be biased. If it is an implementation problem, it will be fixed soon.
I would like to concentrate on working out the details now. Then you can shout that I am wrong until you run the test yourself and see that I am not :)
cheers
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Summary of Monte Carlo results for perft(12) so far

Post by Daniel Shawul »

And just when you start shooting at me, I may have found the bug. I needed to keep a duplicate visits count when I decided to update one of the values less frequently; its count should have been visits / 10 for the less frequently computed perft. It is an obscure bug.

The bias test remains valid, though, because I didn't start using a second sequence of random numbers for nothing. You and Peter need to explain how you solve that problem. It doesn't make sense that I am forced to use that while you do not, when what you do is exactly the same. So I ask you here to show me how you solve the problem.
Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Summary of Monte Carlo results for perft(12) so far

Post by Michel »

Well I am not participating in any competition. I just have an academic
interest in knowing which method performs best.
has observed the same problem before, etc.
WITH A DIFFERENT METHOD
And you still didn't answer my question about why it would be biased.
The real question is: why would it be unbiased? An estimator is far more likely to be biased than unbiased. So if you cannot prove your estimator is unbiased, then it is very likely biased.

Other people have explained their methods in detail, but you never have. So you cannot expect people to help you.
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Summary of Monte Carlo results for perft(12) so far

Post by Daniel Shawul »

Well then, explain the method and we will see how different it is from UCT.
What caused the previous problem and how it got solved will follow next...
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Summary of Monte Carlo results for perft(12) so far

Post by Daniel Shawul »

This is what I got from Peter about his method, with my reply to each of his steps. Now is the time for you to speak, Michel. He even said his algorithm is quite similar to UCT...
Well, what you describe below is exactly UCT with Michel's formula, as far as I am concerned. I had difficulty testing his formula because it lacked the "visits" count for a node. Your step (3) decides whether to split a node without comparing it to its sibling nodes. To achieve that you needed more heuristics, since you do not compare with the siblings to pick the best; you just decide whether a node is worth splitting on its own.
The basic idea of UCT tree search over plain MC (without a tree) is to investigate (split) the nodes worth splitting, i.e. the bigger nodes for perft. When you do that you gain much of what UCT is meant for: the tree dynamically grows towards the bigger nodes. For tree search, for instance in an LMR tree, it grows towards the best-looking move. There are many methods to control how aggressively the tree grows.
I will go through your steps one by one:
Quote:


My algorithm is quite similar to UCT but it is not exactly like UCT. Here is what I do:
1. Start with an in-memory tree with only one node representing the root position.
I start with the root node expanded (20 moves), but that makes no difference. For a UCT search you eventually have to return a move to play, so it is usual to start with a split root node.
Quote:
2. Do a random walk starting from the root, until a leaf node is reached. The probabilities controlling which child to visit are calculated from data in the child nodes. This probably causes some cache misses, but as compensation I store moves in the tree nodes too, so I don't have to call the move generator until I reach a leaf node.
This is the Monte Carlo part. A light MC visits all children uniformly, while a heavy MC uses different heuristics that usually make it slower but could be better overall.
BTW, "heavy" and "light" are not my terminology; they are accepted terms in the Go programming field.
Quote:
3. When a leaf node is reached, I decide if it should be split. A node is split if there are fewer than 4M nodes in the tree, the node has been visited at least 20 times, the remaining depth is > 1, and the "cost" for this node is at least 90% of the largest cost for a leaf node seen during the last 1000 random walks. "cost" is defined as sqrt(mean^2 + variance).
This is the formula part that you do differently; you do not compare against the other siblings.
-> Your first criterion, 4M nodes, is there to not run out of memory. There are two solutions for this when used for tree search: (a) store less-visited parts of the tree on disk, or (b) prune parts of the tree. The latter is not applicable for perft because we need exact estimates.
-> Your second criterion, UCT has too; mine is at 1000 right now, I think, and lowering that should expand the tree more.
-> Your third criterion is Michel's formula, a heuristic to estimate the cost of a node: sqrt(mean^2 + variance).
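(A rough sketch of the split rule quoted above, with the 4M / 20 visits / 90% numbers as given; the field and constant names are guesses made only for illustration.)

Code: Select all

class SplitRule {
    static final long   MAX_TREE_NODES = 4_000_000; // memory budget
    static final int    MIN_VISITS     = 20;        // visits before a split is considered
    static final double COST_FRACTION  = 0.90;      // relative to the largest recent leaf cost

    // "cost" as quoted: sqrt(mean^2 + variance)
    static double cost(double mean, double variance) {
        return Math.sqrt(mean * mean + variance);
    }

    static boolean shouldSplit(long treeNodes, int visits, int remainingDepth,
                               double mean, double variance, double maxRecentLeafCost) {
        return treeNodes < MAX_TREE_NODES
            && visits >= MIN_VISITS
            && remainingDepth > 1
            && cost(mean, variance) >= COST_FRACTION * maxRecentLeafCost;
    }
}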
Quote:
4. If a leaf node is split, child nodes are created for all legal moves.
Same as UCT, and a major component, because this is where you decide to visit the bigger nodes; the tree will be guided towards them.
Quote:
5. If a leaf node is not split, I continue with a uniform probability random walk until the "perft" depth is reached. No hashing or tree node lookup is performed in this step. Just move generation and make/unmake moves.
Same. A light MC search. If you start using your previous static heuristic for captures or the HT method, it will become a heavy MC.
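(The light MC leg is the classic one-walk perft estimator: at every ply count the legal moves, multiply the counts together, and follow one move chosen uniformly at random. A minimal sketch, with stand-in Board/Move types that are assumptions for illustration, not any engine's real API.)

Code: Select all

import java.util.List;
import java.util.Random;

// Stand-in types for illustration only.
interface Move {}
interface Board {
    List<Move> legalMoves();
    void makeMove(Move m);
}

class UniformWalk {
    // One uniform random walk of 'depth' plies from 'board'.
    // The product of the move counts along the walk is an unbiased
    // single-sample estimate of perft(board, depth).
    static double estimate(Board board, int depth, Random rnd) {
        double product = 1.0;
        for (int d = 0; d < depth; d++) {
            List<Move> moves = board.legalMoves();
            if (moves.isEmpty())
                return 0.0;                // dead end: no leaves below this line
            product *= moves.size();
            if (d + 1 < depth)             // no need to move after the last counted ply
                board.makeMove(moves.get(rnd.nextInt(moves.size())));
        }
        return product;
    }
}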
Quote:
6. As the recursion unwinds, the mean and variance estimates are updated for all visited in-memory tree nodes.
Same. I do exactly that, and it is easy for perft, but this back-propagation step is more difficult for tree search. Rémi Coulom had a complex backup operator before UCT was invented.
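(One common way to do this incremental bookkeeping is Welford's running mean/variance update; a small sketch of what the per-node statistics could look like. The field names are illustrative and not taken from either program.)

Code: Select all

class RunningStats {
    long visits;
    double mean;  // running mean of the per-walk estimates through this node
    double m2;    // sum of squared deviations from the mean

    // Welford's online update with one new per-walk perft sample.
    void add(double sample) {
        visits++;
        double delta = sample - mean;
        mean += delta / visits;
        m2 += delta * (sample - mean);
    }

    // Sample variance of the individual walk estimates.
    double variance() {
        return visits > 1 ? m2 / (visits - 1) : 0.0;
    }

    // Variance of the mean, i.e. what the +- columns in the tables are based on.
    double varianceOfMean() {
        return visits > 1 ? variance() / visits : 0.0;
    }
}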
Quote:
7. A single perft estimate computed from the just completed random walk is logged. If 1000 such values have been accumulated, print the average to standard output.

8. Go back to step 2.

Quote:

I optimized my implementation a bit, and now it takes around 57 minutes on a 2.4 GHz Core2 CPU (using only one core) to generate 100e6 tree walks for perft(12). The code currently spends over 60% of the CPU time in a function that removes illegal moves from a pseudo-legal move list. I think this function can be optimized quite a lot though. Currently it just plays all moves and tests which moves leave the king in check.
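(In outline, the filter described is the naive make/test/unmake loop below; the Position/Move interfaces are stand-ins for illustration and not CuckooChess's actual classes.)

Code: Select all

import java.util.ArrayList;
import java.util.List;

// Stand-in types for illustration only.
interface Move {}
interface Position {
    void makeMove(Move m);
    void unMakeMove(Move m);
    boolean ownKingInCheck();  // did the side that just moved leave its king attacked?
}

class LegalFilter {
    // Play every pseudo-legal move and keep it only if the own king is not
    // left in check. This is the straightforward, unoptimized version
    // described above.
    static List<Move> removeIllegal(Position pos, List<Move> pseudoLegal) {
        List<Move> legal = new ArrayList<>();
        for (Move m : pseudoLegal) {
            pos.makeMove(m);
            if (!pos.ownKingInCheck())
                legal.add(m);
            pos.unMakeMove(m);
        }
        return legal;
    }
}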

The source code is here: http://web.comhem.se/petero2home/PerfTE ... 4.java.txt. If you actually want to run it, you also need BitBoard.java from CuckooChess.

I think comparison by number of simulations should be fine. I was under the impression that it was the HT version you tested, in which case time tests would be more appropriate.
petero2
Posts: 684
Joined: Mon Apr 19, 2010 7:07 pm
Location: Sweden
Full name: Peter Osterlund

Re: Summary of Monte Carlo results for perft(12) so far

Post by petero2 »

petero2 wrote: I am currently running Peter1 with fw=6. I will report on that later when I have enough data.
I have now finished two runs of "Peter1" with fw=6, and two runs of "Peter2" with 8 million tree nodes. Together with data from Michel, the table now looks like this:

Code: Select all

Full width + uniform random walk (Peter1)
perf fw movGen  cycles m           sd        dev   sd*sqrt(movGen)  t (s) sd^2*t (s)
12   3  5.0e8     6210 6.285323e16 8.8445e12 -0.20 1.9777e17          746 5.8356e28
12   3  5.0e8     6210 6.284147e16 8.5806e12 -1.57 1.9187e17          828 6.0963e28
12   4  5.0e8      316 6.285847e16 6.8600e12  0.51 1.5339e17          768 3.6142e28
12   5  5.1e8       15 6.285046e16 4.9581e12 -0.91 1.1239e17          886 2.1778e28 # run 1
12   5  2.0e9       59 6.285395e16 2.8230e12 -0.36 1.2691e17         3496 2.7862e28 # run 1 continued
12   6  1.4e10      19 6.285489e16 7.6647e11 -0.10 8.9600e16        23780 1.3970e28
12   6  1.6e10      22 6.285408e16 5.3089e11 -1.68 6.6781e16        27558 7.7670e27

Tree growth with weighted random walk + uniform random walk (Peter2)
perf fw movGen  cycles m           sd        dev   sd*sqrt(movGen)  t (s) sd^2*t (s)
12   4M 5.00e8   69715 6.285204e16 4.6363e12 -0.63 1.0367e17         1357  2.9169e28
12   4M 5.00e8   69711 6.285458e16 4.6198e12 -0.08 1.0330e17         1442  3.0775e28
12   8M 5.00e8   70843 6.285537e16 4.5940e12  0.09 1.0273e17         1424  3.0071e28 # run 1
12   8M 1.83e9  263038 6.285148e16 2.2094e12 -1.58 9.4539e16         5047  2.4635e28 # run 2
12   8M 4.38e9  631478 6.285437e16 1.3806e12 -0.43 9.1392e16        12091  2.3047e28 # run 2 continued

Daniel's UCT-based method
perf fw movGen  cycles m           sd        dev   sd*sqrt(movGen)  t (s) sd^2*t (s)
12   -  -            - 6.284728e16 3.8377e12 -2.00 -                -     -

HGM's full width + uniform move acceptance probability 1/32
perf fw movGen  cycles m           sd        dev   sd*sqrt(movGen)  t (s) sd^2*t (s)
12   3  5.00e8   14022 6.286672e16 1.4147e13  0.83 3.1635e17        -     -
12   4  5.01e8     588 6.284380e16 1.0986e13 -1.02 2.4565e17        -     -
12   5  5.01e8      24 6.286130e16 7.4099e12  0.85 1.6569e17        -     -
12   6  5.10e8       1 6.284515e16 6.7244e12 -1.46 1.5036e17        -     -

Michel (Peter1)
perf fw movGen  cycles m           sd        dev   sd*sqrt(movGen)  t (s) sd^2*t (s)
12   4  5.08e8       - 6.284887e16 6.6194e12 -0.92 1.4917e17         1383 6.0598e28
Some observations

8 million tree nodes is more efficient than 4 million nodes, but the time it takes to learn the optimal weights is significant. After 5e8 moveGens, there is no significant advantage compared to 4M tree nodes.

The "Peter1" method with fw=6 seems to be the most efficient method so far, even though the uncertainty is high because of the small number of cycles. I suppose "Peter2" could get similar results, but it probably requires many more tree nodes than I can fit in memory, and a very long learning time.

It currently seems like the "Peter1" method is more promising than "Peter2". I think my next test will try to improve "Peter1" by using a non-learning non-uniform weight function, something like this: http://talkchess.com/forum/viewtopic.ph ... 4&t=39678&.