Sorry, but too much sour grapes! I have already heard from both you and Michel how tree growth is "not workable", "not worth it", etc... So let me get things straight once and for all.

petero2 wrote:
> Michel wrote:
> > My conjecture is that the correct measure to decide splitting is what I call the "defect":
> >
> > ```
> > sqrt(mean^2 + variance) - mean
> > ```
> >
> > where the "cost" is defined as sqrt(mean^2 + variance).
>
> I tested this method and a method that has the much simpler node expansion criterion "ply<5". This means that the in-memory tree will very quickly grow to a tree with exactly perft(5) leaf nodes. Here are the standard deviations I measured after 100e6 random walks:
>
> ```
> split condition    stddev
> sqrt(m^2+var)      3.9826e+12
> sqrt(m^2+var)-m    3.9805e+12
> ply<5              3.9642e+12
> ```
>
> Maybe starting from a more tactical position would be different.
>
> I don't know if the differences are significant, but it does seem to support your idea that tree growing is overkill for this problem, at least when you start from the initial chess position.
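For concreteness, the two splitting measures quoted above can be written out directly. This is a sketch of my own: the function names `cost` and `defect` are taken from Michel's wording, and `mean`/`variance` refer to a node's subtree estimate:

```python
import math

def cost(mean, variance):
    # "cost" as quoted above: sqrt(mean^2 + variance)
    return math.sqrt(mean * mean + variance)

def defect(mean, variance):
    # "defect" = cost - mean; it is zero when the variance is zero,
    # i.e. when splitting the node cannot reduce the variance further
    return cost(mean, variance) - mean

print(cost(3.0, 16.0), defect(3.0, 16.0))  # 5.0 2.0
```

Both measures rank high-variance nodes first; the defect just subtracts the mean so that a zero-variance node scores exactly zero.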

Even _dynamic_ weighting is not your or Michel's idea, let alone the tree-growing algorithm. You, Michel and everyone else were engrossed in static ways of determining weights for the MC part of the tree, while I was consistently saying a magician is required. Well, we tried with the static capture heuristic and didn't get much, did we? Then I explicitly replied to you, mentioning _twice_ how I can get better results with dynamic proportioning and UCT. How else can you get dynamic weights if you don't store the results of the tree itself somehow? Hashtable, tree, etc. is all semantics! So indeed dynamic weights are also an idea derived from UCT.

This post is in reply to **you**, regarding your bad results with the static capture heuristic:

> The reason why I wanted you to start from the root is because within the tree we are using _uniform_ sampling. One way to decrease the variance could be using an upper confidence bound, by saving the first 1-3 plies in memory. I mentioned the following formula a while back that is used for UCT; I am sure it can be modified to give much better results than any heuristic, since this one learns the weights dynamically. But it requires complicated programming. Select the node with the maximum X value:
>
> ```
> X = Xmean + Constant * sqrt( log(parent_nodes) / child_nodes )
> ```
>
> The second term, which is the variance term, is meant for a mean score between 0 < X < 1, but I am sure it can be modified for our case. Anyway, even simple proportioning based on the mean node counts in the tree, i.e. this _dynamic_ adjustment, should be better than any heuristic.
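Dropped into code, that selection rule might look like the sketch below. This is my own illustration, not code from the thread; `children` holds `(mean, visits)` pairs, and the means are assumed to be already scaled to roughly 0..1 as the formula expects:

```python
import math

def ucb_score(mean, parent_visits, child_visits, constant=1.0):
    # X = Xmean + Constant * sqrt( log(parent_nodes) / child_nodes )
    return mean + constant * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children, constant=1.0):
    """Pick the index of the child with maximum X.
    `children` is a list of (mean_estimate, visit_count) pairs."""
    parent_visits = sum(visits for _, visits in children)
    return max(range(len(children)),
               key=lambda i: ucb_score(children[i][0], parent_visits,
                                       children[i][1], constant))

# Equal means: the less-visited child wins through the exploration term.
print(select_child([(0.5, 10), (0.5, 2)]))  # 1
```

The second term shrinks as a child accumulates visits, so sampling effort automatically drifts toward under-explored (or high-mean) children.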

So please tell me: did you come up with the dynamic weighting by yourself?

Then I followed it up with the whole description in my next post. Again, you were still doing static tests.

We gave you credit for modifying HG's method into a pure MC one, even though MC is a well-known algorithm. This is an achievement, as it made things simpler, at least for me... Here is the idea for this kickass algorithm. You can say it is UCT for perft, but I haven't actually implemented it, so it can all be for nothing if I made a mistake in the planning somewhere:

```
a) Start from the root and do 10 Monte Carlo simulations for each move
b) Pick the best node to expand using the UCB formula I provided above
c) Expand the selected node, which will be one of the biggest nodes with the right formula
d) Do this recursively for the children of that big node
```

So this algorithm explores the bigger nodes more often, while the smaller nodes will have their estimates from a small number of simulations. No need for a heuristic, no need for fixed-depth expansion, unlike what we are doing now, i.e. fixed ply 3-6 with uniform sampling.
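Steps a)-d) can be sketched end to end. This is a toy illustration of my own, not a tested implementation: a uniform tree with branching factor `B` and depth `D` stands in for the chess move generator, and normalizing the means by the sibling total for the UCB term is an assumption on my part:

```python
import math
import random

# Toy stand-in for chess: a uniform tree with branching factor B and
# depth D. A real implementation would use a position and its legal moves.
B, D = 3, 4

def legal_moves(ply):
    return list(range(B)) if ply < D else []

def random_walk(ply):
    """One unbiased perft estimate: the product of the branching factors
    seen along a uniformly random path down to the horizon."""
    estimate = 1.0
    moves = legal_moves(ply)
    while moves:
        estimate *= len(moves)
        random.choice(moves)        # the move we "play"
        ply += 1
        moves = legal_moves(ply)
    return estimate

class Node:
    def __init__(self, ply):
        self.ply, self.children, self.visits, self.mean = ply, [], 0, 0.0

    def record(self, value):        # running mean of simulation results
        self.visits += 1
        self.mean += (value - self.mean) / self.visits

    def expand(self, sims=10):      # a) 10 Monte Carlo sims per move
        for _ in legal_moves(self.ply):
            child = Node(self.ply + 1)
            for _ in range(sims):
                child.record(random_walk(child.ply))
            self.children.append(child)

    def estimate(self):             # perft estimate: sum over children
        return sum(ch.estimate() for ch in self.children) or self.mean

def grow_once(node, c=1.0):
    """b)-d): descend by UCB to the frontier, then expand that node."""
    while node.children:
        total = sum(ch.mean for ch in node.children) or 1.0
        visits = sum(ch.visits for ch in node.children)
        node = max(node.children,   # bigger nodes are explored more often
                   key=lambda ch: ch.mean / total
                   + c * math.sqrt(math.log(visits) / ch.visits))
    if legal_moves(node.ply):
        node.expand()

root = Node(0)
root.expand()
for _ in range(20):
    grow_once(root)
print(root.estimate())              # 81.0, i.e. exactly B**D on this toy
```

On the uniform toy tree every random walk returns the exact subtree count, so the estimate is exact no matter where the tree grows; on a real chess tree the walks are noisy and the UCB descent is what concentrates the simulations in the big subtrees.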

If I have to write out my imagination, this algorithm would be so good as to be the best. But I could be wrong somewhere. Opinions?