Rein Halbersma wrote: Sven Schüle wrote:
If I assume your "depth" is the maximum search depth of an iteration and "ply" the distance of a node to the root node then I can follow you, except for one question: What does "overall EBF of a tree" mean for you?
Yes, depth = maximal remaining depth, i.e. the iterative deepening parameter, and ply = distance to root. With "overall EBF" I simply mean EBF(depth) = nodes(depth) / nodes(depth-1), i.e. the overall growth rate in nodes going from one depth iteration to the next. You can also define share(depth-1, ply) = nodes(depth-1, ply) / nodes(depth-1) as the share of nodes at a given ply level for the previous depth iteration, and leafs(depth) as the number of nodes encountered at the maximal ply level during the current depth iteration. It's then a simple exercise to show that the overall EBF decomposes into
Code: Select all
EBF(depth) = leafs(depth) / nodes(depth-1) + sum_{ply = 0 to depth-1} EBF(depth, ply) * share(depth-1, ply)
This means that the overall growth rate in nodes (EBF) is the ratio of leaf nodes to the previous total nodes, plus the weighted sum of the per-ply EBFs at all levels above the leafs, where the weight per ply is that ply's share of the previous total nodes. For a completely regular tree, the EBF per ply is equal to 1 (nothing changes from one depth iteration to the next), and the shares all add up to one. So for a regular tree, the overall EBF is simply 1 plus the ratio of leaf nodes to previous total nodes.
Obviously qsearch and search extensions complicate this formula a bit, since leaf nodes then occur at different ply levels, but the principle remains the same: EBF decomposes into growth of leafs and "inner growth", aka "widening" or "thickening", depending on your taste.
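To make the decomposition concrete, here is a small self-contained check; the per-ply node counts are invented, purely to illustrate that the identity holds:
Code: Select all
#include <cstdio>
#include <vector>

int main() {
    // Toy node counts per ply for two consecutive iterations;
    // the last entry of each vector is that iteration's leaf level.
    std::vector<double> prev = {1, 20, 90, 300, 800};        // nodes(depth-1, ply)
    std::vector<double> curr = {1, 25, 120, 450, 900, 2000}; // nodes(depth, ply)

    double prevTotal = 0, currTotal = 0;
    for (double n : prev) prevTotal += n;
    for (double n : curr) currTotal += n;

    // Left-hand side: overall EBF(depth) = nodes(depth) / nodes(depth-1)
    double ebf = currTotal / prevTotal;

    // Right-hand side: leaf term plus the weighted per-ply EBFs
    double rhs = curr.back() / prevTotal; // leafs(depth) / nodes(depth-1)
    for (size_t ply = 0; ply < prev.size(); ++ply)
        rhs += (curr[ply] / prev[ply])    // EBF(depth, ply)
             * (prev[ply] / prevTotal);   // share(depth-1, ply)

    std::printf("EBF(depth) = %.6f, decomposition = %.6f\n", ebf, rhs);
}
Both printed numbers agree, as the algebra above guarantees.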
Furthermore, I repeat a proposal made by Gerd Isenberg a few months ago: use an improved definition of EBF that addresses the "odd/even ply" properties of an alpha-beta search tree and avoids EBF numbers jumping up and down between subsequent iterations:
Code: Select all
EBF(iterationNo) := sqrt(nodes(iterationNo) / nodes(iterationNo - 2))
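As a sketch in code (the function and parameter names are mine, not from any engine):
Code: Select all
#include <cmath>

// Odd/even-corrected EBF: compare against the iteration two plies back
// (same side to move), then take the square root to get a per-ply rate.
double smoothedEBF(long long nodesCurr, long long nodesTwoBack)
{
    return std::sqrt(double(nodesCurr) / double(nodesTwoBack));
}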
Another recent interesting thread about EBF was this one.
The previous formula for the decomposition of EBF has an economics analogue in the GDP growth index from one year to the next, which decomposes into the weighted growth indices of goods that already existed the previous year, plus a term accounting for the availability of new goods (i.e. the leafs in tree search). Gerd's square-root correction is an example of seasonal adjustment: GDP growth indices are often compared against the same quarter or month of the previous year (analogous to the depth-2 comparison, which corrects for the same side to move). This makes them more reliable but also complicates their decomposition a bit.
Back to your summary of arguments, you wrote:
The claim is that for fixed ply, EBF(depth, ply) is increasing in depth, simply because LMR and NMR are reducing more heavily at lower remaining depth.
Are you sure about NMR in this case? Don't we reduce null moves more at higher remaining depth? Basic Stockfish formula (taken from 2.0.1), for instance, is
Code: Select all
// Null move dynamic reduction based on depth
int R = 3 + (depth >= 5 * ONE_PLY ? depth / 8 : 0);
which means R = 3 for remaining depth < 5 and R = 3 + (remaining depth)/8 otherwise, i.e. more reduction at higher remaining depth.
Maybe I misunderstood your point?
Sven
Ah but no! You have to look at remaining depth-R, not just R.
Code: Select all
nullValue = -search<NonPV>(pos, ss+1, -beta, -alpha, depth-R*ONE_PLY);
If you look carefully at the Stockfish formula, then going from, say, remaining depth = 7 to depth = 8, the depth of the null move search actually stays the same! However, there is a strong effect from the verification search, which gets skipped for remaining depth < 6, so NMR effectively reduces more heavily at lower remaining depth.
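Tabulating the 2.0.1 formula makes that plateau visible; the sketch below takes ONE_PLY = 1 purely for readability (the plateau works out the same with fractional plies):
Code: Select all
#include <cstdio>

// Tabulate the Stockfish 2.0.1 null-move reduction, with ONE_PLY = 1.
// What matters for the null move search is the reduced depth, depth - R.
int main() {
    for (int depth = 5; depth <= 12; ++depth) {
        int R = 3 + (depth >= 5 ? depth / 8 : 0);
        std::printf("depth %2d  R %d  null-search depth %d\n",
                    depth, R, depth - R);
    }
}
The output shows the null-search depth stuck at 4 for both depth 7 and depth 8, exactly the effect described above.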
Anyway, my point is that I agree with Bob that EBF is a simple statistic, and I find it strange that people can get confused about it. Where I disagree with Bob is that I think the overall EBF is not a useful statistic, because of the high degree of irregularity of search trees; having EBF(depth, ply) available at all ply levels in the tree is much more informative, because it shows how the effects of reductions such as LMR differ across different parts of the tree.
Rein
I am not sure whether EBF is useful or not. But it is a measure, and it offers some interesting data. For example, suppose you search plies 1 to 24 and for each succeeding ply your EBF is 2.0, but at ply 25 it goes well beyond 2.0 before the iteration even finishes. What does this suggest? Something pathological is happening in the tree. Move ordering is breaking down, perhaps, because suddenly you are deep enough to expose a tactical issue that is (a) difficult to defend against and (b) not visible at the previous depth.
Hsu mentioned something similar that he used to trigger "panic time" in Deep Thought / Deep Blue. I never had a chance to ask him about the idea in more detail, but any sudden shift in the shape of the tree suggests that something is up...
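Hsu's actual trigger isn't spelled out here, but the general idea is easy to sketch; the threshold factor and all names below are invented for illustration, not Deep Blue's actual rule:
Code: Select all
// Hedged sketch of an EBF-based "panic time" trigger: if the current
// iteration is growing much faster than the established trend, something
// tactical may be surfacing, so allocate extra search time.
struct IterationStats { long long nodes[64]; }; // nodes searched per iteration

bool panicTriggered(const IterationStats& s, int depth, double normalEBF)
{
    if (depth < 2 || s.nodes[depth - 1] == 0)
        return false;
    double ebf = double(s.nodes[depth]) / double(s.nodes[depth - 1]);
    return ebf > 1.5 * normalEBF;   // factor 1.5 is an arbitrary example
}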
And yes, the tree is _highly_ irregular. Perhaps a couple of descriptions will help.
(1) Take a position where the pawns are locked (all 8 for both sides), all white pawns are on white squares, and all black pawns are on black squares. White has a king and 3 white-squared bishops; Black has a king and 3 black-squared bishops. So no checks, no extensions, and no captures possible. The search tree will end precisely at the advertised search depth. If you do a 40 ply search in that position, no branches will extend beyond 40, as there are no extensions that apply, and no q-search since no captures are possible. If you draw that tree, root at the bottom, you get an inverted pyramid shape. But the density falls off near the tips, because at the last N plies (4 for Crafty) futility pruning outright throws away moves that look hopeless. In this position, however, no futility pruning is possible. So why does the density drop off? The reductions shorten many branches to well below the nominal depth.
What happens if you go a ply deeper? Some of the branches go deeper as well, some do not. The top edge of the pyramid gets wider because it gets taller and this is a tree.
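For reference, the futility pruning Bob mentions above can be sketched generically like this; the margins and names are invented for illustration and are not Crafty's actual code:
Code: Select all
// Generic futility pruning near the leaves: skip quiet moves whose
// static evaluation plus a depth-dependent margin cannot reach alpha.
bool futilityPrune(int depth, int staticEval, int alpha,
                   bool inCheck, bool isCapture)
{
    static const int margin[5] = {0, 100, 200, 300, 500}; // per remaining depth
    if (depth >= 5 || inCheck || isCapture)
        return false;                     // only quiet moves in the last few plies
    return staticEval + margin[depth] <= alpha; // hopeless: throw the move away
}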
(2) A more typical tree looks like two pyramids grafted together base-to-base. The root is at the bottom, the tree widens up to the typical nominal depth, then narrows sharply, since there are far fewer captures/checks than regular moves. A few branches go way out, some stop exactly at the nominal depth (where no captures are possible), and some go a few plies beyond. And each distinct path from root to tip can have a different length and consist of different numbers of normal nodes and capture-only nodes.
I don't like the "widening" term as it is being used, because ANY tree with a branching factor > 1 is going to get wider as it goes deeper. This is an exponential process. The normal use of "wider" refers to width, the W part of W^D. Going wider implies that you somehow look at more moves at each ply than before, _NOT_ that you are just adding another exponential layer of growth onto the tree. That is not happening in any program I have seen. Yes, when you go from depth 20 to depth 21, one ply in the tree becomes "wider" because you push the forward pruning one ply deeper, making the ply just before the pruning kicks in switch from selective to full-width.
That's not traditional "widening" in any AI / tree-search context I am aware of; that is just a property of growing a tree, where, as the tree grows taller, it also grows wider as a result, at least until you get the real BF (not EBF) down to exactly 1 (or even less). We are not there in general, although I have certainly found positions where the EBF goes below 1.0. Fine 70 is an example, since you can eventually trace every path to an EGTB hit, and at that point the tree stops growing no matter how many additional plies you add.