bob wrote:
> Why are you stopping at ply=26? Feel free to go to ply 50.

Zenmastur wrote:
> What would be the point? The root move will not change after ply 26 if your q-search is working properly. The q-search should discover that the f5 pawn is free at ply 26, there is no larger piece that can be captured, and no earlier search can force the loss of the pawn. Therefore nothing but an erroneous search result can change the root move after ply 26.

The PV array is used to seed the TT at the start of every iteration. But if something doesn't fit and gets overwritten, that would be a deal-breaker.
bob wrote:
> Here are the actual counts.
> ply=26 stores=36840 overwrites=2697
> ply=30 stores=66755 overwrites=6817

Just as I suspected. At 26 plies you have 36,840 stores and only 2,697 of them are overwrites. This means that AT MOST the engine saw 6,793 distinct positions (the 4,096 table entries plus the 2,697 overwrites). Not exactly what I would describe as a highly over-subscribed TT. In fact, most moves made during TCEC suffer from a greater over-subscription factor than this (i.e. 6,793/4,096 = ~1.66).
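Counts like these can be gathered with two counters in the TT store routine. The following is a minimal sketch under the definition given in this thread (an overwrite is a store on top of an entry whose signature does NOT match); the names `tt_entry`, `tt_store`, and `TT_SIZE` are illustrative, not Crafty's actual code, and a real table would use multi-entry buckets rather than one entry per slot:

```c
/* Minimal sketch of store/overwrite counting. Illustrative names;
 * NOT Crafty's actual code. Direct-mapped table for simplicity. */
#include <stdint.h>

#define TT_SIZE 4096              /* entries; power of two for masking */

typedef struct {
    uint64_t sig;                 /* full hash signature, 0 = empty */
    int16_t  score;
    int8_t   depth;
} tt_entry;

static tt_entry table[TT_SIZE];
static long stores, overwrites;

/* Store one result; bump the overwrite counter only when we clobber
 * a DIFFERENT position (signature mismatch on a non-empty slot).    */
void tt_store(uint64_t sig, int score, int depth)
{
    tt_entry *e = &table[sig & (TT_SIZE - 1)];
    stores++;
    if (e->sig != 0 && e->sig != sig)
        overwrites++;
    e->sig   = sig;
    e->score = (int16_t)score;
    e->depth = (int8_t)depth;
}
```

Note that re-storing the same position (same signature) counts as a store but not as an overwrite, which is why stores can far exceed distinct positions.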
It's actually very unlikely that the engine saw 6,793 unique positions. It probably saw about 5,200, just like I said. The rest only "appear" to be unique, as I explained here:

Zenmastur wrote:
> ...Third, in an over-subscribed TT, entries will be overwritten. If the same position is searched again, a very likely proposition in an iterative-deepening search, it will be written to the TT again, but it will appear as if it's unique data when in fact it isn't. This becomes a huge problem, over-reporting unique writes...

It probably saw approximately 5,200 positions, just like I said, but only had room for 4,096. About 1,100 or so were overwritten during the course of 26 iterations and any re-searches done due to fail-high/fail-low conditions. When the nodes that were overwritten were seen again in the next iteration, they "appeared" to be unique because they didn't match anything in the bucket they were being written to. Just like I said!
bob wrote:
> ...(note, overwrites means storing on top of an entry where signatures do NOT match).
> ply=35 stores=184757 overwrites=64631
> ply=40 stores=1000538 overwrites=654789
> ply=45 stores=4729806 overwrites=3698845
> ply=50 stores=42M overwrites=36M
>
> Does that prove a thing? Only that my tree is not blowing up, as the branching factor remains consistent. If the TT were getting swamped by shallow entries, the search would die.

I highly doubt your last statement. As I recall, you have a triangular array that stores the PV at every iteration. This array is undoubtedly larger than a 4,096-entry TT. While I haven't looked to see exactly how and when it's used, my suspicion is that if the entire TT were cleared between iterations, the previous iteration's PV would still be in the triangular array to guide the search. This array constitutes a large secondary cache of useful data for the engine to draw upon.

After ply 26 nothing should ever cause it to change its root move. So as far as I'm concerned it makes no difference if you end the search at 26 plies or continue forever. The game should be decided by the q-search at ply 26, in which it should be determined that white wins a black pawn without compensation.
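For readers unfamiliar with it, the triangular PV array works roughly as in this generic sketch; the names `pv`, `pv_len`, `pv_update`, and `MAX_PLY` are illustrative, not Crafty's:

```c
/* Generic triangular PV array sketch (illustrative, not Crafty's
 * code). MAX_PLY*MAX_PLY move slots: each ply holds a PV tail. */
#include <string.h>

#define MAX_PLY 64
typedef int move_t;

static move_t pv[MAX_PLY][MAX_PLY];
static int    pv_len[MAX_PLY];

/* When 'move' improves alpha at 'ply', the PV at this ply becomes
 * 'move' followed by the best continuation collected at ply+1.     */
void pv_update(int ply, move_t move)
{
    pv[ply][0] = move;
    memcpy(&pv[ply][1], &pv[ply + 1][0],
           pv_len[ply + 1] * sizeof(move_t));
    pv_len[ply] = pv_len[ply + 1] + 1;
}
```

At the end of an iteration, `pv[0][0..pv_len[0]-1]` holds the full principal variation, which is the data available to seed the next iteration.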
bob wrote:
> Somehow you are overlooking some SERIOUS math flaws.
I think it's more likely that you're misinterpreting the data you presented. My counts at 26 plies might not be perfect, but I don't think they're in serious error.
bob wrote:
> The shallowest positions are not "the fewest in number". This is an exponential tree. Those nodes near the tip are the huge majority of total positions searched. How does this slip by your analysis???

I think you are confusing a shallow position with a leaf node. A shallow position is near the root, not near a leaf. So in a search, a shallow position would be from ply 1 to some number less than half the search depth. You seem to be thinking I meant a node near the leaf nodes.
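The exponential-tree arithmetic itself is not in dispute, only which nodes count as "shallow". Assuming an idealized uniform branching factor, the deepest ply alone outnumbers all shallower plies combined, which is the point behind "nodes near the tip are the huge majority":

```c
/* Node counts in a uniform b-ary tree (idealized: real search trees
 * are not uniform, so this only illustrates the growth rate). */
#include <stdint.h>

/* Nodes at exactly depth d: b^d. */
uint64_t nodes_at(uint64_t b, int d)
{
    uint64_t n = 1;
    while (d-- > 0)
        n *= b;
    return n;
}

/* Nodes at all depths 0..d-1, i.e. everything shallower than d. */
uint64_t nodes_above(uint64_t b, int d)
{
    uint64_t total = 0;
    for (int k = 0; k < d; k++)
        total += nodes_at(b, k);
    return total;
}
```

For example, with b = 2 and d = 10, the deepest ply holds 1,024 nodes while plies 0 through 9 together hold only 1,023; for b >= 2 the last ply always wins, since b^d > (b^d - 1)/(b - 1).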
bob wrote:
> My point is this, and only this. You fix the boundary between the two tables wherever you want. I claim, and can probably prove with some coding changes, that wherever you fix that boundary, you prevent transpositions from occurring across that boundary.

I don't want you to go to the trouble if you have other things you would rather attend to. Even so, I don't think it can be proven, but if you attempt to prove it I would love to see your data, as well as the conditions under which the experiment(s) were performed.
bob wrote:
> And I will remind you these trees are NOT searched to a fixed depth. In Crafty, null-move R can range up to 8 or beyond, and the LMR reductions are about the same. So a path can have only 6 moves in it and represent a 30-40 ply search from the root. And it can transpose easily...

I've been aware since the very beginning that various reductions can greatly affect the depth of searches in different branches. This does not worry me in the slightest. I doubt that this will be a problem, since these are reductions to the search depth. I believe this will be handled nicely, but just in case it's a problem, I have an easy fix worked out.
Regards,
Zen
I don't think anyone really needs a proof to show that a position from the first three plies can be reached via a transposition at the last three plies. I can do one of those by hand. And when you figure that many of the searches are quite shallow, not allowing transpositions across the boundary is a problem.
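One of those by-hand examples: with Zobrist-style hashing, the position key is an XOR of piece-square codes, so it is independent of the move order that produced the position. Thus 1.e4 d5 2.d4 and 1.d4 d5 2.e4 hash identically, and a table split at a ply boundary could fail to connect the two paths. The key constants below are arbitrary stand-ins for real Zobrist randoms:

```c
/* Toy Zobrist illustration: XOR makes the key order-independent.
 * The constants are arbitrary stand-ins, not real engine values.  */
#include <stdint.h>

/* assumed toy keys for "white pawn e4", "white pawn d4", "black pawn d5" */
#define WP_E4 0x9a3c5d7e1f2b4a6dULL
#define WP_D4 0x1d4f6b8a2c3e5079ULL
#define BP_D5 0x7b2e9c4d6f8a1035ULL

/* key after 1.e4 d5 2.d4 */
uint64_t key_order_a(void) { return WP_E4 ^ BP_D5 ^ WP_D4; }

/* key after 1.d4 d5 2.e4 -- same position reached by another path */
uint64_t key_order_b(void) { return WP_D4 ^ BP_D5 ^ WP_E4; }
```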
But more significantly, the OTHER direction is just as important with grafting, which is how programs find the solution to Fine #70 well before they have actually searched 26 plies deep.
For Fine #70, here is what happens (note that if you have REALLY good move ordering this will not happen, and you will require the full 26+ plies to solve this position).
You search starting with Kb1, and your opponent plays poor moves, allowing you to reach a position somewhere deeper where you force the win of the pawn. Now your opponent plays a better defense, and you eventually reach that same position, but too deep in the tree to be able to see the forced win of the pawn, UNLESS you fetch it from the TT. This tree grafting is an important part of our programs' ability to solve really deep endgames. It is reasonably important that the ENTIRE tree be visible through the TT, throughout the search. Reasonable replacement strategies make this work extremely well.
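The grafting described above falls out of the ordinary probe rule: a stored result is usable whenever its draft (the depth it was searched to) covers the depth still needed at the current node. So an entry created high in the tree, with a large draft, can satisfy a node much deeper in the tree that needs only a few more plies. A minimal sketch, with an assumed entry layout rather than Crafty's actual one:

```c
/* Probe rule that enables tree grafting. Entry layout and names are
 * illustrative assumptions, not any engine's actual code.           */
#include <stdint.h>

typedef struct {
    uint64_t sig;     /* position signature                       */
    int      draft;   /* depth the stored result was searched to  */
    int      score;
} tt_entry;

/* Return 1 and fill *score when the stored entry matches this
 * position AND was searched at least as deep as we still need.    */
int tt_probe(const tt_entry *e, uint64_t sig, int depth_needed, int *score)
{
    if (e->sig != sig || e->draft < depth_needed)
        return 0;
    *score = e->score;
    return 1;
}
```

An entry stored near the root with draft 20 can be reused at a node 30 plies deep that only needs 3 more plies of search, which is exactly the graft that lets Fine #70 fall before a full 26-ply search completes.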
If you believe there is a flaw in current programs, that's a claim you need to take on and prove. It is so counter to what I know about tree searching (and apparently to others as well, since they seem to agree with my methodology) that it really doesn't attract my interest for testing...