bob wrote: I think that is the crux of the problem. Fail highs are very "local" in nature most of the time, as in refuting a blunder. So counting them leads to large numbers in the history table, yet those numbers don't mean anything outside the local area of the tree where they were set.
On the other hand, counting only PV nodes (in a PVS framework) has exactly the opposite problem. There are so few PV nodes, particularly when things are going well with move ordering, that there would be no useful information in the history counters either.
Hence my decision to stop using them. I even tracked cases where a supposedly good move did not fail high, but that did not seem to help much either.
Avoiding reducing killer moves gets the same flavor into the search, but with data that is considerably more "local" and therefore probably more accurate.
Have there been any experiments using neural networks to also account for positional similarity? Perhaps this would fix the problem? Apparently it has been used successfully in the game Lines of Action (LOA) and outperformed the history heuristic by 50%.
Just throwing an idea out, because it didn't work for me:
Instead of maintaining a sum for each history element, maintain the last time the move failed high. This makes the history table local rather than global.
Zach Wegner wrote:Just throwing an idea out, because it didn't work for me:
Instead of maintaining a sum for each history element, maintain the last time the move failed high. This makes the history table local rather than global.
My attempt to make them more local in nature is to allow an entry only 200 updates before dividing that entry in half. That seems to have helped a lot.