Skipping duplicate moves

AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: Skipping duplicate moves

Post by AlvaroBegue »

In RuyDos I just fish out the hash move after generating captures (if the hash move was a capture) or after generating non-captures (if the hash move was not a capture).

For killers, I actually generate the non-captures first, then find them in the list and move them to the front.
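
For concreteness, a minimal sketch of that scheme in C (simplified names, not the actual RuyDos code):

/* Illustrative sketch only; names and layout are placeholders. */
typedef unsigned int Move;

typedef struct {
    Move list[256];
    int  count;
} MoveList;

/* Remove one occurrence of 'victim' (here: the hash move) from the list. */
static void fish_out(MoveList *ml, Move victim)
{
    int i;
    for (i = 0; i < ml->count; i++) {
        if (ml->list[i] == victim) {
            ml->list[i] = ml->list[--ml->count];  /* overwrite with last move */
            return;
        }
    }
}

/* Find a killer in the non-capture list and swap it to the front. */
static void promote_killer(MoveList *ml, Move killer)
{
    int i;
    for (i = 0; i < ml->count; i++) {
        if (ml->list[i] == killer) {
            Move tmp = ml->list[0];
            ml->list[0] = ml->list[i];
            ml->list[i] = tmp;
            return;
        }
    }
}

fish_out() would be called after generating captures when the hash move is a capture, or after generating non-captures otherwise; promote_killer() after the non-captures have been generated.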

I believe this scheme is fast enough. Obsessing over speed is probably a bad idea.
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Skipping duplicate moves

Post by hgm »

Well, anything you don't have to do at all is a gain in terms of both speed and simplicity. And when the projected speed is around 300 nodes/sec, speed is reasonably important.
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Skipping duplicate moves

Post by Sven »

Considering that the hash move already produces a cutoff in many cases (say, at 90% of all full-width nodes), which means that with that "try hash move before generating moves" approach there will be no move generation at all at those nodes, the time spent removing the hash move from the move list is really negligible. It amounts to roughly 20 comparisons at about 10% of all full-width nodes (or of all nodes including qsearch, if you use the TT hash move in qsearch as well), so something like 2 extra comparisons per node on average. That *is* something you can ignore safely, since you will have a hard time finding any faster alternative with the same functionality.
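
To illustrate the flow (a skeleton only; tt_probe_move, search_one and generate_moves are placeholder names, and Move/MoveList/fish_out are as in the sketch earlier in the thread):

typedef struct Position Position;   /* engine's position type (assumed) */

/* Placeholder prototypes, assumed to be provided by the engine: */
Move tt_probe_move(const Position *pos);                        /* 0 = none */
int  search_one(Position *pos, Move m, int alpha, int beta, int depth);
void generate_moves(const Position *pos, MoveList *ml);

int search(Position *pos, int alpha, int beta, int depth)
{
    Move hash_move = tt_probe_move(pos);

    if (hash_move) {
        int score = search_one(pos, hash_move, alpha, beta, depth);
        if (score >= beta)
            return score;            /* cutoff: no move generation at all */
        if (score > alpha)
            alpha = score;
    }

    MoveList ml;
    generate_moves(pos, &ml);
    if (hash_move)
        fish_out(&ml, hash_move);    /* ~20 comparisons, paid only at the
                                        minority of nodes that get this far */

    /* ... order and search the remaining moves ... */
    return alpha;
}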
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Skipping duplicate moves

Post by hgm »

OK, that is a good point. All-nodes will generally have no hash move, and the problem would not arise. So only the cut-nodes have to be considered. The calculation hinges on postponing the hash-move removal until after the hash move has been searched, though.

This suggests the best approach would be to explicitly test whether the next move in line happens to be a duplicate of the hash move, and skip it if it is. Then the test overhead is maximally delayed, and every cutoff occurring before it relieves you of making the test for the remaining moves. The only downside is that you would then also make this test on all moves in all-nodes (where it always fails, because the hash move was invalid), unless you use different search routines for all-nodes and cut-nodes (or rather for nodes with and without a hash move). You would not have that problem if you located the removal in a separate loop through the move list, in the part of the code that searches the hash move, after you have established that it did not cut. (And have a call to move generation from there.)
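
As a code fragment (reusing the placeholder names from the skeleton earlier in the thread), the delayed test would sit in the move loop roughly like this, replacing the up-front fish_out() call:

    /* Hash move was searched before move generation; instead of stripping it
       from the list up front, test each move only when it is about to be
       searched.  Any cutoff before that point spares the remaining tests. */
    for (int i = 0; i < ml.count; i++) {
        Move m = ml.list[i];
        if (m == hash_move)          /* duplicate of the hash move: skip it */
            continue;
        int score = search_one(pos, m, alpha, beta, depth);
        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;
    }
    return alpha;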

This still would not solve the problem with IID, however. If during one of the IID iterations a particular move was deepened beyond the iteration depth, because it seemed to be a cut-move, but in the end did not live up to the promise, it would still rely on a hash hit to prevent duplicate searches of that move in later iterations. Which is usually fine if you have a hash table, but rather costly if you have none (or a too heavily loaded one). If you had a method for solving that, it would solve the duplicate hash-move and killer problem as a free side effect.

So I still like the idea of 'caching' all daughter hash entries in the local variables of the node, where they can never be overwritten. Keeping this info can be very affordable even in situations where memory is too cramped for a general TT; it only takes something like 256-512 bytes per ply. It seems important to keep at least the cut-move of every daughter, because it is valuable as hash move even when it was obtained at a lower depth than the one now requested. So even if one IID iteration in an all-node did not search any of the moves deeper than necessary to establish their fail low, the cut-moves would still be usable in the next iteration.
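
A possible layout for such a cache (just a sketch of one way to do it; search_daughter is a placeholder for however the engine passes a hash-move hint to the daughter):

typedef struct {
    unsigned short move;       /* move searched from this node                */
    unsigned short cut_move;   /* best reply / cut-move found in the daughter */
    short          score;      /* score it came with                          */
    signed char    depth;      /* depth at which cut_move was established     */
    signed char    pad;
} DaughterEntry;               /* 8 bytes; 64 entries = 512 bytes per ply     */

/* Assumed helper: search the daughter reached by 'm', trying 'hint' first,
   and report the daughter's own cut-move back through 'reply'. */
int search_daughter(Position *pos, Move m, Move hint, Move *reply,
                    int alpha, int beta, int depth);

int search_iid(Position *pos, MoveList *ml, int alpha, int beta, int depth)
{
    DaughterEntry cache[64] = {0};   /* local to this node: never overwritten */
    int best = alpha;

    for (int d = 1; d <= depth; d++) {                    /* IID iterations */
        int a = alpha;
        best = alpha;
        for (int i = 0; i < ml->count && i < 64; i++) {
            Move reply = 0;
            /* Hand the daughter the cut-move remembered from an earlier,
               shallower iteration as its 'hash move'. */
            int score = search_daughter(pos, ml->list[i], cache[i].cut_move,
                                        &reply, a, beta, d);
            cache[i].move     = (unsigned short)ml->list[i];
            cache[i].cut_move = (unsigned short)reply;
            cache[i].depth    = (signed char)d;
            cache[i].score    = (short)score;
            if (score > best) best = score;
            if (score > a)    a = score;
            if (score >= beta)
                break;             /* cut-move is now cached for the next iteration */
        }
    }
    return best;
}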

In fact you could get even more mileage out of this: very often, during the search of late moves in an all-node at ply N, the moves will have similar refutations (e.g. through the same killer at ply N+1). This means that the all-node granddaughters at ply N+2 will be searched immediately after each other at that level, and that the second one would thus inherit the cached info on the daughters of its cousin. So counting from the node at ply N, there would be a counter-move table for the move sequence

pointless - killer - X - anti-X

and we are now in a position

pointless' - killer - X

It seems there is an appreciable chance that the move anti-X would do a good job at refuting X here as well. I am not sure it deserves to be tried as hash move, but it should definitely be eligible as killer. This seems to depend on the move at ply N+1 being the same, so it seems a good idea to store with the counter-move table which move was used to reach the position that created it. If a cousin then inherits that table, and finds it was reached by the same move as the one for which the table was made, it could try to use the moves in that table, passing the relevant counter-move for whatever it tries to its daughter. This is a sort of 'hyper-killer' scheme that inherits killers not from siblings but from grand-cousins (is that the proper term?), in a way that keeps the two preceding plies equal (and only the ply before that different). This could be useful even in engines that do have a hash table, as normal hash probing would not provide such information. (Of course 'analogy probing', where you keep probing the tree of a sibling's late move in addition to your own, could provide it, even for deeper plies.)
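
As a sketch, the inheritance check could look like this (again placeholder names, reusing DaughterEntry from above; just one possible shape of the idea):

typedef struct {
    Move          reached_by;  /* the ply-N+1 move ('killer' above) that led to
                                  the node which built this table */
    int           count;
    DaughterEntry entry[64];   /* pairs (X, anti-X) collected at ply N+2 */
} CounterTable;

/* When a node inherits its grand-cousin's table, use it only if the node was
   reached by the same ply-N+1 move.  The anti-X stored for move X then becomes
   a killer suggestion for the daughter reached by X. */
Move suggested_killer(const CounterTable *t, Move reached_by, Move x)
{
    if (t == NULL || t->reached_by != reached_by)
        return 0;                         /* different preceding move: not usable */
    for (int i = 0; i < t->count; i++)
        if (t->entry[i].move == (unsigned short)x)
            return t->entry[i].cut_move;  /* the remembered anti-X */
    return 0;
}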
Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Skipping duplicate moves

Post by Michael Sherwin »

To get deeper faster there are really only two considerations: the first is what can be done to minimize the number of qsearch calls, and the second is having the fastest possible qsearch for your needs. Everything else has very little impact. Searching duplicate moves is bad because it noticeably increases the number of qsearch calls. Better move ordering dramatically reduces the number of qsearch calls. Any reasonable amount of extra work done to reduce the number of qsearch calls will have very little impact on NPS.