Bug of the week

Discussion of chess software programming and technical issues.


hgm
Posts: 27702
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Bug of the week

Post by hgm »

Fairy-Max used 8334 nodes. But it cheats a bit, as it prints the node count at the time it finds the PV, and might still search other, low-failing moves after that. That this doesn't consume too many nodes can be seen from the 2-ply search, which took only 268 nodes more.

Code: Select all

  2	+0.76 	8602    	0:00.01	d5e4 f3e5
  1	+0.77 	8334    	0:00.01	d5e4
hgm
Posts: 27702
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Bug of the week

Post by hgm »

Henk wrote:By the way, what margin can you give in qsearch for futility? A capture may end in a fork, or in a connected, dangerous passed-pawn pair that may soon promote to a queen. So you never know whether a capture is futile.
That is not the point. Captures are futile because you know the opponent would stand pat on them. And he would even do that if there was such a fork against his dangerous passers. So searching the futile move would achieve nothing, other than wasting time. It would still fail low, no matter how good it actually is. You just haven't enough depth to see it is good.
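As a rough illustration, a minimal quiescence-search sketch with futility ("delta") pruning; the types, helper functions and the margin value are assumptions for the example, not code from any particular engine:

Code: Select all

/* Minimal sketch of futility (delta) pruning in quiescence search.
   Position, Move, MoveList and the helper functions are assumed engine
   primitives; their names are illustrative. */
#define FUTILITY_MARGIN 200        /* centipawns; assumed, tunable value */

typedef struct Position Position;  /* engine's own board representation  */
typedef int Move;
typedef struct { int count; Move moves[256]; } MoveList;

extern int  Evaluate(const Position *pos);   /* static eval, side to move's view */
extern void GenerateCaptures(const Position *pos, MoveList *list);
extern int  CapturedValue(const Position *pos, Move m);
extern void MakeMove(Position *pos, Move m);
extern void UnmakeMove(Position *pos, Move m);

int QSearch(Position *pos, int alpha, int beta)
{
    int standPat = Evaluate(pos);            /* stand-pat score                 */
    if (standPat >= beta) return standPat;   /* standing pat already fails high */
    if (standPat > alpha) alpha = standPat;

    MoveList list;
    GenerateCaptures(pos, &list);
    for (int i = 0; i < list.count; i++) {
        Move m = list.moves[i];
        /* futile: even winning the victim plus a full positional swing
           cannot reach alpha, so the capture must still fail low */
        if (standPat + CapturedValue(pos, m) + FUTILITY_MARGIN <= alpha)
            continue;
        MakeMove(pos, m);
        int score = -QSearch(pos, -beta, -alpha);
        UnmakeMove(pos, m);
        if (score >= beta) return score;     /* fail high                       */
        if (score > alpha) alpha = score;
    }
    return alpha;
}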
Henk
Posts: 7210
Joined: Mon May 27, 2013 10:31 am

Re: Bug of the week

Post by Henk »

hgm wrote:
Henk wrote:By the way, what margin can you give in qsearch for futility? A capture may end in a fork, or in a connected, dangerous passed-pawn pair that may soon promote to a queen. So you never know whether a capture is futile.
That is not the point. Captures are futile because you know the opponent would stand pat on them. And he would even do that if there was such a fork against his dangerous passers. So searching the futile move would achieve nothing, other than wasting time. It would still fail low, no matter how good it actually is. You just haven't enough depth to see it is good.
If the opponent stands pat, then a fork and a connected passer can still be evaluated, provided you don't do delta pruning. But then the engine must support fork recognition in its evaluation.
hgm
Posts: 27702
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Bug of the week

Post by hgm »

I guess in theory it could. But it is very hard to get that right statically. (E.g. the forking piece could be soft-pinned by a Rook attacking a Knight protected only by a Knight that is overloaded by protecting another Rook against a Queen attack.) And getting it not exactly right usually does more harm than good.

But if your evaluation does contain terms like that, and they are large, you would definitely need a large futility margin. That margin basically represents the maximum change in eval that could be brought about by a move in addition to the material it captures (which you can take into account explicitly in a trivial way).

Of course you can use the margin as a tunable parameter, not really setting it to the worst-case eval change, but to an empirically determined optimal value. This might then prune a very small fraction of nodes that in reality would have failed high, but also so many extra nodes that would have failed low that you get to search an extra ply. So the nodes that otherwise would have unjustly been futility-pruned at d=1 (plus a lot of others) would now be searched at d=2.
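For illustration, the futility test with the margin split into its two parts: the material gain is counted exactly, and an assumed placeholder constant stands in for the empirically tuned margin.

Code: Select all

/* Futility test with the margin split into its two parts: the material the
   move captures (known exactly) and a bound on the positional eval change
   (the tunable margin).  The value 150 is a placeholder, not a tuned one. */
#define QS_FUTILITY_MARGIN 150   /* centipawns; empirically tuned, not worst case */

int CaptureIsFutile(int standPat, int capturedValue, int alpha)
{
    /* most optimistic score the move could realistically produce */
    int optimistic = standPat + capturedValue + QS_FUTILITY_MARGIN;
    return optimistic <= alpha;  /* cannot reach alpha: prune as futile */
}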
Henk
Posts: 7210
Joined: Mon May 27, 2013 10:31 am

Re: Bug of the week

Post by Henk »

hgm wrote:I guess in theory it could. But it is very hard to get that right statically. (E.g. the forking piece could be soft-pinned by a Rook attacking a Knight protected only by a Knight that is overloaded by protecting another Rook against a Queen attack.) And getting it not exactly right usually does more harm than good.

But if your evaluation does contain terms like that, and they are large, you would definitely need a large futility margin. That margin basically represents the maximum change in eval that could be brought about by a move in addition to the material it captures (which you can take into account explicitly in a trivial way).

Of course you can use the margin as a tunable parameter, not really setting it to the worst-case eval change, but to an empirically determined optimal value. This might then prune a very small fraction of nodes that in reality would have failed high, but also so many extra nodes that would have failed low that you get to search an extra ply. So the nodes that otherwise would have unjustly been futility-pruned at d=1 (plus a lot of others) would now be searched at d=2.
Maybe it is best to make the move and evaluate it, but then stop and not search it further if the move is futile. But you have to do that for all captures to be certain, so perhaps there is not much gain.
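A sketch of that make-first approach (same kind of assumed primitives as in the earlier sketches): because the evaluation is done after the capture, and the opponent could still stand pat in the reply, an evaluation at or below alpha already guarantees a fail low, so no extra margin is needed here.

Code: Select all

/* Sketch of the make-first variant: evaluate after making the capture and
   only search deeper when that evaluation can still reach alpha.
   Position, Move and the helpers are assumed engine primitives. */
typedef struct Position Position;
typedef int Move;

extern int  Evaluate(const Position *pos);   /* side to move's view        */
extern void MakeMove(Position *pos, Move m);
extern void UnmakeMove(Position *pos, Move m);
extern int  QSearch(Position *pos, int alpha, int beta);

int SearchCapture(Position *pos, Move m, int alpha, int beta)
{
    MakeMove(pos, m);
    int evalAfter = -Evaluate(pos);          /* eval after the capture,
                                                from the mover's view      */
    int score;
    if (evalAfter <= alpha)
        score = evalAfter;                   /* opponent can stand pat, so
                                                the true score cannot exceed
                                                evalAfter: certain fail low,
                                                stop without searching     */
    else
        score = -QSearch(pos, -beta, -alpha);
    UnmakeMove(pos, m);
    return score;
}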

[At least it looks like SEE pruning in qSearch does not work for Skipper. Perhaps because its implementation is too slow. If it is not promising I will remove the code. I am not interested in long test runs or small Elo gains.]
Henk
Posts: 7210
Joined: Mon May 27, 2013 10:31 am

Re: Bug of the week

Post by Henk »

Henk wrote: [At least it looks like SEE pruning in qSearch does not work for Skipper. Perhaps because its implementation is too slow. If it is not promising I will remove the code. I am not interested in long test runs or small Elo gains.]
Ouch, I just found out that the tests were running only the same engine (same source code vs. same source code). No wonder nothing seemed to give much improvement in the last weeks (or perhaps last months). So before doing any test, first test your test.

Wouldn't it be fun if you had been testing for twenty years with a bad bug in your testing environment?
hgm
Posts: 27702
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Bug of the week

Post by hgm »

Henk wrote:At least it looks like SEE pruning in qSearch does not work for Skipper. Perhaps because its implementation is too slow. If it is not promising I will remove the code. I am not interested in long test runs or small Elo gains.
SEE only makes sense if it is much faster than search. Otherwise you might as well run QS.

A poor man's (but fast) alternative to SEE is BLIND, where you basically just check whether the piece you are going to capture is more or less valuable than the one you capture it with, and, if less valuable, whether it is protected or not. Captures of a lower, protected piece are then 'suspect', and their search could be postponed. If you know what the (lowest) protector is, and whether you have a second attacker, you can even recognize cases where SEE might still be positive (protector + original victim worth more than the initial capturer, e.g. RxN, BxN, QxB), and postpone those until after the good captures, but prune the others in QS.
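A sketch of such a BLIND-style classification; the helper functions and return codes are invented for the example:

Code: Select all

/* Sketch of a BLIND-style capture classification.  The helpers are assumed
   to exist in the engine; their names are illustrative. */
typedef struct Position Position;
typedef int Move;

extern int PieceValue(int piece);
extern int MovingPiece(const Position *pos, Move m);
extern int CapturedPiece(const Position *pos, Move m);
extern int ToSquare(Move m);
extern int IsProtected(const Position *pos, int square);
extern int AttackerCount(const Position *pos, int square);
extern int LowestProtectorValue(const Position *pos, int square);

enum { CAPT_GOOD, CAPT_SUSPECT, CAPT_PRUNE };

int ClassifyCapture(const Position *pos, Move m)
{
    int attacker = PieceValue(MovingPiece(pos, m));
    int victim   = PieceValue(CapturedPiece(pos, m));
    int to       = ToSquare(m);

    if (victim >= attacker) return CAPT_GOOD;       /* higher or equal victim */
    if (!IsProtected(pos, to)) return CAPT_GOOD;    /* lower, but unprotected */

    /* lower, protected victim: possibly still SEE-positive when protector
       plus victim outweigh the capturer and a second attacker can recapture */
    if (AttackerCount(pos, to) >= 2 &&
        victim + LowestProtectorValue(pos, to) > attacker)
        return CAPT_SUSPECT;                        /* postpone behind good captures */

    return CAPT_PRUNE;                              /* prune in QS            */
}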

The information you need to make that decision is not very expensive to get as a side effect of move generation. For any move to an occupied square (also those rejected because they contain an own piece) you can increment a counter for that square, to get the number of attackers, and the preceding null-move search would have done this for the opponent's moves. So you would know the number of attackers and protectors of each square at the time you have to sort the moves you want to try as alternatives to the null move. You could also set a bit flag identifying the piece that could go there, for every square, so that you can know the lowest attacker.
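For example (illustrative types and names, not any particular engine's move generator):

Code: Select all

/* Sketch of gathering the attack info as a side effect of move generation.
   The generator calls NoteAttack() for every target square it considers,
   including squares occupied by an own piece (where the move itself is
   discarded).  The opponent's half of the table is assumed to have been
   filled the same way during the preceding null-move search. */
typedef struct {
    int count[2][64];   /* attackers/protectors per side and square    */
    int types[2][64];   /* bit flags of the piece types attacking it   */
} AttackMap;

void NoteAttack(AttackMap *map, int side, int toSquare,
                int pieceTypeFlag, int squareIsOccupied)
{
    if (squareIsOccupied) {                          /* only occupied targets */
        map->count[side][toSquare]++;                /* one more attacker     */
        map->types[side][toSquare] |= pieceTypeFlag; /* remember which type   */
    }
}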

In fact you could do both at once, by making a table indexed by piece type that contains 1024 plus a piece-type-dependent constant (P=1, N=4, B=16, R=64 and Q=256) for every type. When you then add that to the count for the to-square, you can test for multiple attackers by the count being > 2048, and get the lowest attacker by extracting the least-significant set bit in the usual way.
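A small self-contained demonstration of that combined encoding (piece ordering and the example values are just for illustration):

Code: Select all

/* Demonstration of the combined encoding: each attacker adds 1024 plus a
   piece-type-dependent bit to one integer per square, so the same word
   holds the attacker count and the set of attacking piece types. */
#include <stdio.h>

enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, NTYPES };

static const int attackCode[NTYPES] = {
    1024 + 1,     /* P */
    1024 + 4,     /* N */
    1024 + 16,    /* B */
    1024 + 64,    /* R */
    1024 + 256    /* Q */
};

int main(void)
{
    int word = 0;                    /* attack word for one target square  */
    word += attackCode[ROOK];        /* a Rook attacks the square          */
    word += attackCode[KNIGHT];      /* ... and so does a Knight           */

    int multiple = word > 2048;      /* two or more attackers push the sum
                                        past 2048; a single one cannot     */
    int lowest   = word & -word;     /* least-significant set bit is the
                                        flag of the cheapest attacker      */

    printf("multiple attackers: %d\n", multiple);    /* prints 1           */
    printf("lowest attacker flag: %d\n", lowest);    /* prints 4 (Knight)  */
    return 0;
}

(The two-bit spacing between the type flags presumably leaves room for a few attackers of the same type before one field overflows into the next.)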