zd3nik wrote: Ordinarily I would have better things to do than banter with you over these mostly unimportant details. But since I don't want to do anything too heavy (like compiling and running epd tests) while the round-robin is running I'll kill some time volleying with you.
You would indeed have spent your time better re-reading what I actually wrote than shooting your mouth off about what you imagine I wrote. Especially after I pointed out that you did not understand a word of it...
I think you very clearly overlooked the negation. I am *not* pruning futile moves when in check.
Which is of course exactly what I said. And that seems WRONG, which is why I brought it up. When in check, futile moves deserve to be pruned just as much as when not in check.
hgm wrote: You are absolutely correct that testing for (alpha <= -WinningScore) is redundant. But is the routine going to be so much slower because of this that it causes it to play 100+ elo weaker? I don't think so. I use this simple pattern (alpha or beta compared to abs(winningScore)) in places where I want to avoid triggering code during a mate search. It's not going to wreck the whole thing having a redundant test.
It will not cause a 100-Elo slowdown. But note that it is not merely an inefficiency that executes some pointless instructions; it actually prevents futility prunings that logically should have been made.
hgm wrote: Indeed, when (alpha >= WinningScore) most moves will simply be refuted in the child node - in qsearch. That's why it doesn't drop into qsearch. Instead it proceeds with search at normal depth without pruning so it can have some chance of finding a move that has a score better than the current alpha.
And that is a pure waste of time, because the 'chance' that any non-checking move will score above WinningScore is an exact mathematical zero...
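To spell the point out, here is a sketch of the pattern in question; all names (staticEval, maxGain, givesCheck, MARGIN, WinningScore) are my assumptions, not taken from your engine:

    /* The guard being defended: */
    if (alpha > -WinningScore && alpha < WinningScore) {
        if (!givesCheck(move)
            && staticEval + maxGain(move) + MARGIN <= alpha)
            continue;                       /* futility-prune this move */
    }
    /* Ordinary evaluations satisfy |staticEval + maxGain| << WinningScore.
     * So once alpha >= WinningScore, the inner test holds for EVERY
     * non-checking move: all of them are provably futile.  The outer
     * guard disables the pruning precisely where it is most clearly
     * safe, which is why it is worse than a mere inefficiency. */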
Sure, it will be slower than simply saying "A winning PV has already been found, so I'm just going to bail out". But when a winning PV has already been found there's no need to prune; we can afford to waste time looking for a better move and still win.
Except of course that this winning PV might disappear again when, in a node closer to the root, the opponent switches away from the blunder that would have got him mated, and you then have to fight hard for a draw after having wasted all this time on searching futile moves in an irrelevant branch...
hgm wrote: I'm using a very standard scale. Pawn=100, minor pieces=320-350, rook=500, queen=950 (or something like that).
Well, then Michel's so-far-uncontradicted remark that 800 is "only about the value of a Bishop" cuts no ice, and I should revive the original criticism: why do you use such a ridiculously large margin before deciding that non-captures are not going to get even in a single ply (i.e. at d=1)? Do you really expect there to be non-captures that raise the evaluation by 799 centi-Pawns? If so, what eval term would be responsible for this?
I only have the deltas set to such large (i.e. conservative) values because in prior tests, where I used a static delta of 150 (or 200, or 250, or 500 - yes, I've tried many different static delta values), the results were just as bad. So I am trying something different: a more conservative delta at the beginning of each qsearch branch, with that delta getting logarithmically less as the branch gets longer (because along the branch more pieces should be coming off the board, reducing the volatility of the position and making smaller deltas less risky).
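If I understand that scheme correctly, it amounts to something like the sketch below, where the names and the exact decay rate are my guesses rather than your actual code:

    /* qsPly = plies from the qsearch root.  The margin starts out
     * conservative and shrinks as the branch gets longer. */
    int deltaMargin(int qsPly)
    {
        int margin = 800;             /* conservative start-of-branch delta */
        while (qsPly-- > 0)
            margin -= margin / 3;     /* guessed per-ply decay */
        return margin > 150 ? margin : 150;   /* guessed floor */
    }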
Well, obviously letting the margins approach infinity should make all effects of the pruning go away, as it would never be done. If you would still see a 100-Elo decrease in strength from adding non-functional code, that would be very fishy. Note that it is not completely impossible: at some point, while I was writing qperft, it contained some statements that were redundant and could not be executed anymore. (I verified the unreachability by incrementing a global counter variable in them, and asserting that it was still at zero after perft(8).) And when I deleted those unreachable statements, qperft ran 20% slower! So yes, adding code that is never executed can have an impact on the speed of the code that is executed, simply by taking up space and moving the executed code to other addresses, which happen to be more favorable (e.g. better alignment of branch targets with cache lines). I would only expect that to matter much in very tight loops, however.
hgm wrote: Simply not true, for the reason I've already stated: checks.
Well, as I explained, for checks you should have a separate generator. As long as you don't have that, futility pruning cannot be expected to work, so it should not come as a surprise to you that it indeed does not work.
hgm wrote: I don't allow stand pat when in check. I don't understand the point you're trying to make here.
The point is that the conditions for futility pruning should exactly match the conditions for your stand-pat. They should not be based on vague notions like "this move could lead to mate" or "I already have a mating score in the current PV". When the daughter can stand pat, you should prune; when it cannot, you should not. E.g. when you are in check now and evade, the daughter can stand pat after the evasion. So you should prune even when in check.
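In code the rule is a one-liner; in the sketch below, givesCheck, maxGain and MARGIN are assumed names, not necessarily your engine's:

    extern int staticEval, alpha, MARGIN;

    /* Prune exactly when the daughter could stand pat below alpha:
     * the move must not give check (so the daughter is free to stand
     * pat), and even an optimistic gain must leave us at or below
     * alpha.  Whether the PARENT is in check plays no role at all. */
    int prunable(int move)
    {
        return !givesCheck(move)
            && staticEval + maxGain(move) + MARGIN <= alpha;
    }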
hgm wrote: I disagree. As with all other forms of pruning the point is to prune entire branches of the search tree, not simply avoid doing makes and/or evals here and there.
Hard fact is that at d=1 your "entire branches" are never more than a single node. If you had not pruned the node, the daughter would have stood pat based on the lazy eval. That is a fixed amount of work, which you might be able to save by pruning the node, provided the pruning decision itself is not more expensive. Imagining that you could ever gain more is a delusion.
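And to see how little is at stake, here is a sketch of the entire child node behind such a pruned move (assumed names again):

    extern int lazyEval(void);

    /* The whole 'branch' behind a futile quiet move at d=1: one
     * makemove, this single node, one unmake.  Nothing more. */
    int QSearch(int alpha, int beta)
    {
        int eval = lazyEval();        /* the only real work in this node */
        if (eval >= beta)
            return eval;              /* stand pat; for a move that was
                                         futile in the parent this always
                                         triggers, so nothing below runs */
        /* ... capture search, never reached for futile quiet moves ... */
        return eval > alpha ? eval : alpha;
    }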
But you and I obviously have our minds made up on this point, so I see little reason to discuss it further.
Well, if you are not willing to learn, posing questions here is pretty useless, I would think.
Again, you dismiss the possibility of checks. There is no way to "correctly predict which lines would fail low due to stand pat" when the move gives check.
I did not mention checks at all here. If a move delivers check, you can correctly predict it will NOT fail low due to stand pat. This is why you should NOT prune checks; they are NEVER futile. Nothing of what I said contradicts this. You seem to have a false association between making moves and searching checks. When I say you cannot expect any gain from futility pruning when you make all pruned moves, it in no way implies that you should not search checks. It just means that you should determine which moves are checks without making any moves. And if your engine is not yet at the stage where it can do this, you are certainly not in a position where you should attempt futility pruning. Even without any futility pruning you should get a huge speedup from not making/unmaking all non-captures in QS.
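Such a test can look like the sketch below. The helpers are mine (a real engine's attack detection will differ), and promotions, e.p. and castling would need extra care:

    typedef struct Board Board;        /* opaque: whatever the engine uses */
    typedef int Move;                  /* assumed packed from/to encoding  */
    extern int from(Move m), to(Move m);
    extern int stm(const Board *b);    /* side to move, 0 or 1 */
    extern int kingSquare(const Board *b, int side);
    extern int pieceOn(const Board *b, int sq);
    extern int attacks(const Board *b, int piece, int sq, int target);
    extern int xrayThrough(const Board *b, int vacatedSq, int target);

    int givesCheck(const Board *b, Move m)
    {
        int ksq = kingSquare(b, !stm(b));             /* enemy king */
        if (attacks(b, pieceOn(b, from(m)), to(m), ksq))
            return 1;                                 /* direct check */
        return xrayThrough(b, from(m), ksq);          /* discovered check:
                                                         vacating from(m)
                                                         opens a slider ray */
    }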
As for the nps rate being misleading: all my engines calculate node rate from the number of Exec(move) calls that are made, not the number of times search is called. So the node rates reported by my engines are affected by activity in qsearch, not just by what happens in the primary search function. The node rate will also be properly affected by all the makes/unmakes that you're talking about avoiding.
Well, in that case you would only see the drop in nps that accompanies the speedup after you found a way to do futility pruning and QS without making all the moves (in particular the non-captures) that you are not going to search.
Though I can tell you I have not seen a noticeable node rate drop in the instances with alpha pruning enabled. They search a little deeper but play a lot weaker and they do it all at the same node rate.
This is because you count makemoves rather than nodes, which will cause your nps to drop only when you eliminate redundant makemoves. But that should of course not deter you from doing so.
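In terms of counters the distinction looks like this (sketch, assumed names):

    extern long makemoves, nodes;

    void Exec(int move)
    {
        makemoves++;    /* what your reported node rate actually counts */
        /* ... make the move on the board ... */
    }

    int Search(int alpha, int beta, int depth)
    {
        nodes++;        /* true tree size: one tick per visited node */
        /* ... */
        return alpha;
    }

Once you stop Exec-ing moves you never search, the makemove count (and with it your 'nps') drops while the real nodes per second goes up; the two figures part ways exactly there.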