Nullmove vs classic selective search

Discussion of chess software programming and technical issues.

Re: Nullmove vs classic selective search

Post by Rebel »

mar wrote:
Rebel wrote: My NPS is a bit misleading; besides counting real nodes I also count every successful lazy_eval and every successful futility prune. Counting the number of evals would IMO give a better picture of the real speed of the program, usually a factor of 3-5 less. Random example:

Nodes Searched : 143837572
Positions Searched : 41083651
That's interesting, Ed. I count every node (including qsearch), except for depth-0 nodes so as not to count them twice. Some count makemove calls instead.
In that case I apologize to Matthew, as I didn't know exactly how you're counting.
Yeah, we all do it differently and yet we tend to draw conclusions from it. In my case the NPS became more and more contaminated over time. 6-7 months ago I moved from 2 to 6 plies of futility pruning -> more NPS contamination.
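
For illustration, a minimal sketch of the two counting conventions (made-up names, not ProDeo's actual code): the first counter is the one the reported NPS is based on, the second only ticks when the full eval really runs.

    #include <cstdint>

    // Hypothetical counters, not ProDeo's actual code.
    static uint64_t nodes_searched = 0;      // reported NPS is based on this one
    static uint64_t positions_evaluated = 0; // full evals only, typically 3-5x lower

    struct Position;                                            // the engine's position type
    int  full_eval(const Position&);                            // assumed full evaluation
    bool lazy_eval_cutoff(const Position&, int beta);           // assumed cheap bound check
    bool futility_cutoff(const Position&, int beta, int depth); // assumed futility test

    int evaluate_node(const Position& pos, int beta, int depth) {
        ++nodes_searched;                          // counted even if we prune below
        if (futility_cutoff(pos, beta, depth))     // succeeded futility pruning
            return beta;                           //   still counted as a "node"
        if (lazy_eval_cutoff(pos, beta))           // succeeded lazy eval
            return beta;                           //   still counted as a "node"
        ++positions_evaluated;                     // only here does the full eval run
        return full_eval(pos);
    }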

Re: Nullmove vs classic selective search

Post by Rebel »

Don wrote: Hi Ed,

I'm definitely interested in this technique. My programs used static selectivity long before null-move pruning. Rexchess is one example, and at the time people thought Rex was a super-fast program.

We did not do the "dumb" thing of just applying some safety margin; we actually have a routine which statically determines the maximum threatened piece. In Rex I think that was applied to the last 3 plies of the search and used rules similar to what you propose.

We have never had success pushing this too far - but I think you can if you use margins in addition. When you are 4 or 5 plies back and nothing is obviously threatened, taking a cutoff just because the evaluation score is greater than beta isn't reasonable - you are then missing simple positional threats.

Our current rules are based on a combination of all three things - static threat detection, null-move pruning and margins. It's probably more complicated than it needs to be, and there is probably more that could be done to improve it.

Don
Hi Don,

Of course you have the advantage of experience with static selective search. If you wish I can make an updated document with all the exception situations (things to check) before you reduce or prune; the web page is a bit outdated and also not exactly complete, the latter to avoid making things too complicated.

So now what, are we going back to the 80s? :wink:
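
For readers following along, a rough sketch of the kind of rule Don describes - static threat detection plus a depth-dependent margin before a shallow cutoff. Helper names and numbers are placeholders, not Rex or Komodo code.

    // Only take the static cutoff near the horizon when the evaluation minus
    // the biggest statically detected threat still clears beta by a margin.
    struct Position;
    int static_eval(const Position&);
    int max_threatened_piece_value(const Position&);  // assumed static threat scan

    static const int margin[4] = { 0, 50, 125, 300 }; // per remaining ply, illustrative

    bool static_cutoff(const Position& pos, int beta, int depth) {
        if (depth <= 0 || depth >= 4)
            return false;                              // only in the last few plies
        int threat = max_threatened_piece_value(pos);
        // Even the worst statically visible capture against us must still
        // leave us above beta by the margin for this depth.
        return static_eval(pos) - threat >= beta + margin[depth];
    }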

Re: Nullmove vs classic selective search

Post by Don »

diep wrote:
Uri Blass wrote:
Rebel wrote:
mar wrote:
ZirconiumX wrote: ProDeo will never be hugely fast due to doing a full evaluation at every node.
In fact, I followed some of its games in Division 3. And I have to say it was faster in terms of n/s than most of the other programs (64-bit ones). It was even twice as fast in some cases.
So I wonder where your conclusion that it's slow came from.
And how do you know that it does full eval every node?
My NPS is a bit misleading; besides counting real nodes I also count every successful lazy_eval and every successful futility prune. Counting the number of evals would IMO give a better picture of the real speed of the program, usually a factor of 3-5 less. Random example:

Nodes Searched : 143837572
Positions Searched : 41083651
I think that basically you should count the number of moves that you make.

The number of positions for which you do a full evaluation is not relevant for nodes per second.
Nah, this differs from program to program.

In Diep I count the positions I take a very good look at. The NPS is too unstable if I counted all moves made (which is sometimes a factor of 2 higher than the number of nodes per second Diep actually gets).

For example, I can make a move. Then the incremental attack tables that get updated in makemove tell me the king can be captured, so I take the move back. It's not a real node; it's an illegal move.
We have tried doing this in various ways, but now we simply count how many moves we actually make. There is some logic to this, because it's reported as nodes per second.

Even though a move is not technically a node, it does GENERATE a new node in every case. So if your program computes nodes per second it is actually lying unless it counts every new node generated - i.e. every move played. There is also the term "positions evaluated per second", which is probably what Diep is actually computing - but that is not nodes per second.

There are some cases where Komodo does not play a move but determines the value of the resulting position - at least as an approximation - to achieve a cutoff. We don't count that as a node, because it's not a real node. One could argue that it should be counted since the position is effectively being evaluated, but we don't do so.
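
In code terms that convention could look something like this (illustrative only, not Komodo's source): the counter is tied to actually playing a move, so "nodes per second" literally means "moves played per second".

    #include <cstdint>

    struct Position;
    struct Move;
    void do_move(Position&, const Move&);    // assumed engine primitives
    void undo_move(Position&, const Move&);

    static uint64_t nodes = 0;               // reported as nodes per second

    void make(Position& pos, const Move& m) {
        do_move(pos, m);
        ++nodes;                             // every move played generates a new node
    }
    // A cutoff based on an approximation of the child's value, taken without
    // calling make(), is simply not counted as a node in this convention.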

Re: Nullmove vs classic selective search

Post by diep »

Don wrote:
We have tried doing this in various ways, but now we simply count how many moves we actually make. There is some logic to this, because it's reported as nodes per second.

Even though a move is not technically a node, it does GENERATE a new node in every case. So if your program computes nodes per second it is actually lying unless it counts every new node generated - i.e. every move played. There is also the term "positions evaluated per second", which is probably what Diep is actually computing - but that is not nodes per second.

There are some cases where Komodo does not play a move but determines the value of the resulting position - at least as an approximation - to achieve a cutoff. We don't count that as a node, because it's not a real node. One could argue that it should be counted since the position is effectively being evaluated, but we don't do so.
I count the nodes I take a look at. Only a small fraction of those gets evaluated. So hashtable cutoffs count as a node, as I did take a look at that position.

This NPS is stable. Just counting makemoves gives an unstable NPS that fluctuates WILDLY from second to second and from core to core.

If you have the world's best and largest chess evaluation you would of course expect some benefit somewhere that the beancounters do not have. This is one of them...

Even then, realize that doing a makemove is already more expensive, thanks to the incremental attack tables, than an entire node of the Rybka clones.

Incremental attack tables REALLY slow down the NPS, but they're worth it; I wouldn't want to do without them!

It took me a big effort to create them, many years ago. It requires special tables to do them reasonably fast.

If I were to search without evaluation, Diep would still be a slow program; with some optimization you get to 1.5 million NPS, in an attempt I did in 2005. That was on a K7 at 2.1 GHz.

Not a very fast processor.

An optimized version without incremental attack tables gets 2.5 million NPS per CPU on the same K7 at 2.5 GHz. That does include a small evaluation function. It's without PGO though, so maybe that squeezes out something more; actually I would expect a 20% boost there. Note that this 2.5 million NPS version is not 100% optimized, as it still uses the slow Diep SEE functions.

Ed's lookup table would already speed it up considerably there for SEE, and it also generates some checks in the first 2 plies of qsearch (non-losing checks).

If I compare it to the normal Diep, the qsearch is really inefficient, as I'm of course not using the slow Diep function that selects which move to try in qsearch; it uses lots of chess knowledge, which slows things down.

The move generator of that thing getting 2.5 million NPS, I am distributing for free.

Note that I was pretty amazed at the time that the framework gets far above 1 million NPS by default without evaluation, and 1.5 million NPS with some optimization, this on a K7, yet that in order to get above 2 million NPS I had to do THAT much optimization (months of full-time work).

In the end every long branch I removed was really noticeable in speed. One branch I removed there cost something like 30+ cycles on average (showing how badly the K7 predicted specific branches flipping from taken to not taken, a logical result of the side to move flipping).

The reason I had programmed the fast 2.5 million NPS searcher for Diep is that I wanted to combine Diep's slow search with the fast searcher.

The idea is that if we're 3 pawns ahead or something you could of course use futility or razoring there, yet in Diep I'm not doing this expensive evaluation only to then prune moves based upon some simplistic rules that keep you from using the benefit of your evaluation function.

In short, I want more tactical and positional safety around throwing away moves.

I want to revive the idea in the coming weeks and retest whether I can get it to work, be it on a more modest scale. I really wanted it to win me 2 to 3 plies back then. Right now I'm happy if it speeds me up 30% or maybe even gives a full ply, as direct razoring, let alone futility, doesn't work for Diep at all, though I will retest that as well on the nodes here in a systematic manner at rapid time controls.

Of course I had to aim high, as I was supposed to play with Erdo's book at the 2005 world championship, and Erdo's book simply requires an engine that tactically outsearches its opponent. Zappa was a better match for that, as history has proven...

So I really had to win many plies with Diep back then somehow. If I retest the idea now in a more constrained manner, trying to win only a little with it, maybe I'll get it to work :)

Of course it's a trick only a slow searcher like Diep can use, as Diep runs at 100k NPS per core.

That's all the chess knowledge causing that.

This search-replacement idea is not the same as your futility approximation, I assume, yet it's a similar idea in 'not counting it'.

At 100k NPS per core with Diep, I'm sure I'll notice replacing a bunch of nodes with a fast beancounter search.

The idea right now is to use it to replace the qsearch nullmove in the last 3 to 4 plies. So that should work for up to 75% out of 90% of the nodes, say roughly a tad over 60% of all nodes in theory in a middlegame.

Obviously only when the evaluation is above beta. After that, if the fast searcher fails, I still try the normal qsearch nullmove with the full slow evaluation. I tend to agree that calling this function isn't a node if you don't want to count it. If you do it in a razoring manner it is a node, though.

In Diep that entire beancounter qsearch is not counted. As it gets called on top of the search in Diep, obviously THAT is counted as a node.

It doesn't mean Diep needs to evaluate there at all. In roughly 50% of the positions where Diep evaluates, it gets the score from the eval hash or from the normal hashtable (for either side or xside, as the eval isn't asymmetric).
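
An eval cache along those lines could look roughly like this (made-up layout and names, not Diep's actual tables); because the eval is symmetric, a score stored from white's point of view can be reused for either side to move by flipping the sign.

    #include <cstdint>

    struct EvalEntry { uint64_t key; int16_t score; };
    static EvalEntry eval_cache[1 << 16];              // small direct-mapped cache

    struct Position;
    uint64_t position_key(const Position&);            // assumed, independent of side to move
    bool     white_to_move(const Position&);            // assumed
    int      full_eval_white_pov(const Position&);      // assumed, score from white's view

    int cached_eval(const Position& pos) {
        uint64_t key = position_key(pos);
        EvalEntry& e = eval_cache[key & ((1 << 16) - 1)];
        int score;
        if (e.key == key) {
            score = e.score;                            // hit: reuse the stored full eval
        } else {
            score = full_eval_white_pov(pos);           // miss: run the expensive eval
            e.key = key;
            e.score = (int16_t)score;
        }
        return white_to_move(pos) ? score : -score;     // symmetric eval: just flip the sign
    }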

Of course Diep only evaluates where needed. I'm not doing any kind of lazy evaluation though. So where I do a qsearch nullmove I somehow *do* need the evaluation score - not an approximation. The cutoff must always be caused by a full eval somehow, not by some sort of approximation.

The beancounter qsearch is just there to make sure no big apocalypse happens in this position.
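
Pulling the idea together, a sketch of how such a scheme might look (placeholder names, not Diep's code): near the horizon, when the slow eval is already above beta, a cheap material-only qsearch gets the first shot at verifying the nullmove, and the normal qsearch with the full evaluation is only tried if the fast searcher fails.

    struct Position;
    void make_nullmove(Position&);
    void undo_nullmove(Position&);
    int  slow_eval(const Position&);                    // full Diep-style evaluation
    int  fast_qsearch(Position&, int alpha, int beta);  // beancounter search, not counted as nodes
    int  slow_qsearch(Position&, int alpha, int beta);  // normal qsearch with the full eval

    bool nullmove_cutoff(Position& pos, int beta, int depth) {
        if (depth > 4 || slow_eval(pos) < beta)
            return false;                               // last 3-4 plies only, full eval above beta
        make_nullmove(pos);
        int score = -fast_qsearch(pos, -beta, -beta + 1);   // cheap sanity check first
        if (score < beta)                                   // fast searcher fails:
            score = -slow_qsearch(pos, -beta, -beta + 1);   //   fall back to the full-eval qsearch
        undo_nullmove(pos);
        return score >= beta;
    }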