Adaptive R in null move

Discussion of chess software programming and technical issues.

Moderator: Ras

MattieShoes
Posts: 718
Joined: Fri Mar 20, 2009 8:59 pm

Adaptive R in null move

Post by MattieShoes »

I know some engines change the value of R in null move based on remaining depth. I was wondering if anybody has experimented with changing the value of R based on how far down the move list you are in the parent node. That is, if this is the first move searched at the parent node, either don't use null moves or use them with a conservative R, but if the current node comes from the 20th move examined at the parent node, use a more aggressive R.

Does that make sense? One could perhaps base it on the move-ordering score of the move you just made rather than on the number of moves searched. Has anybody done this?
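The idea above could be sketched roughly like this (a toy sketch in C; the move-count thresholds and R values here are made up for illustration, not tuned):

```c
#include <stdbool.h>

/* Pick the null-move reduction R based on how many moves the parent
 * node had already searched before reaching this node.  All the
 * thresholds below are illustrative guesses, not tested values. */
static int null_move_R(int parent_move_index, int remaining_depth)
{
    if (parent_move_index == 0)
        return 0;                  /* first move at parent: skip null move */
    if (parent_move_index < 4)
        return 2;                  /* early moves: conservative reduction */
    /* late moves: more aggressive, optionally also scaled by depth */
    return remaining_depth > 6 ? 3 : 2;
}

/* Convention used here: R == 0 means "don't try a null move at all". */
static bool try_null_move(int parent_move_index, int remaining_depth)
{
    return null_move_R(parent_move_index, remaining_depth) > 0;
}
```

The variant based on the move-ordering score instead would just replace `parent_move_index` with the ordering score of the move that led here, with thresholds in score units.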
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Adaptive R in null move

Post by diep »

MattieShoes wrote:I know some engines change the value of R in null move based on remaining depth. I was wondering if anybody has experimented with changing the value of R based on how far down the move list you are in the parent node. That is, if this is the first move searched at the parent node, either don't use null moves or use them with a conservative R, but if the current node comes from the 20th move examined at the parent node, use a more aggressive R.

Does that make sense? One could perhaps base it on the move-ordering score of the move you just made rather than on the number of moves searched. Has anybody done this?
Google for Vincent Diepeveen and nullmove in RGCC (rec.games.chess.computer). Around 1995-1996 I posted a lot there about how, in Diep, I used different forms of modifying R based upon different conditions.

Also, what worked very well on test sets at the time was modifying R from 3 to 2 and then to 1.

Vincent
MattieShoes
Posts: 718
Joined: Fri Mar 20, 2009 8:59 pm

Re: Adaptive R in null move

Post by MattieShoes »

I looked and found some interesting discussions, but nothing regarding this... just a bunch of stuff about using two null moves in a row, whether null move is okay at all, null move vs. FHR, etc. The oldest posts from RGCC in Google Groups are from mid-1995, so perhaps it's from before that :-/

It's a bit surreal reading about a whopping 10k nps and whether 4 MB is the optimum hash table size. :-) I found a post where it was considered nearly inconceivable to overflow an unsigned int storing the node count. Of course, at 10k nps, it would take several days to do it...

It seems like higher R factors are more acceptable now that engines search deeper, so it makes me wonder how valid conclusions from tests done in 1995 would be on modern hardware... Of course, 1995 tests are better than none, and I can test it myself if I really want to know... :-)
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Adaptive R in null move

Post by bob »

I did some testing on this several months back. The first thing I discovered was that once I added a single layer of checks in the q-search, getting rid of adaptive null move and just using R=3 produced the best game results. I did not test the parent-node move-count idea, but in my case I do not believe it would make any sense, since once I am through the killer moves, the rest of the moves are not ordered in any way.
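For context, the kind of depth-based adaptive scheme being compared against the flat R=3 usually looks something like the sketch below (the 6-ply cutoff is illustrative; engines vary):

```c
/* One common shape of depth-adaptive null move: a deeper remaining
 * depth earns a bigger reduction.  The cutoff here (6 plies) is just
 * an illustration of the scheme, not a recommended value. */
static int adaptive_R(int remaining_depth)
{
    return remaining_depth > 6 ? 3 : 2;
}

/* The alternative found better above, once the q-search had checks:
 * a constant reduction regardless of depth. */
static int fixed_R(int remaining_depth)
{
    (void)remaining_depth;  /* unused: the reduction is constant */
    return 3;
}
```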
MattieShoes
Posts: 718
Joined: Fri Mar 20, 2009 8:59 pm

Re: Adaptive R in null move

Post by MattieShoes »

Hmm, all my moves are ordered, not just the top ones. Judging by my results with move ordering, that may be wasteful, though, since 99% of the cutoffs happened in the first few moves. I also don't have LMR, which is the real reason I was pondering this: perhaps as an alternative to LMR for further reducing moves I'm fairly certain are bad. I think I'll have to run some tests. :-)
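For comparison, the LMR scheme being considered as the alternative typically reduces late, quiet moves by depth; a minimal sketch (all thresholds and conditions illustrative, not tuned):

```c
#include <stdbool.h>

/* Minimal late-move-reduction sketch: after the first few moves at a
 * node, search quiet moves at reduced depth on the assumption they
 * will fail low; re-search at full depth if one surprises us (the
 * re-search is not shown here).  Thresholds are illustrative only. */
static int lmr_reduction(int move_index, int depth,
                         bool is_capture, bool gives_check, bool in_check)
{
    if (move_index < 3 || depth < 3)           /* early moves / shallow: full depth */
        return 0;
    if (is_capture || gives_check || in_check)  /* tactical moves: don't reduce */
        return 0;
    return move_index < 8 ? 1 : 2;             /* reduce later moves more */
}
```

The proposal in this thread would instead (or additionally) feed `move_index` into the null-move R rather than into a depth reduction of the move itself.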