Hello - as it's obvious I won't have any time for my chess engine Diep in the future, maybe it's interesting to leave some notes on search as I saw it when I was busy with it, back in 2012.
For a few years back then I was, now and then, slowly busy with a few new algorithms:
SICFM and sortscore.
SICFM seems pretty complex but is also very simple: it corrects the repetition score, which messes up the hashtable.
Now I cannot claim any credit for this observation - Johan de Koning already mentioned 'repetition scores messing up hashtables' back in the 1990s.
Yet you can write an algorithm to correct it, by keeping track in a simple table, for each move you play in makemove, of its status with respect to repetition. When you backtrack you can notice when such a score no longer influences the best score you found. And a cutnode that returns the repetition score obviously was influenced by it.
It takes some work on paper to figure out. Then you can store the flag in the hashtable and use it there. It also requires extra code at the hashtable probe, as sometimes you can still give the cutoff there, namely when you can prove you are at the same point as where the cutoff originally was given.
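A minimal sketch of how such bookkeeping could look, written as a plain negamax where a flag travels alongside the score. All names here are hypothetical illustrations, not Diep's actual code; the flag propagation is simplified and the hash probe and leaf handling are omitted:

[code]
#include <limits.h>

/* Hypothetical engine primitives -- illustrative names, not Diep's. */
typedef struct Position Position;
typedef int Move;
int  is_repetition(const Position *p);   /* path-dependent draw detect */
Move next_move(Position *p);             /* returns 0 when exhausted   */
void make_move(Position *p, Move m);
void unmake_move(Position *p, Move m);
void store_hash(Position *p, int score, int depth, int rep_flag);

enum { DRAW_SCORE = 0, MIN_SCORE = -INT_MAX };

/* The score travels together with a flag that says whether a
   repetition draw score influenced it. */
typedef struct { int score; int rep_flag; } Result;

Result search(Position *pos, int alpha, int beta, int depth)
{
    Result best = { MIN_SCORE, 0 };

    /* hash probe and depth == 0 / qsearch handling omitted */

    if (is_repetition(pos))              /* the path-dependent score */
        return (Result){ DRAW_SCORE, 1 };

    for (Move m; (m = next_move(pos)) != 0; ) {
        make_move(pos, m);
        Result r = search(pos, -beta, -alpha, depth - 1);
        r.score = -r.score;
        unmake_move(pos, m);

        /* when another move takes over as best, its flag takes over
           too: a non-repetition score erases the earlier influence */
        if (r.score > best.score)
            best = r;

        if (best.score >= beta) {
            /* a cutnode returning a rep-influenced score keeps the
               flag set, and the hashtable entry is flagged with it */
            store_hash(pos, best.score, depth, best.rep_flag);
            return best;
        }
        if (best.score > alpha)
            alpha = best.score;
    }
    store_hash(pos, best.score, depth, best.rep_flag);
    return best;
}
[/code]

At the probe, an entry carrying the flag may only give a cutoff when you can prove the current path repeats the same way as when the entry was stored - that is the extra hashtable code mentioned above.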
This is actually pretty important to get right for the next feature as well, which is the 'big deal':
Sortscore.
It's stored in the hashtable - yet that doesn't really reflect the work done for it.
I toyed with this in several ways. My observation was that the objectively best move hardly ever made it to the best move in <= alpha nodes.
Especially nullmove is a big killer there.
Yet you can do extra work, in fact do extra searches, to correct that sortscore and get better move ordering in <= alpha nodes.
This is a pretty crucial thing to do. If you get it right, there is no reason not to use huge reduction factors for LMR-type reduction schemes there.
Now LMR would not be a good name. In itself, 'inventing' reductions is totally trivial; every 'commercial' programmer did.
Singular extensions can already improve move ordering a tad there, especially at <= alpha nodes - yet by making a specific effort to improve the ordering there (there are many ways to do this, and I experimented with quite a few), huge reduction factors become possible.
And I really mean searching, in most positions, just one move without reducing it and reducing the rest bigtime.
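A sketch of what that extra work could look like: after a node fails low, re-search a handful of candidate moves with an open window at reduced depth, purely to refine the stored move and sortscore. The helper names and constants are hypothetical, not Diep's code:

[code]
/* Hypothetical helpers -- illustrative names, not Diep's code. */
typedef struct Position Position;
typedef int Move;
enum { INF = 1000000, BOUND_UPPER = 1 };
int  search(Position *p, int alpha, int beta, int depth);
void make_move(Position *p, Move m);
void unmake_move(Position *p, Move m);
void store_hash(Position *p, int score, int depth, int bound, Move best);

/* At a node that failed low (score <= alpha), re-search a few
   candidate moves with an open window at reduced depth, purely to
   pick a better hash move and sortscore for the next visit. */
void correct_sortscore(Position *pos, const Move *moves, int nmoves,
                       int depth)
{
    int  best_sort = -INF;
    Move best_move = 0;
    int  n = nmoves < 4 ? nmoves : 4;   /* a handful is enough */

    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        /* open window + reduced depth: extra nodes (the factor the
           text accepts) in exchange for a reliable ordering score */
        int s = -search(pos, -INF, INF, depth - 3);
        unmake_move(pos, moves[i]);
        if (s > best_sort) { best_sort = s; best_move = moves[i]; }
    }
    store_hash(pos, best_sort, depth, BOUND_UPPER, best_move);
}
[/code]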
So what I would be interested in for computer chess, if I had time (which I never will have again), would be to 'improve' Diep by reversing the way it searches.
That is: make the utmost effort, even do extra searches, even if it costs a FACTOR 3 MORE NODES, to get the sorting in <= alpha nodes correct.
Then near the root do huge reductions, and near the quiescence search do no forward pruning at all, except nullmove for the last 3 plies.
This will get huge search depths while behaving more similarly to how Leela Zero from GCP searches.
Yet without missing anything, and without the huge overhead neural networks give.
The disadvantage is that you will need quite some search depth for this strategy to work. In short, one would need to search tens of millions of nodes for this to play strong chess. Yet it might result in a far better sort of engine overall.
I'm pretty confident it will.
Obviously also a very good qsearch, with some checking moves done (not all, just a few) - see the sketch below.
An approach like this would get a better branching factor with each additional ply.
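On the qsearch point: a minimal sketch of doing only a few checking moves, assuming hypothetical move generators that can cap how many checks they emit:

[code]
/* Hypothetical helpers -- illustrative names, not Diep's code. */
typedef struct Position Position;
typedef int Move;
int  evaluate(const Position *p);
int  generate_captures(Position *p, Move *out);
int  generate_checks(Position *p, Move *out, int maxchecks);
int  is_legal(Position *p, Move m);
void make_move(Position *p, Move m);
void unmake_move(Position *p, Move m);

int qsearch(Position *pos, int alpha, int beta, int qply)
{
    int stand = evaluate(pos);          /* stand-pat score */
    if (stand >= beta) return stand;
    if (stand > alpha) alpha = stand;

    Move moves[128];
    int n = generate_captures(pos, moves);

    /* "not all, just a few": only in the first plies of qsearch add
       a small, capped number of checking moves */
    if (qply < 2)
        n += generate_checks(pos, moves + n, 3);

    for (int i = 0; i < n; i++) {
        if (!is_legal(pos, moves[i])) continue;
        make_move(pos, moves[i]);
        int s = -qsearch(pos, -beta, -alpha, qply + 1);
        unmake_move(pos, moves[i]);
        if (s >= beta) return s;
        if (s > alpha) alpha = s;
    }
    return alpha;
}
[/code]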
Re: Search
Postscriptum: many factors influence the search score. In the case of Diep these are:
a) nullmove
b) hashtable (but I store it in the hashtable, so solved)
c) quiescencesearch
In fact only an open-window search would ever completely correct the score. Yet we're not looking for the 'best' way to correct it, we're looking for a quick way. So one option is wasting a few more nodes in qsearch; in my case I also experimented with a static function that estimated how much material one specific side could potentially win in a given position P. That function clearly wasn't a success here.
Obviously you do not need to correct that much above beta for the sortscore to get corrected a lot further down toward the root.
Another factor in Diep - though I'm not sure it's relevant at all if the sortscore works well - is:
d) singular extensions
If the sortscore gets corrected well, I doubt whether singular extensions are still worth it - but that would be an interesting subject on its own.
The window used to detect singular extensions is both a blessing for doing reductions and a big curse if you try to correct the sortscore.
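For reference, a sketch of the usual singular-extension test and the window it uses; the margin, reduced depth and helper names are illustrative, not Diep's:

[code]
/* Hypothetical helpers -- illustrative names. */
typedef struct Position Position;
typedef int Move;
int  search(Position *p, int alpha, int beta, int depth);
Move next_move(Position *p);             /* returns 0 when exhausted */
void make_move(Position *p, Move m);
void unmake_move(Position *p, Move m);

/* Search every move except the hash move with a zero-width window
   just below the hash score, at reduced depth.  If they all fail
   low, the hash move is "singular" and can be extended.  The margin
   is the window the text calls a blessing (cheap reduction info)
   and a curse (it biases any sortscore you correct with it). */
int is_singular(Position *pos, Move hash_move, int hash_score, int depth)
{
    const int margin = 50;                 /* illustrative value */
    int sbeta = hash_score - margin;

    for (Move m; (m = next_move(pos)) != 0; ) {
        if (m == hash_move) continue;      /* exclude the TT move */
        make_move(pos, m);
        int s = -search(pos, -sbeta, -(sbeta - 1), depth / 2);
        unmake_move(pos, m);
        if (s >= sbeta) return 0;          /* a rival move exists */
    }
    return 1;                              /* hash move is singular */
}
[/code]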
edit2: So make the reduction dependent upon the depthleft and increase it slowly with bigger depthleft, and slowly increase the nullmove reduction as well.
Experiments in Diep here are very positive, yet it's tough to correct the move ordering well at <= alpha nodes. Neither IID nor singular extensions cut it there. They improve things some (if Diep runs with singular extensions I can basically turn off IID), yet not enough.
You really want to search one move in each position and reduce the rest bigtime. Nullmove obviously has to have a far larger reduction factor than the reduction you give in the 'LMR' scheme - a sketch of such a schedule follows below. Now I know some 'commercial programmers' from the past didn't like Tord claiming this algorithm, as it's trivial to invent and many already used it - but he did publicly write about it, which makes it completely legitimate to me. So I refer to it by that name, whereas what I describe here is clearly a different idea.
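A sketch of such a reduction schedule, with made-up constants purely to show the shape - the 'LMR' reduction grows slowly with depthleft, and nullmove's R stays strictly above it:

[code]
/* Illustrative reduction schedule -- constants are guesses, not Diep's. */

/* LMR-style reduction: the first move is never reduced, the rest are
   reduced heavily, growing slowly with the remaining depth. */
int lmr_reduction(int depthleft, int movenum)
{
    if (movenum == 0)
        return 0;                 /* the one move searched at full depth */
    return 2 + depthleft / 6;     /* grows slowly with depthleft */
}

/* Nullmove R kept strictly larger than the LMR reduction at the same
   depthleft, also growing slowly. */
int nullmove_reduction(int depthleft)
{
    return 3 + depthleft / 4;
}
[/code]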
Tests in Diep with a badly corrected sortscore and larger reductions for 'LMR' didn't give higher elo at slower time controls. At blitz all sorts of things are possible there: Diep not being a fast searcher, anything that gets you deeper works at blitz of course.
Yet the concept here is simply to adjust all algorithms such that near the root mainly one move (at non-PV nodes) gets searched without reduction and the rest gets reduced major league. All this works fantastically well if you have good odds of ordering the position very well in <= alpha nodes, as your hashtable works well because of the sortscore algorithm.
Re: Search
So to summarize: first the 'sortscore' idea, and if that works well, the resulting new way to search.
Sortscore: try to improve the sortscore you store at <= alpha nodes, so that a DIFFERENT move might make it into the hashtable as the best move in <= alpha nodes than would otherwise have been stored (a sketch of such a store follows at the end).
Global search idea: get more selective as depthleft increases (obviously this happens with additional iterations), do not mix in too many other algorithms, and - very important - do not be so selective near the leaves of the search.
If you do both - AND are selective near the leaves, even forward pruning the last 9+ plies, AND very selective near the root - this cross-mixes scores, comparing different depths with each other. Great for bullet and blitz maybe - not great if you know in advance you're going to search dozens of millions of nodes or even way more.
If the sortscore gets corrected pretty well, it's much safer to use huge reduction factors at <= alpha nodes. That obviously means nullmove needs even bigger reduction factors - which is why near the leaves I would want to be less selective than most engines of today are, because you always want to make a few moves to get some sort of estimate of what score a seemingly stupid move gets, whereas forward pruning the last 9 plies or so is alphabeta dependent.
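A sketch of what the store at such a fail-low node could look like; the entry layout is illustrative, not Diep's actual hashtable format:

[code]
/* Illustrative hash entry layout -- not Diep's actual format. */
enum { BOUND_UPPER = 1 };

typedef struct {
    unsigned long long key;     /* zobrist key                       */
    short    score;             /* here: the corrected sortscore     */
    unsigned char depth;
    unsigned char bound;        /* BOUND_UPPER at a <= alpha node    */
    unsigned short move;        /* the corrected best move           */
} HashEntry;

/* At a fail-low node, store the move the correction step picked --
   which may be a DIFFERENT move than the one that happened to score
   "best" inside the narrow window -- so the next visit orders well. */
void store_fail_low(HashEntry *e, unsigned long long key,
                    int sortscore, int depth, unsigned short best_move)
{
    e->key   = key;
    e->score = (short)sortscore;
    e->depth = (unsigned char)depth;
    e->bound = BOUND_UPPER;
    e->move  = best_move;
}
[/code]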