A selectivity thought experiment
-
ZirconiumX
- Posts: 1361
- Joined: Sun Jul 17, 2011 11:14 am
- Full name: Hannah Ravensloft
As we have seen, Stockfish seems to have reached a tipping point where almost anything new throws its selective search badly out of balance. So I ran a simple thought experiment to measure not how selective a search is, but how correct it is (i.e. how much valuable material is thrown out up front).
Let's take a simple alpha-beta search with null-move pruning and optimal move ordering.
For each null-move search that fails low I increment NullFailLow, and for each node that actually fails low I increment ActualFailLow. Let NE = NullFailLow / ActualFailLow.
A null-move fail low may indicate either an All node or a PV node, so our optimum number is not going to be a perfect 100%.
NE > 100% indicates that the null-move restrictions are too relaxed, which slows the search down because suboptimal nodes are being searched.
NE < ~99% indicates that the null-move restrictions are too tight, which slows the search down through a larger branching factor.
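To make that bookkeeping concrete, here is a minimal sketch of the two counters inside a bare-bones negamax, written against the python-chess library. Only the counter names and the NE ratio come from the description above; the reduction R, the single not-in-check restriction, the material-only evaluation and the lack of any real move ordering are placeholder choices for illustration, not a tuned search.

Code:

import chess

R = 2                       # null-move depth reduction (illustrative value)
VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
          chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

null_fail_low = 0           # NullFailLow: null-move searches that failed low
actual_fail_low = 0         # ActualFailLow: nodes whose final score failed low

def evaluate(board):
    """Plain material count from the side to move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def search(board, depth, alpha, beta, allow_null=True):
    global null_fail_low, actual_fail_low
    if board.is_checkmate():
        return -100000
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    alpha_orig = alpha

    # Null-move pruning, restricted only by "not in check" in this sketch.
    if allow_null and depth > R and not board.is_check():
        board.push(chess.Move.null())
        score = -search(board, depth - 1 - R, -beta, -beta + 1, allow_null=False)
        board.pop()
        if score >= beta:
            return beta                 # null-move cutoff
        null_fail_low += 1              # the null move failed low at this node

    best = -10**9
    for move in board.legal_moves:      # no move ordering in this sketch
        board.push(move)
        best = max(best, -search(board, depth - 1, -beta, -alpha))
        board.pop()
        if best >= beta:
            return best                 # ordinary fail high (cut node)
        alpha = max(alpha, best)

    if best <= alpha_orig:
        actual_fail_low += 1            # the node itself failed low (All node)
    return best

if __name__ == "__main__":
    # Depth 4 keeps this pure-Python demo reasonably quick.
    search(chess.Board(), 4, -10**9, 10**9)
    print("NullFailLow =", null_fail_low, "ActualFailLow =", actual_fail_low)
    if actual_fail_low:
        print("NE = {:.1%}".format(null_fail_low / actual_fail_low))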
Plainly, if every new search pruning condition is tuned to this theoretically safe principle, we will end up with something slow, but thanks to its close-to-optimal search it will be able to play on the same level as the modern engines, because it will not have missed anything important.
Maybe Stockfish could improve if it focussed on correctness, rather than speed.
Of course, this is just my 2 cents.
Matthew:out
tu ne cede malis, sed contra audentior ito
-
Daniel Shawul
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: A selectivity thought experiment
I am not sure I understand your point correctly, but aren't we using many restrictions before trying a null move? Your ratio NE will most probably be much less than 1 in that case. In the hypothetical case of trying the null move all the time, the ratio should be greater than 1, since it would then be (ALL or PV) / (ALL). Of course the null move alone can't produce all the cutoffs, and there are many cases where we fail high only after making a real move. So it seems a bit complex to tell from the ratio NE whether the engine is being conservative with its pruning...
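For reference, "many restrictions" usually means a gate along these lines before the null move is even attempted; the particular conditions, minimum depth and margin below are illustrative guesses written against the python-chess API, not the rules of any specific engine.

Code:

import chess

def should_try_null(board, depth, beta, static_eval, min_depth=3, margin=0):
    """Typical-looking preconditions for attempting a null move (illustrative)."""
    if depth < min_depth:                  # too shallow to be worth a null search
        return False
    if board.is_check():                   # passing the move while in check is illegal
        return False
    if static_eval + margin < beta:        # unlikely to fail high even with a free tempo
        return False
    # Guard against zugzwang-prone positions: the mover needs a non-pawn piece.
    minors_and_majors = (chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN)
    if not any(board.pieces(pt, board.turn) for pt in minors_and_majors):
        return False
    return True

Every such condition removes some would-be null-move tries at nodes that still end up failing low, so NullFailLow shrinks while ActualFailLow does not, which is why NE drifts well below 1 as the restrictions pile up.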
-
ZirconiumX
- Posts: 1361
- Joined: Sun Jul 17, 2011 11:14 am
- Full name: Hannah Ravensloft
Re: A selectivity thought experiment
In layman's terms: I personally think that a theoretically correct engine will be more likely to dominate the next decade or so than an extremely selective engine of today because it misses less. I present a method which may be useful for optimizing an engine for correctness.
I think Dan Homan did some experiments and found that EXChess with just alpha-beta beat normal EXChess easily at fixed depth but lost against normal EXChess on time.
I am talking about an engine which theoretically misses nothing, but still prunes the All nodes and so on easily, being able to beat a slim-tree searcher.
Matthew:out
tu ne cede malis, sed contra audentior ito