Reaching high search depths

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Reaching high search depths

Post by niel5946 »

As of today, my engine reaches a depth, from the starting position, of around 13-14 plies in 10 seconds, and I would like this to be higher. I just don't understand how modern engines get such low branching factors that they're able to see 20-30 plies deep in a given position.
What are the methods used to get to these depths? I read that LMR (maybe in conjunction with LMP) can usually get the branching factor down to below 2 for a chess engine, but my implementation only gets me to around 3.5 (of course with a bunch of other pruning methods).

How do modern chess engines achieve such low branching factors, and as a result such deep searches, without a massive loss of accuracy?
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Reaching high search depths

Post by mvanthoor »

niel5946 wrote: Mon Mar 01, 2021 1:37 pm How do modern chess engines achieve such low branching factors, and as a result such deep searches, without a massive loss of accuracy?
Pruning. Alpha-beta is exact: minimax and alpha-beta should give the same result. On top of that, you can do:

- PVS (Principal Variation Search, an optimization of alpha-beta)
- Aspiration windows (also an optimization)
- Null-move pruning ("If I do nothing in this position and I'm STILL better, I can cut the entire subtree off")

AFAIK, this should still be exact; but after that, you get into the realm of "real" pruning: discarding moves you think will *probably* not be good... but this does indeed take some risks. (It is somewhat akin to a strong chess player saying: "That move doesn't look good... it feels wrong, but I don't know exactly why.")
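The claim that alpha-beta and PVS stay exact can be checked on a toy tree. A minimal C++ sketch (illustrative, not engine code): it runs PVS and plain negamax over the same complete tree of branching factor 3 and depth 3, and both return the same root score.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Complete toy game tree: branching factor B, depth D, with leaf scores
// generated deterministically. Node k has children k*B+0 .. k*B+(B-1).
constexpr int B = 3, D = 3;

std::vector<int> make_leaves() {
    std::vector<int> v(27);                       // B^D leaves
    for (int i = 0; i < 27; ++i) v[i] = (i * 37) % 23 - 11;
    return v;
}
const std::vector<int> leaves = make_leaves();

// Plain negamax: visits every node, no pruning.
int negamax(int depth, int node) {
    if (depth == 0) return leaves[node];
    int best = -1000000;
    for (int c = 0; c < B; ++c)
        best = std::max(best, -negamax(depth - 1, node * B + c));
    return best;
}

// PVS: the first child gets the full window; later children get a cheap
// null-window probe, re-searched only if the probe says they might beat
// the first move. The re-search is what keeps the result exact.
int pvs(int depth, int node, int alpha, int beta) {
    if (depth == 0) return leaves[node];
    for (int c = 0; c < B; ++c) {
        int child = node * B + c;
        int score;
        if (c == 0) {
            score = -pvs(depth - 1, child, -beta, -alpha);
        } else {
            score = -pvs(depth - 1, child, -alpha - 1, -alpha);  // probe
            if (score > alpha && score < beta)                   // probe failed
                score = -pvs(depth - 1, child, -beta, -alpha);   // re-search
        }
        alpha = std::max(alpha, score);
        if (alpha >= beta) break;  // beta cutoff
    }
    return alpha;
}
```

With a full window at the root, both searches agree on the score; PVS just gets there while visiting fewer nodes when move ordering is good.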
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reaching high search depths

Post by hgm »

Just reduce more. That is guaranteed to give you higher depth. It doesn't necessarily make the engine stronger, though. For that you would have to be very specific in what you reduce, and what not.

One approach could be this: label moves as 'new' when their path intersects the from- or to-squares of the previous two ply, and reduce moves that are not new one ply extra in LMR. Reduce non-captures more than captures, and within the captures, reduce 'obvious gains' (L x H or capturing unprotected pieces) less than other captures. If most moves suffer more LMR, the reduction of the null move can also be increased.
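A rough sketch of that schedule in C++. The Move struct, its field names, and the ±1 deltas are hypothetical, and 'path intersects' is simplified here to matching from/to squares only; a real implementation would also test whether a sliding move's path crosses those squares.

```cpp
#include <cassert>

// Hypothetical move representation, for illustration only.
struct Move { int from, to; bool is_capture; int see_gain; };

// 'New' in hgm's sense, simplified: the move touches the from/to squares
// of either of the previous two plies.
bool is_new(const Move& m, const Move& prev1, const Move& prev2) {
    auto touches = [&](const Move& p) {
        return m.from == p.from || m.from == p.to ||
               m.to   == p.from || m.to   == p.to;
    };
    return touches(prev1) || touches(prev2);
}

// Sketch of the proposed schedule: extra plies of reduction on top of a
// base LMR amount. The exact deltas are illustrative, not tuned values.
int reduction(const Move& m, const Move& p1, const Move& p2, int base_lmr) {
    int r = base_lmr;
    if (!m.is_capture)        r += 1;  // reduce non-captures more
    else if (m.see_gain > 0)  r -= 1;  // 'obvious gains' reduced less
    if (!is_new(m, p1, p2))   r += 1;  // unrelated moves: one extra ply
    return r < 0 ? 0 : r;
}
```

The effect is that a random quiet shuffle far from the recent action gets the heaviest reduction, while a winning capture connected to the last two moves is barely reduced at all.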
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Reaching high search depths

Post by Dann Corbit »

I don't know if it is still true, but for some time the engine with the smallest branching factor was ExChess.
It was not a top ten engine, but pretty strong.
But the point here is that pruning like mad is not good enough for strength.
It is like Kenny Rogers said,
"Every gambler knows
That the secret to survivin'
Is knowin' what to throw away
And knowin' what to keep"
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Sesse
Posts: 300
Joined: Mon Apr 30, 2018 11:51 pm

Re: Reaching high search depths

Post by Sesse »

mvanthoor wrote: Mon Mar 01, 2021 3:04 pm - Null-move pruning ("If I do nothing in this position and I'm STILL better, I can cut the entire subtree off")

AFAIK, this should still be exact; but after that, you get into the realm of "real" pruning: discarding moves you think will *probably* not be good... but this does indeed take some risks. (It is somewhat akin to a strong chess player saying: "That move doesn't look good... it feels wrong, but I don't know exactly why.")
Null-move is not always exact; if there are zugzwangs, strict null-move pruning will make your engine blind to them. (I think I've read somewhere that the primary gains of null-move pruning come from putting lots of good moves in the hash table, improving move ordering!) But you can do null-move reduction instead of null-move pruning, i.e., reduce the search depth by a lot whenever the null move fails high. This will make your engine see the zugzwangs eventually, just later in the search than it otherwise would.
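The difference between the two policies can be sketched as a simple depth rule (the reduction amount and the one-ply floor are illustrative choices, not from any particular engine):

```cpp
#include <cassert>

// What to do when the null-move probe fails high at remaining depth
// `depth`. Hard pruning cuts the node immediately, which is blind to
// zugzwang; the reduction variant keeps searching, only much shallower,
// so zugzwangs are still found — just later.
enum class NullPolicy { Prune, Reduce };

// Remaining depth with which to continue the node (0 = cut it off).
int depth_after_null_fail_high(int depth, NullPolicy policy) {
    const int extra_reduction = 4;          // illustrative amount
    if (policy == NullPolicy::Prune) return 0;
    int d = depth - extra_reduction;
    return d > 0 ? d : 1;  // keep at least one ply so nothing stays invisible
}
```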
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reaching high search depths

Post by hgm »

'Exact' is an ill-defined term. If it is taken to mean that, despite the pruning, the returned score and move should always be the same as those of a fixed-depth search of the same nominal depth, then only alpha-beta pruning would qualify. Any shaping of the search tree, even by reductions rather than hard, permanent pruning, would make you see some moves earlier (in terms of the number of nodes searched, since for a shaped tree 'depth' becomes a meaningless concept), but other moves later.

You could also take 'exact' to mean "when searching sufficiently deep, nothing can be overlooked". Most reductions provide that. Null-move pruning does not, because of the zugzwangs. But the "verified null-move pruning" as described by Sesse would. Even methods that are exact in this sense will necessarily see some tactics only later. That is the entire goal: search branches that have a higher likelihood to be of interest more thoroughly, at the expense of the branches with a low likelihood to be of interest. Sometimes the latter will offer a surprise.
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Reaching high search depths

Post by niel5946 »

hgm wrote: Mon Mar 01, 2021 3:04 pm Just reduce more. That is guaranteed to give you higher depth. It doesn't necessarily make the engine stronger, though. For that you would have to be very specific in what you reduce, and what not.

One approach could be this: label moves as 'new' when their path intersects the from- or to-squares of the previous two ply, and reduce moves that are not new one ply extra in LMR. Reduce non-captures more than captures, and within the captures, reduce 'obvious gains' (L x H or capturing unprotected pieces) less than other captures. If most moves suffer more LMR, the reduction of the null move can also be increased.
I have experimented with making LMR more aggressive in my engine today and yesterday, but it loses around 300 elo (tc: 0/5s+0.1s for 300 games). The only conditions for the LMR are that depth > 2 and moves searched > 1 (4 if I'm in a PV node). Then I increase/decrease that reduction based on different criteria, for example: decrease if we're in check or giving check, decrease if it is a killer move, increase for under-promotions and bad captures, etc. The initial reduction is based on the formula R(depth, moveCount) = (log(2*depth)*log(2*moveCount))/2.75, and of course I do a re-search at full depth if the reduced search raises alpha.

I just don't see how I can do this without making the engine tactically unstable and as a result losing elo... Is the key to aggressive LMR that is still tactically sound just tweaking what to reduce and by how much, or is there something I'm missing? Some conditions under which LMR shouldn't be allowed?
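For concreteness, the stated formula with those adjustments might look like this in C++ (the ±1 deltas and the use of the natural logarithm are my assumptions, not necessarily Loki's actual code):

```cpp
#include <cassert>
#include <cmath>

// Base formula from the post:
//   R(depth, moveCount) = log(2*depth) * log(2*moveCount) / 2.75
// (natural log assumed). The adjustments mirror the criteria listed
// above; the exact +-1 deltas here are illustrative.
int lmr_reduction(int depth, int move_count,
                  bool in_or_gives_check, bool is_killer,
                  bool bad_capture_or_underpromotion) {
    double r = std::log(2.0 * depth) * std::log(2.0 * move_count) / 2.75;
    if (in_or_gives_check)              r -= 1.0;  // decrease near checks
    if (is_killer)                      r -= 1.0;  // decrease for killers
    if (bad_capture_or_underpromotion)  r += 1.0;  // increase for junk moves
    int ri = static_cast<int>(r);                  // truncate toward zero
    return ri < 0 ? 0 : ri;                        // never 'reduce' negatively
}
```

At depth 8 and move 20 this gives a base reduction of 3 plies, dropping to 2 for a checking move, which is roughly in line with the log-log schedules other engines use.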
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reaching high search depths

Post by hgm »

I must admit I don't have very much experience with these aggressive reductions. I suppose you do sort by history? One explanation could be that your history heuristic is not working well enough to serve as a basis for determining the increasing reduction on later moves.

The main problem with large reductions is that they make the engine somewhat blind to the opponent 'sneaking up on it' with quiet moves. This was why I suggested that sequences of related ('tactically connected') moves should perhaps be reduced less. Then coordinated plans (such as moving a piece away to let another piece pass it, or relocating a knight in two or three moves) won't suffer so much reduction even when they stay entirely 'under the (alpha) radar', while haphazard random shuffling of pieces is still very much economized on.
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Reaching high search depths

Post by cdani »

niel5946 wrote: Tue Mar 02, 2021 1:48 pm I have experimented with making LMR more aggressive in my engine today and yesterday, but it loses around 300 elo (tc: 0/5s+0.1s for 300 games). The only conditions for the LMR are that depth > 2 and moves searched > 1 (4 if I'm in a PV node)...
The stronger your engine is, the more you can reduce. In Andscacs I increased LMR many times as it progressed, and I likewise increased many of the other pruning/reduction parameters again and again. Just adding and tweaking this stuff, I have probably made a few thousand versions.
So maybe your engine is too weak to support big reductions for the moment, probably due to a lack of other knowledge that would let it order moves better. If your moves are ordered badly, you are reducing good moves, and that drives the engine crazy.
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Reaching high search depths

Post by niel5946 »

cdani wrote: Tue Mar 02, 2021 7:54 pm
niel5946 wrote: Tue Mar 02, 2021 1:48 pm I have experimented with making LMR more aggressive in my engine today and yesterday, but it loses around 300 elo (tc: 0/5s+0.1s for 300 games). The only conditions for the LMR are that depth > 2 and moves searched > 1 (4 if I'm in a PV node)...
The stronger your engine is, the more you can reduce. In Andscacs I increased LMR many times as it progressed, and I likewise increased many of the other pruning/reduction parameters again and again. Just adding and tweaking this stuff, I have probably made a few thousand versions.
So maybe your engine is too weak to support big reductions for the moment, probably due to a lack of other knowledge that would let it order moves better. If your moves are ordered badly, you are reducing good moves, and that drives the engine crazy.
I see that bad move ordering will result in LMR not working properly, but the thing is that I already have hash move, IID, MVV/LVA, killer moves, countermove heuristic and history heuristic (sorted in that order), and I get a beta cutoff on the first move around 85%-90% of the time. So I wouldn't suspect this to be the problem, but perhaps it is my (very) simple evaluation, consisting of only material, PSQT and pawn structure / passed pawns? Would you expect LMR to work better once I add mobility and king safety terms to the eval?
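That ordering can be sketched as a scoring ladder (IID is left out since it only serves to find a hash move when the table has none; the MoveInfo fields and tier offsets are illustrative, chosen only to keep the tiers apart):

```cpp
#include <cassert>

// One score per move; sorting descending by this score yields the order
// listed above: hash move, captures by MVV/LVA, killers, countermove,
// then quiets by history.
struct MoveInfo {
    bool is_hash_move   = false;
    bool is_capture     = false;
    int  victim = 0, attacker = 0;   // piece values for MVV/LVA
    bool is_killer      = false;
    bool is_countermove = false;
    int  history        = 0;         // history-heuristic score (< 600000)
};

int order_score(const MoveInfo& m) {
    if (m.is_hash_move)   return 1000000;
    if (m.is_capture)     return 800000 + 10 * m.victim - m.attacker;  // MVV/LVA
    if (m.is_killer)      return 700000;
    if (m.is_countermove) return 600000;
    return m.history;                // quiets ranked by history counters
}
```

One known weakness of this exact ladder is that it ranks even losing captures above killers; many engines instead run SEE on captures and demote the bad ones below the quiet-move tiers.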
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |