Bigger steps when branching factor < 2?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

marcelk
Posts: 348
Joined: Sat Feb 27, 2010 12:21 am

Bigger steps when branching factor < 2?

Post by marcelk »

Has anyone experimented with larger increments in iterative deepening once the engine's branching factor drops well below 2.0?

It seems like such a waste to keep incrementing depth by 1...
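Something like this, as a minimal sketch (next_depth and the 2.0 threshold are made up for illustration, not from any engine):

```c
#include <assert.h>

/* Minimal sketch of the idea: once the measured effective branching
   factor (EBF) falls below 2.0, step the iteration depth by 2 plies
   instead of 1. next_depth() is invented for illustration only. */
static int next_depth(int depth, double ebf)
{
    return depth + (ebf < 2.0 ? 2 : 1);
}
```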
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Bigger steps when branching factor < 2?

Post by bob »

marcelk wrote:Has anyone experimented with larger increments in iterative deepening once the engine's branching factor drops well below 2.0?

It seems like such a waste to keep incrementing depth by 1...
I have. And while the idea seems completely logical, I simply could not get it to work in terms of reducing the nodes spent to reach a particular depth. In fact, it is even possible that increments of less than 1 will work better: the less "new" stuff you expose in the next iteration, the lower the probability that the tree will blow up. Partial iterations are problematic for my code, but some testing suggested they were not a waste, with the usual gain in some positions and losses in others...

There certainly might be a way to get the benefit of both. Obviously, if you reduce the increment you reduce the effective branching factor, since you are not going a full ply deeper. Perhaps one might try setting a target EBF, reducing the increment when above it and increasing it when below. I have not tried that, but have a note in my to-do list to think about it at some point...
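A minimal sketch of that feedback idea (the function name and thresholds are hypothetical, not Crafty's code):

```c
#include <assert.h>

/* Hypothetical sketch of the feedback scheme: compare the measured
   EBF of the last iteration against a target and adjust the next
   depth increment accordingly. All thresholds are illustrative. */
static double next_increment(double measured_ebf, double target_ebf)
{
    if (measured_ebf > target_ebf)
        return 0.5;                  /* tree blowing up: smaller step */
    if (measured_ebf < 0.75 * target_ebf)
        return 2.0;                  /* very stable tree: bigger step */
    return 1.0;                      /* otherwise the usual one ply */
}
```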
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Bigger steps when branching factor < 2?

Post by diep »

bob wrote:
marcelk wrote:Has anyone experimented with larger increments in iterative deepening once the engine's branching factor drops well below 2.0?

It seems like such a waste to keep incrementing depth by 1...
I have. And while the idea seems completely logical, I simply could not get it to work in terms of reducing the nodes spent to reach a particular depth. In fact, it is even possible that increments of less than 1 will work better: the less "new" stuff you expose in the next iteration, the lower the probability that the tree will blow up. Partial iterations are problematic for my code, but some testing suggested they were not a waste, with the usual gain in some positions and losses in others...

There certainly might be a way to get the benefit of both. Obviously, if you reduce the increment you reduce the effective branching factor, since you are not going a full ply deeper. Perhaps one might try setting a target EBF, reducing the increment when above it and increasing it when below. I have not tried that, but have a note in my to-do list to think about it at some point...
Around 1999 I investigated this extensively, back when Diep was using reductions in search (some version of what you now call LMR). Back then LMR of course didn't work, as Diep didn't get enough nps and, more importantly, didn't get through the tactical barrier. You typically see LMR working for most engines now because they get through the tactical barrier, where all the forward pruning hardly improves tactical strength at all; in fact the type of forward pruning used just reduces tactical strength and makes the search look more like a Monte Carlo search, where things could easily flip the other way around.

It could be this phenomenon of the selectivity missing things that explains why always incrementing by 1 worked best for Diep in my experiments: it is a kind of 'worst-case elimination', whereas bigger increments are very tricky.

Of course I'm not speaking about skipping the first few plies, which you already get out of your hash table because you searched deeper than that anyway, nor about types of search where reductions of more than 1 ply occur; in such cases it might work for the first few plies.

Proving mathematically, or better, showing by means of statistical data, why an increment of 1 works better is of course not so easy. It is one of those things that is always interesting to research.
Uri Blass
Posts: 10282
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Bigger steps when branching factor < 2?

Post by Uri Blass »

LMR works at all time controls, including blitz and bullet, so I do not accept the claim that LMR did not work because of some tactical barrier.

You probably did not implement LMR correctly in 1999, which means you did not try reducing with the right conditions.
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Bigger steps when branching factor < 2?

Post by diep »

Uri Blass wrote:LMR works at all time controls, including blitz and bullet, so I do not accept the claim that LMR did not work because of some tactical barrier.

You probably did not implement LMR correctly in 1999, which means you did not try reducing with the right conditions.
You are wrong. Back then hardware was a single CPU at 450MHz or so, with bad evaluation functions, not like today's. Most evaluations back then were especially bad with passed pawns, and most had next to no king safety, none of the successful minimal-effort eval features some have nowadays.

If you look at it objectively, back then the best bullet engines online were Ferret and Hiarcs, and Diep also did relatively OK in bullet.

All of these were engines that searched very efficiently and used no reductions at all.

So back then reductions only 'worked' for those who did not know how to search, and even for them, no one got through the tactical barrier.

That started to change a few years later with DeepSjeng. If I remember well, its author was one of the first to do what we now call LMR in a well-tuned engine. Yet even at slow time controls it reached only 16-17 ply or so maximum, while also forward pruning the last 2 plies.

So only in non-bullet conditions did it get through the tactical barrier.

You instantly get 20 ply or so now with the fast engines, and they don't get mated at all, whereas back then missing tactics meant you would get mated in every game by patzer moves from opponents.

That just didn't happen back then; you cannot compare today's engines with what we had.

Back then the strongest bullet engines focused on being stronger in tactics, and that was everything in bullet and blitz.

Today's LMR is really weak tactically, and the forward pruning makes it even weaker.

[d]r2qrb1k/1p1b2p1/p2ppn1p/8/3NP3/1BN5/PPP3QP/1K3RR1 w - - bm e5; 5

Today's engines like SF, which is highly rated, need something like 31 ply and over an hour on my old quad-core machine to find e5. On the same machine Diep needs a bit more than 20 minutes, and that's without upgrading it to the new search; its evaluation function is still pretty much the 2006 one.

Hiarcs is fastest and needs 20 minutes here.

Back then, in 1999, it took Ferret a whole 24 hours to find e5! here, and it was the only engine on the planet that found it at the time.

Nothing else found it. Ferret found it at 19 ply or so.

If you had had engines back then that needed 31 ply to find e5 here, you would have lost everything with them, and YOU would have laughed at them, saying "not tactically strong enough". And since you denied back then that the tactical barrier existed, just like a lot of others here did, you would have laughed even louder at engines that focused on seeing things deeper positionally instead of just tactically.

So the evidence for the tactical barrier is very, very clear.

Thank you,
Vincent
Zlaire
Posts: 62
Joined: Mon Oct 03, 2011 9:40 pm

Re: Bigger steps when branching factor < 2?

Post by Zlaire »

bob wrote:In fact, it is even possible that increments of less than 1 will work better.
This sounds quite intriguing; I never thought of it before.

Most engines use some sort of fractional ply for extensions. Has anyone tested this in iterative deepening?

I guess this could work in both directions, say 0.5 or 1.5 plies per iteration. It would need an engine that can handle such depths (and not just integer-divide them both to 1).
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Bigger steps when branching factor < 2?

Post by diep »

Zlaire wrote:
bob wrote:In fact, it is even possible that increments of less than 1 will work better.
This sounds quite intriguing; I never thought of it before.

Most engines use some sort of fractional ply for extensions. Has anyone tested this in iterative deepening?

I guess this could work in both directions, say 0.5 or 1.5 plies per iteration. It would need an engine that can handle such depths (and not just integer-divide them both to 1).
Yes, of course this was trivial to try. Around 1999 I tried the following approach:

1 ply = 2 units
reduced moves = 4 units

You can mathematically prove that it is optimal to reduce bad moves by 1 ply. Reductions of more than 1 full ply are very, very tricky; not impossible, yet very tricky. Now, if we extend a move, we divide its number of units by 2.

So every move always costs at least 1 unit. We never extend anything beyond that.

Now, based upon a rule set or backtracking, we can still extend a reduced move by 1 unit, making it effectively 3 units, say if the move is only "a little bad".

The idea of such a fractional system is that it works better with the hash table.
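
As a sketch, the bookkeeping could look like this (the function and flag names are invented here just to illustrate the unit scheme above, they are not Diep's code):

```c
#include <assert.h>

/* Sketch of the unit scheme above: 1 ply = 2 units. A normal move
   costs 2 units, a reduced move 4 (a 1-ply reduction), a reduced
   move that gets 1 unit back ("a little bad") costs 3, and an
   extension halves the cost with a floor of 1 unit. */
enum { UNITS_PER_PLY = 2 };

static int move_cost(int reduced, int unreduce_one_unit, int extended)
{
    int cost = UNITS_PER_PLY;        /* normal move: one full ply */
    if (reduced) {
        cost = 2 * UNITS_PER_PLY;    /* reduce by exactly one ply */
        if (unreduce_one_unit)
            cost -= 1;               /* give one unit back: 3 units */
    }
    if (extended) {
        cost /= 2;                   /* extension divides units by 2 */
        if (cost < 1)
            cost = 1;                /* every move costs >= 1 unit */
    }
    return cost;
}
```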

Note that in most cases fractional depths were used for extensions. My take on that is that if an extension doesn't merit a full ply, it is not a good idea to have the extension in the first place, so I'm not a big fan of extending by just 0.5 ply.

Undoing a just-applied reduction is, however, another issue.

I tried a lot of other variations besides this one as well.

In such schemes it makes sense to have iterations increase by 1 unit each time. Another iteration comes at a cheap price thanks to hash-table reuse, and meanwhile it might show you your main line 1 ply deeper.

The whole problem is that you can mathematically prove that reducing by 1 ply is the most effective form of reduction. Any other form, like reducing by 2 plies, takes a much bigger risk while not winning much ply depth. Junior seemed to do that back then; nevertheless, very tricky.

Nowadays it's easier to experiment with all this, as you get through the tactical barrier. Back then, reductions of more than 1 ply were pretty much useless; forget it.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Bigger steps when branching factor < 2?

Post by bob »

Zlaire wrote:
bob wrote:In fact, it is even possible that increments of less than 1 will work better.
This sounds quite intriguing; I never thought of it before.

Most engines use some sort of fractional ply for extensions. Has anyone tested this in iterative deepening?

I guess this could work in both directions, say 0.5 or 1.5 plies per iteration. It would need an engine that can handle such depths (and not just integer-divide them both to 1).
I tried both directions, but it was a "casual test" quite a few years back. At the time, prior to the "reduction era", the EBF was a lot higher, and the issue I was looking at was those positions where you go rip-rip-rip through the plies, then suddenly hit a wall and the next iteration takes 10x or 20x longer than expected. I never really experimented much with increments > 1.0. Tony Scherzer (BeBe) used to do 1-3-5-7-etc., back in the days when searches were more uniform in depth. He wanted to eliminate the "odd-even" score jumps that can hurt the effective branching factor if you bounce between an aggressive move (odd plies) and a passive move (even plies). I experimented with that and threw it out quickly. We were doing more selective stuff at the time, and more extensions, and jumping by 2 can cost you a ply at times: you completed 7 pretty quickly, could have completed 8 (if you did it), but you have no chance at 9.

But I did not test it in endgames only, where it might begin to work...

The change needed to make Crafty do this was not significant. I just passed the initial Search() a value of N * increment, where N is the integer iteration number and increment is the iteration step (say 0.5 if you wanted half-ply iterations). The output was a bit screwed up, as I did not take the time to fix all of that for the test, but I saw some good and some bad. However, I did NOT try increment < 1.0 for hard plies (middlegame) and increment > 1.0 for easy plies (endgame). That might have a chance...
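
The driver change amounts to something like this sketch (iterations_to_reach is invented here; it only counts how many Search() calls a given increment produces, it is not Crafty's code):

```c
#include <assert.h>

/* Sketch of the N * increment scheme: iteration N requests depth
   N * increment, so increment = 0.5 doubles the number of (cheap,
   hash-assisted) iterations needed to reach a target depth, and
   increment = 2.0 halves it. */
static int iterations_to_reach(double target_depth, double increment)
{
    int n = 0;
    while (n * increment < target_depth)
        n++;                         /* one Search(root, n * increment) each */
    return n;
}
```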