When does a cut-node become an all-node?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: When does a cut-node become an all-node?

Post by lkaufman »

BubbaTough wrote:
lkaufman wrote:At least, if this is not so, then Rybka, Ippo, Ivanhoe, Houdini, and Critter are all doing something silly. I don't believe this.
Your conclusion may (or may not) be right, but I abhor your reasoning. You give much too much respect to the techniques used in the top couple of engines, in my opinion. I think it's more useful, and more accurate, to assume almost everything in the top engines can be improved until you prove to your satisfaction otherwise. If you look at the best programs of 10 years ago and identify the things they got perfect, you will find very few. There is no reason the same won't be true 10 years from now.

-Sam
But this is exactly what we do. In Komodo, we never assume that any idea or any numerical value from another program is correct, we only assume that the ideas from other top programs are worth testing. But I do think it's reasonable to assume that they are not just adding complexity for no benefit. In the present instance, I don't believe that Vasik would have put the CUT/ALL distinction into Rybka without proof that it helped, and in fact our own testing in Komodo does show a slight benefit for making the distinction, although not the severe distinction made by those programs. My own opinion is that both Ivanhoe and Stockfish (the two top open-source engines) each do a LOT of things wrong (Ivanhoe more so than Stockfish, I think), and so I don't disagree at all with your statement.
BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: When does a cut-node become an all-node?

Post by BubbaTough »

lkaufman wrote:
BubbaTough wrote:
lkaufman wrote:At least, if this is not so, then Rybka, Ippo, Ivanhoe, Houdini, and Critter are all doing something silly. I don't believe this.
Your conclusion may (or may not) be right, but I abhor your reasoning. You give much too much respect to the techniques used in the top couple of engines, in my opinion. I think it's more useful, and more accurate, to assume almost everything in the top engines can be improved until you prove to your satisfaction otherwise. If you look at the best programs of 10 years ago and identify the things they got perfect, you will find very few. There is no reason the same won't be true 10 years from now.

-Sam
But this is exactly what we do. In Komodo, we never assume that any idea or any numerical value from another program is correct, we only assume that the ideas from other top programs are worth testing. But I do think it's reasonable to assume that they are not just adding complexity for no benefit. In the present instance, I don't believe that Vasik would have put the CUT/ALL distinction into Rybka without proof that it helped, and in fact our own testing in Komodo does show a slight benefit for making the distinction, although not the severe distinction made by those programs. My own opinion is that both Ivanhoe and Stockfish (the two top open-source engines) each do a LOT of things wrong (Ivanhoe more so than Stockfish, I think), and so I don't disagree at all with your statement.
Great :). I guess it's a stylistic thing. In a lot of your posts you start out by asking why things are done a certain way and saying it must be very smart since the top programs are doing it. As long as that's just politeness, and you understand it's highly likely that whatever they are doing can be improved, I am happy :).

On the "I don't believe that Vasik would have put the CUT/ALL distinction into Rybka without proof that it helped" - well, yes. Most of the strong programs only add things that, AT THE TIME THEY ADD THEM, seem to test well. However, few have the time or testing power to constantly retest "proven" features, and additional features get added that interact such that old "proven" code becomes useless or even harmful. Thus, as programs evolve, I suspect all but the most streamlined programs become littered with non-contributing code. I make no claim as to whether this falls into that category (it probably doesn't).

-Sam
sedicla
Posts: 178
Joined: Sat Jan 08, 2011 12:51 am
Location: USA
Full name: Alcides Schulz

Re: When does a cut-node become an all-node?

Post by sedicla »

bob wrote:
sedicla wrote:
bob wrote:
sedicla wrote:According to the Chess Programming Wiki it is when all candidate moves have been tried, but Fruit does it after the first move is tried.

I'm thinking of trying to do things differently according to node type, for example:
cut-node - use static null move (like Stockfish) and don't do razoring (eval + margin < beta).
all-node - no static null move, try razoring, no ordering effort for quiet moves...

What do you guys think is a good time to switch a cut-node to an all-node?

Thanks.
First, why does it matter? Are you somehow trying to type a node as PV, CUT or ALL, and then do things differently if it is CUT as opposed to ALL? A "CUT" node becomes an ALL node when there is no cutoff to be found. The most common reason is poor move ordering: the move we thought was best here turns out not to be, so another move will cause a cutoff later. On occasion it is not ordering but a depth issue: suddenly you search deep enough to see that the score is worse than you expected, no matter which move you try, so none causes a cutoff...
Well, that's what I'm trying to figure out: whether it matters or not. I can see that your opinion is that it does not.
My idea is exactly that: do different things at ALL and CUT nodes.
Here's the BIG question:

Why?

More commonly, people seem to be doing different things at PV vs non-PV nodes. I don't follow that reasoning either... the idea of the search is to find the best move, and if you spend less effort on non-PV nodes, you will miss better moves. But for ALL vs CUT, the only advantage I see is the one I used in DTS, namely that you want to split at an ALL node, never at a CUT node. Beyond that, I can't think of any reason why I might prune differently, reduce differently, or extend differently just because a node was ALL rather than CUT or vice-versa...
Because I think it makes sense, and I like the idea of specialized methods.
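To make the idea concrete, here is a minimal sketch of node-type-specific pruning roughly along the lines described above. The margins, depth limits, and all helper names are illustrative placeholders, not code from any particular engine:

```cpp
#include <vector>

// Placeholder types and helpers assumed to exist in the host engine;
// none of this is taken from a real engine's API.
struct Move { int from = 0, to = 0; };
struct Position;
int  evaluate(const Position&);
int  qsearch(Position&, int alpha, int beta);
std::vector<Move> generate_moves(const Position&);
void make(Position&, const Move&);
void unmake(Position&, const Move&);

enum class NodeType { PV, Cut, All };

// Expected CUT nodes get static null move (reverse futility), expected ALL
// nodes get razoring; the expected type of a child flips at every ply.
int search(Position& pos, int alpha, int beta, int depth, NodeType type) {
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    const int staticEval = evaluate(pos);

    // Static null move only at expected CUT nodes: we expect to fail high
    // here anyway, so a comfortable margin above beta is taken as enough.
    if (type == NodeType::Cut && depth <= 3 && staticEval - 120 * depth >= beta)
        return staticEval;

    // Razoring only at expected ALL nodes: the static eval is so far below
    // alpha that we drop straight into quiescence.
    if (type == NodeType::All && depth <= 2 && staticEval + 300 <= alpha)
        return qsearch(pos, alpha, beta);

    int moveCount = 0;
    for (const Move& m : generate_moves(pos)) {
        // The first child of a PV node stays PV; otherwise CUT and ALL alternate.
        NodeType child = (type == NodeType::PV && moveCount == 0)
                             ? NodeType::PV
                             : (type == NodeType::Cut ? NodeType::All
                                                      : NodeType::Cut);
        make(pos, m);
        int score = -search(pos, -beta, -alpha, depth - 1, child);
        unmake(pos, m);
        ++moveCount;

        if (score >= beta)
            return score;   // a cutoff: the CUT prediction was right
        if (score > alpha)
            alpha = score;  // no cutoff yet: evidence this is an ALL node
    }
    return alpha;
}
```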
tpetzke
Posts: 686
Joined: Thu Mar 03, 2011 4:57 pm
Location: Germany

Re: When does a cut-node become an all-node?

Post by tpetzke »

More commonly, people seem to be doing different things at PV vs non-PV nodes. I don't follow that reasoning either
I think the idea behind this is that an error at a PV node is more likely to influence the root score than an error at a non-PV node, so people are more careful about what they do at PV nodes.

I think you do IID in Crafty only at PV nodes, so you also differentiate them.
just because a node was ALL rather than CUT or vice-versa
If you suspect a CUT node, you suspect there is a move that produces a cutoff. If you don't have a hash move or a winning capture (the easy candidates), you might want to spend a little effort to find that move and try it first, because it saves more than it costs. At an ALL node you won't find such a move, so don't start looking for one.

So just like you do IID at PV nodes, I also do IID at suspected CUT nodes (I just use a much smaller depth than at PV nodes) if I don't have a hash move. I use the best move returned by the IID as the first move that gets searched to the regular depth; chances are it is at least a good one.

For me it pays off: the node prediction is cheap and correct in 9 out of 10 cases. With it I was able to drive my first-move cutoff rate to 95%.
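As a rough illustration, an IID step of this kind might look like the sketch below. It reuses the NodeType/Position/Move placeholders from the sketch earlier in the thread; the depth thresholds and the transposition-table helper are assumed names, not code from any real engine:

```cpp
// Assumed helpers (illustrative names, not a real API):
bool tt_probe_move(const Position& pos, Move& out);  // true if a hash move exists
int  search(Position& pos, int alpha, int beta, int depth, NodeType type);

// Internal iterative deepening when there is no hash move: run a shallow
// search purely to obtain a good first move, using a smaller IID depth at
// suspected CUT nodes than at PV nodes, as described above.
bool iid_first_move(Position& pos, int alpha, int beta, int depth,
                    NodeType type, Move& out) {
    if (tt_probe_move(pos, out))
        return true;                          // easy candidate already known
    if (type == NodeType::All || depth < 5)
        return false;                         // don't bother at ALL nodes

    // PV nodes get a deeper IID search than suspected CUT nodes.
    const int iidDepth = (type == NodeType::PV) ? depth - 2 : depth / 2;

    search(pos, alpha, beta, iidDepth, type); // shallow search fills the TT
    return tt_probe_move(pos, out);           // try its best move first
}
```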

Thomas...
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: When does a cut-node become an all-node?

Post by diep »

BubbaTough wrote:
lkaufman wrote:At least, if this is not so, then Rybka, Ippo, Ivanhoe, Houdini, and Critter are all doing something silly. I don't believe this.
Your conclusion may (or may not) be right, but I abhor your reasoning. You give much too much respect to the techniques used in the top couple of engines, in my opinion. I think it's more useful, and more accurate, to assume almost everything in the top engines can be improved until you prove to your satisfaction otherwise. If you look at the best programs of 10 years ago and identify the things they got perfect, you will find very few. There is no reason the same won't be true 10 years from now.

-Sam
The reason why today's top engines basically do trivial things, like toying with how much to reduce (which is trivial to experiment with), and why, as I understand it, many experiments get carried out that are never near a mathematical optimum, is time.

It's easy and braindead to experiment with. Any other algorithm that's promising yet that no one has really gotten to work well, such as multi-cut, you hear nothing about. I'm not sure I saw it correctly, but Stockfish could be using it.

If you have a new algorithm no one has ever tried before, and I really mean ALGORITHM, not just modifying a few parameters (which is what happens here and what this discussion was about), such research eats massive amounts of time. Full-time work, I'd argue.

As for the engines of 10 years ago, you should give them a tad more credit. It was difficult to test anything back then, as very few had enough hardware to do so. There was a split between the engines back then: a few focusing upon searching deeper by means of cutting the branching factor, and a number that combined all this while trying to find the maximum amount of tactics.

If one thing was really well optimized in most engines, it was finding the maximum amount of tactics. Today's engines need 2 plies to find what some engines back then saw within 1 ply, especially near the leaves.

An engine really well optimized to find tactics back then was Rebel. Its entire pruning system was based upon not missing tactics; maybe Ed wants to show some gems there, showing how efficiently Rebel searched for tactics back then.

What most here don't realize is that Ed had invented his own search algorithm for this. Maybe Ed wants to comment on that, as his homepage of a few years ago shows the correct tables yet doesn't clearly describe the algorithm, last time I checked (which is well over a year ago).

When I analyzed his algorithm a decade ago, it soon showed the brilliance of Ed at the end of the 80s and start of the 90s, as it prunes far more than any of today's reduction systems while not missing tactics. No reduction system of today can do that for you. Today's reduction systems are very easy to experiment with; no high IQ needed.

This is in contrast to what Ed has been doing. It's not a trivial algorithm to invent. The fact that we are over 25 years further on and no one has ever posted something similar to it is self-explanatory.

However, it has two Achilles heels: it doesn't work very well together with a hash table, and though null move is possible, it isn't as efficient as in a normal depth-limited search. Basically, today's search systems are totally based upon hash table storage; without an efficient hash table you won't get anywhere close to 30 plies.

We are now 20 years past the time that Ed's algorithm dominated computer chess, however. Maybe Ed should ask someone who is good at scientific publications to write the algorithm down carefully, as Rebel dominated computer chess from the end of the 80s well up to 1998.

In Diep, the hash table has always played a crucial role, so it can't do without one.

You cannot rival Ed's algorithm tactically in the first 100k nodes you search.

If you implemented it in Stockfish and played superbullet games on a single core, it would probably win everything.

Only with a bunch of cores and a somewhat more serious time control would it lose, of course.

This tells you more about superbullet testing than anything else.

Vincent
rbarreira
Posts: 900
Joined: Tue Apr 27, 2010 3:48 pm

Re: When does a cut-node become an all-node?

Post by rbarreira »

diep wrote:
rbarreira wrote:
diep wrote: Also all this selective searching can of course only work using my lemma of 90s, which was disputed at the time, that is that there is a tactical barrier above which only positional searching deeper is interesting.
What is this tactical barrier? Are you saying that beyond a certain search depth, the search will stop picking up "tactics"? So you're saying that the positions deep in the search tree don't have deep tactics waiting to be found any more?

That sounds strange and IMHO unlikely...
The religion in the 90s, though not shared by all chess programmers (a good example being Don Dailey), was that every additional ply scaled linearly in tactical strength.

So a program searching 6 plies would always beat a 5-ply program, just like a 17-ply program would beat a 16-ply program by exactly the same percentage.

Realize how difficult it was back then to test, so all sorts of lemmas that look silly nowadays were used even by some strong top programmers.

They simply said: "each year we won X Elo points". Every ply would be worth 70 Elo points, for example.

Now in 1997 we got like 10 ply or so. Some a bit more some a bit less.

So today's 30+ ply search depths would then be 20 * 70 = 1400 Elo above that.

Software improvements not counted!!!!!
Just search depth based elo!!!!!

It was the time of the superbeancounters.

Basically there were only two voices against the above linear scaling: Don and me.

The known experts on the other side form a list so big I could keep typing. The most well known, as he wrote an article on it, was Ernst A. Heinz.

With something as dubious as the number of fail highs Crafty got at a bigger ply depth, he hoped to prove this linear scaling. Entire names got invented for it.

Don did do some tests to prove that linear scaling didn't exist.

I designed the tactical barrier for that.

I never gave a hard formulation of what it was, but I intended several things with it.

a) Basically the observation that in grandmaster chess most tactics are 12+ plies deep, and with players not giving away material, games mostly get decided not by tactics but by chess knowledge, positional patterns and strategy.

In short, the notion that above a certain search depth, improving the evaluation function is more important than getting yet another ply.

The explanation above is the one I posted most back then.

I think it was Bob who led the pack attacking it. He attacked it by saying that under no circumstances was a better evaluation function worth two or more plies of additional search depth.

In online games back then, engines getting 6 plies against opponents getting 8 were not a contest: 8 plies always won.

That was supposed to scale linearly to bigger search depths as well. I denied that.

Nowadays no one bothers with that, as 30-ply engines play 17-ply engines and the 17-ply engine sometimes wins (not often, though), which under that Elo scaling would of course be impossible: 30 plies would be 13 * 70 = 910 Elo points stronger, which means you would have essentially a 0% chance of ever winning a game, as you just can't test enough games to get that lucky once; it would need to happen long after the Sun has become a supernova.

The next interpretation deduced from that tactical barrier is of course the notion that once your program picks up a lot of tactics, it becomes more important to search positional moves than ONLY tactical moves. Again, I don't mean to say you should ditch all tactics, I just say they become less relevant.

Please note that edited 'ONLY'. The notion of the 80s and 90s was to do everything to ONLY see tactics and ONLY pick up tactics. In my interpretation, tactics is one positional aspect of the game, an important one, but not the only aspect, and when we search that deep it is not more important than STRATEGIC considerations. In the end the majority of tactics, just like other positional plans (such as trying to manoeuvre a knight from f3 to d5), are short-term plans. Strategy is a LONG-term plan.

I hope I have written that down in an understandable manner; the idea and notion of it are of course derived entirely from my own chess playing.

I also use this explanation to explain why futility doesn't really work for Diep. Let's start by saying that futility needs a pretty narrow window. You can't do futility with a 50-pawn margin, that's not so useful. If we do it at, say, a 1-pawn margin, then with Diep's huge evaluation it's nearly impossible to predict which moves can suddenly be evaluated lazily and which cannot.

Futility then effectively has the effect that in quite a few positions it gets rid of the best positional move(s) while not losing the tactical moves.

So on test sets it scores a lot more Elo points, yet in actual games it loses more games because it plays a lot weaker.

Now most beancounters here have a really simple evaluation; the few exceptions, such as passers, are easy to fold into the lazy consideration. That means the odds of suddenly missing good positional moves are very small, which explains why futility works for them.

I argue that for such programs futility would also work without being combined with other forms of selectivity, such as bigger R values for null move and/or more aggressive selectivity elsewhere.

This is to refute Tord's criticism that one needs to go through a few hills and valleys in order to get it to work.
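For reference, the futility test being discussed is the frontier-node check sketched below; the one-pawn margin and the is_tactical helper are illustrative, and the point above is that with a large evaluation such a margin is not safe:

```cpp
// Classic frontier-node futility: skip a quiet move when the static eval
// plus a small margin cannot reach alpha. How safe the margin is depends on
// how much a full evaluation can swing, which is the problem described above
// for a large evaluation like Diep's.
const int FUTILITY_MARGIN = 100;   // roughly one pawn, in centipawns

bool futile(const Position& pos, const Move& m,
            int staticEval, int alpha, int depth) {
    if (depth != 1)
        return false;              // only frontier nodes in this sketch
    if (is_tactical(pos, m))       // keep captures, promotions and checks
        return false;
    return staticEval + FUTILITY_MARGIN <= alpha;
}
```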
I haven't done any experiments on this, so I'm sorry that I can't give a meaningful reply as lengthy as yours.

But I should point out that this part of your post was misleading:
Nowadays no one bothers with that, as 30-ply engines play 17-ply engines and the 17-ply engine sometimes wins (not often, though), which under that Elo scaling would of course be impossible: 30 plies would be 13 * 70 = 910 Elo points stronger, which means you would have essentially a 0% chance of ever winning a game, as you just can't test enough games to get that lucky once; it would need to happen long after the Sun has become a supernova.
These 30-ply searches of today are using much more selective search than the 12-13 ply searches of old programs. It's not surprising that the "plies" of today don't give as much strength.
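For what it's worth, the arithmetic behind the 910-Elo figure quoted above, under the standard logistic Elo model, works out to:

$$E = \frac{1}{1 + 10^{\Delta/400}}, \qquad \Delta = (30 - 17)\times 70 = 910 \;\Rightarrow\; E = \frac{1}{1 + 10^{910/400}} \approx 0.005,$$

i.e. roughly half a percent expected score per game for the 17-ply side under that model, which is why occasional wins by the shallower searcher argue against a fixed Elo-per-ply assumption.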
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: When does a cut-node become an all-node?

Post by diep »

rbarreira wrote:
diep wrote:
rbarreira wrote:
diep wrote: Also all this selective searching can of course only work using my lemma of 90s, which was disputed at the time, that is that there is a tactical barrier above which only positional searching deeper is interesting.
What is this tactical barrier? Are you saying that beyond a certain search depth, the search will stop picking up "tactics"? So you're saying that the positions deep in the search tree don't have deep tactics waiting to be found any more?

That sounds strange and IMHO unlikely...
The religion in the 90s, though not shared by all chess programmers (a good example being Don Dailey), was that every additional ply scaled linearly in tactical strength.

So a program searching 6 plies would always beat a 5-ply program, just like a 17-ply program would beat a 16-ply program by exactly the same percentage.

Realize how difficult it was back then to test, so all sorts of lemmas that look silly nowadays were used even by some strong top programmers.

They simply said: "each year we won X Elo points". Every ply would be worth 70 Elo points, for example.

Now in 1997 we got like 10 ply or so. Some a bit more some a bit less.

So today's 30+ ply search depths would then be 20 * 70 = 1400 Elo above that.

Software improvements not counted!!!!!
Just search depth based elo!!!!!

It was the time of the superbeancounters.

Basically there were only two voices against the above linear scaling: Don and me.

The known experts on the other side form a list so big I could keep typing. The most well known, as he wrote an article on it, was Ernst A. Heinz.

With something as dubious as the number of fail highs Crafty got at a bigger ply depth, he hoped to prove this linear scaling. Entire names got invented for it.

Don did do some tests to prove that linear scaling didn't exist.

I designed the tactical barrier for that.

I never gave a hard formulation of what it was, but I intended several things with it.

a) Basically the observation that in grandmaster chess most tactics are 12+ plies deep, and with players not giving away material, games mostly get decided not by tactics but by chess knowledge, positional patterns and strategy.

In short, the notion that above a certain search depth, improving the evaluation function is more important than getting yet another ply.

The explanation above is the one I posted most back then.

I think it was Bob who led the pack attacking it. He attacked it by saying that under no circumstances was a better evaluation function worth two or more plies of additional search depth.

In online games back then, engines getting 6 plies against opponents getting 8 were not a contest: 8 plies always won.

That was supposed to scale linearly to bigger search depths as well. I denied that.

Nowadays no one bothers with that, as 30-ply engines play 17-ply engines and the 17-ply engine sometimes wins (not often, though), which under that Elo scaling would of course be impossible: 30 plies would be 13 * 70 = 910 Elo points stronger, which means you would have essentially a 0% chance of ever winning a game, as you just can't test enough games to get that lucky once; it would need to happen long after the Sun has become a supernova.

The next interpretation deduced from that tactical barrier is of course the notion that once your program picks up a lot of tactics, it becomes more important to search positional moves than ONLY tactical moves. Again, I don't mean to say you should ditch all tactics, I just say they become less relevant.

Please note that edited 'ONLY'. The notion of the 80s and 90s was to do everything to ONLY see tactics and ONLY pick up tactics. In my interpretation, tactics is one positional aspect of the game, an important one, but not the only aspect, and when we search that deep it is not more important than STRATEGIC considerations. In the end the majority of tactics, just like other positional plans (such as trying to manoeuvre a knight from f3 to d5), are short-term plans. Strategy is a LONG-term plan.

I hope I have written that down in an understandable manner; the idea and notion of it are of course derived entirely from my own chess playing.

I also use this explanation to explain why futility doesn't really work for Diep. Let's start by saying that futility needs a pretty narrow window. You can't do futility with a 50-pawn margin, that's not so useful. If we do it at, say, a 1-pawn margin, then with Diep's huge evaluation it's nearly impossible to predict which moves can suddenly be evaluated lazily and which cannot.

Futility then effectively has the effect that in quite a few positions it gets rid of the best positional move(s) while not losing the tactical moves.

So on test sets it scores a lot more Elo points, yet in actual games it loses more games because it plays a lot weaker.

Now most beancounters here have a really simple evaluation; the few exceptions, such as passers, are easy to fold into the lazy consideration. That means the odds of suddenly missing good positional moves are very small, which explains why futility works for them.

I argue that for such programs futility would also work without being combined with other forms of selectivity, such as bigger R values for null move and/or more aggressive selectivity elsewhere.

This is to refute Tord's criticism that one needs to go through a few hills and valleys in order to get it to work.
I haven't done any experiments on this, so I'm sorry that I can't give a meaningful reply as lengthy as yours.

But I should point out that this part of your post was misleading:
Nowadays no one bothers with that, as 30-ply engines play 17-ply engines and the 17-ply engine sometimes wins (not often, though), which under that Elo scaling would of course be impossible: 30 plies would be 13 * 70 = 910 Elo points stronger, which means you would have essentially a 0% chance of ever winning a game, as you just can't test enough games to get that lucky once; it would need to happen long after the Sun has become a supernova.
These 30-ply searches of today are using much more selective search than the 12-13 ply searches of old programs. It's not surprising that the "plies" of today don't give as much strength.
You are wrong here as well. When Ed's algorithm is applied thoroughly, it prunes more than any of today's searchers do.

Most posters here just know one or two search algorithms; what happened back then was pruning far more than you can imagine.

Sure it'll lose :)

If you redid the hash table in deepfritz6 and updated its evaluation to a modern evaluation function, I bet it would search 40 plies.

Heh, even CSTAL was searching really deep back at the end of the 90s, and at an NPS you'd laugh at.

They were pruning far more back then than today's engines; they just made some exceptions to find tactics which today's engines aren't making.

In those days, from the mid 90s up until the end of the 90s, there were more attempts than today to deviate from the path that others had travelled.

Right now most new engines are basically some sort of twin of a Rybka derivative.

So the first 3000 Elo they get for free, so to speak.

There was more creativity back then, but there was also money to be made from sales, and if you copied someone you would face big court cases, unlike today. Back then there were more high-IQ people involved than today; what comes in new right now is the braindead saluting type of civil servant with a focus upon not thinking. The high-IQ people left; just a few remained, and they show up when there is money to be made by WINNING, not by selling sneaky services.
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: When does a cut-node become an all-node?

Post by Daniel Shawul »

I'm not so sure that your reasoning is correct. Just because you are (let's say) on the 7th move at an expected CUT node which will probably turn out to be an ALL node, it does not follow that the previous node (or the one before that) was mis-typed; the next move tried on the previous ply may produce the cut.
So what? You search the current move as leading to an ALL node. When you back up and continue on the previous node, you make the assumption again. If your move ordering is any good at all, you should get a fail high within the first three moves. Of course, any other move in the remaining 1% could cause a cutoff; we are taking chances (good chances) when we make pruning decisions. Using Bayesian inference, you collect evidence against a node being a CUT node as you search each move. Failing to get a cutoff reinforces the belief that it is an ALL node, and this has nothing to do with searching another move at the previous ply, as we are making the CUT/ALL distinction only for the current node. The upper node will make its own decision once three moves are searched.
So it may well be that reducing the 7th move more (or less) at a CUT node than at an ALL node is still justified based on the earlier node type. At least, if this is not so, then Rybka, Ippo, Ivanhoe, Houdini, and Critter are all doing something silly. I don't believe this.
As Sam pointed out, your reasoning is highly skewed by what you think works for the best engines. This is simply wrong; what you should do is unit testing. For example, in this case you can try to reduce CUT and ALL nodes by the same amount (which I do, btw), play many games, and post back the results...
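A sketch of that retyping, with the three-move threshold from above and everything else illustrative (the Position/Move/NodeType placeholders follow the earlier sketches in the thread; ordered_moves, lmr_reduction, and INFINITE_SCORE are assumed names):

```cpp
// At an expected CUT node: once the first few moves have failed to produce a
// cutoff, the remaining moves are searched under the ALL-node assumption,
// which in this sketch only changes how much they get reduced.
int search_expected_cut(Position& pos, int alpha, int beta, int depth) {
    NodeType expected = NodeType::Cut;
    int best = -INFINITE_SCORE;
    int moveCount = 0;

    for (const Move& m : ordered_moves(pos)) {
        // Evidence update: three moves without a fail high => treat as ALL.
        if (expected == NodeType::Cut && moveCount >= 3)
            expected = NodeType::All;

        const int r = lmr_reduction(depth, moveCount, expected);
        make(pos, m);
        int score = -search(pos, -beta, -alpha, depth - 1 - r,
                            expected == NodeType::Cut ? NodeType::All
                                                      : NodeType::Cut);
        unmake(pos, m);
        ++moveCount;

        if (score >= beta)
            return score;          // it was a CUT node after all
        if (score > best) {
            best = score;
            if (score > alpha)
                alpha = score;
        }
    }
    return best;                   // every move searched: it was an ALL node
}
```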
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: When does a cut-node become an all-node?

Post by lkaufman »

Daniel Shawul wrote:
I'm not so sure that your reasoning is correct. Just because you are (let's say) on the 7th move at an expected CUT node which will probably turn out to be an ALL node, it does not follow that the previous node (or the one before that) was mis-typed; the next move tried on the previous ply may produce the cut.
So what? You search the current move as leading to an ALL node. When you back up and continue on the previous node, you make the assumption again. If your move ordering is any good at all, you should get a fail high within the first three moves. Of course, any other move in the remaining 1% could cause a cutoff; we are taking chances (good chances) when we make pruning decisions. Using Bayesian inference, you collect evidence against a node being a CUT node as you search each move. Failing to get a cutoff reinforces the belief that it is an ALL node, and this has nothing to do with searching another move at the previous ply, as we are making the CUT/ALL distinction only for the current node. The upper node will make its own decision once three moves are searched.
So it may well be that reducing the 7th move more (or less) at a CUT node than at an ALL node is still justified based on the earlier node type. At least, if this is not so, then Rybka, Ippo, Ivanhoe, Houdini, and Critter are all doing something silly. I don't believe this.
As Sam pointed out, your reasoning is highly skewed by what you think works for the best engines. This is simply wrong; what you should do is unit testing. For example, in this case you can try to reduce CUT and ALL nodes by the same amount (which I do, btw), play many games, and post back the results...
I think that the theory is that the expected node type matters even if it is wrong because the consequences of being wrong differ in the two cases. At expected CUT nodes missing a good move is a speed issue, while at expected ALL nodes it is a quality issue.
I'm not sure whether you are proposing making these changes to our own program or to Ivanhoe. In Komodo we make only a minor distinction, and undoing this would just produce a minor elo loss that would take many thousands of games to prove. In Ivanhoe the distinction is a large one, so it would be interesting to change the CUT reductions to match the ALL reductions. I'm not sure that I have the technical skills to do this myself as I've never worked on any code other than Komodo; if anyone wants to run this test it would be interesting. It will still require some thousands of games so a fast time control is necessary. Note that the ALL reductions should be left alone, as changing them has far greater consequences than changing the CUT reductions. This alone does suggest that the distinction makes sense.
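The experiment described above boils down to a one-line change in a node-type-aware reduction function. A hypothetical sketch, reusing the NodeType enum from earlier in the thread (the formula and values are illustrative, not Ivanhoe's actual tables):

```cpp
#include <cmath>

// Hypothetical LMR reduction that depends on the expected node type.
// Making the CUT reductions match the ALL reductions, as suggested above,
// would amount to dropping the extra term for Cut nodes.
int lmr_reduction(int depth, int moveCount, NodeType type) {
    if (depth < 3 || moveCount < 4)
        return 0;                  // never reduce shallow nodes or early moves
    int r = int(std::log(double(depth)) * std::log(double(moveCount)) / 2.0);
    if (type == NodeType::Cut)
        r += 1;                    // expected CUT nodes are reduced more
    return r;
}
```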

Larry
User avatar
Rebel
Posts: 6995
Joined: Thu Aug 18, 2011 12:04 pm

Re: When does a cut-node become an all-node?

Post by Rebel »

diep wrote:
lkaufman wrote:
diep wrote: 2. It's not proof of anything, of course, purely mathematically speaking. It's a simple calculation, yet quite lengthy. Realize that fractional ply depths were also very popular back then, so when calculating the optimum we can use broken depths as well. That makes it, however, a linear programming problem, which as we know is solvable even without complex numbers.

The answer to this problem is not 42 in this case, but 1, assuming the normal reduction for the depth-limited search is also 1. So that makes the total reduction 2 in case of a reduction and 1 in case of not reducing or re-searching. Note that back in 1999 I didn't re-search in case of a fail high.

Vincent
Not re-searching after a fail high is a huge difference. I wouldn't want to reduce more than one ply either in that case.
Look, there is no disagreement here about the present.

However, we are talking about 1999 now.

Larry, I need to quote Ed Schroder there. Back then search depths were not so huge, and re-searching hardly mattered. With today's search depths that is definitely different. Also realize that doing a re-search is more crucial for LMR than when you reduce based upon chess knowledge, which is what I did.

Ed also reported that he didn't re-search back then, that he had tested it carefully (in contrast to me), and that re-searching won him no Elo, it just cost him nodes.
You must mis-remember :wink:

I always re-search.

What you are possibly hinting at is the discussion that sometimes the search produces a false fail-high, and then you as a programmer are stuck with what to do about it. You can either keep the old best move (which is what I have always done) or decide to make the false fail-high move the best move after all.
The latter is still available as an option in ProDeo; at the time, thorough testing showed no improvement and, funnily enough, no drop in strength either. This of course may differ from program to program.
The latter is still available as an option in ProDeo and at the time thoroughly testing showed no improvement and funny enough no downfall in strength also. This of course may differ from program to program.