Lazy eval

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Lazy eval - test results

Post by Don »

Houdini wrote:
Milos wrote:
Houdini wrote:
rvida wrote:Also, when the absolute value of positional component from ply-1 eval is greater than 150 cp, lazy eval is not used.
What a coincidence, Houdini does exactly the same...
LOL.
If you use a pure positional value, that's a bit too restrictive. I would lower the margin for that condition and use positional-PST instead of just positional.
Also, the margin should depend on whether the position is quiet or not.
You may or may not be right, but that wasn't my point.
It's just funny to see how many of Houdini's ideas have already made it into other engines...
The pot calling the kettle black.
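
For readers who want to see the shape of the condition rvida mentions, here is a minimal sketch; the 150 cp threshold is the value from the quote above, while the function and variable names are illustrative, not taken from Critter or Houdini:

#include <cstdlib>

// Sketch of the guard described above: allow the lazy shortcut only when the
// positional (non-material) part of the previous ply's evaluation was small.
// The 150 cp limit is the value quoted above; all names are illustrative.
constexpr int LazyPositionalLimit = 150;   // centipawns

bool lazy_eval_allowed(int prevPlyPositionalScore) {
    // A large positional component one ply ago means a material-plus-margin
    // guess is unreliable here, so fall back to the full evaluation instead.
    return std::abs(prevPlyPositionalScore) <= LazyPositionalLimit;
}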
jdart
Posts: 4367
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Lazy eval

Post by jdart »

I do a modified lazy eval if the score is used for razoring, static null pruning, or variable null reduction. I set the low bound below alpha by the amount of the futility margin, and the upper bound to somewhat above beta.

--Jon
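
A rough sketch of the bound placement Jon describes might look like the following; the widened window, the helper names, and the cushion above beta are all illustrative, not taken from Arasan:

#include <functional>

// Illustrative only: trust a cheap material guess when it falls well outside a
// window widened below alpha by the futility margin and somewhat above beta,
// as described above; otherwise compute the full evaluation.
int lazy_or_full_eval(int materialGuess, int alpha, int beta,
                      int futilityMargin,
                      const std::function<int()>& fullEval) {
    int lowBound  = alpha - futilityMargin;   // low bound below alpha by the futility margin
    int highBound = beta + 100;               // "somewhat above beta" (placeholder cushion)
    if (materialGuess < lowBound || materialGuess > highBound)
        return materialGuess;                 // safely outside the window: skip the full eval
    return fullEval();                        // otherwise pay for the real evaluation
}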
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Lazy eval - test results

Post by lkaufman »

I posted the speedup numbers above (the first post with '- test results' in the subject). Roughly 33% faster in the middlegame, less in the opening, much less in a king-and-pawn-only ending...

Our cutoff bound is dynamic, but is typically between a minor piece and a rook, 300 - 500, for the first cutoff which is right at the top of evaluate. If that doesn't work, we hit the pawn evaluation (and passed pawn evaluation) and then try another lazy eval cutoff. The second cutoff uses a dynamic value, but it is roughly 1.5 pawns...

Thanks for the data. However, it would be much more informative to run the searches to a fixed depth rather than for 30 seconds. The point is that lazy eval seems to expand the tree, so although you may get 33% more NPS (which is very good), much or even all of this could be wasted if you need more nodes for a given depth, as everyone seems to be reporting. Today we did get a decent NPS speedup (nothing like yours, though), but it mostly went away when we looked at the nodes needed to complete N plies.
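
To make the two-stage scheme quoted above concrete, here is a rough sketch; the fixed margins stand in for the dynamic bounds described (300-500 cp for the first cutoff, roughly 1.5 pawns for the second), and all names are illustrative rather than taken from any engine's source:

#include <functional>

// Illustrative two-stage lazy evaluation: an early exit at the top of the
// evaluation, and a second exit after the (cheap, cached) pawn-structure and
// passed-pawn terms, before the expensive remaining terms are computed.
int evaluate_with_lazy_exits(int materialScore, int alpha, int beta,
                             const std::function<int()>& pawnTerms,
                             const std::function<int()>& remainingTerms) {
    int score = materialScore;

    const int margin1 = 400;   // stand-in for the dynamic 300-500 cp first bound
    if (score - margin1 >= beta || score + margin1 <= alpha)
        return score;          // first lazy cutoff, right at the top of evaluate

    score += pawnTerms();      // pawn structure and passed pawns

    const int margin2 = 150;   // stand-in for the roughly 1.5 pawn second bound
    if (score - margin2 >= beta || score + margin2 <= alpha)
        return score;          // second lazy cutoff

    return score + remainingTerms();   // no cutoff: full evaluation
}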
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Lazy eval - test results

Post by Don »

mcostalba wrote:
lkaufman wrote: I note that SF rejected lazy eval. So at least one other top program failed to demonstrate a benefit from it. Why didn't it help Stockfish? I'm sure they would not have rejected a 10% nearly-free speedup.
For a very selective search you may want accurate eval info to push pruning to the limits. In the case of Crafty, where I guess the pruning is far less aggressive given the 300 Elo gap or so, my guess is that the speedup matters more than an accurate eval. Another point is the spikes in king evaluation, present in SF and, as you mentioned, also in Komodo: these are filtered out with lazy eval, but they play an important role in pruning decisions.

I should still have the SF lazy eval patch somewhere; I can post it if someone is interested in experimenting with this.
Komodo pushes the selectivity pretty hard and we DO get a decent nodes-per-second increase with lazy evaluation, but the problem is that we get a big increase in nodes. It is this way because we assume we will not make the scout bound if the guesstimate is too low, so we miss some of the beta cutoffs; you cannot have your cake and eat it too. Komodo's positional component can vary enormously, so we take a lot of damage positionally when we use lazy margins.

If I do it the safer way, we do not get much of a speedup at all, but it doesn't hurt the strength much either. The safer way is to take the cutoff if the pessimistic estimate is still above beta; otherwise, call the evaluation and try to get the cutoff.
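
In code, the "safer way" might look something like the sketch below; the margin and all names are illustrative, not Komodo's actual implementation:

#include <functional>

// Sketch of the "safer" lazy scheme described above: take the shortcut only
// when even the pessimistic estimate (rough guess minus the lazy margin) still
// fails high against beta; otherwise call the full evaluation.
int safer_lazy_eval(int roughGuess, int beta, int lazyMargin,
                    const std::function<int()>& fullEval) {
    if (roughGuess - lazyMargin >= beta)
        return roughGuess - lazyMargin;   // pessimistic estimate already >= beta
    return fullEval();                    // let the caller test the real score
}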

I don't think it is going to work for us because we push futility, margins, and other tricks too hard, and our pawn structure and king safety scores are aggressive. If we had done lazy evaluation in the Doch days, it probably would have worked well and our program would have evolved differently.

I don't see anything here that cannot be explained. It's pretty easy to see what is happening: lower margins mean more nodes searched because of more missed cutoffs, and the compromised evaluation function takes away some Elo. Not a good tradeoff.

I think it's partly because Komodo is a lean program. We went through a phase after we saw how slow Komodo's evaluation was where we tried hard to avoid moving to the next stack frame (where evaluation would be called as one of the first things) by doing lazy-like things.
Uri Blass
Posts: 10297
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Lazy eval - test results

Post by Uri Blass »

lkaufman wrote:
mcostalba wrote:
lkaufman wrote: Do you remember how much of an NPS gain you got from lazy eval? If not, we could test it with your patch.
I have pushed the lazy_eval branch that I resurrected from last spring and rebased on top of the current master:

https://github.com/mcostalba/Stockfish/ ... /lazy_eval

I did some experiments and some tweaks, but with no success.
OK, we tried a more aggressive version of lazy eval and we did get big NPS gains of 10% or more. However, the number of nodes increased and the quality went down, so whatever margin we use, the tradeoff is bad. So the mystery is why lazy eval works so well for Critter and for all the Rybka- and Ippolit-related programs, but fails in both Stockfish and Komodo. Both SF and Komodo use pretty aggressive king safety scores, but I don't think they are drastically higher (when scaled by the average value of an extra pawn) than those of the Ippo-related programs, at least not so drastically higher as to turn a free 10% speedup in Critter into a clear net loss in both SF and Komodo. Can you think of anything that would make the Ippo programs behave so differently from SF in this matter? Or could we both be missing some crucial implementation detail?
Maybe you need a better lazy evaluation or better rules for when to use it.

I think the relevant question is whether you often have small positional scores that take a long time to calculate, and whether you can determine quickly, before calculating them, that they are small without finding their exact value.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Lazy eval - test results

Post by bob »

lkaufman wrote: I posted the speedup numbers above (the first post with '- test results' in the subject). Roughly 33% faster in the middlegame, less in the opening, much less in a king-and-pawn-only ending...

Our cutoff bound is dynamic, but is typically between a minor piece and a rook, 300 - 500, for the first cutoff which is right at the top of evaluate. If that doesn't work, we hit the pawn evaluation (and passed pawn evaluation) and then try another lazy eval cutoff. The second cutoff uses a dynamic value, but it is roughly 1.5 pawns...
Thanks for the data. However, it would be much more informative to run the searches to a fixed depth rather than for 30 seconds. The point is that lazy eval seems to expand the tree, so although you may get 33% more NPS (which is very good), much or even all of this could be wasted if you need more nodes for a given depth, as everyone seems to be reporting. Today we did get a decent NPS speedup (nothing like yours, though), but it mostly went away when we looked at the nodes needed to complete N plies.

Here goes (I had looked at the output to verify that the tree shape was not changing significantly, but this shows it pretty well):

starting position:
log.001: time=42.77 mat=0 n=65951949 fh=91% nps=1.5M
log.002: time=37.17 mat=0 n=65945250 fh=91% nps=1.8M

MG:
log.001: time=52.16 mat=0 n=121221442 fh=95% nps=2.3M
log.002: time=40.20 mat=0 n=121147943 fh=95% nps=3.0M

EG:
log.001: time=11.36 mat=1 n=42658591 fh=91% nps=3.8M
log.002: time=10.33 mat=1 n=42613588 fh=90% nps=4.1M
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Lazy eval

Post by bob »

Milos wrote:
Engin wrote: ...and I am using the eval score in the PVS search before searching moves: for razoring, for the static eval check before null move pruning, and for the null move pruning decision (prune if eval_score >= beta).
Nobody in their right mind would use lazy eval in PV nodes... :roll:
Most of us do, in fact. Just look at some sources...
wgarvin
Posts: 838
Joined: Thu Jul 05, 2007 5:03 pm
Location: British Columbia, Canada

Re: Lazy eval - test results

Post by wgarvin »

Houdini wrote:
rvida wrote:Also, when the absolute value of positional component from ply-1 eval is greater than 150 cp, lazy eval is not used.
What a coincidence, Houdini does exactly the same...
LOL.
Coincidence? What does Robbolito do?

...Ahhh.
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Lazy eval - test results

Post by lkaufman »

bob wrote:
lkaufman wrote: I posted the speedup numbers above (the first post with '- test results' in the subject). Roughly 33% faster in the middlegame, less in the opening, much less in a king-and-pawn-only ending...

Our cutoff bound is dynamic, but is typically between a minor piece and a rook, 300 - 500, for the first cutoff which is right at the top of evaluate. If that doesn't work, we hit the pawn evaluation (and passed pawn evaluation) and then try another lazy eval cutoff. The second cutoff uses a dynamic value, but it is roughly 1.5 pawns...
Thanks for the data. However, it would be much more informative to run the searches to a fixed depth rather than for 30 seconds. The point is that lazy eval seems to expand the tree, so although you may get 33% more NPS (which is very good), much or even all of this could be wasted if you need more nodes for a given depth, as everyone seems to be reporting. Today we did get a decent NPS speedup (nothing like yours, though), but it mostly went away when we looked at the nodes needed to complete N plies.
Here goes (I had looked at the output to verify that the tree shape was not changing significantly, but this shows it pretty well):

starting position:
log.001: time=42.77 mat=0 n=65951949 fh=91% nps=1.5M
log.002: time=37.17 mat=0 n=65945250 fh=91% nps=1.8M

MG:
log.001: time=52.16 mat=0 n=121221442 fh=95% nps=2.3M
log.002: time=40.20 mat=0 n=121147943 fh=95% nps=3.0M

EG:
log.001: time=11.36 mat=1 n=42658591 fh=91% nps=3.8M
log.002: time=10.33 mat=1 n=42613588 fh=90% nps=4.1M

Thanks. So at least based on these positions, it seems that your implementation of lazy eval is an essentially free, significant speedup. What margins was this based on, and which speedups are you currently using on the last four plies? I'm sure you do futility; do you also just look at the best N moves on these plies, and do you do "static null move", where you just take the cutoff if the eval is above beta by more than N centipawns on those plies? All of the top programs do the above. If you also do them, it becomes difficult to understand why you get so much out of lazy eval while Stockfish (and Komodo) get nothing.
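
For anyone unfamiliar with the "static null move" idea mentioned here, a minimal sketch of the usual formulation follows; the per-ply margins are placeholders, not taken from any particular engine:

// Static null move pruning (also called reverse futility pruning): at shallow
// remaining depth, if the static eval exceeds beta by a depth-dependent margin,
// fail high without searching any moves.
bool static_null_move_cutoff(int staticEval, int beta, int depth) {
    static const int marginPerPly[] = { 0, 120, 240, 360, 480 };   // placeholder margins
    return depth > 0 && depth <= 4 && staticEval - marginPerPly[depth] >= beta;
}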
Houdini
Posts: 1471
Joined: Tue Mar 16, 2010 12:00 am

Re: Lazy eval - test results

Post by Houdini »

wgarvin wrote:
Houdini wrote:
rvida wrote:Also, when the absolute value of positional component from ply-1 eval is greater than 150 cp, lazy eval is not used.
What a coincidence, Houdini does exactly the same...
LOL.
Coincidence? What does Robbolito do?

...Ahhh.
Robbolito doesn't do this.

...Ahhh.