Lyudmil Tsvetkov wrote:
Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:
Lyudmil Tsvetkov wrote:
Dann Corbit wrote:
It is also true that better evaluation will reduce branching factor, principally by improvement in move ordering (which is very important to the fundamental alpha-beta step).
There are other things that tangentially improve branching factor like hash tables and IID.
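To make that move-ordering point concrete, here is a small self-contained toy (my own sketch, not anyone's engine code): it builds a random game tree, then runs the same alpha-beta search with the children visited in their arbitrary original order versus best-first order, and counts the visited nodes. The gap between the two counts is exactly the effective-branching-factor gain that good ordering (and hence a good evaluation guiding it) buys.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

struct Node {
    int value = 0;                      // exact negamax value, side to move
    std::vector<Node> children;
};

static std::mt19937 rng(42);

// Build a uniform tree with random leaf scores and exact negamax values,
// so we can later order children "perfectly" (as an ideal eval would).
Node build(int depth, int branching) {
    Node n;
    if (depth == 0) {
        n.value = std::uniform_int_distribution<int>(-100, 100)(rng);
        return n;
    }
    n.value = -1000;
    for (int i = 0; i < branching; ++i) {
        n.children.push_back(build(depth - 1, branching));
        n.value = std::max(n.value, -n.children.back().value);
    }
    return n;
}

static long nodes = 0;

int alpha_beta(const Node& n, int alpha, int beta, bool ordered) {
    ++nodes;
    if (n.children.empty()) return n.value;

    std::vector<const Node*> kids;
    for (const Node& c : n.children) kids.push_back(&c);
    if (ordered)        // best reply first == smallest child value first
        std::sort(kids.begin(), kids.end(),
                  [](const Node* a, const Node* b) { return a->value < b->value; });

    for (const Node* c : kids) {
        int score = -alpha_beta(*c, -beta, -alpha, ordered);
        if (score >= beta) return beta;   // fail high: remaining moves pruned
        alpha = std::max(alpha, score);
    }
    return alpha;
}

int main() {
    Node root = build(/*depth=*/8, /*branching=*/5);
    nodes = 0; alpha_beta(root, -1000, 1000, /*ordered=*/false);
    std::printf("arbitrary order:  %ld nodes\n", nodes);
    nodes = 0; alpha_beta(root, -1000, 1000, /*ordered=*/true);
    std::printf("best-first order: %ld nodes\n", nodes);
}
```

With near-perfect ordering the search approaches the minimal alpha-beta tree of roughly 2*b^(d/2) leaves instead of b^d, which is where the branching-factor gain comes from.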
It is also true that pure wood counting is not good enough. But examine the effectiveness of OliThink, which has an incredibly simple eval. It has more than just wood, but an engine can be made very strong almost exclusively through search. I guess that by grafting Stockfish's evaluation into a plain minimax engine you would get less than 2000 Elo.
I guess that by grafting the OliThink eval into Stockfish you would still get more than 3000 Elo.
Note that I did not test this, it is only a gedankenexperiment.
so, no search without eval.
I guess you are grossly wrong about both the 2000 and 3000 Elo marks.
wanna try one of the 2?
Olithink eval into SF will play something like 1500 elo, wanna bet?
I guess it is time to swap the Gedankenexperiment for a Realitätsüberprüfung (a reality check)...

From CCRL 40/40:
Rank 216: OliThink 5.3.2 64-bit, rating 2372 (+19/−19), score 48.3% against +12.5 average opposition, 25.6% draws, 1011 games.
With a super simple eval and a fairly simple search, it is already 2372.
Adding the incredible, sophisticated search of Stockfish will lower the rating by more than 872 points?
of course, it is all about tuning.
we are not speaking here of downgrading SF, leaving all its search and using just a dozen basic eval terms, in which case SF will still be somewhat strong, but of patching an entirely alien eval onto SF search.
as the eval and search will not be tuned to each other, you will mostly get completely random results.
You are mostly right about that.
While good programming technique demands encapsulation, it is so ultra tempting to pierce that veil and get chummy with other parts of the program and show them your innards that virtually all programs do it.
I must mention Bas Hamstra's program, which was so beautifully crafted. But that is neither here nor there.
I guess the point I wanted to make is that the branching factor (DONE PROPERLY) is the golden nail of better program success.
You point to eval. And eval has its place. But once (for instance) the fail high rate goes over 95% on the pv node, the rest is fluff, as far as BF goes. Now, there can be things to aim the engine better, I think everyone agrees on that. But if you are going to shock the world (and look at every world shocker) it is BF gains that drop the jaws and make the eyes bug out.
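A quick back-of-envelope illustration of why BF gains "drop the jaws" (my own arithmetic, not a measurement from any engine): at a fixed node budget, the reachable depth is roughly log(budget)/log(EBF), so shaving even a couple of tenths off the effective branching factor buys several extra plies.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double budget = 1e9;                 // nodes spent on one move, say
    const double ebfs[] = {2.0, 1.8, 1.6};     // illustrative EBF values
    for (double ebf : ebfs) {
        // budget ~= ebf^depth  =>  depth ~= log(budget) / log(ebf)
        double depth = std::log(budget) / std::log(ebf);
        std::printf("EBF %.1f -> roughly %.1f plies on %.0e nodes\n",
                    ebf, depth, budget);
    }
}
```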
As I have said elsewhere, you are an interesting person and you know a lot about chess. But until you understand the complete implication of the branching factor, you cannot properly advise chess programmers.
The branching factor is the golden nail upon which all the kings will drape their mantles.
Mark my words,
Marking your words.

but before that, I will have to take a course on colloquial American.
neither you nor I are chess programmers, as neither of us has written/published a fully-fledged and functional chess engine from scratch.
I am not advising anyone, just sharing some thoughts.
with all your words and behaviour you want to deliver a single message:
'search is more important than evaluation as far as the performance of chess engines is concerned.'
and you drive your point home each and every time.
but this is simply not true.
just about everything in a chess engine revolves around eval, a specific estimate at each and every node (see the sketch after this list):
- you call a function named eval() or something similar at each and every node, unless you have a hash move
- to do move ordering, you use hash moves, whose score is based on eval; killer moves are based on eval
- in the main search functions, alpha is an evaluation estimate, and so is beta
- in the search routines, be it for futility pruning, razoring, null-move reductions or something else, you are always using some kind of evaluation seed
- for LMR/LMP, again, you need to order moves first, for which you use eval one way or another, and then the reduction specifics again only work within a particular evaluation framework
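To illustrate what the list above means in code, here is a hedged sketch with made-up, textbook-style margins (not Stockfish's actual numbers or logic): razoring, reverse futility (static null-move) pruning and the null-move trigger are all just comparisons between the static evaluation and the alpha/beta bounds, so every one of them lives in "evaluation units".

```cpp
#include <cstdio>

enum Decision { SEARCH_NORMALLY, RAZOR_TO_QSEARCH, STATIC_NULL_PRUNE, TRY_NULL_MOVE };

// All margins here are invented, illustrative numbers -- the point is only
// that every branch below compares the static eval against alpha or beta.
Decision pre_search_decision(int static_eval, int depth, int alpha, int beta) {
    const int razor_margin    = 300;
    const int futility_margin = 100 * depth;

    if (depth <= 1 && static_eval + razor_margin < alpha)
        return RAZOR_TO_QSEARCH;     // razoring: eval hopelessly below alpha
    if (depth <= 3 && static_eval - futility_margin >= beta)
        return STATIC_NULL_PRUNE;    // reverse futility: eval safely above beta
    if (static_eval >= beta)
        return TRY_NULL_MOVE;        // eval says a null move may already fail high
    return SEARCH_NORMALLY;
}

int main() {
    // Depth 1, static eval far below alpha: the razoring test fires.
    Decision d = pre_search_decision(/*static_eval=*/-250, /*depth=*/1,
                                     /*alpha=*/100, /*beta=*/120);
    std::printf("decision = %d (1 == RAZOR_TO_QSEARCH)\n", (int)d);
}
```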
so, to start a chess engine, after doing the move stack and generating the moves, the first thing you need is some kind of evaluation; search only comes second.
of course, as a matter of fact, both are inseparable, but if I had to pick a more important factor, that would be evaluation.
and indeed, you can build a one-ply engine with some basic evaluation that will still pick some reasonable moves, while it is almost impossible to do the same with the most sophisticated search, provided the program does not know what the pieces are worth.
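And a last, deliberately tiny runnable sketch of that one-ply idea (the position, the Move struct and the candidate moves are all made up for the demo; move generation and legality are left out): given a board and a list of legal moves, "search" one ply deep by making each move on a copy and keeping the one that leaves the best material count. Even this already prefers the queen capture over the pawn capture, while no amount of search machinery could make that choice without the piece values.

```cpp
#include <cstdio>
#include <cstring>
#include <vector>

struct Move { int from_r, from_c, to_r, to_c; };

// Standard-ish material values; anything else (including kings) counts as 0.
int piece_value(char p) {
    switch (p) {
        case 'P': return  100; case 'N': return  320; case 'B': return  330;
        case 'R': return  500; case 'Q': return  900;
        case 'p': return -100; case 'n': return -320; case 'b': return -330;
        case 'r': return -500; case 'q': return -900;
        default:  return 0;
    }
}

// Static eval: pure material count from White's point of view.
int evaluate(const char board[8][8]) {
    int score = 0;
    for (int r = 0; r < 8; ++r)
        for (int c = 0; c < 8; ++c)
            score += piece_value(board[r][c]);
    return score;
}

// One-ply "search": make each move on a copy, evaluate, keep the best (White to move).
Move pick_move(const char board[8][8], const std::vector<Move>& moves) {
    Move best = moves.front();
    int best_score = -1000000;
    for (const Move& m : moves) {
        char copy[8][8];
        std::memcpy(copy, board, sizeof(copy));
        copy[m.to_r][m.to_c]     = copy[m.from_r][m.from_c];
        copy[m.from_r][m.from_c] = '.';
        int score = evaluate(copy);
        if (score > best_score) { best_score = score; best = m; }
    }
    return best;
}

int main() {
    // Toy position: a white rook can capture either a black queen or a black
    // pawn (made-up coordinates, only these two candidate moves supplied).
    char board[8][8];
    std::memset(board, '.', sizeof(board));
    board[0][0] = 'R'; board[7][0] = 'q'; board[0][7] = 'p';
    std::vector<Move> moves = { {0, 0, 7, 0}, {0, 0, 0, 7} };
    Move m = pick_move(board, moves);
    std::printf("chosen move: from (%d,%d) to (%d,%d)\n",
                m.from_r, m.from_c, m.to_r, m.to_c);
}
```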