Uri Blass wrote:
My posts are not intended to convince the stockfish team to do something different.
Then I wonder what the purpose of your posts is. It's always the same: you suggest that "we" should do this, "we" should do that, that it would be interesting to study the effect of this or that. Yet when it comes to writing the code, you never do it unless it is completely trivial, and when it comes to spending CPU resources, you never give your own CPU time. This kind of attitude is not welcome in open source projects.
Academic masturbation leads nowhere. All the theoretical stuff in computer chess was discovered 20+ years ago. The answer to your initial question is that there is no magic recipe in SF. As Pavel Koziol explained, it's all about fine tuning and synergy between things. There's no fundamental difference between SF and an "average amateur engine", apart from the absence (or rareness) of bugs, and much better fine tuning. They all share the same fundamentals (alpha-beta, null move, qsearch, IID, PVS, hash table, etc.)
I disagree that all the theoretical stuff was discovered 20+ years ago.
For example, I do not think that in 1994 there were programs that did heavy pruning like Stockfish (maybe there were programs that used LMR at that time, but I do not believe any of them used the big reductions that Stockfish uses).
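Those "big reductions" can be sketched roughly like this. The shape of the formula (growing with the logarithm of both depth and move number) matches what modern engines are known to use, but the constants below are purely illustrative, not Stockfish's actual values:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative late-move-reduction table in the spirit of modern engines:
// reductions grow roughly logarithmically with both remaining depth and
// the move's position in the (sorted) move list. The constants 0.5 and
// 2.25 are made up for illustration, not taken from any real engine.
int lmr_reduction(int depth, int moveNumber) {
    if (depth < 3 || moveNumber < 4)
        return 0;                        // search early moves at full depth
    double r = 0.5 + std::log(depth) * std::log(moveNumber) / 2.25;
    return std::max(0, static_cast<int>(r));
}
```

The point is that at high depth a late move can be reduced by several plies at once, which is far more aggressive than the fixed one-ply reductions of early LMR experiments.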
I also think that knowing how much elo you can expect from something (assuming some previous stuff) is important information, because it means you probably have some bugs if you implement it and get a significantly smaller improvement.
lucasart wrote:
Academic masturbation leads nowhere. All the theoretical stuff in computer chess was discovered 20+ years ago. The answer to your initial question is that there is no magic recipe in SF. As Pavel Koziol explained, it's all about fine tuning and synergy between things. There's no fundamental difference between SF and an "average amateur engine", apart from the absence (or rareness) of bugs, and much better fine tuning. They all share the same fundamentals (alpha-beta, null move, qsearch, IID, PVS, hash table, etc.)
It's an attempt to estimate the real minimal search graph that could be obtained with perfect move ordering, getting the cheapest cutoff everywhere along the PV, etc. Obviously it's an unreachable goal for any engine, because the real minimal graph uses 20/20 hindsight. But it would be interesting to repeat those measurements with a modern chess engine like Stockfish.
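For the idealised uniform-tree case there is a classical closed form to compare against: Knuth and Moore showed that with perfect move ordering, alpha-beta on a tree of branching factor b and depth d visits exactly b^ceil(d/2) + b^floor(d/2) - 1 leaf nodes. A minimal sketch, assuming uniform branching (which real chess trees of course do not have):

```cpp
#include <cstdint>

// Knuth & Moore (1975): minimal alpha-beta tree on a uniform tree of
// branching factor b and depth d contains
//     b^ceil(d/2) + b^floor(d/2) - 1
// leaf nodes. This is the baseline such "minimal graph" measurements
// are compared against.
uint64_t minimal_tree_leaves(uint64_t b, int d) {
    auto ipow = [](uint64_t base, int e) {
        uint64_t r = 1;
        while (e-- > 0) r *= base;
        return r;
    };
    return ipow(b, (d + 1) / 2) + ipow(b, d / 2) - 1;
}
```

With b = 35 (a common estimate for chess) and d = 5 this gives 44099 leaves, versus 35^5 = 52 million for plain minimax, which is why move ordering dominates everything else.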
"One mustn't be afraid to dream a little bigger, darling"
All the theoretical stuff in computer chess was discovered 20+ years ago.
There are still plenty of things that are not understood from a theoretical point of view.
(1) Is there a theoretical explanation for why minimax search works so well in chess? The research into game tree pathologies appears rather unsatisfying to me.
(2) Why is the difference in engine strength roughly additive? In other words, why is there something like the elo model?
(3) Are there convincing examples of non-transitivity in Chess? Of course you can easily make up pathological examples but this is not what I mean.
(4) Why does self play magnify elo differences? In other words how does the "mind reading effect" _really_ work?
I could go on for a while...
Typical strawman argument!
Your philosophical questions are interesting but have nothing to do with the discussion.
We are talking about chess algorithms: stuff that you can use to make an engine better. Nothing else.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
Well, understanding the answers to some of these questions might make it possible to answer more concrete questions relevant to engine strength, like "Can contempt ever make an engine stronger against an equal engine?".
You will probably say: we can find this out by testing. But then you would only be testing two specific engines. After a small change you would have to test again, if you have no theory to rely on. If you have been following the contempt discussion on fishtest, you know that this last scenario is very real.
Let me mention something else: people like Peter Osterlund have been tuning their evaluation functions in such a way that the value gives the best possible prediction of the outcome of a game against an equal engine. From a tree search perspective this _seems_ like the optimal thing to do. It would be very nice to understand whether this approach must work, or else what prohibits it from working.
Again one can of course test, but one would only be testing one specific engine. It is much better to understand first and _then_ to confirm by testing.
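For reference, the tuning approach attributed to Peter Osterlund (often called "Texel tuning") minimises the squared difference between actual game results and a logistic mapping of the evaluation. A hedged sketch, where the scaling constant K and the use of centipawn units are assumptions of this illustration:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Map an evaluation in centipawns to an expected game score via a
// logistic curve. K is a scaling constant that is itself fitted per
// engine; its value here is left to the caller.
double expected_score(double evalCp, double K) {
    return 1.0 / (1.0 + std::pow(10.0, -K * evalCp / 400.0));
}

// Mean squared error over (eval, result) pairs, result in {0, 0.5, 1}.
// Tuning adjusts evaluation weights to drive this error down, i.e. to
// make the eval the best possible predictor of game outcomes.
double tuning_error(const std::vector<std::pair<double, double>>& data,
                    double K) {
    double e = 0.0;
    for (const auto& [evalCp, result] : data) {
        double d = result - expected_score(evalCp, K);
        e += d * d;
    }
    return data.empty() ? 0.0 : e / data.size();
}
```

The theoretical question above is whether minimising this prediction error is provably the right objective for a value that is then propagated through an alpha-beta tree, or whether the two can come apart.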
Ideas=science. Simplification=engineering.
Without ideas there is nothing to simplify.
Well, understanding the answers to some of these questions might make it possible to answer more concrete questions relevant to engine strength, like "Can contempt ever make an engine stronger against an equal engine?".
You will probably say: we can find this out by testing. But then you would only be testing two specific engines. After a small change you would have to test again, if you have no theory to rely on. If you have been following the contempt discussion on fishtest, you know that this last scenario is very real.
Let me mention something else: people like Peter Osterlund have been tuning their evaluation functions in such a way that the value gives the best possible prediction of the outcome of a game against an equal engine. From a tree search perspective this _seems_ like the optimal thing to do. It would be very nice to understand whether this approach must work, or else what prohibits it from working.
Again one can of course test, but one would only be testing one specific engine. It is much better to understand first and _then_ to confirm by testing.
You are again trying to broaden the scope of the discussion to pass another message: your frustration about engineering versus science, or whatever it may be.
The question Uri is asking is whether SF contains some magic trick worth lots of elo that no one else has tried. And the answer is no. The last magic trick was null move, invented over 20 years ago. After that it's all fine tuning. Even LMR is only fine tuning; the idea of reducing moves is very old. As Pavel explains, it's about fine tuning and synergy: for LMR, it was the improved move-sorting tricks that made it work, when it perhaps did not work well 30 years ago.
Your point about self testing is again irrelevant. The proof is in the pudding, whether you understand it theoretically or not. Self testing has allowed us to make 3300+ elo engines.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
Well, understanding the answers to some of these questions might make it possible to answer more concrete questions relevant to engine strength, like "Can contempt ever make an engine stronger against an equal engine?".
You will probably say: we can find this out by testing. But then you would only be testing two specific engines. After a small change you would have to test again, if you have no theory to rely on. If you have been following the contempt discussion on fishtest, you know that this last scenario is very real.
Let me mention something else: people like Peter Osterlund have been tuning their evaluation functions in such a way that the value gives the best possible prediction of the outcome of a game against an equal engine. From a tree search perspective this _seems_ like the optimal thing to do. It would be very nice to understand whether this approach must work, or else what prohibits it from working.
Again one can of course test, but one would only be testing one specific engine. It is much better to understand first and _then_ to confirm by testing.
You are again trying to broaden the scope of the discussion to pass another message: your frustration about engineering versus science, or whatever it may be.
The question Uri is asking is whether SF contains some magic trick worth lots of elo that no one else has tried. And the answer is no. The last magic trick was null move, invented over 20 years ago. After that it's all fine tuning. Even LMR is only fine tuning; the idea of reducing moves is very old. As Pavel explains, it's about fine tuning and synergy: for LMR, it was the improved move-sorting tricks that made it work, when it perhaps did not work well 30 years ago.
Your point about self testing is again irrelevant. The proof is in the pudding, whether you understand it theoretically or not. Self testing has allowed us to make 3300+ elo engines.
I do not agree that there are no magic tricks. I think there are magic tricks, and many programmers know part of them but not all of them.
There is no other explanation for the fact that in 2004 no program was significantly better than Fruit 2.1, while today many programs are better than it, including your program.
Today programmers know things that they did not know in 2004.
You are again trying to broaden the scope of the discussion to pass another message: your frustration about engineering versus science, or whatever it may be.
So where did you get that from?
You could simply have voiced the opinion that Uri's proposal may not yield results that are all that useful. I am neutral on this. Since it would not happen on fishtest anyway, the issue is largely irrelevant.
However, you (and also Marco, in fact) grabbed the occasion to use extremely aggressive language, strongly implying that research is by definition useless ("Academic masturbation" was your precise terminology).
So it was _you_ that widened the discussion.
Ideas=science. Simplification=engineering.
Without ideas there is nothing to simplify.
Uri Blass wrote:
Today programmers know things that they did not know in 2004.
Not really. Most of the core mechanics of Stockfish are based on, in some cases, very old mathematical theories.
The difference between SF and other engines is not nearly as great as people think, and the differences that do exist are subtle. SF's true strength lies in its optimization of these theories: SF is an extremely well optimized piece of software. That is why it is so strong. So if you're going to look for some magic under the hood, you won't find it.
All of SF's code combined makes it so strong, not any individual piece. In fact, some of its individual elements could be made far stronger with the help of chess professionals (GMs/super-GMs).