towforce wrote: ↑Sun Nov 15, 2020 7:05 pm
mwyoung wrote: ↑Sun Nov 15, 2020 5:13 pm
"You haven't addressed the issue of knowledge which I raised"
towforce wrote: ↑Sun Nov 15, 2020 11:48 am
mwyoung wrote: ↑Sun Nov 15, 2020 4:51 am
This is where some do not understand the problem.
towforce wrote: ↑Sun Nov 15, 2020 1:53 am
For this mini-thought experiment, please assume that chess is drawn (I know it's not proven yet):
* losses strongly correlate with blunders
* the deeper the search, the fewer the number of blunders
Unfortunately, not all engines measure depth in the same way. However, maybe we can come up with a "reasonable guess" based on experience.
Another complicating factor: some positions would require a prohibitively deep search to uncover the blunder. In these cases, knowledge would be needed: the eval would need to be able to avoid blunders that search cannot reach. The good news regarding this is that, thanks to NNs, engines are also getting cleverer now, as well as just faster. Again, exactly how "smart" an NN is is difficult to say - but again, we can have a go.
So if chess is drawn (which I believe it is), then the time to perfect chess engines depends on the shape of the 3-dimensional chart that plots blunders against depth and knowledge.
Edit: here's a simplistic view of what the 3d graph might look like (X = depth, Y = knowledge, Z = blunders. Simple expression produces a plane. Drag with the mouse to rotate up/down/left/right to see clearly) - link.
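For a concrete sense of what that chart is doing, here is a minimal Python sketch of such a plane. The coefficients and ranges are invented purely for illustration; they are not the expression behind the linked chart:

```python
# Toy model of the "blunders vs depth and knowledge" surface.
# z = max(0, c - a*x - b*y): blunders fall as either depth or knowledge rises.
# The constants below are made up for illustration only.
import numpy as np
import matplotlib.pyplot as plt

depth = np.linspace(0, 60, 50)        # X: search depth (plies)
knowledge = np.linspace(0, 10, 50)    # Y: eval "knowledge" (arbitrary units)
X, Y = np.meshgrid(depth, knowledge)
Z = np.maximum(0.0, 100 - 1.5 * X - 5.0 * Y)   # Z: blunders per game (toy numbers)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z)
ax.set_xlabel("depth")
ax.set_ylabel("knowledge")
ax.set_zlabel("blunders")
plt.show()
```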
"The deeper the search, the fewer number of blunders."
The problem is the type B search. A type B search is fine for playing scrub humans and other scrub engines. It gives us a great approximation.
The issue is you are making billions of guesses as to what lines to cut to achieve the great search depths we see today. And you only need to be wrong once against perfect play.
And no amount of search in a type B search can ever achieve perfect play.
This is why we see the errors as shown here in this thread. And why Stockfish fails in the examples against perfect play.
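To make the pruning point concrete, here is a self-contained toy sketch of what a type B (selective) search does: only the replies the heuristic considers "promising" are examined, so a refutation the heuristic misjudges is never searched at all. The moves and numbers are invented for illustration:

```python
# Toy type B search: rank replies by a cheap heuristic guess, search only the
# top few, and report the best value found among those. If the one refutation
# is ranked low, it is simply cut - and the reported value is wrong.
INF = float("inf")

def selective_value(replies, keep=2):
    """Value of the position, searching only the 'keep' most promising replies."""
    ranked = sorted(replies, key=lambda r: r["guess"], reverse=True)
    return max((r["true_value"] for r in ranked[:keep]), default=-INF)

replies = [
    {"move": "Qd3",  "guess": 0.6, "true_value": 0.0},   # heuristic's favourite: draw
    {"move": "Rfe1", "guess": 0.5, "true_value": 0.0},   # also searched: draw
    {"move": "Nxf7", "guess": 0.1, "true_value": +INF},  # the refutation, pruned away
]
print(selective_value(replies, keep=2))   # 0.0 - the forced win is never examined
```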
You haven't addressed the issue of knowledge which I raised (see above quoted text). You appear to be saying that the 3-dimensional chart should have a long tail on the way to Z=0 (if you're willing to assume that chess without a blunder is a draw). Maybe you could come up with your own mathematical expression and redraw my chart? "A picture is worth a thousand words".
In this post Albert Silver told us that in top level correspondence chess (TLCC), wins are rare in completed games. Let's consider some candidate reasons why this might be so (my preferred choice is option 1 - that TLCC is the cutting edge, and is almost there in terms of error-free chess).
1. Chess is a draw, a win requires a blunder, and TLCC has almost eliminated blunders
2. Chess is a draw, a win requires a blunder, blunders occur in TLCC, but TLCC suffers from groupthink, and hence the players fail to find each other's blunders
3. Chess is a win, but TLCC players are not good enough to find the available wins
Which of the above 3 choices do you prefer?
Yes, I have, many times. And the knowledge standard you are asking for only exists in one form. As I said before, chess is a 100% tactical game...
And I will take option 4: chess is either a win or a draw, but it does not matter, as humans are type B searchers and the computers they are using are type B searchers, even in correspondence chess, and hence the players fail to find each other's blunders.
"Another complicating factor: some positions would require a prohibitively deep search to uncover the blunder. In these cases, knowledge would be needed: the eval would need to be able to avoid blunders that search cannot reach."![]()
And it is the above that tells me you have no idea what you are talking about. You are just putting words together that you think make sense, but they are logically flawed. Not only do you not know the rules of chess, you are clueless as to how a type B search works.
If you had an eval that could "avoid blunders that search cannot reach."
If you had this type of evaluation, do you know what would not be needed? A search of any kind.
Here is a simple test to see if you have an evaluation that meets your standard: if your STATIC EVALUATION outputs anything other than the 3 true evaluations of chess (win, draw, loss), or it is not correct 100% of the time, your evaluation is flawed.
And yes, this type of knowledge does exist in only one form, and it is called a tablebase.
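For illustration, here is a minimal sketch of that one existing form of perfect static evaluation: probing an endgame tablebase. It assumes the python-chess library and locally downloaded Syzygy files; the path below is a placeholder:

```python
# A tablebase probe returns a true evaluation with no search at all.
# Requires python-chess and Syzygy tablebase files on disk.
import chess
import chess.syzygy

board = chess.Board("4k3/8/8/8/8/8/8/3QK3 w - - 0 1")  # KQ vs K, White to move

with chess.syzygy.open_tablebase("/path/to/syzygy") as tb:
    wdl = tb.probe_wdl(board)   # +2 win, 0 draw, -2 loss, from the side to move's view
    print(wdl)                  # 2: a true evaluation, no search required
```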
You're basically right - but if a position was won, you'd want one more thing from the eval - distance to mate. If you had a choice of winning moves, your preference would be for the one that reaches mate first.
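As a sketch of that tie-break: probe the tablebase for each legal move and keep the winning move with the shortest distance. Syzygy tablebases store DTZ (distance to zeroing) rather than distance to mate, but the principle is the same; python-chess and a local Syzygy path are assumed, and the path is a placeholder:

```python
# Among winning moves, prefer the one that converts fastest (DTZ as a stand-in
# for distance to mate). Requires python-chess and Syzygy files on disk.
import chess
import chess.syzygy

def fastest_winning_move(board, tb):
    best_move, best_dtz = None, None
    for move in board.legal_moves:
        board.push(move)
        dtz = -tb.probe_dtz(board)      # negate: probe is from the opponent's view
        board.pop()
        if dtz > 0 and (best_dtz is None or dtz < best_dtz):
            best_move, best_dtz = move, dtz
    return best_move

with chess.syzygy.open_tablebase("/path/to/syzygy") as tb:
    board = chess.Board("4k3/8/8/8/8/8/8/3QK3 w - - 0 1")  # KQ vs K
    print(fastest_winning_move(board, tb))
```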
To summarise your answer as to why the draw ratio is so high in completed TLCC games: the players all use similar computers for analysis, and this is causing groupthink.
I cannot prove that you're wrong, but here's a bit of evidence against that assertion:
* TLCC having such a high draw ratio in completed games is relatively recent
* if it's caused by groupthink, the players must therefore be relying more on the computers (or the high draw ratio in completed games would have been there previously)
* therefore, one would expect the computers playing each other to also have high draw ratios
* we're not (yet) seeing such a high draw ratio in computers playing each other
If the high draw ratio in completed games in TLCC is actually a reflection of the fact that a blunder is required for a win in chess, and there aren't many blunders in TLCC these days, then the above problem doesn't arise.
If I understand your question correctly, you are still comparing a type B search to a type B search. And as I have said, you can still improve a type B search. But it is not a perfect search, and a type B search can never rise to the level of a perfect search, even given unlimited time.