Thank you for contributing your knowledge/expertise. I am ultimately looking for a better understanding of the situation, and good contributions are helpful in that respect.
MonteCarlo wrote: ↑Thu Feb 18, 2021 1:09 am
There are some weak links in that chain; I'm not saying the idea is necessarily wrong, but the line of reasoning getting there is a bit tenuous.
First, the claim that LC0 is not especially good at endgames while humans are is dubious at best.
Sure, in endgames requiring very precise calculation SF is better than LC0, but SF is a very high bar there.
Also, even top humans are not particularly strong in the endgame from an objective standpoint. While one can still find the occasional position where humans grasp an evaluation or plan that is challenging for engines to find, this is hardly the rule.
I would not expect a strong human to come close to beating LC0 on decent hardware in a match starting from a suite of complex endings.
Even relatively (emphasis on "relatively") simple theoretical endings like KRBkr are botched by a couple of very strong GMs each year.
A perfect player needs a better role model than humans.
I got the information that LC0 is "weak in endgames" from here -
link.
It sounds as though your knowledge is more accurate than mine, so I'll concede that LC0 is probably better at playing endgames than strong humans.
However, LC0 is said to have a rating of around 2300 at ply 1. That is very good (and waaaaay stronger than me). Still, given that, in training, LC0 will have seen far more positions than any human, one would expect it to be better than a strong human, other things being equal.
The endgame is especially interesting because it's the clearest differentiator between players of different strength: after a simul with GM Danny King, I asked him how he knew his position was won. To my amazement, he set up the endgame position from our game on a board in seconds, then just looked at me as if to say, "How can you not see something so obvious?"
Look at the way a GM plays a simul: they get to your board, you make a move, and the GM responds and moves on - mostly in less than 10 seconds!
Somehow, whether it's knowledge of the world gathered from outside chess or a different NN structure from LC0, they seem to have knowledge of deeper and more complex patterns than LC0, despite having seen a lot fewer games.
Even if we ignore the above, though, the claim that human understanding must be something special is far from established.
I won't deny that there seems to be something different about the human approach to chess, but that it is more efficient cannot be inferred merely by noting how relatively little mental work we are conscious of doing.
We are in general conscious of very, very little of what is actually happening in the brain, so we don't really know how efficient our cognition is.
One thing we know is that somebody who has done something a large number of times is likely to have a "fast neural path" for doing it.
Finally, while it is true that humans seem to learn relatively well from far fewer samples than any machine learning system, even this is not necessarily so clear.
It might be that we learn relatively quickly in new domains because of how much we've learned in general, and the brain is able to leverage that existing learning from other domains, in which case it is again unclear just how efficient we actually are overall.
To reiterate, I'm not saying that your idea is necessarily wrong. I'm just saying that "Humans have a much more efficient approach to learning and playing chess than current engines" is not the only plausible explanation of your observations.
Cheers!
Well, fruit flies exhibit many complex behaviours, and they manage this with just 120,000 nerve cells - so a big brain is not necessary for complex behaviour: their brains are highly optimised to generate the behaviours they need with minimal hardware. That suggests to me that you don't need the 10^15 synapses of the human brain to play near-perfect chess.