Easy for humans, very hard for engines

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Uri Blass
Posts: 11170
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Easy for humans, very hard for engines

Post by Uri Blass »

Viz wrote: Sat May 25, 2024 9:04 am This is quite literally not chess.
Botvinnik was right: completely artificial compositions are just bad for your improvement as a chess player. We can see it with good examples from engines: those that solve it easily completely suck in actual play compared to those that don't, because the latter are optimized for actual chess games, not positions with doubled and tripled pawns plus a blocked-bishop setup.
This is clearly chess, and I disagree with you.
Humans who solve it fast do not suck in actual play compared to humans who don't.

I am not even sure you are correct about engines, because weak chess engines usually cannot find g3 in a reasonable time, while Lc0 solves it very quickly even though it is not optimized for solving studies.

Most relatively weak chess engines, those rated below 3000, cannot even see that White is winning after 1.g3 axb5 in a reasonable time, because the win is too deep for them.
chrisw
Posts: 4820
Joined: Tue Apr 03, 2012 4:28 pm
Location: Midi-Pyrénées
Full name: Christopher Whittington

Re: Easy for humans, very hard for engines

Post by chrisw »

Viz wrote: Sat May 25, 2024 9:04 am This is quite literally not chess.
Botvinnik was right: completely artificial compositions are just bad for your improvement as a chess player. We can see it with good examples from engines: those that solve it easily completely suck in actual play compared to those that don't, because the latter are optimized for actual chess games, not positions with doubled and tripled pawns plus a blocked-bishop setup.
An ANN eval that "knows" this is obviously better than an ANN eval (with the same size matrices) that doesn't. A same-size matrix doesn't cost any extra time. So, if you can get this knowledge "in" without some other useful knowledge falling out the other side, it's worth it.
I doubt we have much idea about knowledge density in networks, how much more can be held per given size, etc. We do know that LCZ solves it, so we could assume that some more input processing (of mobility) works; we also know that NNUEs are not a good format for presenting these kinds of inputs. If you had fast-access, massively parallel GPU programming capability, then trying to "know" everything at eval time becomes more possible (case in point: LCZ).
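To illustrate the point about NNUE inputs, here is a minimal toy sketch (all names and weights invented, not from any real engine): NNUE evaluations are fast because they are sums of per-(piece, square) feature weights, which a move updates incrementally; a global term like mobility depends on the whole position, so there is no cheap per-feature delta and it would have to be recomputed from scratch or encoded as a large number of extra inputs.

```python
# Toy sketch of why piece-square features suit incremental (NNUE-style)
# updates while mobility does not. Weights are arbitrary placeholders.

# NNUE-style: eval contribution is a sum over (piece, square) features.
weights = {("N", sq): 0.1 * sq for sq in range(64)}  # toy first-layer weights

def incremental_update(acc, piece, from_sq, to_sq):
    # A move touches only two features: subtract the old one, add the new
    # one. Constant work per move -- this is what makes NNUE fast.
    return acc - weights[(piece, from_sq)] + weights[(piece, to_sq)]

def mobility(position):
    # Mobility is a function of the WHOLE position: moving any piece can
    # change another piece's move count, so the term must be rescanned.
    return sum(len(moves) for moves in position.values())

# Position as {(piece, square): legal target squares} -- purely illustrative.
position = {("N", 1): [0, 2, 16], ("B", 9): [0, 18, 27]}

acc = weights[("N", 1)]
acc = incremental_update(acc, "N", 1, 16)  # cheap: two table lookups
total_mobility = mobility(position)        # global: full recomputation
```

The asymmetry is the point: the accumulator update is O(1) per move, while the mobility term forces an O(pieces) rescan, which is why such global knowledge fits a GPU net like Lc0 more naturally than an NNUE feature set.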