I doubt we can trust what engines say, because engines have bugs and can bring back wrong evaluations from the TT.

Ovyron wrote: ↑Sat Jan 18, 2020 10:00 am

Hopefully you're not talking about Aquarium's IDeA, which is an abomination: you're basically wasting 90% of your resources when you use it. The effects of learning can't really be emulated by a GUI, which has to rely on Exclude Moves in lines where it thinks "exploration" is necessary, or on checking the nodes in Multi-PV and going back to a previous position when the main line's score falls below some earlier line's score.
The engine itself needs to see the analysis: once it sees that this line doesn't work and that other line doesn't work, it automatically finds the best line and goes deeper into it without you having to play out the moves. The magic is that the engine automatically tells you whether another line is worth considering (as you revisit a previous node it switches to it) or not (it just repeats the same move with an updated score).
With IDeA, if the opponent has a plan that works against all your variations, it takes a long while to play all of them out until the tree is filled and the score finally becomes useful. With Learning, the engine only needs to see the refutation once; it then recognizes that the other tries transpose and shows the useful score on your second visit. That's how I was managing to "deem positions as lost" after visiting only a single node...
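The mechanism described above can be sketched in a few lines. This is a minimal illustration under my own assumptions (all names here are hypothetical, not from any actual engine): the engine persists corrected scores keyed by position, so any move order that transposes into an already-refuted position gets the learned score on its first visit, instead of needing a fresh deep search.

```python
# Hypothetical sketch of persistent position learning: learned scores are
# stored keyed by position, so transpositions hit the learned entry at once.

class LearnTable:
    """A persistent map from position key to learned score (centipawns)."""

    def __init__(self):
        self._table = {}

    def store(self, key, score):
        # Persist the corrected search result for this position.
        self._table[key] = score

    def probe(self, key):
        # Return the learned score, or None if this position is unknown.
        return self._table.get(key)


def evaluate(key, table, fresh_eval):
    """Prefer a learned score over a fresh (shallow) evaluation."""
    learned = table.probe(key)
    return learned if learned is not None else fresh_eval


table = LearnTable()
# First visit: a deep search shows the opponent's plan refutes this line.
table.store("pos-after-plan", -250)
# Later, a different move order transposes into the same position:
print(evaluate("pos-after-plan", table, fresh_eval=30))  # -> -250
```

Unlike a transposition table, the entries here are never overwritten by newer positions, which is exactly the "forgetting" problem mentioned below.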
Sadly, over the years very few users have grasped the concept, so they don't use it, and it has been removed from engines (Rybka 3 removed it, Houdini 4 removed it, Shredder 13 removed it, etc.). The name "Learning" was then reused for entirely different, unrelated features that fudge scores depending on game results, or that simply persist the TT as if you had never unloaded the engine (though the engine still forgets as positions are overwritten).
So one has to rely on private software, if one is lucky...
I get it, though apparently the private programs remain private because of Stockfish's licence. The programmers making them would freely share the binaries, but they don't want their sources to be known; if Stockfish allowed people to create closed derivatives, who knows how many learning Stockfishes we would have. But that's a different subject entirely.
But, hey, Jeremy Bernstein's open implementation of learning for Stockfish from 2014 is still here:
https://open-chess.org/download/file.ph ... 6bac1adc60
I still don't get why nobody has ported it to the latest Stockfish and given an up-to-date engine public learning. Then again, if my opponents had access to Learning, all my advantage over them would vanish, so the current situation is actually the best for me (the point is having a learning engine yourself, not that it's public), and I should shut up about it.
It may be interesting to test Stockfish on some cursed wins with DTZ = 101, with tablebases disabled, to see in how many cases Stockfish finds a wrong winning score after a long search.
Here is one example (I did not try it with Stockfish myself, but it may be interesting if somebody tests Stockfish with only 5-piece Syzygy tablebases, to see whether it shows a winning score or the correct draw score):
https://syzygy-tables.info/?fen=8/2k5/3 ... _w_-_-_0_1
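For readers unfamiliar with the terminology: DTZ counts plies to the next zeroing move (capture or pawn move), so a "won" position needing more than 100 plies cannot be converted before the 50-move rule kicks in and is only a "cursed win", i.e. a draw under tournament rules. A small sketch of that classification, under my reading of the Syzygy sign convention (real tables also have rounding subtleties this ignores):

```python
# Sketch of interpreting a Syzygy DTZ value for the side to move.
# Assumption: positive DTZ = win, negative = loss, 0 = draw; more than
# 100 plies to a zeroing move means the 50-move rule spoils the result.

def classify_dtz(dtz):
    """Classify a DTZ value into draw / win / cursed win / loss / blessed loss."""
    if dtz == 0:
        return "draw"
    if dtz > 0:
        # Win, unless the 50-move rule (100 plies) intervenes first.
        return "win" if dtz <= 100 else "cursed win"
    # Negative DTZ: the side to move is losing.
    return "loss" if dtz >= -100 else "blessed loss"


print(classify_dtz(101))  # -> cursed win
print(classify_dtz(100))  # -> win
```

This is why DTZ = 101 is the interesting boundary case for the test proposed above: a search that ignores the 50-move rule (or an engine probing only WDL without DTZ) would happily report it as winning.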