carldaman wrote:
It probably can be done without tying up too many resources, since the engines that have learning features are quite few - Critter, Baron, Phalanx, RomiChess, of course, and maybe a few others.

Critter's learning is flawed because it's blind unless the search reaches greater depth, which eventually becomes prohibitive. It also has trouble learning refutations found at low depth, which are very important. (Say Critter searches some line up to depth 24 and makes a move; the user knows this move loses and plays the refutation. Critter sees the new line is best from depth 10 onward, but to see it from the root it would need to reach depth 25, which isn't practical, so Critter may lose this position over and over.)
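To make the depth problem concrete, here's a minimal sketch of a depth-gated learned-score lookup. Critter's actual learn-file format and probing rules aren't public, so the names (LearnEntry, probe_learn) and the gating rule are assumptions for illustration; the point is only that a shallow learned refutation never gets trusted by a deep root search.

[code]
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Hypothetical learned-score entry; Critter's real format is not public.
struct LearnEntry {
    int score;  // centipawns, from the side to move
    int depth;  // depth at which the score was established
};

std::unordered_map<std::uint64_t, LearnEntry> learn_table;

// Depth-gated probe: a learned score is trusted only if it was established
// at least as deep as the current search needs. This models the "blind
// unless it reaches more depth" behaviour described above.
bool probe_learn(std::uint64_t key, int required_depth, int& score) {
    auto it = learn_table.find(key);
    if (it == learn_table.end() || it->second.depth < required_depth)
        return false;               // entry too shallow: ignored
    score = it->second.score;
    return true;
}

int main() {
    const std::uint64_t refuted_pos = 0x1234ULL;  // position after the losing move
    learn_table[refuted_pos] = { -300, 10 };      // refutation learned at depth 10

    int score;
    // A depth-24 root search wants a depth-23 verdict one ply down,
    // so the depth-10 learned loss is simply never used.
    if (!probe_learn(refuted_pos, 23, score))
        std::cout << "learned refutation ignored; engine repeats the losing move\n";
}
[/code]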
Stockfish (PA_GTB) learning is flawed because it's very easy to confuse it in some endgame: make it show 0.00 in a lost position, and it backtracks that score to the root. If this happens by accident, Stockfish will aim for that position thinking it's a draw and lose it every time.
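Here's a sketch of that failure mode. PA_GTB's actual learn-file layout and update rules aren't reproduced; this just shows how a single wrong 0.00 leaf, backed up negamax-style along the played line, poisons every ancestor up to the root.

[code]
#include <iostream>
#include <unordered_map>
#include <vector>

// Hypothetical learn table keyed by a position id (illustration only).
std::unordered_map<int, int> learned;  // score in centipawns

// Back-propagate a leaf score to the root of a played line, negamax-style:
// each ancestor stores the negated score of the position one ply deeper.
// The "line" is just a chain of positions 0 -> 1 -> ... -> n.
void backtrack_line(const std::vector<int>& line, int leaf_score) {
    int score = leaf_score;
    for (auto it = line.rbegin(); it != line.rend(); ++it) {
        learned[*it] = score;  // the wrong score overwrites every ancestor
        score = -score;
    }
}

int main() {
    // An endgame the engine misevaluates as 0.00 although it is lost
    // (e.g. a fortress-looking position it cannot crack at its depth).
    std::vector<int> game_line = {0, 1, 2, 3, 4};
    backtrack_line(game_line, 0);

    // The root (position 0) now carries the poisoned draw score, so the
    // engine will steer into this line again and again.
    std::cout << "learned score at root: " << learned[0] << "\n";
}
[/code]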
Other engines with Learning:
Yace - No comment on this one. The engine was so weak that I couldn't check the effect of learning, though it's stronger than RomiChess.
Rybka 3 - Actually has very good learning. The problem is that the user needs to propagate it manually, so there's no way you're going to play 1,000,000 games with it, as you'd need to sit there after every game and make sure Rybka learns.
Shredder 11 - My favorite learning algorithm so far. It doesn't have the flaws of the other learning algorithms, and one can jump around more easily. I think it's limited to a small file, though. And it was removed in Shredder 12 or 13 (I never had 12, but 13 doesn't have the feature).
Edit - Oh, and don't forget Houdini's learning, which was also removed in later versions. In my experience its learning was useless.