As in previous discussions, here are my thoughts.
(1) Position learning does not help in real games. The tree space is so large that a single learned value here and there doesn't help at all.
I'd like this feature as an analysis tool. If it were reimplemented, I could try to edit the feature into an experimental version of Crafty.
Generating learn data could be restricted to a function like Analyze(), or even to a compile option, so it doesn't affect anything in your testing.
(2) It introduces devious problems in testing on suites of positions. Learning can most definitely help there, but it is not the kind of help you want, because it is the learning, not your change, that makes the program perform better. I can't count the number of times people have posted results here only to retract them later with a "I forgot about position learning and had already run the test once, which polluted the next run."
I understand the concern about controlled test conditions.
(3) It has advantages in backward analysis of games, done manually. But that is the only plus I have seen myself. I removed it because of (1) and (2) above: no help in real games, and broken test results. (That happened to me for a while until I made sure I had a "learn=0" in every directory I could possibly test in, and even that failed whenever I set up a new directory for a specific test condition.)
The purpose I have in mind is automatically generating learn data for each position and writing it to a file. It would be similar to the old C.A.P. idea, but done in the search itself, without using .epd files to analyze. Learning after each node...
move > learn(); > move > learn();...
Joshua