Note that I do not think that most engines do learning in a hard manner.

diep wrote:
hi Remi,

Rémi Coulom wrote:
Very interesting. I am curious to see the results.

I had started to implement alternative models in bayeselo at the time I wrote the unfinished paper I posted here earlier. But I did not try to fit them with MM (minorization-maximization). My plan was to use Newton's method or Conjugate Gradient. I don't expect it will be possible to apply MM to Glenn-David.

I recommend normalizing elo scales by having the same derivative at zero of the expected gain (p(win) + p(draw)/2). That's how I did it for the original bayeselo.
Rémi
What seems very popular nowadays is that all sorts of engines do learning in a rather hard manner. By "hard" I mean: difficult to turn off.
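The normalization Rémi describes in the quote above can be sketched numerically: rescale a model's internal rating units so that the derivative of its expected gain at a rating difference of zero matches the slope of the classical Elo curve, which is ln(10)/1600 per Elo point. The `toy_gain` model below is a hypothetical stand-in for illustration, not the actual Glenn-David implementation.

```python
import math

# Classical Elo expected score: E(d) = 1 / (1 + 10**(-d/400)).
# Its derivative at d = 0 is ln(10)/1600 per Elo point.
ELO_SLOPE_AT_ZERO = math.log(10) / 1600

def slope_at_zero(expected_gain, eps=1e-6):
    """Central-difference estimate of the derivative of expected_gain at 0."""
    return (expected_gain(eps) - expected_gain(-eps)) / (2 * eps)

def elo_scale_factor(expected_gain):
    """Factor that converts a model's internal rating differences into
    Elo-like units, so that both curves have the same slope at zero."""
    return slope_at_zero(expected_gain) / ELO_SLOPE_AT_ZERO

# Hypothetical model: expected gain p(win) + p(draw)/2 as a logistic
# in the model's own internal units.
def toy_gain(d):
    return 1.0 / (1.0 + math.exp(-d))

factor = elo_scale_factor(toy_gain)
# toy_gain'(0) = 1/4, so factor = (1/4) / (ln(10)/1600) = 400/ln(10) ≈ 173.7
```

One internal unit of the toy model is then worth about 173.7 Elo points on the normalized scale.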
If we talk only about learning through the hash table, which is probably the most common case, then it is easy to disable this learning by making the engine quit after every game and restoring a fresh copy of it.
Even in the worst case, I think it is easy to turn off learning, even when the engine has no option to disable it. Simply keep a clean copy of every engine. After every game, delete every file the engine generated during the game, and delete the engine itself (to prevent it from learning by modifying its own executable). Save the PGN in a folder the engine knows nothing about, so it cannot use the previous game's PGN to learn. Then duplicate your clean copies to get a fresh installation for the next game, and run another game.
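The clean-copy procedure above can be sketched as a small shell loop. The directory names and the commented-out "run_one_game" command are hypothetical placeholders, not a real tournament tool:

```shell
#!/bin/sh
# Sketch of the clean-copy anti-learning procedure described above.

MASTER=./engines_master      # pristine copies, never run directly
WORK=./engines_work          # engines are run from here
PGN_STORE=./pgn_archive      # kept outside the engines' directories

# One-time setup (stands in for installing the engines once).
mkdir -p "$MASTER" "$PGN_STORE"
: > "$MASTER/engine.exe"

for game in 1 2 3; do
    rm -rf "$WORK"            # delete the engine and every file it wrote
    cp -r "$MASTER" "$WORK"   # fresh copy, so nothing persists between games
    # run_one_game "$WORK/engine.exe" "$PGN_STORE/game$game.pgn"  # hypothetical
done
```

Because the whole working directory is wiped and recreated each iteration, any learning file, modified executable, or stray PGN the engine produced is gone before the next game starts.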