I'd like to use an engine that learns that way (for analysis).
I've downloaded an old modified version of SF 6; are there any other engines that do this?
Permanent Hash/Position Learning
Moderators: hgm, Rebel, chrisw
-
- Posts: 493
- Joined: Wed Mar 15, 2006 6:13 am
- Location: Curitiba - PR - BRAZIL
Permanent Hash/Position Learning
A. Ponti
AMD Ryzen 1800x, Windows 10.
FIDE current ratings: standard 1913, rapid 1931
-
- Posts: 4556
- Joined: Tue Jul 03, 2007 4:30 am
Re: Permanent Hash/Position Learning
Yes:
Critter 1.6a
Shredder 11 (I don't know if 12 has it, 13 doesn't)
Houdini 4 (I don't know if 5 has it, 6 doesn't)
Rybka 3 (and its variants Human and Dynamic - Requires MultiPV switching for propagation)
Baron
Phalanx
Yace
Spike
RomiChess has something like this: you feed it the PGNs of the games manually, and it learns to penalize draws.
-
- Posts: 251
- Joined: Sat Dec 02, 2006 10:47 pm
- Location: Toronto
- Full name: Peter Kasinski
Re: Permanent Hash/Position Learning
I think Andscacs can do it too.
-
- Posts: 4556
- Joined: Tue Jul 03, 2007 4:30 am
Re: Permanent Hash/Position Learning
kasinp wrote: I think Andscacs can do it too.
No, what Andscacs does is save the hash and allow you to load it later, but that's just like never unloading the engine (so all engines do this, as long as you don't unload them). Engines with this functionality will eventually overwrite and forget what you teach them (even with things like "NeverClearHash").
-
- Posts: 4367
- Joined: Fri Mar 10, 2006 5:23 am
- Location: http://www.arasanchess.org
Re: Permanent Hash/Position Learning
Arasan uses a permanent hash table for learning, but only when playing games, and only if started as a Winboard/xboard engine (not UCI). I think this is a fairly common feature. But what you are talking about is, I think, something different: keeping a permanent hash in analysis mode. AFAIK the Stockfish mod mentioned and Andscacs are two engines that do this.
See http://talkchess.com/forum/viewtopic.php?t=64517 for previous discussion.
--Jon
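The distinction drawn above, between merely saving a hash that will eventually be overwritten and a learning file that never forgets, could be sketched like this (a minimal illustration only; `LEARN_FILE` and the JSON layout are made-up assumptions, not Arasan's or any engine's actual on-disk format):

```python
import json
import os

# Hypothetical file name; real engines each use their own format.
LEARN_FILE = "learn.json"

def load_learning(path=LEARN_FILE):
    """Load the persistent position table, or start empty."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def store_position(table, fen, depth, score):
    """Keep the deepest result seen for a position and never evict.
    Never evicting is what distinguishes this from an ordinary
    transposition table, which overwrites entries as it fills up."""
    old = table.get(fen)
    if old is None or depth > old["depth"]:
        table[fen] = {"depth": depth, "score": score}

def save_learning(table, path=LEARN_FILE):
    """Write the table back to disk (e.g. at exit or after each game)."""
    with open(path, "w") as f:
        json.dump(table, f)

# Usage: the engine would consult the table before searching a position.
table = load_learning()
store_position(table, "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -", 30, 25)
```

Because entries are only ever deepened, never evicted, the file grows slowly, which matches the observation below that even years of play produce only a small learning file.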
-
- Posts: 493
- Joined: Wed Mar 15, 2006 6:13 am
- Location: Curitiba - PR - BRAZIL
Re: Permanent Hash/Position Learning
jdart wrote: Arasan uses a permanent hash table for learning, but only when playing games, and only if started as a Winboard/xboard engine (not UCI). I think this is a fairly common feature. But what you are talking about is, I think, something different: keeping a permanent hash in analysis mode. AFAIK the Stockfish mod mentioned and Andscacs are two engines that do this.
See http://talkchess.com/forum/viewtopic.php?t=64517 for previous discussion.
--Jon
Does that mean Arasan is always learning? The more it plays, the bigger the hash file?
A. Ponti
AMD Ryzen 1800x, Windows 10.
FIDE current ratings: standard 1913, rapid 1931
-
- Posts: 4367
- Joined: Fri Mar 10, 2006 5:23 am
- Location: http://www.arasanchess.org
Re: Permanent Hash/Position Learning
Yes. But it will take a long time for the size to become significant, given current disk and memory sizes. The version that has been playing on the chess servers for a couple of years has a learning file of about 26,000 lines (800 KB or so).
--Jon
-
- Posts: 493
- Joined: Wed Mar 15, 2006 6:13 am
- Location: Curitiba - PR - BRAZIL
Re: Permanent Hash/Position Learning
Maybe that's not enough to increase an engine's strength dramatically (say, by 100 Elo points or more).
It is possible to gain some points using a huge opening book and some points using tablebases... but what about storing *middlegame positions* (and loading this hash file when the engine starts)? Suppose an engine "only" plays QP, English and Reti as white (is this possible? hehehehe), could this type of hash file increase the engine's strength? What do you think?
A. Ponti
AMD Ryzen 1800x, Windows 10.
FIDE current ratings: standard 1913, rapid 1931
-
- Posts: 545
- Joined: Tue Jun 06, 2017 4:49 pm
- Location: Italy
Re: Permanent Hash/Position Learning
Ponti wrote: Maybe that's not enough to increase an engine's strength dramatically (say, by 100 Elo points or more). It is possible to gain some points using a huge opening book and some points using tablebases... but what about storing *middlegame positions* (and loading this hash file when the engine starts)? Suppose an engine "only" plays QP, English and Reti as white (is this possible? hehehehe), could this type of hash file increase the engine's strength? What do you think?
In 2006 I built an experimental book (Manhattan.abk) for RomiChess. It played only one mandatory move per position, and it always started with the English as white and the Caro-Kann and Queen's Indian as black. It was a tiny book, but deep enough. While it had a reply (only one) for most plausible opponent moves, the number of lines was very small: about 200, if I remember correctly. It was a failure for several reasons:
- Book quality was less than average.
- Learning was effective only until some game conditions changed (time controls, a different book).
- Learning against an opponent's book wasn't valuable when the opponent changed.
These days, with much more powerful hardware and far better engines, I think a few million games could make the engine stronger, but I don't know by how much. Anyway, the Cerebellum Polyglot version with 0% for the second-best move could bring some results if you use it with SF PA GTB. I haven't tried it, though...
Building a learning book would be an easier task than position learning. If there were a GUI that performed the proper search using a Stockfish version upgraded with Daniel's code, the search tree could become a learning-book subtree. If this sounds complicated, it's only because of my poor English...
F.S.I. Chess Teacher
-
- Posts: 4556
- Joined: Tue Jul 03, 2007 4:30 am
Re: Permanent Hash/Position Learning
Rodolfo, for Elo points I think RomiChess's method is the best, as it can eventually learn how to beat anyone from a given position; doing it from the starting position would just take a while...
The problem is Romi reaching won positions, losing them, and then avoiding them because it lost...
For this conundrum I propose the following solutions:
1. Implement Romi's learning in Stockfish! It could potentially be 200 Elo stronger than any opponent after a few thousand training games. Most of the Elo would come from being ahead on the clock, as Stockfish would learn what the opponent plays and instantly repeat, deeper and deeper, what won in the past. This is also Michael's dream; given that Romi is open source, the only reason this hasn't happened is lack of interest.
2. Build an adapter that uses Romi as a book, switches to Stockfish to play the game, then sends the PGN with the game result back to Romi so it learns. The adapter could be simple, and it could be used to make any engine a book for another engine: just play engine A's moves if they arrive in less than a second (assumed to be book moves); otherwise (A took longer than a second, assumed out of book) fire up engine B for the rest of the game.
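The one-second rule from point 2 could be sketched like this (a minimal illustration; `choose_mover`, `ask_a` and `ask_b` are hypothetical placeholders for whatever UCI/xboard wrappers the adapter would actually use, and the threshold is the value suggested in the post, not a tuned constant):

```python
import time

BOOK_THRESHOLD = 1.0  # seconds; the one-second rule from the post

def choose_mover(ask_a, ask_b, position):
    """Ask engine A first. If it answers within BOOK_THRESHOLD seconds,
    treat the move as a book move and keep using A; otherwise assume A
    is out of book and hand the position to engine B instead."""
    start = time.monotonic()
    move = ask_a(position)
    elapsed = time.monotonic() - start
    if elapsed < BOOK_THRESHOLD:
        return ("A", move)            # still in A's book
    return ("B", ask_b(position))     # out of book: B plays from here on
```

In a full adapter, once engine B takes over it would play the rest of the game, and at the end the adapter would feed the PGN plus result back to engine A so its learning file is updated.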