CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Discussion of computer chess matches and engine tournaments.

Moderators: bob, hgm, Harvey Williamson

Ferdy
Posts: 4111
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Ferdy » Wed Feb 20, 2019 12:06 pm

Modern Times wrote:
Tue Feb 19, 2019 8:05 pm
No learning is allowed during the course of running an engine. Prior fixed learning is allowed. I can't think of any examples at the moment. Any testing an engine author does prior to release is in effect learning.

I guess the exception is if the learning file is essentially an opening book; then that is a problem, but things get a bit murky as well. In essence the engine is supposed to do the work and not use pre-prepared moves, except for tablebases, as Graham says.
A learn file, an opening book and an NN are generally the same kind of thing: data that helps improve the engine's play.
Before, we only had the learn file (a file that may contain positions the engine misjudges or cannot understand even given more time, e.g. a position that is losing but the engine sees as winning or equal, or one that is winning but the engine sees as losing or equal) and the opening book (with best moves, limited to a certain number of plies). Now an NN simply combines the two in one file. LZ is even more aggressive in using its NN: it uses the move probabilities throughout its search, not just at the root position. Given the start position, it already knows that e4, d4, Nf3 and c4 are the top moves, and if you go a little deeper than the start position, as in most common tournaments with a limited opening book, the traditional engine is sweating away searching for the best move, while LZ just queries its NN file, which has the best moves for that position and beyond.

I think a traditional engine should therefore be allowed to use a data file (learn file, book file, any file) in a contest where LZ is participating.
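The traditional learn file described here can be pictured as a simple lookup table of corrected scores, consulted before the engine trusts its own evaluation. This is a minimal sketch; the keys, scores and function names are illustrative assumptions, not any real engine's format.

```python
# Minimal sketch of a traditional learn file: position keys mapped to
# corrected scores for positions the engine previously misjudged.
# Keys, scores and names here are illustrative only.

learn_table = {}  # position_key -> corrected score in centipawns

def store_learned(position_key, corrected_score):
    """Record a position whose game outcome contradicted the engine's eval."""
    learn_table[position_key] = corrected_score

def evaluate(position_key, static_eval):
    """Prefer the learned score when this exact position is on file."""
    return learn_table.get(position_key, static_eval)

# The engine scored this position +50 cp but lost from it, so the
# learn file overrides the static evaluation with a losing score.
store_learned(0xABCDEF, -300)
```

Note that such a table only helps for positions stored verbatim; everything else falls through to the normal evaluation, which is one way it differs from an NN.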

Guenther
Posts: 3109
Joined: Wed Oct 01, 2008 4:33 am
Location: Regensburg, Germany
Full name: Guenther Simon

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Guenther » Wed Feb 20, 2019 1:22 pm

hgm wrote:
Wed Feb 20, 2019 10:13 am
...
I don't know if there are any Chess engines that store their eval parameters in a separate parameter file (e.g. to facilitate tuning without having to recompile), but I know for sure that most Shogi engines do this. (E.g. for Bonanza you would have to download a >200MB file with eval parameters, in addition to the executable.)
...
There are several chess programs that do this too.
Right now, I remember at least Gosu, Amateur, and, newer and more prominent, SmarThink.
I am sure there are more, but I am too lazy to check them now.
Current foe list count : [101]
http://rwbc-chess.de/chronology.htm

hgm
Posts: 23773
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by hgm » Wed Feb 20, 2019 2:21 pm

Ferdy wrote:
Wed Feb 20, 2019 12:06 pm
..., the traditional engine is sweating away searching for the best move, while LZ just queries its NN file, which has the best moves for that position and beyond.
You make it sound like running the NN requires no sweat at all, as if it were like probing an opening book or a memory-loaded bitbase. But nothing could be further from the truth. In the time it would take to get the move probabilities for a given position from the NN on a single CPU, Stockfish would have done a search so deep that it knows the best move with far higher precision, and would likely already play above 2000 Elo.

Rebel
Posts: 4788
Joined: Thu Aug 18, 2011 10:04 am

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Rebel » Wed Feb 20, 2019 2:55 pm

Guenther wrote:
Wed Feb 20, 2019 1:22 pm
hgm wrote:
Wed Feb 20, 2019 10:13 am
...
There are several chess programs that do this too.
Right now, I remember at least Gosu, Amateur and newer and more prominent, SmarThink.
I am sure there are more, but I am too lazy to check them now.
I'd like to give you a helping hand.

Rebel 12 (2003) - Extended Learner: something new, similar to the (DOS) CAT approach, only better. When the option is active it will add large parts of games that went well to the opening book, or use the data for a faster calculation next time.

From my notes:

Code: Select all

1000 games at 40m/60s [self-play]

Initial round : 49.5% starting from an empty learn file.
Second  round : 52.4%
Third   round : 55.1%
Not allowed on CCRL. Not sure about CEGT.
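The book-growing side of such an extended learner might be sketched like this; the win-only policy, names and data format are assumptions for illustration, not Rebel 12's actual code.

```python
# Hedged sketch of an "extended learner": after a game that went well,
# feed its opening moves back into the book so they are trusted and
# played quickly next time. All names here are illustrative.

opening_book = {}  # position_key -> move the engine should repeat

def learn_from_game(positions, moves, result, max_plies=30):
    """Add the first max_plies moves of a won game to the book.
    positions[i] is the position key before moves[i] was played."""
    if result != "win":
        return  # only reinforce lines that went well
    for pos, move in list(zip(positions, moves))[:max_plies]:
        opening_book[pos] = move
```

Starting from an empty file, each round of self-play grows the book, which is consistent with the rising scores reported above.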
90% of coding is debugging, the other 10% is writing bugs.

Ferdy
Posts: 4111
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Ferdy » Wed Feb 20, 2019 3:15 pm

hgm wrote:
Wed Feb 20, 2019 2:21 pm
Ferdy wrote:
Wed Feb 20, 2019 12:06 pm
..., the traditional engine is sweating away searching for the best move, while LZ just queries its NN file, which has the best moves for that position and beyond.
You make it sound like running the NN requires no sweat at all, and is like probing an opening book or memory-loaded bitbase. But nothing could be further from the truth. In the time it would take to get the move probabilities for a given position from the NN, on a single CPU, Stockfish would have done a search so deep that it knows the best move with a far higher precision, and would likely already play above 2000 Elo.
I am not saying that all chess positions are covered by the NN.
Just allow a system that makes both engines happy: a GPU for LZ and a CPU for SF. It does not change the fact that LZ is using a learning file, but that is not the issue, because it is by design.

hgm
Posts: 23773
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by hgm » Wed Feb 20, 2019 3:41 pm

No, I don't think the file with NN parameters can be called a 'learn file' any more than stockfish.exe can. It just contains eval and search parameters that happen to be located in a separate file for convenience, rather than statically linked into the executable. LC0 has not done any position learning, which is what 'learn file' traditionally refers to: essentially a book containing individual positions with their evaluations or recommended best moves. The NN is nothing like that: it has to calculate evals and moves on the fly, through a horrendously complex calculation, just as alpha-beta engines calculate these on the fly, through algorithms that can be applied to any legal position, not just to a negligible subset of positions that happened to be learned.

It is unfortunate (and confusing) that standard computer-chess terminology clashes here with general AI terminology: what the latter would consider 'learning', we refer to as 'tuning'. But what we call 'Texel tuning' of an evaluation is, in general terminology, really just learning to predict position values through gradient-descent fitting, and in fact exactly the same mathematical procedure as the one used to train an NN on game data.

This is even clearer in the case of Deus X, which had its NN trained from an external database of GM games.
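The "same mathematical procedure" point can be made concrete with a toy Texel-style tuner: fit evaluation parameters by gradient descent so that a sigmoid of the eval predicts game results. The scaling constant K, learning rate, step count and data below are illustrative assumptions, not values from any real engine.

```python
import math

# Toy Texel-style tuning: gradient descent on the mean squared error
# between sigmoid(eval) and actual game results (1 = win, 0.5 = draw,
# 0 = loss). K, lr and steps are illustrative, not tuned values.

K = 0.01  # scaling from centipawns to expected score

def predict(params, features):
    """Linear eval (dot product) squashed to an expected game result."""
    eval_cp = sum(w * f for w, f in zip(params, features))
    return 1.0 / (1.0 + math.exp(-K * eval_cp))

def tune(params, data, lr=50.0, steps=2000):
    """data: list of (feature_vector, result) pairs from played games."""
    for _ in range(steps):
        grads = [0.0] * len(params)
        for features, result in data:
            p = predict(params, features)
            # d/dparam of (p - result)^2 via the sigmoid's derivative
            common = 2.0 * (p - result) * K * p * (1.0 - p)
            for i, f in enumerate(features):
                grads[i] += common * f
        params = [w - lr * g / len(data) for w, g in zip(params, grads)]
    return params
```

Training an NN on game data minimizes the same kind of loss with the same gradient-descent machinery, just over vastly more parameters.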

Guenther
Posts: 3109
Joined: Wed Oct 01, 2008 4:33 am
Location: Regensburg, Germany
Full name: Guenther Simon

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Guenther » Wed Feb 20, 2019 5:06 pm

Rebel wrote:
Wed Feb 20, 2019 2:55 pm
Guenther wrote:
Wed Feb 20, 2019 1:22 pm
hgm wrote:
Wed Feb 20, 2019 10:13 am
...
There are several chess programs that do this too.
Right now, I remember at least Gosu, Amateur and newer and more prominent, SmarThink.
I am sure there are more, but I am too lazy to check them now.
I'd like to give you a helping hand.

Rebel 12 (2003) - Extended Learner: something new, similar to the (DOS) CAT approach, only better. When the option is active it will add large parts of games that went well to the opening book, or use the data for a faster calculation next time.

From my notes:

Code: Select all

1000 games at 40m/60s [self-play]

Initial round : 49.5% starting from an empty learn file.
Second  round : 52.4%
Third   round : 55.1%
Not allowed on CCRL. Not sure about CEGT.
No no no, that's something very different.
The programs I listed, and what HG meant, keep the plain eval (or parts of it) as an external file, no more, no less.
Current foe list count : [101]
http://rwbc-chess.de/chronology.htm

Rebel
Posts: 4788
Joined: Thu Aug 18, 2011 10:04 am

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Rebel » Thu Feb 21, 2019 4:12 am

Guenther wrote:
Wed Feb 20, 2019 5:06 pm
...
No no no, that's something very different.
The programs I listed, and what HG meant, keep the plain eval (or parts of it) as an external file, no more, no less.
Alright, so the data files of Gosu, Amateur and SmarThink have nothing to do with learning.
90% of coding is debugging, the other 10% is writing bugs.

Graham Banks
Posts: 33236
Joined: Sun Feb 26, 2006 9:52 am
Location: Auckland, NZ

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Graham Banks » Thu Feb 21, 2019 4:44 am

Rebel wrote:
Thu Feb 21, 2019 4:12 am
Guenther wrote:
Wed Feb 20, 2019 5:06 pm
...
No no no, that's something very different.
The programs I listed, and what HG meant, keep the plain eval (or parts of it) as an external file, no more, no less.
Alright, so the data files of Gosu, Amateur and SmarThink have nothing to do with learning.
GreKo has a weights file too, but none of these engines is actually learning from the CCRL games played.
My email addresses:
gbanksnz at gmail.com
gbanksnz at yahoo.co.nz

Rebel
Posts: 4788
Joined: Thu Aug 18, 2011 10:04 am

Re: CCRL 40/40, 40/4 and FRC lists updated (16th February 2019)

Post by Rebel » Thu Feb 21, 2019 4:53 am

hgm wrote:
Wed Feb 20, 2019 3:41 pm
No, I don't think the file with NN parameters can be called a 'learn file' any more than stockfish.exe can. It just contains eval and search parameters that happen to be located in a separate file for convenience, rather than statically linked into the executable. LC0 has not done any position learning, which is what 'learn file' traditionally refers to: essentially a book containing individual positions with their evaluations or recommended best moves.
There is much more you can do with learning than a book in disguise. Off the top of my head:

#1. The Rebel 12 example I gave, in particular speeding up the search, often allowing the engine to calculate one ply deeper.

#2. Evaluation of Move Sequences.

#3. Evaluation of Pawn Structures.

#4. The Rybka MIT technique.

Imagine a game that went as follows:

Move 1-10 : book
Move 11 : score +0.20
Move 15 : score +0.25
Move 20 : score -0.10
Move 25 : score -0.80
Move 30 : score -3.xx

The learning algorithm may decide to concentrate on move 15-20 and play something else.
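One way such a learner could pick that range is to scan the game's score trajectory for the interval where the eval first turned against the engine. The function name and the non-negative-to-negative rule below are assumptions chosen to reproduce the move 15-20 choice, not any real engine's algorithm.

```python
# Illustrative scan of a game's score trajectory: return the move
# interval where the eval first turned against the engine, which is
# where a learner might try a different move next time.

def find_turning_point(scores):
    """scores: list of (move_number, eval) pairs in game order."""
    for (m1, s1), (m2, s2) in zip(scores, scores[1:]):
        if s1 >= 0 > s2:  # first drop from non-negative to negative
            return (m1, m2)
    return None

# The example game above (final elided score omitted):
game = [(11, 0.20), (15, 0.25), (20, -0.10), (25, -0.80)]
```

Here find_turning_point(game) yields (15, 20), the same range singled out above.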

Learning is an unexplored area, and as long as it is on the boycott list of the rating lists it will remain that way.
90% of coding is debugging, the other 10% is writing bugs.
