Open letter to chess programmers

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

User avatar
hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Open letter to chess programmers

Post by hgm »

OK, I now have a WinBoard version that prints it like this:

Code: Select all

1r3rk1/4pp1p/2Bpn1p1/4q3/1PR5/3QP1P1/5P1P/2R3K1 w - - bm h4;
r1bqk2r/pp3pp1/2p2nnp/3p4/1b1P4/2N1PNB1/PPQ2PPP/2R1KB1R b Kkq - bm Ne7;
2rq1nk1/ppr1bppp/2n1p3/3pP3/3P1NQ1/1P1NB3/P4PPP/R2R2K1 w - - bm Rab1;
r1bqkb1r/pp2pppp/2n2n2/2pp4/3P4/1P2PN2/P1P2PPP/RNBQKB1R w KQkq - bm Bd3;
3rr1k1/p1pqn1pp/1p2pb2/3p1p2/1PPP1N2/2BQ2P1/P3PP1P/2RR2K1 w - - bm Nh5;
r1bq1rk1/pp3ppp/2n5/2bp4/8/5NP1/PPQ1PPBP/R1B2RK1 b - - bm b6;
rnq2rk1/1p2pp1p/3p1np1/8/3Q4/5N2/PPP1BPPP/R4RK1 w - - bm Qd3;
r2qr1k1/pp1n1pp1/2pb1n1p/3pp3/1PP3b1/3P1NP1/PBNQPPBP/R4RK1 b - - bm dxc4;
r2qr1k1/1bpn1ppp/1p1ppn2/p7/2PP4/4QNPB/PP2PP1P/2RRB1K1 b - - bm Rb8;
3r1rk1/1q3ppp/p2p1b2/1p2p3/4Pn2/P1P2NNP/1PQ2PP1/R3R1K1 b - - bm Rb8;
All that was needed for this was to add the code

Code: Select all

	if(*appData.finger) {
	    static FILE *f;
	    char *fen = PositionToFEN(backwardMostMove, NULL);
	    if(!f) f = fopen(appData.finger, "w");
	    fen[strlen(fen) - 4] = NULLCHAR;
	    if(f) fprintf(f, "%s bm %s;\n", fen, parseList[backwardMostMove]);
	    free(fen);
	    GameEnds(GameUnfinished, NULL, GE_XBOARD);
	}
and a new option -finger FILENAME in the option list where the user can specify the output file (and switch on the code).
User avatar
hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Open letter to chess programmers

Post by hgm »

The patch is now incorporated in the latest commit on the GNU-Savannah source repository. Running the similarity test requires the command

xboard -fcp ENGINE -scp ENGINE -mg 10000 -finger fingerprint.epd -lpi -1 -lpf simcsvn1.unix.epd -searchTime 0:01

On Windows, start WinBoard through the Startup Dialog, selecting the engine under test both as first and second engine, and as 'Additional options' write:

-mg 10000 -finger fingerprint.epd -lpi -1 -lpf simcsvn1.dos.epd -searchTime 0:01

The output will then be written to the file 'fingerprint.epd'.
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: 24+ hours ...

Post by Laskos »

lucasart wrote:
Laskos wrote: For diminishing returns something like 2^(8+((ELO-1400)/100)^1.1) might be more accurate (assuming ELO>1400).
I've done some measurements, and the diminishing return effect is not as clear as I thought. It really starts to kick in after a certain depth, probably due to move-count and SEE pruning at very low depths. I got a good fit with the following model:

* nodes = 2^(n+8) for n >= 0
* ELO(self-play) ~= 200*n^0.93

(ELO scale is obviously relative to an arbitrary constant. here I chose ELO=0 when n=0, that is to say for 256 nodes)

Then I would transform that into human ELO based on a pure guess:
* 256 nodes corresponds to 1400 ELO (?)
* computer vs computer (as opposed to computer vs human) combined with the fact that it's self-play, inflates ELO by a factor two

So I get:

ELO(human scale) = 1400 + 100*n^0.93

I'm still waiting for more test results to finish, as it takes longer for large values of n, obviously... I need to see how the diminishing return effect accelerates (or not) at n=11 and n=12. So far, I've measured n=0..10
Did you get the n=11,12 results? Your 0.93 is basically my 1/1.1; the scheme is the same. I put nodes = 2^(8+((ELO-1400)/100)^1.1) as a guesstimate giving slowly diminishing returns.
User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: 24+ hours ...

Post by lucasart »

Laskos wrote:
lucasart wrote:
Laskos wrote: For diminishing returns something like 2^(8+((ELO-1400)/100)^1.1) might be more accurate (assuming ELO>1400).
I've done some measurements, and the diminishing return effect is not as clear as I thought. It really starts to kick in after a certain depth, probably due to move-count and SEE pruning at very low depths. I got a good fit with the following model:

* nodes = 2^(n+8) for n >= 0
* ELO(self-play) ~= 200*n^0.93

(ELO scale is obviously relative to an arbitrary constant. here I chose ELO=0 when n=0, that is to say for 256 nodes)

Then I would transform that into human ELO based on a pure guess:
* 256 nodes corresponds to 1400 ELO (?)
* computer vs computer (as opposed to computer vs human) combined with the fact that it's self-play, inflates ELO by a factor two

So I get:

ELO(human scale) = 1400 + 100*n^0.93

I'm still waiting for more test results to finish, as it takes longer for large values of n, obviously... I need to see how the diminishing return effect accelerates (or not) at n=11 and n=12. So far, I've measured n=0..10
Did you get the n=11,12 results? Your 0.93 is basically my 1/1.1; the scheme is the same. I put nodes = 2^(8+((ELO-1400)/100)^1.1) as a guesstimate giving slowly diminishing returns.
Yes, I finally finished the test, and the diminishing returns effect is more pronounced there, as expected. I am now using this formula:

Code: Select all

nodes = pow(2.0, 8.0 + pow((UCI_Elo-ELO_MIN)/128.0, 1.0/0.9));
ELO_MIN is an arbitrary number corresponding to the ELO value we want to assign to 2^8 = 256 nodes. I bumped it up to 1500 ELO after playing a few games and realizing that a 256-node search is stronger than I expected.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: 24+ hours ...

Post by Laskos »

lucasart wrote:
Laskos wrote:
lucasart wrote:
Laskos wrote: For diminishing returns something like 2^(8+((ELO-1400)/100)^1.1) might be more accurate (assuming ELO>1400).
I've done some measurements, and the diminishing return effect is not as clear as I thought. It really starts to kick in after a certain depth, probably due to move-count and SEE pruning at very low depths. I got a good fit with the following model:

* nodes = 2^(n+8) for n >= 0
* ELO(self-play) ~= 200*n^0.93

(ELO scale is obviously relative to an arbitrary constant. here I chose ELO=0 when n=0, that is to say for 256 nodes)

Then I would transform that into human ELO based on a pure guess:
* 256 nodes corresponds to 1400 ELO (?)
* computer vs computer (as opposed to computer vs human) combined with the fact that it's self-play, inflates ELO by a factor two

So I get:

ELO(human scale) = 1400 + 100*n^0.93

I'm still waiting for more test results to finish, as it takes longer for large values of n, obviously... I need to see how the diminishing return effect accelerates (or not) at n=11 and n=12. So far, I've measured n=0..10
Did you get the n=11,12 results? Your 0.93 is basically my 1/1.1; the scheme is the same. I put nodes = 2^(8+((ELO-1400)/100)^1.1) as a guesstimate giving slowly diminishing returns.
Yes, I finally finished the test, and the diminishing returns effect is more pronounced there, as expected. I am now using this formula:

Code: Select all

nodes = pow(2.0, 8.0 + pow((UCI_Elo-ELO_MIN)/128.0, 1.0/0.9));
ELO_MIN is an arbitrary number corresponding to the ELO value we want to assign to 2^8 = 256 nodes. I bumped it up to 1500 ELO after playing a few games and realizing that a 256-node search is stronger than I expected.
You could go below ELO_MIN by having Sgn(UCI_Elo - ELO_MIN) * Abs{(UCI_Elo - ELO_MIN)/128}^(1.0/0.9). Or just change 8 to 4 (minimum 16 nodes instead of 256), for example. Maybe some would like to play at 1200 level or so.
User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: 24+ hours ...

Post by lucasart »

Laskos wrote:
lucasart wrote:
Laskos wrote:
lucasart wrote:
Laskos wrote: For diminishing returns something like 2^(8+((ELO-1400)/100)^1.1) might be more accurate (assuming ELO>1400).
I've done some measurements, and the diminishing return effect is not as clear as I thought. It really starts to kick in after a certain depth, probably due to move-count and SEE pruning at very low depths. I got a good fit with the following model:

* nodes = 2^(n+8) for n >= 0
* ELO(self-play) ~= 200*n^0.93

(ELO scale is obviously relative to an arbitrary constant. here I chose ELO=0 when n=0, that is to say for 256 nodes)

Then I would transform that into human ELO based on a pure guess:
* 256 nodes corresponds to 1400 ELO (?)
* computer vs computer (as opposed to computer vs human) combined with the fact that it's self-play, inflates ELO by a factor two

So I get:

ELO(human scale) = 1400 + 100*n^0.93

I'm still waiting for more test results to finish, as it takes longer for large values of n, obviously... I need to see how the diminishing return effect accelerates (or not) at n=11 and n=12. So far, I've measured n=0..10
Did you get the n=11,12 results? Your 0.93 is basically my 1/1.1; the scheme is the same. I put nodes = 2^(8+((ELO-1400)/100)^1.1) as a guesstimate giving slowly diminishing returns.
Yes, I finally finished the test, and the diminishing returns effect is more pronounced there, as expected. I am now using this formula:

Code: Select all

nodes = pow(2.0, 8.0 + pow((UCI_Elo-ELO_MIN)/128.0, 1.0/0.9));
ELO_MIN is an arbitrary number corresponding to the ELO value we want to assign to 2^8 = 256 nodes. I bumped it up to 1500 ELO after playing a few games and realizing that a 256-node search is stronger than I expected.
You could go below ELO_MIN by having Sgn(UCI_Elo - ELO_MIN) * Abs{(UCI_Elo - ELO_MIN)/128}^(1.0/0.9). Or just change 8 to 4 (minimum 16 nodes instead of 256), for example. Maybe some would like to play at 1200 level or so.
Formulaically yes, but in practice this is problematic: you cannot use an arbitrarily low number of nodes. You have to force the engine to finish depth 1 if the number of nodes is really small, otherwise it will return an uninitialized or nonsensical best move. Don't forget that the node count also includes QS nodes, so with 16 nodes you're not going very far...
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
Lars
Posts: 12
Joined: Sat Jun 08, 2013 7:13 pm
Location: Denmark

Re: Open letter to chess programmers

Post by Lars »

Hi,

I have run this on Windows / winboard using crafty as the engine.

I found that it does not work. Look at the Crafty logfile:

The program enginewb.cpp does not correctly parse the feature information from crafty.

Crafty v23.4 (1 cpus)

White(1): xboard
White(1): protover 2
feature ping=1 setboard=1 san=1 time=1 draw=1
feature sigint=0 sigterm=0 reuse=1 analyze=1
feature myname="Crafty-23.4" name=1
feature playother=1 colors=0 memory=1
feature variants="normal,nocastle"
feature done=1
White(1): accepted ping
White(1): accepted sigint
White(1): accepted myname
White(1): accepted playother
White(1): accepted variants
White(1): accepted done
White(1): new
White(1): post
White(1): easy
pondering disabled.
White(1): ping 1
pong 1
White(1): ping 2
pong 2
White(1): st 1
search time set to 1.00.
White(1): go
pijl
Posts: 115
Joined: Mon Sep 17, 2012 8:59 pm

Re: Open letter to chess programmers

Post by pijl »

Lars wrote:Hi,

I have run this on Windows / winboard using crafty as the engine.

I found that it does not work. Look at the Crafty logfile:

The program enginewb.cpp does not correctly parse the feature information from crafty.

Crafty v23.4 (1 cpus)

White(1): xboard
White(1): protover 2
feature ping=1 setboard=1 san=1 time=1 draw=1
feature sigint=0 sigterm=0 reuse=1 analyze=1
feature myname="Crafty-23.4" name=1
feature playother=1 colors=0 memory=1
feature variants="normal,nocastle"
feature done=1
White(1): accepted ping
White(1): accepted sigint
White(1): accepted myname
White(1): accepted playother
White(1): accepted variants
White(1): accepted done
White(1): new
White(1): post
White(1): easy
pondering disabled.
White(1): ping 1
pong 1
White(1): ping 2
pong 2
White(1): st 1
search time set to 1.00.
White(1): go
Thanks. I will update the program well in advance for the next tournament. Also, as the latest winboard does support running the fingerprint now as well that could be used instead.
User avatar
hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Open letter to chess programmers

Post by hgm »

Indeed, so far that was only true of the source.

To prevent the need for compiling it yourself, I now uploaded my binary:

http://hgm.nubati.net/WB-sim.zip