On Windows, start WinBoard through the Startup Dialog, selecting the engine under test as both first and second engine, and under 'Additional options' write:
Laskos wrote:
For diminishing returns something like 2^(8+((ELO-1400)/100)^1.1) might be more accurate (assuming ELO>1400).
I've done some measurements, and the diminishing-return effect is not as clear as I thought. It really starts to kick in after a certain depth, probably due to move-count and SEE pruning at very low depths. I got a good fit with the following model:
* nodes = 2^(n+8) for n >= 0
* ELO(self-play) ~= 200*n^0.93
(The ELO scale is obviously relative to an arbitrary constant; here I chose ELO=0 at n=0, that is to say, for 256 nodes.)
Then I would transform that into human ELO based on a pure guess:
* 256 nodes corresponds to 1400 ELO (?)
* computer vs computer (as opposed to computer vs human), combined with the fact that it's self-play, inflates ELO by a factor of two
So I get:
ELO(human scale) = 1400 + 100*n^0.93
I'm still waiting for the remaining tests to finish, as larger values of n obviously take longer... I need to see how the diminishing-return effect accelerates (or not) at n=11 and n=12. So far, I've measured n=0..10.
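The quoted guesstimate and the measured fit can be compared numerically. A minimal sketch (the formulas are copied from the posts; the function names are mine, and nothing here comes from an actual engine):

```python
def laskos_nodes(elo):
    """Guesstimate from the quoted post: nodes = 2^(8 + ((ELO-1400)/100)^1.1)."""
    return 2 ** (8 + ((elo - 1400) / 100) ** 1.1)

def measured_nodes(elo):
    """Inverse of the measured fit ELO = 1400 + 100*n^0.93, with nodes = 2^(n+8)."""
    n = ((elo - 1400) / 100) ** (1 / 0.93)
    return 2 ** (n + 8)
```

Both models give 256 nodes at ELO 1400 and 512 at ELO 1500 (where the exponent terms are 0 and 1 respectively); they only drift apart at higher ratings, where the difference between the exponents 1.1 and 1/0.93 ≈ 1.075 starts to matter.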
Did you get the n=11,12 results? Your 0.93 is basically my 1/1.1; the scheme is the same. I put nodes = 2^(8+((ELO-1400)/100)^1.1) as a guesstimate giving slowly diminishing returns.
Yes, I finally finished the test, and the diminishing returns effect is more pronounced there, as expected. I am now using this formula:
ELO_MIN is an arbitrary number corresponding to the ELO value we want to assign to 2^8 = 256 nodes. I bumped it up to 1500 ELO after playing a few games and realizing that a 256-node search is stronger than I expected.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
You could go below ELO_MIN by using Sgn(UCI_Elo - ELO_MIN) * Abs((UCI_Elo - ELO_MIN)/128)^(1.0/0.9). Or just change the 8 to a 4 (a minimum of 16 nodes instead of 256), for example. Some might like to play at the 1200 level or so.
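The signed-power extension described above can be sketched as follows (ELO_MIN, the divisor 128, and the exponent 1/0.9 are taken from this post; the function name and the clamp to a minimum of one node are my own additions):

```python
import math

def node_budget(uci_elo, elo_min=1500):
    """nodes = 2^(8 + s), with s = Sgn(d) * Abs(d/128)^(1/0.9), d = UCI_Elo - ELO_MIN.

    The signed power keeps the mapping monotonic below ELO_MIN instead of
    raising a negative base to a fractional exponent.
    """
    d = uci_elo - elo_min
    s = math.copysign(abs(d / 128) ** (1.0 / 0.9), d)
    return max(1, round(2 ** (8 + s)))
```

At UCI_Elo = ELO_MIN this gives exactly 2^8 = 256 nodes; below ELO_MIN the budget shrinks smoothly instead of being undefined.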
Formulaically, yes, but in practice this is problematic: you cannot use an arbitrarily low number of nodes. You have to force the engine to finish depth 1 if the number of nodes is really small, otherwise it will return an uninitialized or stupid best move. Don't forget that the node count also includes QS nodes, so with 16 nodes you're not going very far...
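The point about always finishing depth 1 can be illustrated with a toy iterative-deepening loop (hypothetical nested-dict game tree, not code from any real engine): the depth-1 iteration ignores the node budget, so a legal move is always returned, while deeper iterations abort as soon as the budget is exceeded.

```python
class Budget(Exception):
    """Raised when the node budget is exhausted in a non-forced iteration."""

def negamax(tree, depth, counter, budget, forced):
    """Tiny negamax over nested dicts; leaves are scores for the side to move."""
    counter[0] += 1                          # toy node count
    if not forced and counter[0] > budget:
        raise Budget                         # abort; keep the last completed result
    if depth == 0 or not isinstance(tree, dict):
        return tree if not isinstance(tree, dict) else 0   # stub static eval
    return max(-negamax(c, depth - 1, counter, budget, forced)
               for c in tree.values())

def pick_move(tree, budget, max_depth=8):
    best_move, counter = None, [0]
    for depth in range(1, max_depth + 1):
        forced = (depth == 1)                # depth 1 must complete, per the post
        try:
            best_move = max((-negamax(child, depth - 1, counter, budget, forced), m)
                            for m, child in tree.items())[1]
        except Budget:
            break
    return best_move
```

Even with `budget=1` this returns a legal move, because the depth-1 pass runs to completion regardless of the counter.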
Thanks. I will update the program well in advance of the next tournament. Also, since the latest WinBoard now supports running the fingerprint as well, that could be used instead.