Observator bias or...
Moderators: hgm, Rebel, chrisw, Ras
Re: Observator bias or...
I remember since testing with Kiwi that results with 100 games are very unreliable. It sometimes happens that a version gets a bad start but recovers by the end of a long test. On the other hand, I had a version reach 64% after 100 games and finish with a disappointing 50% after 720 games... I will now increase the number to 800 and see if that brings some benefit (not much is expected, though).
Re: Observator bias or...
Alessandro Scotti wrote:
I remember since testing with Kiwi that results with 100 games are very unreliable. It sometimes happens that a version gets a bad start but recovers by the end of a long test. On the other hand, I had a version reach 64% after 100 games and finish with a disappointing 50% after 720 games... I will now increase the number to 800 and see if that brings some benefit (not much is expected, though).

I once wrote a small util that emulates a match between two engines of equal strength. Here is the code; do some tests and shiver.
Ed
-----------------------------------------------------------
Code: Select all
#include <stdio.h>
#include <stdlib.h>
void main() // emulate matches
{ int r,x,max,c; float win,loss,draw,f1,f2,f3,f4; char w[200]; int rnd,d,e;
srand(rnd);
again: printf("Number of Games "); gets(w); max=atoi(w);
loop: x=0; win=0; loss=0; draw=0; printf("\n");
next: if (x==max) goto einde;
r=rand(); r=r&3; if (r==0) goto next;
if (r==1) win++;
if (r==2) loss++;
if (r==3) draw++;
x++; if (x==(max/4)) goto disp;
if (x==(max/2)) goto disp;
if (x==(max/4)+(max/2)) goto disp;
if (x==max) goto disp;
goto next;
disp: f1=win+(draw/2); f2=loss+(draw/2); f4=x; f3=(f1*100)/f4; d=f1; e=f2;
printf("%d-%d (%.1f%%) ",d,e,f3);
goto next;
einde: c=getch(); if (c=='q') return;
if (c=='a') { printf("\n\n"); goto again; }
goto loop;
}
-
- Posts: 12702
- Joined: Wed Mar 08, 2006 8:57 pm
- Location: Redmond, WA USA
Re: Observator bias or...
Slightly cleaned-up version of the same thing. The original exhibits undefined behavior because it reads an uninitialized variable (srand(rnd) is called with rnd never set).
Code: Select all
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
void terminate_chance(void)
{
int c;
puts("enter 'q' to quit, anything else to continue");
c = getch();
if (c == 'q')
exit(EXIT_SUCCESS);
puts("\n");
}
int main(void)
{ // emulate matches
int r,
x,
max,
c;
double win,
loss,
draw,
f1,
f2,
f3,
f4;
char w[200];
int rnd,
d,
e;
int keep_looping;
srand((unsigned) time(NULL));
for (;;) {
keep_looping = 1;
printf("Number of Games ");
fflush(stdout);
fgets(w, sizeof w, stdin);
max = atoi(w);
for (; keep_looping;) {
x = 0;
win = 0;
loss = 0;
draw = 0;
printf("\n");
for (; keep_looping;) {
for (; keep_looping;) {
do {
if (x == max) {
terminate_chance();
keep_looping = 0;
}
r = rand();
r &= 3;
}
while (r == 0);
if (r == 1)
win++;
if (r == 2)
loss++;
if (r == 3)
draw++;
x++;
if (x == (max / 4))
break;
if (x == (max / 2))
break;
if (x == (max / 4) + (max / 2))
break;
if (x == max)
break;
}
f1 = win + (draw / 2);
f2 = loss + (draw / 2);
f4 = x;
f3 = (f1 * 100) / f4;
d = f1;
e = f2;
printf("%d-%d (%.1f%%) ", d, e, f3);
}
}
}
}
Re: Observator bias or...
New version that should also compile on non-Windows platforms.
Code: Select all
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
void terminate_chance(void)
{
int c;
puts("enter 'q' to quit, anything else to continue");
c = getchar();
if (c == 'q')
exit(EXIT_SUCCESS);
puts("\n");
}
int main(void)
{ // emulate matches
int r,
x,
max;
double win,
loss,
draw,
f1,
f2,
f3,
f4;
char w[200];
int d,
e;
int keep_looping;
srand((unsigned) time(NULL));
for (;;) {
keep_looping = 1;
printf("Number of Games:");
fflush(stdout);
fgets(w, sizeof w, stdin);
max = atoi(w);
for (; keep_looping;) {
x = 0;
win = 0;
loss = 0;
draw = 0;
printf("\n");
for (; keep_looping;) {
for (; keep_looping;) {
do {
if (x == max) {
terminate_chance();
keep_looping = 0;
}
r = rand();
r &= 3;
}
while (r == 0);
if (r == 1)
win++;
if (r == 2)
loss++;
if (r == 3)
draw++;
x++;
if (x == (max / 4))
break;
if (x == (max / 2))
break;
if (x == (max / 4) + (max / 2))
break;
if (x == max)
break;
}
f1 = win + (draw / 2);
f2 = loss + (draw / 2);
f4 = x;
f3 = (f1 * 100) / f4;
d = (int) (f1 + 0.5);
e = (int) (f2 + 0.5);
printf("%d-%d (%.1f%%) ", d, e, f3);
}
}
}
}
Re: Observator bias or...
This version cures the extraneous printing and is formatted better.
Code: Select all
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
void terminate_chance(char *string, size_t length)
{
int c;
puts("\nEnter 'q' and <Enter> to quit, anything else to continue");
fgets(string, (int) length, stdin); // use the passed length; sizeof string here is only the pointer size
c = string[0];
if (c == 'q')
exit(EXIT_SUCCESS);
puts("\n");
}
int main(void)
{ // emulate matches
int r,
x,
max;
double win,
loss,
draw,
f1,
f2,
f3,
f4;
char w[200];
int d,
e;
int keep_looping;
srand((unsigned) time(NULL));
for (;;) {
keep_looping = 1;
printf("Number of Games:");
fflush(stdout);
fgets(w, sizeof w, stdin);
max = atoi(w);
for (; keep_looping;) {
x = 0;
win = 0;
loss = 0;
draw = 0;
printf("\n");
for (; keep_looping;) {
for (; keep_looping;) {
do {
if (x == max) {
terminate_chance(w, sizeof w);
keep_looping = 0;
}
r = rand();
r &= 3;
}
while (r == 0);
if (r == 1)
win++;
if (r == 2)
loss++;
if (r == 3)
draw++;
x++;
if (x == (max / 4))
break;
if (x == (max / 2))
break;
if (x == (max / 4) + (max / 2))
break;
if (x == max)
break;
}
if (keep_looping) {
f1 = win + (draw / 2);
f2 = loss + (draw / 2);
f4 = x;
f3 = (f1 * 100) / f4;
d = (int) (f1 + 0.5);
e = (int) (f2 + 0.5);
printf("%d-%d (%.1f%%) ", d, e, f3);
}
}
}
}
}
-
- Posts: 28268
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Observator bias or...
Alessandro Scotti wrote:
I remember since testing with Kiwi that results with 100 games are very unreliable. It sometimes happens that a version gets a bad start but recovers by the end of a long test. On the other hand, I had a version reach 64% after 100 games and finish with a disappointing 50% after 720 games... I will now increase the number to 800 and see if that brings some benefit (not much is expected, though).

64% after 100 games between approximately equal engines is extreme: the standard error over 100 games should be 0.4/sqrt(100) = 4%, so a 14% deviation represents 3.5 sigma. This should happen on average only about 1 in 4000 tries.
I noticed a very strange effect when I was testing uMax in self-play. The standard error over 100 games should be 4%, but when I played 1000 games between the same versions and looked at the scores of the ten individual 100-game runs, those results deviated on average much more from each other (and from the final average) than the calculated standard error predicts. This can only happen if the games are not independent! I cannot exclude this: all the games were played in a single run, each using the random seed the previous game ended with. With a bad randomizer, if one game repeats because of an equal or very close seed at its start, the following games might repeat as well, destroying the independence of the games.
Whatever the cause, the effect was that the error in the win percentage was always much larger than you would expect from the number of games.
Re: Observator bias or...
hgm wrote:
64% after 100 games between approximately equal engines is extreme: the standard error over 100 games should be 0.4/sqrt(100) = 4%, so a 14% deviation represents 3.5 sigma. This should happen on average only about 1 in 4000 tries.

I think the math only works if P(win)=P(loss)=P(draw)=1/3, which I doubt is the case.
Ed's code even assumes P(win,white)==P(win,black), which I doubt as well.
Tony
-
- Posts: 10661
- Joined: Thu Mar 09, 2006 12:37 am
- Location: Tel-Aviv Israel
Re: Observator bias or...
Tony wrote:
I think the math only works if P(win)=P(loss)=P(draw)=1/3, which I doubt is the case. Ed's code even assumes P(win,white)==P(win,black), which I doubt as well.

With a bigger probability for White, the variance is even smaller, so a result of 64% after 100 games is even less expected.
Uri
Re: Observator bias or...
Uri Blass wrote:
With a bigger probability for White, the variance is even smaller, so a result of 64% after 100 games is even less expected.

Not if P(draw) < 1/3.
Tony