SCCT Rating List - Calculation by EloStat 1.3

Discussion of computer chess matches and engine tournaments.

Moderator: Ras

User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Laskos »

Daniel Shawul wrote:
The IPON problem was using default Bayeselo. I think using scale=1 eliminates the problem when comparing with performances. I do not know with what scale the real Elos are shown.

Kai
Kai
I do not know of the IPON problem. Nor did I venture to guess what caused the difference between the pure and complete rating lists of CCRL. But I do know that not using the fitted scale (i.e. forcing scale = 1) makes comparisons between different lists worse. If you take the example I gave above, mm calculated the scale to be around 0.7, which is why the elostat and bayeselo rating numbers are more or less equal. If I used scale=1, the bayeselo output would be magnified by 1/0.7 = 1.43, so a 100 Elo difference may be magnified to 140 Elo. This would definitely make comparisons difficult.
If you look at using scale=1, there isn't really any advantage. Staying true to the model? Why, anyway, when one can assume the model multiplies by a factor? What advantage does scale=1 bring? I know for sure that using the scaled rating at least makes comparisons somewhat more acceptable.
Using scale=1 seemed to give me close to real Elos and not some abstract Bayeselos, but that was with the IPON database. Also, Adam's plots for the CCRL (?) database showed the same: he matched the Bayeselo predictions to the Elo logistic, if I am not wrong, and got a very good fit with scale set to 1. But now that this problem with the pure and complete lists has appeared, I do not know where the problem is (or if there is a problem at all; maybe we should not compare different lists and that's it).
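The magnification being argued about is simple arithmetic; a minimal sketch (the function name is mine, and this is an illustration of the rescaling, not Bayeselo's actual code):

```python
# Hypothetical illustration of Bayeselo's "scale" post-processing:
# if mm fits a scale of ~0.7, forcing scale=1 divides every rating
# difference by the fitted scale, i.e. magnifies it by 1/0.7 ~ 1.43x.
def rescale_diff(diff_elo, fitted_scale=0.7):
    """Rating difference after replacing the fitted scale with scale=1."""
    return diff_elo / fitted_scale

print(round(rescale_diff(100), 1))  # 142.9 -- a 100 Elo gap becomes ~143
```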

Kai
Sedat Canbaz
Posts: 3018
Joined: Thu Mar 09, 2006 11:58 am
Location: Antalya/Turkey

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Sedat Canbaz »

More details,
I have done the calculation two (2) times (during my calculations on 27.08.2012)

The 1st time I was shocked when I noticed a +16 Elo difference in 12.txt
That's why I repeated it twice on the same date; you can see 13.txt

And as far as I remember,
For 12.txt I used mm 0 1 and for 13.txt I used mm 1 1

That's all,
Sedat
Modern Times
Posts: 3748
Joined: Thu Jun 07, 2012 11:02 pm

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Modern Times »

Laskos wrote: But now that this problem with the pure and complete lists has appeared, I do not know where the problem is (or if there is a problem at all; maybe we should not compare different lists and that's it).

Kai
mm 1 1 causes engines on the two lists to be up to +36 Elo apart in some cases, but most are very much less than that. See here, which is bayeselo default except mm 1 1:
http://www.computerchess.org.uk/ccrl/40 ... _pure.html

Additionally adding scale 1 into the mix, which is how the lists currently are, gives engines with ratings up to about 90 Elo different. That is rather hard to explain to users of the lists:
http://www.computerchess.org.uk/ccrl/40 ... _pure.html

I'm coming around to the view that the complete database and the pure database are different databases, with different characteristics, and thus will have different ratings. I'm OK with "mm 1 1": that doesn't alter anything by more than 36 Elo on the pure list, and in the vast majority of cases very much less. It is when you add "scale 1" into the mix that I am very unhappy with the results, despite the theory.
Sedat Canbaz
Posts: 3018
Joined: Thu Mar 09, 2006 11:58 am
Location: Antalya/Turkey

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Sedat Canbaz »

Daniel Shawul wrote: I really can't tell what Sedat was doing. He was saying that since Fruit's score decreased, its Elo should decrease too. Then I duly pointed out to him that it is the expected score that should decrease for a drop in Elo. I requested the data from before and after the Fruit games were added, since that is the only way to tell what was going on; only now did he provide it. But he ignored me and went on a rampage for a long time ... till now, when he vanishes :)
Hey Daniel,

Let me tell you something more:
SCCT games are usually made available once per month, or sometimes every two weeks

But this time I uploaded the games in 6 days !!
Are you not happy with my service ??
Maybe I need to upload all games daily ?

I see that you calculated the latest database, and now what are you, a hero ?
And I am now getting the same BayesElo results as in your published calculation list

But i have a question to you:
-Can you explain to me, please, why in 12.txt and 13.txt we see such strange results ?

Thanks in advance,
Sedat
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Laskos »

Daniel Shawul wrote:
Edit:
To your edited addition
I think Adam has shown clearly that default Bayeselo compresses ratings. I do not know what I am losing; I was not winning anything here either.
Ask Adam about it and see if he still thinks bayeselo compresses or anything like that. Well, you keep on writing one-liners, so it seems you are interested in keeping me busy once false claims are out of the window. I do not wish to engage until 'another' data set comes up ... It is amusing, to say the least :)
This post by Adam clearly shows compression using default Bayeselo: http://talkchess.com/forum/viewtopic.php?t=44380
There was another, earlier thread by another poster that showed the compression for default Bayeselo.
You seem too sensitive about the issue, so I won't bother you again here; you just confused me more. I was thinking that by revealing that "scale" parameter Remi had cleared up the problem. Seems not.

Kai
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Daniel Shawul »

Modern Times wrote:
Laskos wrote: But now that this problem with the pure and complete lists has appeared, I do not know where the problem is (or if there is a problem at all; maybe we should not compare different lists and that's it).

Kai
mm 1 1 causes engines on the two lists to be up to +36 Elo apart in some cases, but most are very much less than that. See here, which is bayeselo default except mm 1 1:
http://www.computerchess.org.uk/ccrl/40 ... _pure.html

Additionally adding scale 1 into the mix, which is how the lists currently are, gives engines with ratings up to about 90 Elo different. That is rather hard to explain to users of the lists:
http://www.computerchess.org.uk/ccrl/40 ... _pure.html

I'm coming around to the view that the complete database and the pure database are different databases, with different characteristics, and thus will have different ratings. I'm OK with "mm 1 1": that doesn't alter anything by more than 36 Elo on the pure list, and in the vast majority of cases very much less. It is when you add "scale 1" into the mix that I am very unhappy with the results, despite the theory.
This seems to be a post I agree with more or less. But good luck convincing them to avoid using scale = 1.
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Daniel Shawul »

Laskos wrote:
Daniel Shawul wrote:
Edit:
To your edited addition
I think Adam has shown clearly that default Bayeselo compresses ratings. I do not know what I am losing; I was not winning anything here either.
Ask Adam about it and see if he still thinks bayeselo compresses or anything like that. Well, you keep on writing one-liners, so it seems you are interested in keeping me busy once false claims are out of the window. I do not wish to engage until 'another' data set comes up ... It is amusing, to say the least :)
This post by Adam clearly shows compression using default Bayeselo: http://talkchess.com/forum/viewtopic.php?t=44380
There was another, earlier thread by another poster that showed the compression for default Bayeselo.
You seem too sensitive about the issue, so I won't bother you again here; you just confused me more. I was thinking that by revealing that "scale" parameter Remi had cleared up the problem. Seems not.

Kai
There is no compression; it is just that the value is scaled. You are recycling stuff that has been cleared up. Either ask Adam or go to the programming forum and read his post. Scale and offset are parameters that you can set to an arbitrary value. It is ridiculous to say a rating computed with an offset of 2500 is more magnified than one calculated with an offset of 2300. It is equally ridiculous to compare a scaled result with the model. If you compare, like I did for 3 or more models (!) of bayeselo, you would see the matches are as good as a perfect couple. You just recycle stuff again and again, so it becomes tiring... It is like going back to zero, discussing compression blah blah now. At least Ray's post finally seems to make some sense. Be objective and honest; just don't extend discussions for the sake of it...
Sedat Canbaz
Posts: 3018
Joined: Thu Mar 09, 2006 11:58 am
Location: Antalya/Turkey

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Sedat Canbaz »

Dear Friends,

Finally I found the reason why we see a +16 Elo better performance for Fruit

The answer is:
- I used BayesElo with the default mm during my calculations on 27.08.2012 (I mean for 12.txt and 13.txt)
*Note: my previous calculations were with mm 1 1 or sometimes with mm 0 1

Code: Select all

Rank Name                          Elo    +    - games score oppo. draws 
   1 Houdini 2.0t3 Pro x64 6c     3359   14   14  1700   70%  3217   39% 
   2 Houdini 2.0t3* Pro x64 6c    3359   19   19  1000   75%  3185   37% 
   3 Houdini 2.0z Pro x64 6c      3356   15   15  1600   71%  3202   36% 
   4 Houdini 2.0s2 Pro x64 6c     3355   19   19  1000   74%  3179   34% 
   5 Houdini 1.5a x64 6c          3342   17   17  1100   68%  3218   41% 
   6 Houdini 2.0Bar2 x64 6c       3342   18   18  1050   72%  3190   44% 
   7 Houdini 2.0c Pro x64 6c      3341   15   15  1500   71%  3193   39% 
   8 Houdini 2.0Higgs Pro x64 6c  3338   18   18  1050   70%  3198   42% 
   9 Houdini2Bar1 Pro x64 6c      3328   17   17  1100   69%  3200   46% 
  10 Critter 1.6 x64 6c           3300   13   13  1900   63%  3215   53% 
  11 Critter 1.4 x64 6c           3290   16   16  1200   66%  3177   47% 
  12 Rybka 4.1 79DT v1 x64 6c     3286   17   17  1100   66%  3176   38% 
  13 Stockfish 120430P x64 6c     3284   13   13  1850   60%  3214   50% 
  14 Rybka 4.1 SSE42 x64 6c       3276   13   13  1800   59%  3212   49% 
  15 Ivanhoe B46fC x64 6c         3276   16   16  1250   63%  3185   48% 
  16 Ivanhoe B46fE.02 x64 6c      3276   13   13  1900   59%  3216   53% 
  17 Stockfish 2.2.2 JA x64 6c    3275   16   16  1200   62%  3192   47% 
  18 Rybka 4.1 NO-SSE x64 6c      3275   14   14  1500   60%  3204   49% 
  19 Fire 2.2 xTreme x64 6c       3263   12   12  1900   57%  3216   52% 
  20 Stockfish VE09 x64 6c        3263   17   17  1000   63%  3178   48% 
  21 Vitruvius 1.11C x64 6c       3261   13   13  1900   56%  3216   51% 
  22 Gull II beta2 x64 6c         3215   15   14  1400   50%  3210   51% 
  23 Strelka 5.5 x64 1c           3198   14   14  1650   45%  3229   48% 
  24 Bouquet 1.4 x64 6c           3185   15   15  1250   46%  3207   47% 
  25 Naum 4.2 x64 6c              3178   13   13  1900   44%  3218   44% 
  26 Komodo 4.0 x64 1c            3160   13   13  1900   41%  3219   42% 
  27 Deep Fritz 13 w32 6c         3129   13   13  1900   36%  3220   43% 
  28 Equinox 1.35 x64 6c          3129   14   14  1550   40%  3194   40% 
  29 Spike 1.4 Leiden w32 6c      3110   13   14  1900   34%  3220   38% 
  30 Chiron 1.1a x64 6c           3108   13   13  1900   34%  3220   39% 
  31 Deep Fritz 12 w32 6c         3093   16   17  1200   36%  3185   42% 
  32 Deep Junior 13.3 x64 6c      3091   14   15  1700   31%  3228   36% 
  33 Protector 1.4.0 x64 6c       3087   14   14  1900   31%  3221   36% 
  34 Spark 1.0 x64 6c             3084   14   14  1850   31%  3217   39% 
  35 Deep Junior 13 x64 6c        3082   16   16  1300   35%  3189   36% 
  36 Deep Shredder 12 x64 6c      3080   14   14  1900   30%  3221   37% 
  37 Hiarcs 13.2 w32 6c           3063   14   14  1900   29%  3221   32% 
  38 Zappa Mexico II x64 6c       3053   15   15  1600   29%  3206   34% 
  39 Fruit 090705 x64 6c          2980   18   18  1200   23%  3190   29%  
Note that the above SCCT rating list was created with default BayesElo mm values; the games are as of 27.08.2012 (total: 29750 games, including up to Rybka 4.1 NO-SSE + 1500 games per player)

However,
Now I am asking all BayesElo experts:
-Is that a correct BayesElo measurement, where we see a 16 Elo better performance for Fruit ???

Btw, there were quite interesting comments, but finally I've noticed again that the winner is 'Practice'

Greetings,
Sedat
Last edited by Sedat Canbaz on Thu Aug 30, 2012 12:50 am, edited 2 times in total.
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Laskos »

Daniel Shawul wrote:
Laskos wrote:
Daniel Shawul wrote:
Edit:
To your edited addition
I think Adam has shown clearly that default Bayeselo compresses ratings. I do not know what I am losing; I was not winning anything here either.
Ask Adam about it and see if he still thinks bayeselo compresses or anything like that. Well, you keep on writing one-liners, so it seems you are interested in keeping me busy once false claims are out of the window. I do not wish to engage until 'another' data set comes up ... It is amusing, to say the least :)
This post by Adam clearly shows compression using default Bayeselo: http://talkchess.com/forum/viewtopic.php?t=44380
There was another, earlier thread by another poster that showed the compression for default Bayeselo.
You seem too sensitive about the issue, so I won't bother you again here; you just confused me more. I was thinking that by revealing that "scale" parameter Remi had cleared up the problem. Seems not.

Kai
There is no compression; it is just that the value is scaled. You are recycling stuff that has been cleared up. Either ask Adam or go to the programming forum and read his post. Scale and offset are parameters that you can set to an arbitrary value. It is ridiculous to say a rating computed with an offset of 2500 is more magnified than one calculated with an offset of 2300. It is equally ridiculous to compare a scaled result with the model. If you compare, like I did for 3 or more models (!) of bayeselo, you would see the matches are as good as a perfect couple. You just recycle stuff again and again, so it becomes tiring... It is like going back to zero, discussing compression blah blah now. At least Ray's post finally seems to make some sense. Be objective and honest; just don't extend discussions for the sake of it...
I don't know what this mystification about scaling is; it is NOT arbitrary. I want the rating list to predict, for a 2900 Elo engine against a 2700 Elo engine, a performance of about 75%. If it predicts 80%, as default Bayeselo often does, then give me that secret table of default Bayeselo predictions. And in this case the default Bayeselo ratings are said to be compressed compared to the usual Elo logistic. Do these rating lists need to predict something, or are they just numbers to establish a metric on engine ratings?
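The prediction being appealed to here is the standard Elo logistic expected-score formula; a quick sketch (the function name is mine) shows that a 200-point gap predicts roughly a 76% score, close to the 75% quoted:

```python
# Expected score under the standard Elo logistic curve
# (the conventional formula, not Bayeselo's internal model).
def expected_score(rating_a, rating_b):
    """Fraction of points A is expected to score against B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# 2900 vs 2700, a 200 Elo gap:
print(round(expected_score(2900, 2700), 3))  # 0.76
```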

Kai
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: SCCT Rating List - Calculation by EloStat 1.3

Post by Daniel Shawul »

Daniel Shawul wrote:
Modern Times wrote:
Laskos wrote: But now that this problem with the pure and complete lists has appeared, I do not know where the problem is (or if there is a problem at all; maybe we should not compare different lists and that's it).

Kai
mm 1 1 causes engines on the two lists to be up to +36 Elo apart in some cases, but most are very much less than that. See here, which is bayeselo default except mm 1 1:
http://www.computerchess.org.uk/ccrl/40 ... _pure.html

Additionally adding scale 1 into the mix, which is how the lists currently are, gives engines with ratings up to about 90 Elo different. That is rather hard to explain to users of the lists:
http://www.computerchess.org.uk/ccrl/40 ... _pure.html

I'm coming around to the view that the complete database and the pure database are different databases, with different characteristics, and thus will have different ratings. I'm OK with "mm 1 1": that doesn't alter anything by more than 36 Elo on the pure list, and in the vast majority of cases very much less. It is when you add "scale 1" into the mix that I am very unhappy with the results, despite the theory.
This seems to be a post I agree with more or less. But good luck convincing them to avoid using scale = 1.
Ray,
I ran the effect of the scale and how it helps for comparison with elostat, using the SCCT data. The first run is elostat (i.e. the one within bayeselo). Then it is bayeselo with the calculated scale (which turned out to be 0.7), and finally the third one with scale = 1 as you use it now. Note that I didn't even need to calculate the ratings again, because scale is a 'post-processing' parameter, much like offset. The ratings are magnified by 1/0.7 = 1.4x; that is, a difference of 100 Elo will become 140 Elo. Clearly list 1 and list 2 are comparable, while the third one has magnified values. Provide this example to the CCRL team and ask them if that is what they want. In my opinion it was good before, i.e. using the calculated scale (default bayeselo), but changing it to scale=1 has caused problems for no apparent advantage...

Summary:

Example comparison: Gull and Vitruvius
Elostat: 55 - 7 = 48 Elo
Bayeselo default: 46 - (-3) = 49 Elo
Bayeselo (scale = 1), as used right now in CCRL: 67 - (-4) = 71 Elo

Clearly elostat and default bayeselo are comparable (~49 Elo difference between the two), but scale = 1 gives 71 Elo. That is 1.4 x 50 = 70 Elo, as I predicted
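The summary numbers above can be cross-checked against the fitted scale reported in the session log below; a small sketch (variable names are mine, values taken from the posted runs):

```python
# Cross-check of the Gull vs Vitruvius gap: a ~49 Elo difference at the
# fitted scale becomes ~71 Elo once 'scale 1' is forced.
fitted_scale = 0.691348  # value printed by the 'scale' command in the log
default_gap = 49         # Vitruvius 46 minus Gull -3, default bayeselo run

print(round(default_gap / fitted_scale))  # 71, matching the scale=1 run
```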

Code: Select all

version 0056, Copyright (C) 1997-2007 Remi Coulom.
compiled Jan 30 2007 20:30:07.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under the terms and conditions of the GNU General Public License.
See http://www.gnu.org/copyleft/gpl.html for details.
ResultSet>readpgn scct1.pgn
29250 game(s) loaded, 0 game(s) with unknown result ignored.
ResultSet>elo
ResultSet-EloRating>elostat
16 iterations
00:00:00,00
ResultSet-EloRating>ratings
Rank Name                          Elo    +    - games score oppo. draws
   1 Houdini 2.0t3* Pro x64 6c     164   18   17  1000   75%   -24   37%
   2 Houdini 2.0t3 Pro x64 6c      158   13   13  1700   70%    11   39%
   3 Houdini 2.0s2 Pro x64 6c      154   19   18  1000   74%   -30   34%
   4 Houdini 2.0z Pro x64 6c       151   15   14  1550   71%    -8   36%
   5 Houdini 2.0Bar2 x64 6c        149   17   16  1000   73%   -23   43%
   6 Houdini 2.0Higgs Pro x64 6c   140   17   16  1000   71%   -14   42%
   7 Houdini 2.0c Pro x64 6c       138   15   14  1450   71%   -18   39%
   8 Houdini 1.5a x64 6c           138   16   16  1100   68%    11   41%
   9 Houdini2Bar1 Pro x64 6c       128   15   15  1100   69%    -8   46%
  10 Critter 1.6 x64 6c             98   11   11  1900   63%     9   53%
  11 Critter 1.4 x64 6c             86   15   14  1150   67%   -36   47%
  12 Rybka 4.1 79DT v1 x64 6c       82   17   16  1100   66%   -33   38%
  13 Stockfish 120430P x64 6c       79   11   11  1850   60%     8   50%
  14 Rybka 4.1 NO-SSE x64 6c        72   16   15  1000   63%   -20   49%
  15 Stockfish 2.2.2 JA x64 6c      72   15   14  1200   62%   -16   47%
  16 Deep Rybka 4.1 x64 6c          72   12   12  1750   60%     4   48%
  17 Ivanhoe B46fE.02 x64 6c        71   11   11  1900   59%     9   53%
  18 Ivanhoe B46fC x64 6c           69   14   14  1200   64%   -28   47%
  19 Stockfish VE09 x64 6c          65   16   15  1000   63%   -31   48%
  20 Fire 2.2 xTreme x64 6c         56   11   11  1900   57%    10   52%
  21 Vitruvius 1.11C x64 6c         55   11   11  1900   56%    10   51%
  22 Gull II beta2 x64 6c            7   13   13  1400   50%     4   51%
  23 Strelka 5.5 x64 1c            -13   12   12  1650   45%    23   48%
  24 Bouquet 1.4 x64 6c            -25   14   14  1250   46%     0   47%
  25 Naum 4.2 x64 6c               -32   12   12  1900   44%    12   44%
  26 Komodo 4.0 x64 1c             -52   12   12  1900   41%    12   42%
  27 Equinox 1.35 x64 6c           -82   13   14  1550   40%   -14   40%
  28 Deep Fritz 13 w32 6c          -83   12   12  1900   36%    13   43%
  29 Spike 1.4 Leiden w32 6c      -102   12   13  1900   34%    14   38%
  30 Chiron 1.1a x64 6c           -104   12   13  1900   34%    14   39%
  31 Deep Fritz 12 w32 6c         -119   15   16  1150   37%   -27   42%
  32 Deep Junior 13.3 x64 6c      -120   13   14  1700   31%    22   36%
  33 Protector 1.4.0 x64 6c       -126   13   13  1900   31%    14   36%
  34 Deep Junior 13 x64 6c        -127   15   16  1300   35%   -20   36%
  35 Spark 1.0 x64 6c             -128   12   13  1850   31%    11   39%
  36 Deep Shredder 12 x64 6c      -132   13   13  1900   30%    15   37%
  37 Hiarcs 13.2 w32 6c           -144   13   14  1900   29%    15   32%
  38 Zappa Mexico II x64 6c       -161   14   15  1550   29%    -4   34%
  39 Fruit 090705 x64 6c          -231   18   19  1150   23%   -23   29%
ResultSet-EloRating>mm 1 1
00:00:00,01
ResultSet-EloRating>ratings
Rank Name                          Elo    +    - games score oppo. draws
   1 Houdini 2.0t3 Pro x64 6c      151   12   12  1700   70%     0   39%
   2 Houdini 2.0t3* Pro x64 6c     150   15   15  1000   75%   -34   37%
   3 Houdini 2.0z Pro x64 6c       147   12   12  1550   71%   -19   36%
   4 Houdini 2.0s2 Pro x64 6c      145   16   16  1000   74%   -41   34%
   5 Houdini 1.5a x64 6c           133   14   14  1100   68%     1   41%
   6 Houdini 2.0Bar2 x64 6c        132   15   15  1000   73%   -34   43%
   7 Houdini 2.0c Pro x64 6c       131   13   13  1450   71%   -29   39%
   8 Houdini 2.0Higgs Pro x64 6c   128   15   15  1000   71%   -25   42%
   9 Houdini2Bar1 Pro x64 6c       118   14   14  1100   69%   -19   46%
  10 Critter 1.6 x64 6c             89   10   10  1900   63%    -2   53%
  11 Critter 1.4 x64 6c             77   14   14  1150   67%   -47   47%
  12 Rybka 4.1 79DT v1 x64 6c       76   14   14  1100   66%   -44   38%
  13 Stockfish 120430P x64 6c       71   11   11  1850   60%    -3   50%
  14 Deep Rybka 4.1 x64 6c          63   11   11  1750   60%    -7   48%
  15 Stockfish 2.2.2 JA x64 6c      62   13   13  1200   62%   -27   47%
  16 Ivanhoe B46fE.02 x64 6c        62   10   10  1900   59%    -2   53%
  17 Rybka 4.1 NO-SSE x64 6c        62   14   14  1000   63%   -31   49%
  18 Ivanhoe B46fC x64 6c           61   13   13  1200   64%   -39   47%
  19 Stockfish VE09 x64 6c          52   14   14  1000   63%   -42   48%
  20 Fire 2.2 xTreme x64 6c         48   10   10  1900   57%    -1   52%
  21 Vitruvius 1.11C x64 6c         46   10   10  1900   56%    -1   51%
  22 Gull II beta2 x64 6c           -3   12   12  1400   50%    -7   51%
  23 Strelka 5.5 x64 1c            -22   11   11  1650   45%    12   48%
  24 Bouquet 1.4 x64 6c            -35   13   13  1250   46%   -11   47%
  25 Naum 4.2 x64 6c               -43   10   10  1900   44%     1   44%
  26 Komodo 4.0 x64 1c             -63   11   11  1900   41%     2   42%
  27 Equinox 1.35 x64 6c           -95   12   12  1550   40%   -25   40%
  28 Deep Fritz 13 w32 6c          -95   11   11  1900   36%     2   43%
  29 Spike 1.4 Leiden w32 6c      -114   11   11  1900   34%     3   38%
  30 Chiron 1.1a x64 6c           -117   11   11  1900   34%     3   39%
  31 Deep Fritz 12 w32 6c         -132   14   14  1150   37%   -38   42%
  32 Deep Junior 13.3 x64 6c      -134   12   12  1700   31%    11   36%
  33 Protector 1.4.0 x64 6c       -138   11   11  1900   31%     4   36%
  34 Spark 1.0 x64 6c             -141   11   11  1850   31%     0   39%
  35 Deep Junior 13 x64 6c        -144   13   13  1300   35%   -30   36%
  36 Deep Shredder 12 x64 6c      -145   11   11  1900   30%     4   37%
  37 Hiarcs 13.2 w32 6c           -161   11   11  1900   29%     4   32%
  38 Zappa Mexico II x64 6c       -176   13   13  1550   29%   -14   34%
  39 Fruit 090705 x64 6c          -246   15   15  1150   23%   -33   29%
ResultSet-EloRating>scale
0.691348
ResultSet-EloRating>scale 1
1
ResultSet-EloRating>ratings
Rank Name                          Elo    +    - games score oppo. draws
   1 Houdini 2.0t3 Pro x64 6c      219   17   17  1700   70%     0   39%
   2 Houdini 2.0t3* Pro x64 6c     217   22   22  1000   75%   -50   37%
   3 Houdini 2.0z Pro x64 6c       212   18   18  1550   71%   -27   36%
   4 Houdini 2.0s2 Pro x64 6c      209   22   22  1000   74%   -59   34%
   5 Houdini 1.5a x64 6c           193   21   21  1100   68%     1   41%
   6 Houdini 2.0Bar2 x64 6c        191   22   22  1000   73%   -49   43%
   7 Houdini 2.0c Pro x64 6c       190   18   18  1450   71%   -42   39%
   8 Houdini 2.0Higgs Pro x64 6c   185   22   22  1000   71%   -36   42%
   9 Houdini2Bar1 Pro x64 6c       171   20   20  1100   69%   -27   46%
  10 Critter 1.6 x64 6c            128   15   15  1900   63%    -3   53%
  11 Critter 1.4 x64 6c            111   20   20  1150   67%   -69   47%
  12 Rybka 4.1 79DT v1 x64 6c      109   21   21  1100   66%   -63   38%
  13 Stockfish 120430P x64 6c      102   15   15  1850   60%    -5   50%
  14 Deep Rybka 4.1 x64 6c          91   16   16  1750   60%   -10   48%
  15 Stockfish 2.2.2 JA x64 6c      90   19   19  1200   62%   -39   47%
  16 Ivanhoe B46fE.02 x64 6c        90   15   15  1900   59%    -2   53%
  17 Rybka 4.1 NO-SSE x64 6c        89   21   21  1000   63%   -44   49%
  18 Ivanhoe B46fC x64 6c           89   19   19  1200   64%   -56   47%
  19 Stockfish VE09 x64 6c          75   21   21  1000   63%   -60   48%
  20 Fire 2.2 xTreme x64 6c         70   15   15  1900   57%    -2   52%
  21 Vitruvius 1.11C x64 6c         67   15   15  1900   56%    -2   51%
  22 Gull II beta2 x64 6c           -4   17   17  1400   50%   -11   51%
  23 Strelka 5.5 x64 1c            -32   16   16  1650   45%    18   48%
  24 Bouquet 1.4 x64 6c            -50   19   19  1250   46%   -16   47%
  25 Naum 4.2 x64 6c               -62   15   15  1900   44%     2   44%
  26 Komodo 4.0 x64 1c             -91   15   15  1900   41%     2   42%
  27 Equinox 1.35 x64 6c          -137   17   17  1550   40%   -36   40%
  28 Deep Fritz 13 w32 6c         -137   15   15  1900   36%     4   43%
  29 Spike 1.4 Leiden w32 6c      -165   16   16  1900   34%     4   38%
  30 Chiron 1.1a x64 6c           -169   16   16  1900   34%     4   39%
  31 Deep Fritz 12 w32 6c         -190   20   20  1150   37%   -56   42%
  32 Deep Junior 13.3 x64 6c      -194   17   17  1700   31%    16   36%
  33 Protector 1.4.0 x64 6c       -200   16   16  1900   31%     5   36%
  34 Spark 1.0 x64 6c             -205   16   16  1850   31%     0   39%
  35 Deep Junior 13 x64 6c        -208   19   19  1300   35%   -44   36%
  36 Deep Shredder 12 x64 6c      -209   16   16  1900   30%     6   37%
  37 Hiarcs 13.2 w32 6c           -232   16   16  1900   29%     6   32%
  38 Zappa Mexico II x64 6c       -255   18   18  1550   29%   -21   34%
  39 Fruit 090705 x64 6c          -356   22   22  1150   23%   -48   29%
ResultSet-EloRating>
Last edited by Daniel Shawul on Thu Aug 30, 2012 1:07 am, edited 2 times in total.