Komodo 5 running for the IPON

Discussion of computer chess matches and engine tournaments.

Moderators: hgm, Rebel, chrisw

Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Komodo 5 running for the IPON

Post by Laskos »

IWB wrote:
Laskos wrote:
It seems that for IPON as well as for CCRL, setting mm 1 1 and then scale 1 gives the most accurate results. Or you could use Ordo. What you see in Adam's posts as 340 instead of 400 (all these fitted numbers must be compared with the default 400 of the logistic) gives the magnitude of the compression, in this case some 15%. Not negligible at all. I hope you use Bayeselo with mm 1 1, scale 1, prior 0.1, or Ordo.

Kai
I have some doubts that I will use that.

1. I do not understand what it is doing (but I will read into it a bit more when I have some spare time)
2. That puts my list far off from any comparability with the others.

This is how it looks with mm 0 1:

Code: Select all

   1 Houdini 2.0 STD          3026    9    9  5400   78%  2791   26% 
   2 Houdini 1.5a             3018   10   10  4000   79%  2775   26% 
   3 Komodo 5                 3007   11   11  2700   75%  2816   33% 
   4 Critter 1.4a             2983    9    9  4450   77%  2772   32% 
   5 Komodo 4                 2982    9    9  4850   75%  2781   30% 
   6 Critter 1.6a             2973   10   10  3150   70%  2823   40% 
and this with mm 1 1, scale 1, prior 0.1:

Code: Select all

   1 Houdini 2.0 STD          3084   11   10  5400   78%  2789   26% 
   2 Houdini 1.5a             3074   12   12  4000   79%  2769   26% 
   3 Komodo 5                 3060   14   13  2700   75%  2821   33% 
   4 Critter 1.4a             3031   11   11  4450   77%  2765   32% 
   5 Komodo 4                 3030   11   11  4850   75%  2777   30% 
   6 Critter 1.6a             3018   12   12  3150   70%  2829   40% 
that is a 58 Elo difference for H2.0! My list is losing any possibility of being compared with others ...
I might be convinced if ALL are doing it.

Bye
Ingo
Yes, wait until CCRL does that. Adam has begun to use the "adjusted" Bayeselo and Ordo (look here: http://talkchess.com/forum/viewtopic.php?t=44509), so maybe it will become standard. Thanks for the excellent rating list.

Kai
Wolfgang
Posts: 906
Joined: Sat May 13, 2006 1:08 am

Re: Komodo 5 running for the IPON

Post by Wolfgang »

Here are the - interim - results of the CEGT jury... :-)

http://cegt.siteboard.eu/f6t330-testing-komodo-5-0.html

BTW, mainly played on AMD hardware!! +30, congrats!!
Best
Wolfgang
CEGT-Team
www.cegt.net
www.cegt.forumieren.com
Adam Hair
Posts: 3226
Joined: Wed May 06, 2009 10:31 pm
Location: Fuquay-Varina, North Carolina

Re: Komodo 5 running for the IPON

Post by Adam Hair »

IWB wrote:
Laskos wrote:
It seems that for IPON as well as for CCRL, setting mm 1 1 and then scale 1 gives the most accurate results. Or you could use Ordo. What you see in Adam's posts as 340 instead of 400 (all these fitted numbers must be compared with the default 400 of the logistic) gives the magnitude of the compression, in this case some 15%. Not negligible at all. I hope you use Bayeselo with mm 1 1, scale 1, prior 0.1, or Ordo.

Kai
I have some doubts that I will use that.

1. I do not understand what it is doing (but I will read into it a bit more when I have some spare time)
2. That puts my list far off from any comparability with the others.

I can understand if you have doubts. It does change the ratings.

Here is an explanation (forgive me if I explain something that you find obvious):

'mm 1 1' causes Bayeselo to calculate White advantage and drawelo from the IPON data. The default values in Bayeselo are 32.8 and 97.3. The values for IPON, if I remember correctly, are ~50 and ~160. Of course, you use your drawelo information already. Using 'mm 1 1' instead has a small effect on IPON ratings. A curious question is why the White advantage isn't equal to 0, as we think it should be. Perhaps we (or I) think incorrectly :) .

Prior (using virtual draws) is important for databases with engines that have played a small number of games, such as ChessWar and WBEC. It has almost no effect on IPON ratings, where all the engines have played many games against each other. You could skip 'prior 0.1'.

Scale is the troublemaker. The ratings from Bayeselo are automatically scaled to look more like the output from ELOStat and SSDF ratings. Using 'scale 1' removes that scaling. The resulting ratings are exactly what Bayeselo calculates them to be, and they match the mathematical model that Bayeselo uses for determining the ratings. It has no effect on the order of the engines ('mm 1 1' does), but it does make them appear expanded compared to what we are used to.
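The size of that compression can be put in numbers with the plain logistic formula (a sketch of my own, not Bayeselo's actual code; the 340 figure is Kai's fitted value quoted above):

```python
import math

# Sketch of the compression that 'scale 1' removes (my own illustration
# using the plain logistic formula, not Bayeselo's actual code). Kai's
# fitted value of 340 Elo per decade of odds, instead of the logistic
# default of 400, means rating differences are compressed by about 15%.
def expected_score(diff, elo_per_decade=400.0):
    """Expected score for a 'diff' Elo advantage under a logistic curve."""
    return 1.0 / (1.0 + 10.0 ** (-diff / elo_per_decade))

def elo_for_score(p, elo_per_decade=400.0):
    """Elo difference implied by a score fraction p under the same curve."""
    return elo_per_decade * math.log10(p / (1.0 - p))

# The same 78% score maps to different rating gaps under the two curves,
# which is why the list spreads out when the scaling is removed:
print(round(elo_for_score(0.78)))        # 220 Elo on the uncompressed scale
print(round(elo_for_score(0.78, 340)))   # 187 Elo on the compressed one
```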

IWB wrote: This is how it looks with mm 0 1:

Code: Select all

   1 Houdini 2.0 STD          3026    9    9  5400   78%  2791   26% 
   2 Houdini 1.5a             3018   10   10  4000   79%  2775   26% 
   3 Komodo 5                 3007   11   11  2700   75%  2816   33% 
   4 Critter 1.4a             2983    9    9  4450   77%  2772   32% 
   5 Komodo 4                 2982    9    9  4850   75%  2781   30% 
   6 Critter 1.6a             2973   10   10  3150   70%  2823   40% 
and this with mm 1 1, scale 1, prior 0.1:

Code: Select all

   1 Houdini 2.0 STD          3084   11   10  5400   78%  2789   26% 
   2 Houdini 1.5a             3074   12   12  4000   79%  2769   26% 
   3 Komodo 5                 3060   14   13  2700   75%  2821   33% 
   4 Critter 1.4a             3031   11   11  4450   77%  2765   32% 
   5 Komodo 4                 3030   11   11  4850   75%  2777   30% 
   6 Critter 1.6a             3018   12   12  3150   70%  2829   40% 
that is a 58 Elo difference for H2.0! My list is losing any possibility of being compared with others ...
I might be convinced if ALL are doing it.

Bye
Ingo
I think the CCRL may start using these commands. But, the CEGT uses ELOStat. They would have to start using Bayeselo also so that we could all do the ratings the same way.

I know that the CCRL will not stop using Bayeselo. And the CEGT has used ELOStat for a long time, so I do not expect them to change.

Adam
lkaufman
Posts: 5981
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Komodo 5 running for the IPON

Post by lkaufman »

Adam Hair wrote:
IWB wrote:
Laskos wrote:
It seems that for IPON as well as for CCRL, setting mm 1 1 and then scale 1 gives the most accurate results. Or you could use Ordo. What you see in Adam's posts as 340 instead of 400 (all these fitted numbers must be compared with the default 400 of the logistic) gives the magnitude of the compression, in this case some 15%. Not negligible at all. I hope you use Bayeselo with mm 1 1, scale 1, prior 0.1, or Ordo.

Kai
I have some doubts that I will use that.

1. I do not understand what it is doing (but I will read into it a bit more when I have some spare time)
2. That puts my list far off from any comparability with the others.

I can understand if you have doubts. It does change the ratings.

Here is an explanation (forgive me if I explain something that you find obvious):

'mm 1 1' causes Bayeselo to calculate White advantage and drawelo from the IPON data. The default values in Bayeselo are 32.8 and 97.3. The values for IPON, if I remember correctly, are ~50 and ~160. Of course, you use your drawelo information already. Using 'mm 1 1' instead has a small effect on IPON ratings. A curious question is why the White advantage isn't equal to 0, as we think it should be. Perhaps we (or I) think incorrectly :) .

Prior (using virtual draws) is important for databases with engines that have played a small number of games, such as ChessWar and WBEC. It has almost no effect on IPON ratings, where all the engines have played many games against each other. You could skip 'prior 0.1'.

Scale is the troublemaker. The ratings from Bayeselo are automatically scaled to look more like the output from ELOStat and SSDF ratings. Using 'scale 1' removes that scaling. The resulting ratings are exactly what Bayeselo calculates them to be, and they match the mathematical model that Bayeselo uses for determining the ratings. It has no effect on the order of the engines ('mm 1 1' does), but it does make them appear expanded compared to what we are used to.

IWB wrote: This is how it looks with mm 0 1:

Code: Select all

   1 Houdini 2.0 STD          3026    9    9  5400   78%  2791   26% 
   2 Houdini 1.5a             3018   10   10  4000   79%  2775   26% 
   3 Komodo 5                 3007   11   11  2700   75%  2816   33% 
   4 Critter 1.4a             2983    9    9  4450   77%  2772   32% 
   5 Komodo 4                 2982    9    9  4850   75%  2781   30% 
   6 Critter 1.6a             2973   10   10  3150   70%  2823   40% 
and this with mm 1 1, scale 1, prior 0.1:

Code: Select all

   1 Houdini 2.0 STD          3084   11   10  5400   78%  2789   26% 
   2 Houdini 1.5a             3074   12   12  4000   79%  2769   26% 
   3 Komodo 5                 3060   14   13  2700   75%  2821   33% 
   4 Critter 1.4a             3031   11   11  4450   77%  2765   32% 
   5 Komodo 4                 3030   11   11  4850   75%  2777   30% 
   6 Critter 1.6a             3018   12   12  3150   70%  2829   40% 
that is a 58 Elo difference for H2.0! My list is losing any possibility of being compared with others ...
I might be convinced if ALL are doing it.

Bye
Ingo
I think the CCRL may start using these commands. But, the CEGT uses ELOStat. They would have to start using Bayeselo also so that we could all do the ratings the same way.

I know that the CCRL will not stop using Bayeselo. And the CEGT has used ELOStat for a long time, so I do not expect them to change.

Adam
That is a very good explanation. I could add that EloStat compresses the ratings by incorrectly averaging the opponents' ratings before applying the adjustment for the results. Therefore I think it is illogical for BayesElo to try to compress its ratings to match the incorrect EloStat. I "vote" for CCRL and IPON to use scale 1 with BayesElo. In this way, rating differences will accurately reflect the results to be expected in direct matches. It also means that, for interim results, averaging the individual performance ratings will become approximately correct.
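The averaging error Larry describes can be illustrated with a toy calculation (my own sketch, not ELOStat's actual implementation):

```python
import math

# Toy illustration of the compression Larry describes (my own sketch, not
# ELOStat's actual implementation). Suppose a truly 3000-rated engine plays
# opponents rated 2600 and 2700 equally often.
def expected(diff):
    """Expected score for a 'diff' Elo advantage (logistic, 400/decade)."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def elo_from_score(p):
    """Elo difference implied by a score fraction p."""
    return 400.0 * math.log10(p / (1.0 - p))

true_rating, opponents = 3000.0, [2600.0, 2700.0]

# The score this engine actually expects, opponent by opponent:
score = sum(expected(true_rating - r) for r in opponents) / len(opponents)

# Averaging the opponents FIRST and then converting the overall score back
# to an Elo difference recovers less than the true 3000, because the
# score-to-Elo mapping is nonlinear:
perf = sum(opponents) / len(opponents) + elo_from_score(score)
print(round(perf, 1))  # 2994.6 -- a few Elo short of 3000
```

The error grows as the opponents' strengths spread out, which is why the compression is largest for engines far from the field average.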
IWB
Posts: 1539
Joined: Thu Mar 09, 2006 2:02 pm

Re: Komodo 5 running for the IPON

Post by IWB »

Adam Hair wrote:
IWB wrote:
Laskos wrote:
It seems that for IPON as well as for CCRL, setting mm 1 1 and then scale 1 gives the most accurate results. Or you could use Ordo. What you see in Adam's posts as 340 instead of 400 (all these fitted numbers must be compared with the default 400 of the logistic) gives the magnitude of the compression, in this case some 15%. Not negligible at all. I hope you use Bayeselo with mm 1 1, scale 1, prior 0.1, or Ordo.

Kai
I have some doubts that I will use that.

1. I do not understand what it is doing (but I will read into it a bit more when I have some spare time)
2. That puts my list far off from any comparability with the others.

I can understand if you have doubts. It does change the ratings.

Here is an explanation (forgive me if I explain something that you find obvious):

'mm 1 1' causes Bayeselo to calculate White advantage and drawelo from the IPON data. The default values in Bayeselo are 32.8 and 97.3. The values for IPON, if I remember correctly, are ~50 and ~160. Of course, you use your drawelo information already. Using 'mm 1 1' instead has a small effect on IPON ratings. A curious question is why the White advantage isn't equal to 0, as we think it should be. Perhaps we (or I) think incorrectly :) .

Prior (using virtual draws) is important for databases with engines that have played a small number of games, such as ChessWar and WBEC. It has almost no effect on IPON ratings, where all the engines have played many games against each other. You could skip 'prior 0.1'.

Scale is the troublemaker. The ratings from Bayeselo are automatically scaled to look more like the output from ELOStat and SSDF ratings. Using 'scale 1' removes that scaling. The resulting ratings are exactly what Bayeselo calculates them to be, and they match the mathematical model that Bayeselo uses for determining the ratings. It has no effect on the order of the engines ('mm 1 1' does), but it does make them appear expanded compared to what we are used to.

IWB wrote: This is how it looks with mm 0 1:

Code: Select all

   1 Houdini 2.0 STD          3026    9    9  5400   78%  2791   26% 
   2 Houdini 1.5a             3018   10   10  4000   79%  2775   26% 
   3 Komodo 5                 3007   11   11  2700   75%  2816   33% 
   4 Critter 1.4a             2983    9    9  4450   77%  2772   32% 
   5 Komodo 4                 2982    9    9  4850   75%  2781   30% 
   6 Critter 1.6a             2973   10   10  3150   70%  2823   40% 
and this with mm 1 1, scale 1, prior 0.1:

Code: Select all

   1 Houdini 2.0 STD          3084   11   10  5400   78%  2789   26% 
   2 Houdini 1.5a             3074   12   12  4000   79%  2769   26% 
   3 Komodo 5                 3060   14   13  2700   75%  2821   33% 
   4 Critter 1.4a             3031   11   11  4450   77%  2765   32% 
   5 Komodo 4                 3030   11   11  4850   75%  2777   30% 
   6 Critter 1.6a             3018   12   12  3150   70%  2829   40% 
that is a 58 Elo difference for H2.0! My list is losing any possibility of being compared with others ...
I might be convinced if ALL are doing it.

Bye
Ingo
I think the CCRL may start using these commands. But, the CEGT uses ELOStat. They would have to start using Bayeselo also so that we could all do the ratings the same way.

I know that the CCRL will not stop using Bayeselo. And the CEGT has used ELOStat for a long time, so I do not expect them to change.

Adam
Thx Adam,

I understand the scale and the prior arguments, and have some problems with the mm 1 1. But as you said, that difference is marginal.
So, if the CCRL decides to go with the proposed setup I will join! If not, I will not do it, for the reasons I already mentioned.

ELOStat is not working at all. I publish the ELOStat rating in my download. The next list will have a performance difference between H2 and K5 of more than 3% but only shows a 9 Elo difference. Even to my mathematical gut feeling this is rubbish! I hope the CEGT will change too!
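Ingo's gut feeling checks out against the logistic formula (a rough sanity check of my own; it assumes both engines met a broadly similar field, which is only approximately true):

```python
import math

# Rough sanity check of a 78% vs 75% performance gap (my own arithmetic;
# it assumes both engines faced a broadly similar field). Under the
# logistic model, a score fraction p against the field corresponds to an
# Elo advantage of 400 * log10(p / (1 - p)).
def elo_from_score(p):
    return 400.0 * math.log10(p / (1.0 - p))

gap = elo_from_score(0.78) - elo_from_score(0.75)
print(round(gap))  # 29 Elo -- about three times ELOStat's 9
```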

Bye and thx again
Ingo
lkaufman
Posts: 5981
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Komodo 5 running for the IPON

Post by lkaufman »

IWB wrote: I understand the scale and the prior arguments, and have some problems with the mm 1 1. But as you said, that difference is marginal.
So, if the CCRL decides to go with the proposed setup I will join! If not, I will not do it, for the reasons I already mentioned.

ELOStat is not working at all. I publish the ELOStat rating in my download. The next list will have a performance difference between H2 and K5 of more than 3% but only shows a 9 Elo difference. Even to my mathematical gut feeling this is rubbish! I hope the CEGT will change too!

Bye and thx again
Ingo
This is an excellent example of why averaging the ratings before applying the results correction is just wrong. It also shows why it makes little sense to use a scale factor for BayesElo that attempts to match Elostat!

Larry
lkaufman
Posts: 5981
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Komodo 5 running for the IPON

Post by lkaufman »

We have decided to switch to using scale 1 for our distributed test, which will correspond to CCRL and IPON once they make the switch. It caused our Elo gain over Komodo 4 to increase to 41 points.

I have a question regarding the opening book used for testing. You rightly keep the positions secret, so no one can "tune" to your book. However, I was wondering: what is the average depth your book goes to, in plies (half-moves)? I am trying to determine whether differences in book depth are a significant factor in the differences between various test results.

Larry
lkaufman
Posts: 5981
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Komodo 5 running for the IPON

Post by lkaufman »

Wolfgang wrote: Here are the - interim - results of the CEGT jury... :-)

http://cegt.siteboard.eu/f6t330-testing-komodo-5-0.html

BTW, mainly played on AMD hardware!! +30, congrats!!
An elo gain of 30 is pretty consistent with our testing, but the resultant rating is quite a bit lower than what we would have expected, because the rating of Komodo 4 also seemed very low. Same for CCRL. It seems that the most likely explanation is that our time management is poor for 40/x games; we don't test that way normally, preferring increment play. We no longer suspect AMD as a significant problem.
Is it possible for you or any of the testers to make any comments about our time utilization in these games? Have we played faster, slower, or about the same as our top-level opponents?
michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Komodo 5 running for the IPON

Post by michiguel »

IWB wrote:
Laskos wrote:
It seems that for IPON as well as for CCRL, setting mm 1 1 and then scale 1 gives the most accurate results. Or you could use Ordo. What you see in Adam's posts as 340 instead of 400 (all these fitted numbers must be compared with the default 400 of the logistic) gives the magnitude of the compression, in this case some 15%. Not negligible at all. I hope you use Bayeselo with mm 1 1, scale 1, prior 0.1, or Ordo.

Kai
I have some doubts that I will use that.

1. I do not understand what it is doing (but I will read into it a bit more when I have some spare time)
2. That puts my list far off from any comparability with the others.

This is how it looks with mm 0 1:

Code: Select all

   1 Houdini 2.0 STD          3026    9    9  5400   78%  2791   26% 
   2 Houdini 1.5a             3018   10   10  4000   79%  2775   26% 
   3 Komodo 5                 3007   11   11  2700   75%  2816   33% 
   4 Critter 1.4a             2983    9    9  4450   77%  2772   32% 
   5 Komodo 4                 2982    9    9  4850   75%  2781   30% 
   6 Critter 1.6a             2973   10   10  3150   70%  2823   40% 
and this with mm 1 1, scale 1, prior 0.1:

Code: Select all

   1 Houdini 2.0 STD          3084   11   10  5400   78%  2789   26% 
   2 Houdini 1.5a             3074   12   12  4000   79%  2769   26% 
   3 Komodo 5                 3060   14   13  2700   75%  2821   33% 
   4 Critter 1.4a             3031   11   11  4450   77%  2765   32% 
   5 Komodo 4                 3030   11   11  4850   75%  2777   30% 
   6 Critter 1.6a             3018   12   12  3150   70%  2829   40% 
that is a 58 Elo difference for H2.0! My list is losing any possibility of being compared with others ...
I might be convinced if ALL are doing it.

Bye
Ingo
FWIW
This is with Ordo (latest, unreleased) with this command line:
./ordo -a 2749 -p results.pgn -W
"-W" corrects the White-advantage value automatically. The average of 2749 is needed to get Deep Shredder 12 to 2800.
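The role of '-a' can be sketched like this (a toy illustration of average anchoring, not Ordo's internals; the ratings below are made up):

```python
# Toy illustration of Ordo-style average anchoring (not Ordo's actual code;
# the ratings here are made up). Maximum-likelihood Elo ratings are only
# determined up to an additive constant, so '-a 2749' just shifts the whole
# list until its average equals 2749, leaving every rating DIFFERENCE intact.
raw = {"EngineA": 310.0, "EngineB": 45.0, "EngineC": -190.0}  # hypothetical fit

target_average = 2749.0
shift = target_average - sum(raw.values()) / len(raw)
anchored = {name: r + shift for name, r in raw.items()}

# The list average is now 2749, and all gaps are unchanged:
assert abs(sum(anchored.values()) / len(anchored) - target_average) < 1e-9
```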

Komodo results are not on the website yet.
Miguel

Code: Select all

                        ENGINE:  RATING    POINTS  PLAYED    (%)
               Houdini 2.0 STD:  3041.6    4142.0    5250   78.9%
                  Houdini 1.5a:  3033.9    3162.5    4000   79.1%
                  Critter 1.4a:  3002.4    3419.5    4450   76.8%
                      Komodo 4:  2998.4    3653.0    4850   75.3%
                  Critter 1.6a:  2990.5    2152.0    3000   71.7%
                      Komodo 3:  2986.9    2075.5    2800   74.1%
            Stockfish 2.2.2 JA:  2977.7    3447.0    4650   74.1%
                  Deep Rybka 4:  2976.8    3627.0    4900   74.0%
                Deep Rybka 4.1:  2976.6    4343.5    6050   71.8%
                   Critter 1.2:  2976.6    2232.0    3100   72.0%
                 Houdini 1.03a:  2974.9    2520.0    3200   78.8%
                Komodo 2.03 DC:  2969.6    1985.5    2700   73.5%
            Stockfish 2.1.1 JA:  2960.7    2426.5    3500   69.3%
                  Critter 1.01:  2941.3    1970.0    2800   70.4%
             Stockfish 2.01 JA:  2940.9    2246.0    3100   72.5%
            Stockfish 1.9.1 JA:  2919.2    2131.0    3000   71.0%
                    Rybka 3 mp:  2918.5    3228.0    4200   76.9%
                  Critter 0.90:  2911.2    2327.5    3400   68.5%
            Stockfish 1.7.1 JA:  2903.4    2131.0    2900   73.5%
                   Rybka 3 32b:  2859.6    1191.5    1700   70.1%
            Stockfish 1.6.x JA:  2843.8    1792.5    2600   68.9%
                 Komodo 1.3 JA:  2839.3    1946.0    3300   59.0%
             Deep Fritz 13 32b:  2833.9    1587.5    3150   50.4%
                      Naum 4.2:  2833.6    5233.5    9150   57.2%
                   Chiron 1.1a:  2831.9    2621.5    4800   54.6%
                  Critter 0.80:  2825.1    1795.5    2800   64.1%
                  Fritz 13 32b:  2819.3    2308.0    4300   53.7%
                 Komodo 1.2 JA:  2809.7    2175.0    3700   58.8%
               Rybka 2.3.2a mp:  2805.2    2172.5    3500   62.1%
              Deep Shredder 12:  2800.0    5533.5   10250   54.0%
                  Hannibal 1.2:  2795.7    1593.0    3450   46.2%
                      Gull 1.2:  2795.7    3044.5    6150   49.5%
                  Critter 0.70:  2791.8    1107.0    1900   58.3%
                      Gull 1.1:  2791.6    1675.5    3100   54.0%
                      Naum 4.1:  2789.8    1465.0    2300   63.7%
       Deep Sjeng c't 2010 32b:  2789.3    3508.5    7150   49.1%
                 Komodo 1.0 JA:  2785.2    1756.5    2900   60.6%
                 Spike 1.4 32b:  2783.1    2994.0    6250   47.9%
             Deep Fritz 12 32b:  2778.4    3268.5    6300   51.9%
                        Naum 4:  2776.1    1628.5    2700   60.3%
                Rybka 2.2n2 mp:  2775.5    1311.5    2100   62.5%
                     Gull 1.0a:  2767.1    1254.0    2300   54.5%
            Stockfish 1.5.1 JA:  2762.3    1128.5    1900   59.4%
                    Rybka 1.2f:  2761.4    1578.5    2400   65.8%
               Protector 1.4.0:  2757.4    2863.0    6350   45.1%
                     spark-1.0:  2757.3    3068.5    6850   44.8%
                  Hannibal 1.1:  2747.9    2142.0    4900   43.7%
            HIARCS 13.2 MP 32b:  2746.4    2899.5    6650   43.6%
              Deep Junior 13.3:  2745.5    1107.0    2850   38.8%
                Deep Junior 13:  2745.3    1452.5    3600   40.3%
                  Fritz 12 32b:  2740.6    1091.0    2000   54.5%
                    Quazar 0.4:  2734.5    1474.0    3750   39.3%
            HIARCS 13.1 MP 32b:  2728.1    1734.5    3600   48.2%
              Deep Junior 12.5:  2726.8    1963.0    4850   40.5%
             Deep Fritz 11 32b:  2720.8     744.5    1300   57.3%
                 Doch64 1.2 JA:  2710.6     820.5    1600   51.3%
                     spark-0.4:  2709.3    1458.0    3100   47.0%
              Stockfish 1.4 JA:  2708.4     849.0    1700   49.9%
               Zappa Mexico II:  2706.4    5050.5   11550   43.7%
             Shredder Bonn 32b:  2705.3    1119.0    2200   50.9%
                  Critter 0.60:  2694.0    1072.0    2200   48.7%
            Protector 1.3.2 JA:  2693.9    2361.5    5300   44.6%
                MinkoChess 1.3:  2688.6    1108.5    3450   32.1%
              Deep Shredder 11:  2685.1    1412.0    2700   52.3%
              Doch64 09.980 JA:  2682.2     710.0    1500   47.3%
                Deep Junior 12:  2674.9    1356.0    3600   37.7%
                    Onno-1-1-1:  2674.5    1923.0    4300   44.7%
                 Hannibal 1.0a:  2674.3    1600.0    4200   38.1%
              Deep Onno 1-2-70:  2673.1    2806.5    7700   36.4%
                      Naum 3.1:  2672.8    1514.5    3000   50.5%
                Zappa Mexico I:  2672.4    1221.0    2200   55.5%
                Rybka 1.0 Beta:  2671.4    1023.5    2300   44.5%
               Spark-0.3 VC(a):  2668.2    1625.0    3600   45.1%
                    Onno-1-0-0:  2665.5     594.5    1200   49.5%
             Deep Sjeng WC2008:  2663.2    2434.5    5600   43.5%
         Toga II 1.4 beta5c BB:  2659.5    3255.5    8300   39.2%
              Deep Junior 11.2:  2658.2    1176.0    2900   40.6%
                 Strelka 2.0 B:  2653.9    1778.5    5500   32.3%
            Hiarcs 12.1 MP 32b:  2650.2    2427.5    5600   43.3%
                  Tornado 4.88:  2648.2     803.0    2400   33.5%
                Deep Sjeng 3.0:  2647.6     601.5    1400   43.0%
                      Umko 1.2:  2647.5    1016.5    3300   30.8%
                 Critter 0.52b:  2636.6    1097.0    2600   42.2%
        Shredder Classic 4 32b:  2636.4     922.5    1800   51.2%
             Deep Junior 11.1a:  2626.4    1153.0    2800   41.2%
                  Naum 2.2 32b:  2624.8     614.0    1300   47.2%
                    Nemo 1.0.1:  2619.9     708.0    2700   26.2%
                      Umko 1.1:  2619.8    1146.0    3900   29.4%
              Deep Junior 2010:  2617.5    1210.0    3100   39.0%
               Glaurung 2.2 JA:  2616.6    1027.5    2600   39.5%
            Rybka 1.0 Beta 32b:  2616.6     506.0    1100   46.0%
               HIARCS 11.2 32b:  2611.6     827.0    1900   43.5%
            Fruit 05/11/03 32b:  2609.0    1774.0    4400   40.3%
                     Loop 2007:  2602.1    2456.0    7900   31.1%
                Toga II 1.2.1a:  2598.8     716.5    1600   44.8%
                Jonny 4.00 32b:  2598.2    1389.5    5200   26.7%
                     ListMP 11:  2594.4     987.5    2600   38.0%
                 LoopMP 12 32b:  2592.3     635.0    1500   42.3%
                  Tornado 4.80:  2590.5     681.5    2700   25.2%
              Deep Shredder 10:  2588.6    1754.0    4400   39.9%
       Twisted Logic 20100131x:  2584.2    1140.0    3500   32.6%
                Crafty 23.3 JA:  2579.6    1290.5    5200   24.8%
           Spike 1.2 Turin 32b:  2561.8    2349.5    7700   30.5%
            Deep Sjeng 2.7 32b:  2537.7     465.5    1400   33.2%
                Crafty 23.1 JA:  2526.4    1002.0    3800   26.4%
IWB
Posts: 1539
Joined: Thu Mar 09, 2006 2:02 pm

Re: Komodo 5 running for the IPON

Post by IWB »

Sorry for the delay, but finally the result for Komodo 5 is online.

http://www.inwoba.de

The IPON-RRRL will be updated soon.

Bye
Ingo