software improvement question

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: software improvement question

Post by mjlef »

Dann Corbit wrote:
My opinion is that the question is a reformulation of "How many angels can dance on the head of a pin?"

How many Elo can a software program improve? Before Alpha-beta came along... Before null-move came along... before ...

Improvement moves in fits and starts and nobody can predict how big the leaps will be.

Right now, the best chess programs have a branching factor of about 2.
If that should improve to 1.5 and the quality of the moves remains the same, it would be a ridiculous improvement of many hundreds of Elo.
e.g.:
2^30 = 1,073,741,824
1.5^30 = 191,751

I think it is also a mistake to try to predict the future of hardware. It is possible that Moore's law will die on the vine, or there may be a revolution in computing that doubles the rate of growth.
Hmm, that assumes a program with a branching factor of 1.5 or 2 sees everything at the same depth as a program with a branching factor of 6, which is not the case.
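To make the two readings concrete, here is a minimal sketch, assuming a fixed node budget and a uniform effective branching factor (my own illustration, not from either post), of how the same node count translates into depth:

import math

# A fixed node budget translated into reachable depth for different
# effective branching factors (uniform-tree approximation).
def depth_for_budget(branching_factor, node_budget):
    return int(math.log(node_budget) / math.log(branching_factor))

budget = 2 ** 30  # roughly 1.07 billion nodes, as in the example above
for b in (6.0, 2.0, 1.5):
    print(f"branching factor {b}: about {depth_for_budget(b, budget)} plies")

# branching factor 6.0: about 11 plies
# branching factor 2.0: about 30 plies
# branching factor 1.5: about 51 plies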

I agree with Uri that it is not clear how much software advances have improved playing strength compared with hardware advances (processor speed, word size and memory). Each generation of programmers is limited to what the hardware offers, and what works on an 8-bit processor often does not work well on a 64-bit one. Still, if someone can rig an old program to make it UCI compliant, it sure would be interesting to see the results. Maybe Ed can make an old Rebel do this and we can see.

Mark
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Are you sure?

Post by bob »

Nid Hogge wrote:
Robert Hyatt wrote:
The chips are also getting bigger. I'm not sure the feature size has been reducing that much. The fab processes have improved so that larger chips (and hence more transistors) can be made while keeping acceptable yield levels...
Thanks for the answer.

I don't understand what you mean by getting bigger... die sizes are getting smaller!? The [45nm] Penryn dual-core version has a die size of 107 mm², which is 25 percent smaller than Intel's current 65nm products.

65nm came out in 2005. 45nm is scheduled for full production late this year (2007). That's nearly 3 years between the two. To double the transistors per chip, assuming same area, you need to reduce the feature size by a factor of 1.4 (square root of 2).
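A quick back-of-the-envelope sketch of that scaling, assuming transistor density goes with the square of the linear feature size:

feature_65nm = 65.0
feature_45nm = 45.0

shrink = feature_65nm / feature_45nm  # linear shrink factor
density_gain = shrink ** 2            # same die area holds this many times more transistors

print(f"linear shrink 65nm -> 45nm: {shrink:.2f}x")        # ~1.44x, close to sqrt(2)
print(f"transistor density gain:    {density_gain:.2f}x")  # ~2.09x, i.e. roughly a doubling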

As to current vs older sizes, if you put the same design on a new process, chip size will go down while transistor count stays the same. But we are seeing transistor counts go up and feature sizes go down, while the chips are proportionally not getting any smaller to speak of, because the engineers are burning transistors like mad to make larger L1/L2 caches and more cores.

That was what I was talking about. It is hard to discuss chip size when you have shrinking feature size, demand for more transistors for more cores/cache, and then re-designs that make the chip layout far more efficient (the core-2 is a really good example of this, with the various pipeline components laid out end-to-end on the chip rather than scattered here and there).

The US DOD has a paper somewhere on the internet that discusses this and points out that at the present time, the Moore's law doubling period has stretched to 36 months. It also points out that over the past 5-6 years it has grown from 18 months in 2000, to 24 months by 2004, to 36 months in 2007. It has flattened out significantly. And this is based purely on feature size, ignoring the larger IC sizes of today that I mentioned...
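A small sketch of what that slowdown means in yearly terms (the doubling periods are the figures from the paragraph above; the per-year numbers are just derived arithmetic, not from the paper):

# Yearly transistor-count growth implied by each doubling period.
doubling_months = {"2000": 18, "2004": 24, "2007": 36}

for year, months in doubling_months.items():
    yearly_growth = 2 ** (12 / months)
    print(f"{year}: doubling every {months} months -> x{yearly_growth:.2f} per year")

# 2000: doubling every 18 months -> x1.59 per year
# 2004: doubling every 24 months -> x1.41 per year
# 2007: doubling every 36 months -> x1.26 per year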

Moore's law is on the way out, as we knew it would be at some point...
Absolutely agreed. I read an article some time ago that said Moore's law controls the IT industry and not vice versa: they go by it and will do everything to keep it alive. I expect it to end soon as well (although you can never know...).

I think the philosophy is getting too old to hold true. It was good back in the days when bigger meant better, but that's not the case anymore.
It reminds me of the MHz race not long ago with the Pentium 4s... and eventually they gathered that a smarter design is much better and more efficient than stuffing in as much clock speed as you can.

So I hope it will improve with Nehalem or whatever comes next, with wiser architectures that offer better performance per watt.
User avatar
fern
Posts: 8755
Joined: Sun Feb 26, 2006 4:07 pm

Re: software improvement question

Post by fern »

Not being an expert in this field, not even an amateur programmer, but just as a player, I have the strong impression that, no matter how many tricks and finesses software has incorporated over the years, the decisive or main factor behind greater strength is the immense growth in search depth, or search as such. As a player, anyone can feel the difference that simply restricting the ply depth makes for the very same program. If an engine can go no more than 6 or 7 plies, I feel all the time that I can handle it; when it goes to 12-13 and more, I feel and I know I am lost, because I will be outsearched at any moment.

My best
fernando
Nid Hogge

Re: Are you sure?

Post by Nid Hogge »

65nm came out in 2005. 45nm is scheduled for full production late this year (2007). That's nearly 3 years between the two. To double the transistors per chip, assuming same area, you need to reduce the feature size by a factor of 1.4 (square root of 2).

As to current vs older sizes, if you put the same design on a new process, chip size will go down while transistor count stays the same. But we are seeing transistor counts go up and feature sizes go down, while the chips are proportionally not getting any smaller to speak of, because the engineers are burning transistors like mad to make larger L1/L2 caches and more cores.

That was what I was talking about. It is hard to discuss chip size when you have shrinking feature size, demand for more transistors for more cores/cache, and then re-designs that make the chip layout far more efficient (the core-2 is a really good example of this, with the various pipeline components laid out end-to-end on the chip rather than scattered here and there).
Thank you Prof. All clear now.
Dr.Ex
Posts: 202
Joined: Sun Jul 08, 2007 4:10 am

Re: software improvement question

Post by Dr.Ex »

Uri Blass wrote:
Dr.Ex wrote:
Uri Blass wrote:There is a discussion in the hiarcs forum about the question of how much software has improved since 1989.



see page 5 of the discussion
http://hiarcs.net/forums/viewtopic.php? ... c&start=60

Nick claims in the second post of that page:

"That blows my theory and suspicion that an engine (any engine) reputed to be the best in the world would show at its best a max 150-200 Elo improvement as a software compared to for example the Spracklen Software of 1989 inside a V10"
I think that this theory is clearly wrong. One of the problems is that programmers today do not work to optimize their programs for the V10. With all the knowledge we have today, if programmers optimized their programs for the hardware of 1989 (that means the same search algorithms but a better design of the data structures; for example, in the case of bitboard programs, not using bitboards, which are probably slow on the V10, and in the case of Toga something else that I do not know), it would be possible to see at least a 400 Elo improvement, even from Toga.
I disagree. The Fidelity V10 has a 68040 processor at 25 MHz, 1 MB RAM and 1 MB ROM. Its 64,000-move opening library also had to fit in the ROM.
It's about as strong as my Mephisto Vancouver 32 bit, which is probably 2100 Elo.
An optimized Toga on a 25 MHz processor would be tactically very weak and nowhere near 2500 Elo.
In that case what is my error?

Based on the ssdf list we have:

7 Fruit 2.2.1 256MB Athlon 1200 MHz 2837 21 -20 1224 64% 2734

I think that we can assume that Toga, which is stronger than Fruit, running at 1600 MHz (faster than the 1200 MHz above), has a FIDE rating of at least 2800.

I also think that we can assume that computers do not earn more than 50 Elo from doubling the speed against humans.

Starting with
Toga 1600 MHz = 2800
I get
Toga 400 MHz = 2700
Toga 100 MHz = 2600
Toga 25 MHz = 2500

Uri
Your error is simply that computers do not earn 50 Elo from doubling the speed against humans. It doesn't matter at all whether a specific engine gets on average to depth 17 or 19 against humans in a typical middle-game position; that's probably worth less than 10 Elo.
But there is a huge difference in strength against humans depending on whether the same engine reaches depth 8 or 10.
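For reference, a minimal sketch of the constant-gain-per-doubling model used in the quoted table; the numbers just reproduce that table and are not measurements, and Dr.Ex's objection is precisely that the gain per doubling against humans is not constant:

import math

def rating(base_rating, base_mhz, mhz, elo_per_doubling=50):
    # Constant Elo gain/loss per speed doubling/halving.
    doublings = math.log2(mhz / base_mhz)
    return base_rating + elo_per_doubling * doublings

for mhz in (1600, 400, 100, 25):
    print(f"Toga at {mhz:>4} MHz -> {rating(2800, 1600, mhz):.0f}")

# Toga at 1600 MHz -> 2800
# Toga at  400 MHz -> 2700
# Toga at  100 MHz -> 2600
# Toga at   25 MHz -> 2500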
ed

Re: software improvement question

Post by ed »

mjlef wrote:Each generation of programmers is limited to what the hardware offers, and what works on an 8-bit processor often does not work well on a 64-bit one. Still, if someone can rig an old program to make it UCI compliant, it sure would be interesting to see the results. Maybe Ed can make an old Rebel do this and we can see.
Recently I added WB support to Gideon Pro (1993). I could hardly believe the difference in strength with the latest Pro Deo, a huge gap.

Ed
Uri Blass
Posts: 10825
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: software improvement question

Post by Uri Blass »

Dr.Ex wrote:
Uri Blass wrote:
Dr.Ex wrote:
Uri Blass wrote:There is a discussion in the hiarcs forum about the question of how much software has improved since 1989.



see page 5 of the discussion
http://hiarcs.net/forums/viewtopic.php? ... c&start=60

Nick claims in the second post of that page:

"That blows my theory and suspicion that an engine (any engine) reputed to be the best in the world would show at its best a max 150-200 Elo improvement as a software compared to for example the Spracklen Software of 1989 inside a V10"
I think that this theory is clearly wrong. One of the problems is that programmers today do not work to optimize their programs for the V10. With all the knowledge we have today, if programmers optimized their programs for the hardware of 1989 (that means the same search algorithms but a better design of the data structures; for example, in the case of bitboard programs, not using bitboards, which are probably slow on the V10, and in the case of Toga something else that I do not know), it would be possible to see at least a 400 Elo improvement, even from Toga.
I disagree. The Fidelity V10 has a 68040 processor at 25 MHz, 1 MB RAM and 1 MB ROM. Its 64,000-move opening library also had to fit in the ROM.
It's about as strong as my Mephisto Vancouver 32 bit, which is probably 2100 Elo.
An optimized Toga on a 25 MHz processor would be tactically very weak and nowhere near 2500 Elo.
In that case what is my error?

Based on the ssdf list we have:

7 Fruit 2.2.1 256MB Athlon 1200 MHz 2837 21 -20 1224 64% 2734

I think that we can assume that Toga, which is stronger than Fruit, running at 1600 MHz (faster than the 1200 MHz above), has a FIDE rating of at least 2800.

I also think that we can assume that computers do not earn more than 50 Elo from doubling the speed against humans.

Starting with
Toga 1600 MHz = 2800
I get
Toga 400 MHz = 2700
Toga 100 MHz = 2600
Toga 25 MHz = 2500

Uri
Your error is simply that computers do not earn 50 Elo from doubling the speed against humans. It doesn't matter at all whether a specific engine gets on average to depth 17 or 19 against humans in a typical middle-game position; that's probably worth less than 10 Elo.
But there is a huge difference in strength against humans depending on whether the same engine reaches depth 8 or 10.

I disagree with it, but if I understand you correctly, your theory is that they earn more than 50 Elo per doubling at low speed and an average of less than 2500 Elo at high speed, so we may have something like:

Toga 1600 MHz = 2800
Toga 400 MHz = 2770
Toga 100 MHz = 2600
Toga 25 MHz = 2300

Or maybe, based on your theory that does not believe in Toga at 2800:

Toga 1600 MHz = 2600
Toga 400 MHz = 2570
Toga 100 MHz = 2500
Toga 25 MHz = 2300


Note that I do not believe in something like that.
There may be diminishing returns, but it does not go from +10 Elo per doubling at high speed to +100 Elo per doubling at low speed.
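A small sketch of the Elo-per-doubling figures implied by the first hypothetical table above (the arithmetic is just derived from those made-up numbers, not from measurements):

import math

# Hypothetical ratings from the table above.
table = {1600: 2800, 400: 2770, 100: 2600, 25: 2300}

speeds = sorted(table, reverse=True)
for fast, slow in zip(speeds, speeds[1:]):
    doublings = math.log2(fast / slow)
    per_doubling = (table[fast] - table[slow]) / doublings
    print(f"{fast} MHz -> {slow} MHz: {per_doubling:.0f} Elo per doubling")

# 1600 MHz -> 400 MHz: 15 Elo per doubling
# 400 MHz -> 100 MHz: 85 Elo per doubling
# 100 MHz -> 25 MHz: 150 Elo per doubling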
Uri Blass
Posts: 10825
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: software improvement question

Post by Uri Blass »

ed wrote:
mjlef wrote:Each generation of programmers is limited to what the hardware offers, and what works on an 8-bit processor often does not work well on a 64-bit one. Still, if someone can rig an old program to make it UCI compliant, it sure would be interesting to see the results. Maybe Ed can make an old Rebel do this and we can see.
Recently I added WB support to Gideon Pro (1993). I could hardly believe the difference in strength with the latest Pro Deo, a huge gap.

Ed
I wonder what the difference is when both run on hardware from 1993.

Uri
ed

Re: software improvement question

Post by ed »

Uri Blass wrote:
ed wrote:
mjlef wrote:Each generation of programmers is limited to what the hardware offers, and what works on an 8-bit processor often does not work well on a 64-bit one. Still, if someone can rig an old program to make it UCI compliant, it sure would be interesting to see the results. Maybe Ed can make an old Rebel do this and we can see.
Recently I added WB support to Gideon Pro (1993). I could hardly believe the difference in strength with the latest Pro Deo, a huge gap.

Ed
I wonder what the difference is when both run on hardware from 1993.

Uri
With hardly any reductions, no IID, a buggy hash table, no futility pruning, no null-move pruning, etc., I would say the outcome would be pretty predictable.

Ed