hardware vs software advances

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

User avatar
mhull
Posts: 13447
Joined: Wed Mar 08, 2006 9:02 pm
Location: Dallas, Texas
Full name: Matthew Hull

Re: hardware vs software advances

Post by mhull »

Don wrote:
Dann Corbit wrote:
mhull wrote:
Dann Corbit wrote:I strongly suspect that the increase in chess strength is exponential both in compute power and in chess algorithm software.

What I mean is that Houdini is probably 1000x stronger than the strongest chess program in 1994 {on like hardware}, just like the modern machines are 1000 times faster.
What if the like hardware is old hardware? Here I'm assuming that the "path length" for processing a node is much longer in modern software than in old software, such that old software might process more nodes per second. The longer path length might not have enough depth to achieve a return on its investment in intelligence. If this assumption is wrong, I still wouldn't bet that 1000x would be accurate.

The reverse is of course demonstrated to be true, that dumber but faster software cannot make its shorter path-length pay better dividends on modern hardware due to the limit of its investment in intelligence (path length).

The latter is already proven. The former may be more difficult to demonstrate, as has been discussed here already. Someone would have to invest the time to optimize new software for old hardware, which in reality might not comfortably fit within it, depending on the platform. For instance, try cramming a modern program into a unit that runs on a 68030 with a tiny hash memory and compete against a Mephisto or Fidelity program. I'm not sure that would work out well for the modern program. The Intel P90 might work, but one still might better put their money on Genius in that contest.
That is a good point, and I am not sure how you would measure such a thing accurately, since with chess engines, they are *definitely* tuned for particular hardware combinations.
It's basically unresolvable. See the big argument I had with Bob Hyatt over this (I think it might be the thread that is now almost a year old). Do you run a new program on old hardware or an old program on new hardware to compare? You get different answers depending on how you figure this, and the answer is highly subject to spin and manipulation. You can pick and choose the most favorable comparison to make your point.

What seems really clear is that software and hardware have both advanced enormously.
I think it may be resolvable (after a fashion). Here again I put forward the introduction of the delay loop in both programs (somehow), running on modern hardware but with perhaps the memory limits of the old. Dial back the old program to P90 NPS rates (based on historical logs). Then begin dialing back the modern program, first to an equal NPS rate as the old program, then lower and lower, charting the results. That would reduce the debate to which NPS rating on the chart best represents what the new program could have achieved optimally on the ancient hardware. You would at least know where the NPS/ELO break point would be.
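A minimal sketch of such a governor, assuming the engine keeps a global node counter and can stall inside the search whenever the measured rate runs ahead of the target; the names (nodes, TARGET_NPS, throttle_nps) and the target rate are hypothetical, not taken from any particular engine:

#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <time.h>

static uint64_t nodes;                      /* incremented once per searched node */
static struct timespec search_start;        /* set when the search begins */
static const double TARGET_NPS = 50000.0;   /* e.g. a P90-era rate taken from old logs */

static double elapsed_seconds(void)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - search_start.tv_sec)
         + (now.tv_nsec - search_start.tv_nsec) * 1e-9;
}

/* Call this every node (or every few hundred nodes to keep the overhead small). */
void throttle_nps(void)
{
    if (elapsed_seconds() <= 0.0)
        return;
    /* Stall until the average rate falls back to the target. */
    while ((double)nodes / elapsed_seconds() > TARGET_NPS) {
        struct timespec pause = { 0, 100000 };   /* sleep 0.1 ms */
        nanosleep(&pause, NULL);
    }
}

Sleeping rather than busy-waiting keeps the governor from distorting the machine's behaviour for the opponent process in a head-to-head match.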
Matthew Hull
Dann Corbit
Posts: 12540
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: hardware vs software advances

Post by Dann Corbit »

mhull wrote:
Don wrote:
Dann Corbit wrote:
mhull wrote:
Dann Corbit wrote:I strongly suspect that the increase in chess strength is exponential both in compute power and in chess algorithm software.

What I mean is that Houdini is probably 1000x stronger than the strongest chess program in 1994 {on like hardware}, just like the modern machines are 1000 times faster.
What if the like hardware is old hardware? Here I'm assuming that the "path length" for processing a node is much longer in modern software than in old software, such that old software might process more nodes per second. The longer path length might not have enough depth to achieve a return on its investment in intelligence. If this assumption is wrong, I still wouldn't bet that 1000x would be accurate.

The reverse is of course demonstrated to be true, that dumber but faster software cannot make its shorter path-length pay better dividends on modern hardware due to the limit of its investment in intelligence (path length).

The latter is already proven. The former may be more difficult to demonstrate, as has been discussed here already. Someone would have to invest the time to optimize new software for old hardware, which in reality might not comfortably fit within it, depending on the platform. For instance, try cramming a modern program into a unit that runs on a 68030 with a tiny hash memory and compete against a Mephisto or Fidelity program. I'm not sure that would work out well for the modern program. The Intel P90 might work, but one still might better put their money on Genius in that contest.
That is a good point, and I am not sure how you would measure such a thing accurately, since with chess engines, they are *definitely* tuned for particular hardware combinations.
It's basically unresolvable. See the big argument I had with Bob Hyatt over this (I think it might be the thread that is now almost a year old). Do you run a new program on old hardware or an old program on new hardware to compare? You get different answers depending on how you figure this, and the answer is highly subject to spin and manipulation. You can pick and choose the most favorable comparison to make your point.

What seems really clear is that software and hardware have both advanced enormously.
I think it may be resolvable (after a fashion). Here again I put forward the introduction of the delay loop in both programs (somehow), running on modern hardware but with perhaps the memory limits of the old. Dial back the old program to P90 NPS rates (based on historical logs). Then begin dialing back the modern program, first to an equal NPS rate as the old program, then lower and lower, charting the results. That would reduce the debate to which NPS rating on the chart best represents what the new program could have achieved optimally on the ancient hardware. You would at least know where the NPS/ELO break point would be.
I suspect that the modern compilers are also vastly superior (actually, it is more than suspicion, since builds from the old compilers are not nearly as fast as those from the new ones).

I think in the end it is like comparing old time baseball players to modern ones. There is some basis for comparison, and we imagine that modern greats would do well in the past and vice-versa, but I don't think we can prove it effectively.
User avatar
mhull
Posts: 13447
Joined: Wed Mar 08, 2006 9:02 pm
Location: Dallas, Texas
Full name: Matthew Hull

Re: hardware vs software advances

Post by mhull »

Dann Corbit wrote:
mhull wrote:
Don wrote:
Dann Corbit wrote:
mhull wrote:
Dann Corbit wrote:I strongly suspect that the increase in chess strength is exponential both in compute power and in chess algorithm software.

What I mean is that Houdini is probably 1000x stronger than the strongest chess program in 1994 {on like hardware}, just like the modern machines are 1000 times faster.
What if the like hardware is old hardware? Here I'm assuming that the "path length" for processing a node is much longer in modern software than in old software, such that old software might process more nodes per second. The longer path length might not have enough depth to achieve a return on its investment in intelligence. If this assumption is wrong, I still wouldn't bet that 1000x would be accurate.

The reverse is of course demonstrated to be true, that dumber but faster software cannot make its shorter path-length pay better dividends on modern hardware due to the limit of its investment in intelligence (path length).

The latter is already proven. The former may be more difficult to demonstrate, as has been discussed here already. Someone would have to invest the time to optimize new software for old hardware, which in reality might not comfortably fit within it, depending on the platform. For instance, try cramming a modern program into a unit that runs on a 68030 with a tiny hash memory and compete against a Mephisto or Fidelity program. I'm not sure that would work out well for the modern program. The Intel P90 might work, but one still might better put their money on Genius in that contest.
That is a good point, and I am not sure how you would measure such a thing accurately, since with chess engines, they are *definitely* tuned for particular hardware combinations.
It's basically unresolvable. See the big argument I had with Bob Hyatt over this (I think it might be the thread that is now almost a year old). Do you run a new program on old hardware or an old program on new hardware to compare? You get different answers depending on how you figure this, and the answer is highly subject to spin and manipulation. You can pick and choose the most favorable comparison to make your point.

What seems really clear is that software and hardware have both advanced enormously.
I think it may be resolvable (after a fashion). Here again I put forward the introduction of the delay loop in both programs (somehow), running on modern hardware but with perhaps the memory limits of the old. Dial back the old program to P90 NPS rates (based on historical logs). Then begin dialing back the modern program, first to an equal NPS rate as the old program, then lower and lower, charting the results. That would reduce the debate to which NPS rating on the chart best represents what the new program could have achieved optimally on the ancient hardware. You would at least know where the NPS/ELO break point would be.
I suspect that the modern compilers are also vastly superior (actually, it is more than suspicion, since builds from the old compilers are not nearly as fast as those from the new ones).
It seems to me that an NPS governor loop would render the compiler/optimization issues moot, especially if we have an old program with known NPS/hardware setups from historical log files. We wouldn't have to optimize anything if we are dialing back NPS anyway. Simply dial it down to a known historical performance level.
Dann Corbit wrote:I think in the end it is like comparing old time baseball players to modern ones. There is some basis for comparison, and we imagine that modern greats would do well in the past and vice-versa, but I don't think we can prove it effectively.
If baseball players ran on computers, we'd have a better chance at such comparisons, which is why it seems completely doable for chess programs. The idea of mapping arbitrary, delay-loop-induced NPS/Elo curves seems fairly practical. In the process, old programs would cross known historical boundaries as their NPS is reduced to the standards of the past, while the new programs would be exploring those limitations in detail for the first time, without having to be optimized for ancient hardware. The only question left would be which NPS level represents the new program on the specified old hardware -- a problem that might submit to some sort of triangulation. But even if it didn't, discovering which of a new program's NPS levels corresponds to an old program's Elo at a specific NPS/hardware combination could prove almost as informative.
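As a rough illustration of charting that curve, here is a small sketch that converts a match score at each throttled NPS setting into an Elo difference using the usual logistic model; the (NPS, score) pairs below are made-up placeholders, not measurements:

#include <math.h>
#include <stdio.h>

/* Elo difference implied by a match score (wins + 0.5 * draws, as a fraction of games). */
static double elo_from_score(double score)
{
    return -400.0 * log10(1.0 / score - 1.0);
}

int main(void)
{
    /* Hypothetical results for the throttled new program against the old one. */
    const double nps[]   = { 25000.0, 50000.0, 100000.0, 200000.0 };
    const double score[] = { 0.55,    0.65,    0.75,     0.85 };
    for (int i = 0; i < 4; i++)
        printf("%8.0f nps  ->  %+6.1f Elo\n", nps[i], elo_from_score(score[i]));
    return 0;
}

Plotting enough of these points would give exactly the NPS/Elo chart described above, with the historical P90 NPS figures marking where the old program's known results sit on it.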
Matthew Hull
IanO
Posts: 496
Joined: Wed Mar 08, 2006 9:45 pm
Location: Portland, OR

Re: hardware vs software advances

Post by IanO »

In this thread, I'm surprised no one has brought up another source of evidence for software vs. hardware comparisons: the hobbyist dedicated computer community. There are now a number of systems which have been designed to bring the modern advances in chess engine technology to the realm of low-cost embedded processors used in dedicated chess computers. (Commercial dedicated computer technology is otherwise completely stagnant. The 10-32 MHz processors in the toys currently sold are running engines that have been basically unmodified since the 1980s. Even hash tables are unavailable, a regression since the 1990s! The most modern innovation would be null-move in some of Morsch's programs.)

1. Phoenix Chess Systems' Revelation with processor emulation. The core system not only provides a platform capable of running modern programs (XScale 500 MHz with 32M of RAM for hash tables), but also provides emulation of the 68000 and 6502 in order to allow running some of the historical dedicated computer ROMs (Lang's championship Mephisto series, Schröder's Polgar and Rebell modules, Kittinger's Super Expert C). Among the modern programs supported are HIARCS 13, Shredder 12, Sjeng 3, Toga II, Fruit and Rybka 2.2. Rybka trails Shredder and HIARCS by about 100 Elo on this slower platform on the schachcomputer.info rating list.

2. Mysticum by Guido Marquardt. A project that interfaces stock WinBoard engines, running on a 550 MHz VIA Samuel under Windows XP Embedded, to a dedicated chess board. This allows 64 MB for hash. It runs Rybka 2.3, Hiarcs 12.1, Toga II, Fruit, ProDeo 1.6, Shredder Classic, and Stockfish. I don't remember seeing performance comparisons on this platform.

3. AVR-Max-SchachZwerg, for those who want to see how low they can go. It has a measly 8 MHz ATmega88, with 8K of ROM and 1K of RAM. Only microMax 4.8 has been attempted for this beast, attaining about 1300 Elo.

Similarly, there are modern programs now available for 400-600 MHz cell phones: Hiarcs, Shredder, Stockfish, Tiger, Genius, etc. There have been tournaments showing that they are significantly stronger than the best dedicated units, and the SSDF has started rating them in their pool as well.
User avatar
mhull
Posts: 13447
Joined: Wed Mar 08, 2006 9:02 pm
Location: Dallas, Texas
Full name: Matthew Hull

Re: hardware vs software advances

Post by mhull »

IanO wrote:Similarly, there are modern programs now available for 400-600 MHz cell phones. Hiarcs, Shredder, Stockfish, Tiger, Genius, etc. There have been tournaments proving that they are significantly stronger than the best dedicateds, and the SSDF has started rating them in their pool as well.
The question seems to have been answered for 400-600 MHz. But they didn't have that kind of speed in 1995. How about below 100 MHz, or perhaps even lower, into a 680x0 footprint, which would still allow for a hash table?
Matthew Hull
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: hardware vs software advances

Post by bob »

mhull wrote:
Don wrote:
Dann Corbit wrote:
mhull wrote:
Dann Corbit wrote:I strongly suspect that the increase in chess strength is exponential both in compute power and in chess algorithm software.

What I mean is that Houdini is probably 1000x stronger than the strongest chess program in 1994 {on like hardware}, just like the modern machines are 1000 times faster.
What if the like hardware is old hardware? Here I'm assuming that the "path length" for processing a node is much longer in modern software than in old software, such that old software might process more nodes per second. The longer path length might not have enough depth to achieve a return on its investment in intelligence. If this assumption is wrong, I still wouldn't bet that 1000x would be accurate.

The reverse is of course demonstrated to be true, that dumber but faster software cannot make its shorter path-length pay better dividends on modern hardware due to the limit of its investment in intelligence (path length).

The latter is already proven. The former may be more difficult to demonstrate, as has been discussed here already. Someone would have to invest the time to optimize new software for old hardware, which in reality might not comfortably fit within it, depending on the platform. For instance, try cramming a modern program into a unit that runs on a 68030 with a tiny hash memory and compete against a Mephisto or Fidelity program. I'm not sure that would work out well for the modern program. The Intel P90 might work, but one still might better put their money on Genius in that contest.
That is a good point, and I am not sure how you would measure such a thing accurately, since with chess engines, they are *definitely* tuned for particular hardware combinations.
It's basically unresolvable. See the big argument I had with Bob Hyatt over this (I think it might be the thread that is now almost a year old). Do you run a new program on old hardware or an old program on new hardware to compare? You get different answers depending on how you figure this, and the answer is highly subject to spin and manipulation. You can pick and choose the most favorable comparison to make your point.

What seems really clear is that software and hardware have both advanced enormously.
I think it may be resolvable (after a fashion). Here again I put forward the introduction of the delay loop in both programs (somehow), running on modern hardware but with perhaps the memory limits of the old. Dial back the old program to P90 NPS rates (based on historical logs). Then begin dialing back the modern program, first to an equal NPS rate as the old program, then lower and lower, charting the results. That would reduce the debate to which NPS rating on the chart best represents what the new program could have achieved optimally on the ancient hardware. You would at least know where the NPS/ELO break point would be.


I think that to do this right, you would have to do it twice. Take the old program, and run it on new hardware, and compare to the new program. Then take the new program, and run it on old hardware and compare against the old program.

As I have said repeatedly, I think the two answers would very likely not be close at all. Because when you run old software on new hardware, or new software on old hardware, it is somewhat like taking a fish out of water and putting it in some other medium. It's really evolved to perform best in water, not in syrup, or something very light like alcohol...
User avatar
mhull
Posts: 13447
Joined: Wed Mar 08, 2006 9:02 pm
Location: Dallas, Texas
Full name: Matthew Hull

Re: hardware vs software advances

Post by mhull »

bob wrote:
mhull wrote:
Don wrote:
Dann Corbit wrote:
mhull wrote:
Dann Corbit wrote:I strongly suspect that the increase in chess strength is exponential both in compute power and in chess algorithm software.

What I mean is that Houdini is probably 1000x stronger than the strongest chess program in 1994 {on like hardware}, just like the modern machines are 1000 times faster.
What if the like hardware is old hardware? Here I'm assuming that the "path length" for processing a node is much longer in modern software than in old software, such that old software might process more nodes per second. The longer path length might not have enough depth to achieve a return on its investment in intelligence. If this assumption is wrong, I still wouldn't bet that 1000x would be accurate.

The reverse is of course demonstrated to be true, that dumber but faster software cannot make its shorter path-length pay better dividends on modern hardware due to the limit of its investment in intelligence (path length).

The latter is already proven. The former may be more difficult to demonstrate, as has been discussed here already. Someone would have to invest the time to optimize new software for old hardware, which in reality might not comfortably fit within it, depending on the platform. For instance, try cramming a modern program into a unit that runs on a 68030 with a tiny hash memory and compete against a Mephisto or Fidelity program. I'm not sure that would work out well for the modern program. The Intel P90 might work, but one still might better put their money on Genius in that contest.
That is a good point, and I am not sure how you would measure such a thing accurately, since with chess engines, they are *definitely* tuned for particular hardware combinations.
It's basically unresolvable. See the big argument I had with Bob Hyatt over this (I think it might be the thread that is now almost a year old). Do you run a new program on old hardware or an old program on new hardware to compare? You get different answers depending on how you figure this, and the answer is highly subject to spin and manipulation. You can pick and choose the most favorable comparison to make your point.

What seems really clear is that software and hardware have both advanced enormously.
I think it may be resolvable (after a fashion). Here again I put forward the introduction of the delay loop in both programs (somehow), running on modern hardware but with perhaps the memory limits of the old. Dial back the old program to P90 NPS rates (based on historical logs). Then begin dialing back the modern program, first to an equal NPS rate as the old program, then lower and lower, charting the results. That would reduce the debate to which NPS rating on the chart best represents what the new program could have achieved optimally on the ancient hardware. You would at least know where the NPS/ELO break point would be.


I think that to do this right, you would have to do it twice. Take the old program, and run it on new hardware, and compare to the new program. Then take the new program, and run it on old hardware and compare against the old program.

As I have said repeatedly, I think the two answers would very likely not be close at all. Because when you run old software on new hardware, or new software on old hardware, it is somewhat like taking a fish out of water and putting it in some other medium. It's really evolved to perform best in water, not in syrup, or something very light like alcohol...
Considering for the moment only programs running on a single CPU, the only thing that platform-specific optimizations do is deliver a certain level of performance, which is typically measured as an average NPS. And any program that is sped up or slowed with respect to its own NPS would seem to scale in the same way as another program relative to its respective NPS (unless one or the other program has a bogus NPS counter). As a starting point, we could use the old program's speedup factor as a guide to slowing down the new program. But to be even more accurate, optimizing the old program for new hardware might give a more accurate speedup factor by which the new program could then be slowed.

It seems to me that tests based on these assumptions would be close enough to support fairly accurate general observations, from which meaningful conclusions could be drawn about the respective roles of hardware and software in playing strength, and about where the tipping points for one or the other appear as the participants' performance levels are adjusted up or down.
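A tiny worked example of that speedup-factor idea, with placeholder numbers rather than real measurements:

#include <stdio.h>

int main(void)
{
    double old_nps_on_p90    = 60000.0;     /* old program, from historical logs */
    double old_nps_on_new_hw = 6000000.0;   /* same old program, re-timed on today's box */
    double new_nps_on_new_hw = 2000000.0;   /* modern program on the same box */

    /* How much the new hardware sped the old program up... */
    double speedup = old_nps_on_new_hw / old_nps_on_p90;

    /* ...and therefore the rate to which the modern program should be
       throttled to approximate the old hardware. */
    double target_nps = new_nps_on_new_hw / speedup;

    printf("speedup factor: %.0fx\n", speedup);
    printf("throttle the new program to about %.0f nps\n", target_nps);
    return 0;
}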
Matthew Hull
Uri Blass
Posts: 10282
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: hardware vs software advances

Post by Uri Blass »

bob wrote:
mhull wrote:
Don wrote:
Dann Corbit wrote:
mhull wrote:
Dann Corbit wrote:I strongly suspect that the increase in chess strength is exponential both in compute power and in chess algorithm software.

What I mean is that Houdini is probably 1000x stronger than the strongest chess program in 1994 {on like hardware}, just like the modern machines are 1000 times faster.
What if the like hardware is old hardware? Here I'm assuming that the "path length" for processing a node is much longer in modern software than in old software, such that old software might process more nodes per second. The longer path length might not have enough depth to achieve a return on its investment in intelligence. If this assumption is wrong, I still wouldn't bet that 1000x would be accurate.

The reverse is of course demonstrated to be true, that dumber but faster software cannot make its shorter path-length pay better dividends on modern hardware due to the limit of its investment in intelligence (path length).

The latter is already proven. The former may be more difficult to demonstrate, as has been discussed here already. Someone would have to invest the time to optimize new software for old hardware, which in reality might not comfortably fit within it, depending on the platform. For instance, try cramming a modern program into a unit that runs on a 68030 with a tiny hash memory and compete against a Mephisto or Fidelity program. I'm not sure that would work out well for the modern program. The Intel P90 might work, but one still might better put their money on Genius in that contest.
That is a good point, and I am not sure how you would measure such a thing accurately, since with chess engines, they are *definitely* tuned for particular hardware combinations.
It's basically unresolvable. See the big argument I had with Bob Hyatt over this (I think it might be the thread that is now almost a year old). Do you run a new program on old hardware or an old program on new hardware to compare? You get different answers depending on how you figure this, and the answer is highly subject to spin and manipulation. You can pick and choose the most favorable comparison to make your point.

What seems really clear is that software and hardware have both advanced enormously.
I think it may be resolvable (after a fashion). Here again I put forward the introduction of the delay loop in both programs (somehow), running on modern hardware but with perhaps the memory limits of the old. Dial back the old program to P90 NPS rates (based on historical logs). Then begin dialing back the modern program, first to an equal NPS rate as the old program, then lower and lower, charting the results. That would reduce the debate to which NPS rating on the chart best represents what the new program could have achieved optimally on the ancient hardware. You would at least know where the NPS/ELO break point would be.


I think that to do this right, you would have to do it twice. Take the old program, and run it on new hardware, and compare to the new program. Then take the new program, and run it on old hardware and compare against the old program.

As I have said repeatedly, I think the two answers would very likely not be close at all. Because when you run old software on new hardware, or new software on old hardware, it is somewhat like taking a fish out of water and putting it in some other medium. It's really evolved to perform best in water, not in syrup, or something very light like alcohol...
I think that the question is also what time control to use (even if we agree to take the new program and run it on old hardware).

I expect the new programs to get relatively better results at a 120/40 time control than at blitz.

Another problem is that some new programs may not be able to run on old hardware (for example, if they use tables that are too big for the old hardware to hold), and there is also the question of which vintage of old hardware to use.

I think that if we compare with the year 2000, we are going to get at least 400 Elo of improvement from software alone, even if we use hardware from 2000 for every program and a time control that is not very fast (let's say at least 10 minutes for 40 moves).

Hopefully there are people with some hardware from 2000 who can test it.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: hardware vs software advances

Post by bob »

mhull wrote:
bob wrote:
mhull wrote:
Don wrote:
Dann Corbit wrote:
mhull wrote:
Dann Corbit wrote:I strongly suspect that the increase in chess strength is exponential both in compute power and in chess algorithm software.

What I mean is that Houdini is probably 1000x stronger than the strongest chess program in 1994 {on like hardware}, just like the modern machines are 1000 times faster.
What if the like hardware is old hardware? Here I'm assuming that the "path length" for processing a node is much longer in modern software than in old software, such that old software might process more nodes per second. The longer path length might not have enough depth to achieve a return on its investment in intelligence. If this assumption is wrong, I still wouldn't bet that 1000x would be accurate.

The reverse is of course demonstrated to be true, that dumber but faster software cannot make its shorter path-length pay better dividends on modern hardware due to the limit of its investment in intelligence (path length).

The latter is already proven. The former may be more difficult to demonstrate, as has been discussed here already. Someone would have to invest the time to optimize new software for old hardware, which in reality might not comfortably fit within it, depending on the platform. For instance, try cramming a modern program into a unit that runs on a 68030 with a tiny hash memory and compete against a Mephisto or Fidelity program. I'm not sure that would work out well for the modern program. The Intel P90 might work, but one still might better put their money on Genius in that contest.
That is a good point, and I am not sure how you would measure such a thing accurately, since with chess engines, they are *definitely* tuned for particular hardware combinations.
It's basically unresolvable. See the big argument I had with Bob Hyatt over this (I think it might be the thread that is now almost a year old). Do you run a new program on old hardware or an old program on new hardware to compare? You get different answers depending on how you figure this, and the answer is highly subject to spin and manipulation. You can pick and choose the most favorable comparison to make your point.

What seems really clear is that software and hardware have both advanced enormously.
I think it may be resolvable (after a fashion). Here again I put forward the introduction of the delay loop in both programs (somehow), running on modern hardware but with perhaps the memory limits of the old. Dial back the old program to P90 NPS rates (based on historical logs). Then begin dialing back the modern program, first to an equal NPS rate as the old program, then lower and lower, charting the results. That would reduce the debate to which NPS rating on the chart best represents what the new program could have achieved optimally on the ancient hardware. You would at least know where the NPS/ELO break point would be.


I think that to do this right, you would have to do it twice. Take the old program, and run it on new hardware, and compare to the new program. Then take the new program, and run it on old hardware and compare against the old program.

As I have said repeatedly, I think the two answers would very likely not be close at all. Because when you run old software on new hardware, or new software on old hardware, it is somewhat like taking a fish out of water and putting it in some other medium. It's really evolved to perform best in water, not in syrup, or something very light like alcohol...
Considering for the moment only programs running on a single CPU, the only thing that platform-specific optimizations do is deliver a certain level of performance, which is typically measured as an average NPS. And any program that is sped up or slowed with respect to its own NPS would seem to scale in the same way as another program relative to its respective NPS (unless one or the other program has a bogus NPS counter). As a starting point, we could use the old program's speedup factor as a guide to slowing down the new program. But to be even more accurate, optimizing the old program for new hardware might give a more accurate speedup factor by which the new program could then be slowed.

It seems to me that tests based on these assumptions would be close enough to support fairly accurate general observations, from which meaningful conclusions could be drawn about the respective roles of hardware and software in playing strength, and about where the tipping points for one or the other appear as the participants' performance levels are adjusted up or down.
While that is true to an extent, there are other issues. On slower hardware we might avoid doing some things that slow us down further. The software of 20 years ago could have used many of today's ideas (in terms of evaluation), but they were simply too expensive.

If you take today's software on old hardware and compare it to a program that was optimized for that hardware, the optimized program will have an advantage. Whether that advantage is enough to win a match is not really the issue, but the advantage would skew the results. The opposite is true today, where we now do things that were too expensive on the old hardware. An old program will run faster, but perhaps not enough to gain a major fraction of a ply, and so it misses out completely...
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: hardware vs software advances

Post by Laskos »

Uri Blass wrote:
I think that the question is also what time control to use (even if we agree to take the new program and run it on old hardware).

I expect the new programs to get relatively better results at a 120/40 time control than at blitz.

Another problem is that some new programs may not be able to run on old hardware (for example, if they use tables that are too big for the old hardware to hold), and there is also the question of which vintage of old hardware to use.

I think that if we compare with the year 2000, we are going to get at least 400 Elo of improvement from software alone, even if we use hardware from 2000 for every program and a time control that is not very fast (let's say at least 10 minutes for 40 moves).

Hopefully there are people with some hardware from 2000 who can test it.
I am using ultra-short time controls quite often, something like games in 6 seconds. The new strong engines are beating the crap out of old strong engines (from year 2000 or so) on the same hardware even at this time control, using a 4 MB hash, which was available at that time. The difference from software in the last 10 years is _at_least_ 400 Elo points; the difference from hardware is ~350 Elo points. I don't really understand what the whole discussion is about; I think I once wrote that software development is a little faster than hardware development, on average 40-45 Elo points per year for software and ~35 Elo points per year for hardware.
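For what it is worth, a back-of-the-envelope check of those per-year figures over a ten-year span (the rates are the ones quoted above; the ten-year horizon is my assumption):

#include <stdio.h>

int main(void)
{
    const double years           = 10.0;
    const double software_per_yr = 42.5;   /* midpoint of the 40-45 Elo/year estimate */
    const double hardware_per_yr = 35.0;   /* the ~35 Elo/year estimate */

    printf("software: ~%.0f Elo over %.0f years\n", software_per_yr * years, years);
    printf("hardware: ~%.0f Elo over %.0f years\n", hardware_per_yr * years, years);
    return 0;
}

which lands at roughly 425 Elo from software and 350 Elo from hardware, consistent with the totals above.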

Kai