Computer chess progress over say the last 20 years?

Discussion of chess software programming and technical issues.

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Computer chess progress over say the last 20 years?

Post by bob »

Uri Blass wrote:
bob wrote:
syzygy wrote:
bob wrote:
syzygy wrote:
bob wrote:The point is simple. Faster hardware allowed us to do things that did not work with much slower hardware. That is, faster hardware allowed improvements in software that were not feasible with very slow hardware.
Faster hardware allowed us to tune chess engines at ultra-bullet time controls, which means they have now been perfectly tuned for playing at regular time control on hardware of 20 years ago.

But you have already left your original position, which was proved to be untenable:
bob wrote:What one can get away with at 3M nodes per second is quite a bit different from what you can get away with 100x (or 1000x) slower...
SF's search of today works just fine at regular time control on hardware that is 100x slower, because regular time control on such hardware corresponds to the conditions under which SF is being tested and tuned.
I haven't left my original position at all. Point still stands from my perspective. I can do things today I considered too expensive in 1995. I could do things in 1995 that I considered to be unbearably expensive in 1968.
But the undeniable mathematical reality is that what today tests fine at ultra-bullet would have worked, and still works, just as well on hardware that is 100x slower at time controls that are 100x longer.

So again, it is rather unlikely that Pawel's Rodent could be tuned to do better at 30 Knps, as Rodent most likely is better tuned for regular games at 30 Knps than for regular games at 3 Mnps or whatever it reaches on modern hardware, because most likely it has already been tuned at time controls on modern hardware that correspond to regular time controls at those 30 Knps.
Not necessarily. While they might do "just fine", who is testing to see what new ideas they can tone down or throw away at bullet? Nobody. Because playing bullet games is not the goal. For example, singular extensions. Not so good at bullet.
Singular extensions are good enough for bullet with the hardware of today (at least for Stockfish), and they only accept changes that work at bullet.
All I can say is that a few years back, when everybody started to copy the simplistic tt-singular idea from RobboLito, it didn't work at blitz. I took Stockfish and removed it and tested on my cluster, and it was very slightly stronger WITHOUT the tt-singular stuff. But that was a long time back.

Since singular extensions have been in for a LONG time, I doubt anyone has done much testing to see whether they help or hurt at bullet.
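
For readers who haven't followed the RobboLito-era discussions, here is a minimal sketch of the "tt-singular" mechanism being debated (illustrative only: the function and variable names are placeholders, and the depth condition and margin are representative values rather than Stockfish's or RobboLito's actual ones):

Code:

#include <functional>

// Placeholder search callback: searches the current node with window
// (alpha, beta) at the given depth, skipping the excluded move.
using SearchFn = std::function<int(int alpha, int beta, int depth, int excludedMove)>;

// Returns 1 if the hash (TT) move should be extended one ply, 0 otherwise.
int singular_extension(const SearchFn& search,
                       int ttMove, int ttValue, int ttDepth, int depth)
{
    // Only consider a hash move that failed high at sufficient depth.
    if (depth < 8 || ttMove == 0 || ttDepth < depth - 3)
        return 0;

    // Set a target score a margin below the hash score.
    int singularBeta = ttValue - 2 * depth;

    // Reduced-depth search of every move EXCEPT the hash move, with a
    // null window just below the target.
    int value = search(singularBeta - 1, singularBeta, (depth - 1) / 2, ttMove);

    // If nothing else comes close to the hash score, the hash move is
    // "singular" and gets searched one ply deeper.
    return (value < singularBeta) ? 1 : 0;
}

In an engine the returned extension is simply added to the depth used when the hash move itself is searched; whether that extra node cost pays off at bullet as well as at longer time controls is exactly what is being argued above.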
IanO
Posts: 501
Joined: Wed Mar 08, 2006 9:45 pm
Location: Portland, OR

Re: Computer chess progress over say the last 20 years?

Post by IanO »

Vinvin wrote:
fierz wrote:Dear all,

I would like to produce a graph showing the progress of the top chess computer over time - in one version as absolute numbers (Elo vs. year), and in another version as pure algorithmic progress (Elo vs. year of release, on identical hardware). I can find a lot of information on the web - for example, the CCRL rating list has lots of current and older programs running on equal hardware to compare. What I'm lacking, though, is a list of which chess program was best in a given year (so that I can look up the top program per year in the CCRL list).

Has anyone got an idea where I could find this info, or maybe already compiled such a list of progress in computer chess? I would especially like to be able to compare the influence that better hardware had and the influence that better software had.

One example of what I would like to do, using the CCRL (40/4) list (though maybe there is some better resource?):

2016: best engine is Stockfish with a single-CPU rating of 3246
2005: Fruit 2.1 was one of the best engines around, with a rating of 2693
2003: Ruffian 1.05 (?) was one of the best engines, with a rating of 2608

I would like to extend this list with more years, and further back, but I'm unsure which engines were top when - can anyone help?

I find it very interesting that pure algorithmic progress between 2003 and 2016 yielded 600 rating points! Guesstimating 60 Elo for a speed doubling, 18 months as the doubling time from Moore's law, and 13 years, I get ~500 Elo for hardware improvement in that time, so software appears to have made more progress than hardware. I would also be interested to hear your thoughts on what makes the difference between Stockfish and e.g. Ruffian - what was invented algorithm-wise in the last 10 years (LMR? what else?) and how much did it contribute to the improvement? Or is it all due to better testing?

best regards
Martin
Here are some data from 1995 to 2010 based on the SSDF rating list [and CCRL]: http://www.talkchess.com/forum/viewtopi ... 801#532801
Here is that list extended past 2010, now based entirely on CCRL:

2011-2013 Houdini
2014 Stockfish 5, Houdini 4, Komodo 7, Gull 3
2014 Komodo 8, Stockfish 5, Houdini 4, Gull 3
2015 Stockfish 6, Komodo 8, Houdini 4, Gull 3
2015 Komodo 9.2, Stockfish 6, Houdini 4, Gull 3
2016 Stockfish 7, Komodo 9.4

Houdini dominated the post-Rybka era from 2011 to 2013; Stockfish/Fishtest and Komodo have been leap-frogging each other since then. Is this the new "eternal rivalry" we last saw between Junior and Shredder? And like Naum before it, Gull 3 has become "bottom of the best".

It is also encouraging to see how many engines have scaled the previous peak of Rybka's top strength. Besides the top four above, I count Critter, Fire, and Equinox. Ginkgo has also surpassed Rybka on the IPON list.

I am also grateful that we now have the TCEC as a long-time-control data point for the strongest engines. Long may it continue!
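
For what it's worth, fierz's back-of-the-envelope estimate above works out as follows (a quick sketch using only his own guesstimates: roughly 60 Elo per speed doubling, an 18-month doubling time, and the CCRL 40/4 ratings he quoted):

Code:

#include <cstdio>

int main() {
    double years     = 2016 - 2003;       // the 13-year span in question
    double doublings = years / 1.5;       // ~8.7 doublings at 18 months each
    double hardware  = doublings * 60.0;  // ~520 Elo attributable to hardware
    double software  = 3246 - 2608;       // ~638 Elo on fixed hardware
                                          // (Stockfish 7 vs Ruffian 1.05, CCRL 40/4)
    std::printf("hardware: ~%.0f Elo, software: ~%.0f Elo\n", hardware, software);
    return 0;
}

By this rough measure software gained slightly more than hardware over the period, which is the conclusion he drew.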