Chess AI engine in 5 years.

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Leo
Posts: 1104
Joined: Fri Sep 16, 2016 6:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Chess AI engine in 5 years.

Post by Leo »

How much better will chess AI engines be in 5 years? How much more room for improvement is there?
Advanced Micro Devices fan.
Dann Corbit
Posts: 12792
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Chess AI engine in 5 years.

Post by Dann Corbit »

The strength of hardware grows exponentially over time. Roughly speaking, computers double in strength every year.
The strength of chess engines also grows exponentially over time. Roughly speaking, this year's best chess engine will be twice as strong as last year's best engine running on identical hardware.
We can expect an AI chess engine on the top hardware with the best program to be around 100 times stronger.
Now, with Elo being an exponential scale, it will not seem as impressive as it deserves to be.
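To put a rough number on that, here is a minimal sketch reading "N times stronger" as N:1 winning odds, which is only one possible interpretation:

import math

def odds_to_elo(odds: float) -> float:
    # Elo gap implied by winning odds, from E = 1 / (1 + 10^(-d/400))
    return 400.0 * math.log10(odds)

for ratio in (2, 10, 100):
    print(f"{ratio:>4}x stronger -> ~{odds_to_elo(ratio):.0f} Elo")

So even a genuine factor of 100 would show up as something like 800 Elo on that reading, and less in practice once draws compress the measured difference.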
There are always reports that Moore's law will fail and is failing. But I believe that new technologies will be invented to keep it going.
Relays -> vacuum tubes -> transistors -> integrated circuits -> SMP chiplets -> who knows what.
Look at the ludicrous power of NVIDIA's CUDA-based RTX 4090: around 1,300 trillion operations per second (TOPS).
When silicon runs out, maybe we will run on graphene.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Hai
Posts: 693
Joined: Sun Aug 04, 2013 1:19 pm

Re: Chess AI engine in 5 years.

Post by Hai »

Leo wrote: Sat Oct 12, 2024 11:25 pm How much better will chess AI engines be in 5 years? How much more room for improvement is there?
Better at:
-fortress detection
-detection of dead minor and major pieces
-test suites
-tactical, positional, and strategic understanding
-search
-creative thinking (e.g. position X, where the king runs from a8 to h8 to h1 to a1 to help its pieces)
-wider and deeper calculation
-evaluation
-using 8-piece Syzygy endgame tablebases

NPU engines
Much stronger hardware
Jouni
Posts: 3652
Joined: Wed Mar 08, 2006 8:15 pm
Full name: Jouni Uski

Re: Chess AI engine in 5 years.

Post by Jouni »

The progress has almost stopped now (CCRL):

Stockfish 17 64-bit 8CPU 3808
Stockfish 16 64-bit 8CPU 3807
Stockfish 16.1 64-bit 8CPU 3804
Stockfish 15 64-bit 8CPU 3802
Jouni
chesskobra
Posts: 354
Joined: Thu Jul 21, 2022 12:30 am
Full name: Chesskobra

Re: Chess AI engine in 5 years.

Post by chesskobra »

I don't think there will be any fundamental progress. They will employ bigger and bigger neural networks to continue gaining a few Elo points.

Some people don't agree that executable size is relevant, but look at how the SF binary has grown (most likely because of NN size):

SF17 - 78 MB
SF16 - 40 MB
SF13 - 22 MB

So in 5 years we will have 500 MB networks. But look at Alexandria, a top-10-or-so engine with an executable of 3.4 MB (of which the NN is barely 3 MB). That is impressive.
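As a rough sanity check on that extrapolation, here is a sketch assuming approximate release years (SF13 around 2021, SF17 around 2024) and that the size keeps growing at the same exponential rate; this is obviously not any official roadmap:

sizes_mb = {"SF13": (2021, 22.0), "SF17": (2024, 78.0)}   # year, size from the post
(y0, s0), (y1, s1) = sizes_mb["SF13"], sizes_mb["SF17"]
annual_growth = (s1 / s0) ** (1.0 / (y1 - y0))            # roughly 1.5x per year
projected = s1 * annual_growth ** 5                       # five more years of that
print(f"annual growth ~{annual_growth:.2f}x, 5-year projection ~{projected:.0f} MB")

That prints a projection in the several-hundred-MB range, so 500 MB is the right order of magnitude, though the exact figure depends heavily on which two releases you anchor the growth rate on.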
Jouni wrote: Sun Oct 13, 2024 10:33 am The progress has almost stopped now (CCRL):

Stockfish 17 64-bit 8CPU 3808
Stockfish 16 64-bit 8CPU 3807
Stockfish 16.1 64-bit 8CPU 3804
Stockfish 15 64-bit 8CPU 3802
Brunetti
Posts: 424
Joined: Tue Dec 08, 2009 1:37 pm
Location: Milan, Italy
Full name: Alex Brunetti

Re: Chess AI engine in 5 years.

Post by Brunetti »

Dann Corbit wrote: Sun Oct 13, 2024 5:22 am The strength of hardware grows exponentially over time. Roughly speaking, computers double in strength every year.
The strength of chess engines also grows exponentially over time. Roughly speaking, this year's best chess engine will be twice as strong as last year's best engine running on identical hardware.
We can expect an AI chess engine on the top hardware with the best program to be around 100 times stronger.
What does ChatGPT think about this? :) (And I agree.)

The statement contains a mix of accurate ideas and some overly optimistic or simplified claims:

"The strength of hardware grows exponentially over time": This is an optimistic take based on Moore's Law, which predicted that the number of transistors on a chip would double roughly every two years. However, as mentioned earlier, this exponential growth has slowed significantly due to physical and technological limitations. While new technologies such as parallel processing, GPUs, and AI-specific hardware (like Tensor Processing Units) have boosted performance in certain areas, the general pace of exponential growth has declined. It’s also worth noting that increased transistor count doesn't always translate directly to proportional increases in performance, especially in modern systems where other factors, like memory bottlenecks and energy consumption, play a bigger role .

"Computers double in strength every year": This is an overstatement. While Moore's Law predicted doubling every two years, doubling every year is overly optimistic. Improvements today are also more specialized, focusing on specific areas like AI processing, which means performance gains can vary significantly depending on the application .

"Chess engines grow exponentially in strength": Chess engines have indeed become stronger over time, but their improvement isn't strictly exponential. Gains are tied to both hardware improvements and advances in algorithms. For example, major leaps have occurred when engines like AlphaZero used neural networks instead of brute-force searching, but we can't expect such exponential leaps every year. Additionally, Elo ratings for engines reflect diminishing returns; a doubling of computational power doesn’t directly translate to a doubling of Elo.

"AI chess engine on top hardware... 100 times stronger": This figure is highly speculative. While improvements in hardware and AI have led to stronger engines, the jump to “100 times stronger” is not realistic in terms of Elo. The Elo scale is logarithmic, so significant hardware improvements might result in smaller perceived improvements in Elo rating, making such jumps unlikely.

"Moore's Law will keep going due to new technologies": There are ongoing efforts to extend the principles of Moore’s Law through innovations like chip stacking, quantum computing, and the use of new materials like graphene. However, we are reaching the limits of silicon-based technologies, and breakthroughs in alternative materials are still in development. The progression from relays to vacuum tubes to transistors was revolutionary, but predicting the future is much harder—there’s no guarantee that similar leaps will continue indefinitely .

"Look at the power of RTX 4090": The NVIDIA RTX 4090 is indeed an example of how GPUs have evolved to handle massive computational loads, particularly in tasks like AI, deep learning, and rendering. It can perform trillions of operations per second, but these improvements are domain-specific and don’t necessarily translate into general CPU performance.

Conclusion:

The post is optimistic and contains valid points about the general improvement of hardware and chess engines, but it simplifies the rate of growth and overlooks the challenges and limits we are approaching with current technology. While the future holds promise, particularly with AI and potential new materials, the rate of growth may not be as explosive or consistent as described.

Alex
Viz
Posts: 223
Joined: Tue Apr 09, 2024 6:24 am
Full name: Michael Chaly

Re: Chess AI engine in 5 years.

Post by Viz »

Jouni wrote: Sun Oct 13, 2024 10:33 am The progress has almost stopped now (CCRL):

Stockfish 17 64-bit 8CPU 3808
Stockfish 16 64-bit 8CPU 3807
Stockfish 16.1 64-bit 8CPU 3804
Stockfish 15 64-bit 8CPU 3802
It's only because CCRL isn't a suitable list for the top engines; the top of the list is killed by Elo compression.
If you look at play from unbalanced positions there is reasonable progress.
Well, even with balanced positions there is; it's just that CCRL's error bars are too big and its time controls too long to show it:
https://tests.stockfishchess.org/tests/ ... f9b33d15a0
You may say that 7 Elo is not a lot, but in fact it wins 4.5 times more game pairs than it loses, which is a big deal. It just doesn't look as impressive when you draw 95% of the time.
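An illustrative calculation of that effect (a sketch only: the 95% draw rate and the simple win/draw/loss model are assumptions, and the real fishtest result above uses pentanomial game-pair statistics, which make the ratio even more lopsided):

def expected_score(elo_diff: float) -> float:
    # standard Elo expected score for the stronger side
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

elo_diff = 7.0
draw_rate = 0.95                   # assumed, per the post
score = expected_score(elo_diff)   # about 0.510
decisive = 1.0 - draw_rate
margin = 2.0 * (score - 0.5)       # score = win + 0.5*draw, so win - loss = 2*(score - 0.5)
win = (decisive + margin) / 2.0
loss = (decisive - margin) / 2.0
print(f"win {win:.1%}, loss {loss:.1%}, ratio {win / loss:.1f}:1")

Even in this crude per-game model a 7 Elo edge means winning more than twice as often as losing; counting game pairs, as fishtest does, stretches that further.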
towforce
Posts: 12511
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: Chess AI engine in 5 years.

Post by towforce »

Leo wrote: Sat Oct 12, 2024 11:25 pm How much better will chess AI engines be in 5 years? How much more room for improvement is there?

All chess engines are AI engines. There are two sources of knowledge:

1. Generating a game tree

2. The evaluation code

By "AI engines", you probably meant evaluation code powered by a trained NN.

Whatever its source, knowledge has diminishing returns, and the computer chess world is already encountering the "death by draw" issue. It is therefore likely that the best gains in computer chess have already been taken.

If somebody wants to stun the computer chess world one more time, the best course of action would be to solve chess: many people say that this cannot be done - and especially not in the next 5 years. My analysis is that it could be: there will exist a relatively simple model for chess, but it won't be found by human analysis, building game trees, or training NNs on massive data sets.
Human chess is partly about tactics and strategy, but mostly about memory
Dann Corbit
Posts: 12792
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Chess AI engine in 5 years.

Post by Dann Corbit »

Brunetti wrote: Sun Oct 13, 2024 11:59 am
Dann Corbit wrote: Sun Oct 13, 2024 5:22 am The strength of hardware grows exponentially over time. Roughly speaking, computers double in strength every year.
The strength of chess engines also grows exponentially over time. Roughly speaking, this year's best chess engine will be twice as strong as last year's best engine running on identical hardware.
We can expect an AI chess engine on the top hardware with the best program to be around 100 times stronger.
What does ChatGPT think about this? :) (And I agree.)

The statement contains a mix of accurate ideas and some overly optimistic or simplified claims:

"The strength of hardware grows exponentially over time": This is an optimistic take based on Moore's Law, which predicted that the number of transistors on a chip would double roughly every two years. However, as mentioned earlier, this exponential growth has slowed significantly due to physical and technological limitations. While new technologies such as parallel processing, GPUs, and AI-specific hardware (like Tensor Processing Units) have boosted performance in certain areas, the general pace of exponential growth has declined. It’s also worth noting that increased transistor count doesn't always translate directly to proportional increases in performance, especially in modern systems where other factors, like memory bottlenecks and energy consumption, play a bigger role .

"Computers double in strength every year": This is an overstatement. While Moore's Law predicted doubling every two years, doubling every year is overly optimistic. Improvements today are also more specialized, focusing on specific areas like AI processing, which means performance gains can vary significantly depending on the application .

"Chess engines grow exponentially in strength": Chess engines have indeed become stronger over time, but their improvement isn't strictly exponential. Gains are tied to both hardware improvements and advances in algorithms. For example, major leaps have occurred when engines like AlphaZero used neural networks instead of brute-force searching, but we can't expect such exponential leaps every year. Additionally, Elo ratings for engines reflect diminishing returns; a doubling of computational power doesn’t directly translate to a doubling of Elo.

"AI chess engine on top hardware... 100 times stronger": This figure is highly speculative. While improvements in hardware and AI have led to stronger engines, the jump to “100 times stronger” is not realistic in terms of Elo. The Elo scale is logarithmic, so significant hardware improvements might result in smaller perceived improvements in Elo rating, making such jumps unlikely.

"Moore's Law will keep going due to new technologies": There are ongoing efforts to extend the principles of Moore’s Law through innovations like chip stacking, quantum computing, and the use of new materials like graphene. However, we are reaching the limits of silicon-based technologies, and breakthroughs in alternative materials are still in development. The progression from relays to vacuum tubes to transistors was revolutionary, but predicting the future is much harder—there’s no guarantee that similar leaps will continue indefinitely .

"Look at the power of RTX 4090": The NVIDIA RTX 4090 is indeed an example of how GPUs have evolved to handle massive computational loads, particularly in tasks like AI, deep learning, and rendering. It can perform trillions of operations per second, but these improvements are domain-specific and don’t necessarily translate into general CPU performance.

Conclusion:

The post is optimistic and contains valid points about the general improvement of hardware and chess engines, but it simplifies the rate of growth and overlooks the challenges and limits we are approaching with current technology. While the future holds promise, particularly with AI and potential new materials, the rate of growth may not be as explosive or consistent as described.

Alex
Software improvements by the SF team are focused on improving the net to a large degree, but that will not produce any sort of exponential improvement in strength. Branching-factor improvements, however, are exponential. The current top engines have an effective branching factor of about 1.4, which makes it sound as if chess cannot improve much more, since 1.0 is perfect. But if you achieved a branching factor of 1, you would instantly see all the way to the 12,000-ply limit. So search improvements can easily maintain exponential progress.
See, for instance: viewtopic.php?t=55374
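A rough sketch of why the effective branching factor dominates: with a fixed node budget, reachable depth is roughly log(nodes)/log(b), which blows up as b approaches 1 (the budget below is just an assumed figure):

import math

def reachable_depth(nodes: float, branching_factor: float) -> float:
    # depth d such that branching_factor ** d is roughly nodes
    return math.log(nodes) / math.log(branching_factor)

budget = 1e9   # assumed node budget for a single move
for b in (2.0, 1.6, 1.4, 1.2, 1.05):
    print(f"b = {b:4.2f}: ~{reachable_depth(budget, b):5.0f} plies")

At b = 1 the expression diverges, which is the point about the 12,000-ply limit above.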

As far as GPU cards having specialized problem domains, chess is clearly right in the wheelhouse. Put Lc0 on a more powerful GPU and it instantly scales right along with the card. Furthermore, AMD is working on using Infinity Fabric to share all memory resources (e.g. video RAM and system RAM) transparently, so there is no need to copy problems to and from video RAM (the biggest current bottleneck for GPU-based programs). This architecture already exists on the top-end commercial cards; eventually it will work its way down to consumer cards.

As for technology failing, see Ray Kurzweil's ideas on exponential growth:
https://en.wikipedia.org/wiki/The_Singularity_Is_Near
Computation speed has been growing exponentially for centuries, not just the few decades of Moore's Law (e.g. Napier's bones, Pascal's mechanical calculator). When the relay was top dog, I am sure nobody imagined there would be integrated circuits.

As for the factor of 100 being overly optimistic, I think it is actually a very safe estimate. Of course, it won't look very impressive in Elo terms, but Elo numbers are meaningless anyway except relative to a pool of contestants, and they don't measure strength the way people think. Scaling Elo so that bleeding-edge Stockfish on a 10,000-node cluster has an Elo of 10 would be just as valid as any other number.
(2^5)^2 = 1024, so the assumption of software and hardware each doubling in speed every year actually gives a factor of about a thousand, leaving a margin of ten over my factor of 100.
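Converted to Elo with the commonly quoted rule of thumb of roughly 70 Elo per doubling of effective speed (an assumption, not a measured constant, and one that shrinks at long time controls), that looks like this:

import math

def speedup_to_elo(speedup: float, elo_per_doubling: float = 70.0) -> float:
    # assumed ~70 Elo per doubling of effective (hardware x software) speed
    return elo_per_doubling * math.log2(speedup)

combined = (2 ** 5) ** 2   # hardware and software each doubling yearly for 5 years
print(f"{combined}x -> ~{speedup_to_elo(combined):.0f} Elo")

So a thousandfold speedup would read as only a few hundred Elo on a rating list, which is the sense in which it won't look very impressive.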

Every year there are dire predictions of technological advance hitting a wall. I remember back when CPUs were around 800 MHz, there were predictions in this forum that 1 GHz was impossible due to trace creep on the ICs.

Currently, a Stockfish 14.1 cluster posts the following stats:
102,542,506,414 NPS on 131,072 threads (see the Ipman Chess site).
Eventually, our phones will be able to do that.

Look at this list:
https://en.wikipedia.org/wiki/History_of_supercomputing
A bit ragged, but it seems to be accelerating.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Viz
Posts: 223
Joined: Tue Apr 09, 2024 6:24 am
Full name: Michael Chaly

Re: Chess AI engine in 5 years.

Post by Viz »

Dann Corbit wrote: Sun Oct 13, 2024 1:13 pm Software improvements by the SF team are focused on improving the net to a large degree
This is just false.
When we had HCE, improvements in strength were roughly 80% search / 20% eval.
With NNUE, after we stabilized it a bit, it's more like 60-70% search and the rest is eval. That still means search improves, on average, significantly faster than evaluation.