The Next Big Thing in Computer Chess?


smatovic
Posts: 2672
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

The Next Big Thing in Computer Chess?

Post by smatovic »

...some two cents on this (just picking up what was already a topic in different threads here):

We are getting closer to the perfect chess oracle, a chess engine with perfect play and 100% draw rate.

The Centaurs have already reported that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tablebases. But meanwhile they get close to a 100% draw rate even with common hardware, and therefore unbalanced opening books were introduced, where one side has a slight advantage, but again: draws.

Over the past years the #1 open-source engine Stockfish has lowered the effective branching factor (EBF) of its search from ~2 to ~1.5 to now ~1.25. This indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.
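For illustration, the EBF is usually estimated from the node counts of successive iterative-deepening iterations; a minimal sketch in C++ with made-up node counts (not measurements from Stockfish or any other engine):

Code: Select all

#include <cstdio>
#include <vector>

int main() {
    // Node counts per iterative-deepening iteration; made-up numbers that
    // grow by roughly 1.25x per ply, not measurements from any engine.
    std::vector<double> nodes = {1.00e6, 1.25e6, 1.56e6, 1.95e6, 2.44e6};
    for (size_t d = 1; d < nodes.size(); ++d)
        std::printf("iteration %zu -> %zu: EBF ~ %.2f\n",
                    d, d + 1, nodes[d] / nodes[d - 1]);
    return 0;
}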

About a decade ago it was estimated that at around ~4000 Elo we would reach a 100% draw rate among engines on our computer rating lists. Now the best engines are in the range of ~3750 Elo (CCRL), which translates to an estimated ~3600 human FIDE Elo (Magnus Carlsen is currently rated 2852 in Blitz). Larry Kaufman (grandmaster and computer chess legend) mentioned that with current techniques we might still have ~50 Elo to gain, and it seems everybody is waiting for the next big thing in computer chess to happen.
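For reference, the standard logistic Elo model gives the expected score for a rating difference; a small worked example (the generic Elo formula, not tied to any particular rating list's calibration):

Code: Select all

#include <cmath>
#include <cstdio>

// Expected score under the standard logistic Elo model:
// E = 1 / (1 + 10^(-diff/400)).
double expected_score(double elo_diff) {
    return 1.0 / (1.0 + std::pow(10.0, -elo_diff / 400.0));
}

int main() {
    const double diffs[] = {50.0, 100.0, 200.0};
    for (double diff : diffs)
        std::printf("+%3.0f Elo -> expected score %.3f\n",
                    diff, expected_score(diff));
    return 0;
}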

We have replaced the HCE, the handcrafted evaluation function, of our computer chess engines with neural networks. We now train neural networks on billions of labeled chess positions, and via pattern recognition they evaluate positions better than what a human is able to encode by hand. The NNUE technique, efficiently updatable neural networks used in alpha-beta search engines, gave a boost of 100 to 200 Elo points.
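To make the idea concrete, here is a heavily simplified sketch of the NNUE trick: the first-layer activations (the accumulator) are updated incrementally when a piece moves instead of recomputing the whole network. The layer sizes and feature encoding below are made-up placeholders, not a real engine architecture:

Code: Select all

#include <array>
#include <cstdint>

constexpr int NUM_FEATURES = 768;  // e.g. piece-type x square (illustrative)
constexpr int ACC_SIZE     = 256;  // first hidden layer width (illustrative)

struct Network {
    // one column of first-layer weights per input feature
    std::array<std::array<int16_t, ACC_SIZE>, NUM_FEATURES> w1{};
    std::array<int16_t, ACC_SIZE> b1{};
};

struct Accumulator {
    std::array<int32_t, ACC_SIZE> v{};

    void reset(const Network& net) {       // start from the biases; a full
        for (int i = 0; i < ACC_SIZE; ++i) // refresh would then re-add all
            v[i] = net.b1[i];              // currently active features
    }
    void add_feature(const Network& net, int f) {    // piece appears on a square
        for (int i = 0; i < ACC_SIZE; ++i) v[i] += net.w1[f][i];
    }
    void remove_feature(const Network& net, int f) { // piece leaves a square
        for (int i = 0; i < ACC_SIZE; ++i) v[i] -= net.w1[f][i];
    }
};

// A quiet move only touches two columns of w1 -- that incremental update
// is where the speed of NNUE comes from.
void make_quiet_move(Accumulator& acc, const Network& net,
                     int piece, int from, int to) {
    acc.remove_feature(net, piece * 64 + from);
    acc.add_feature(net, piece * 64 + to);
}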

What could be the next thing, the next boost?

If we assume we still have 100 to 200 Elo points to gain until perfect play (normal chess from the standard opening position, ending in a draw), if we assume an effective branching factor of ~1.25 with HCSH, hand-crafted search heuristics, and that neural networks are superior in this regard, then we could imagine replacing HCSH with neural networks too and lowering the EBF further, closer to 1.

Such a technique has already been proposed: NNOM++, move-ordering neural networks. But so far it seems that the additional computational effort needed does not pay off.
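Just to sketch what NN-based move ordering in an alpha-beta engine could look like (an illustration only, not the actual NNOM++ proposal; policy_score is a hypothetical stand-in for a small network, and its per-move cost is exactly the overhead that so far does not pay off):

Code: Select all

#include <algorithm>
#include <utility>
#include <vector>

struct Position {};                      // placeholder position type
struct Move { int from = 0, to = 0; };   // placeholder move type

// Stand-in for a hypothetical small policy network mapping
// (position, move) to a search priority.
float policy_score(const Position&, const Move& m) {
    return static_cast<float>(-(m.from + m.to));  // dummy value, illustration only
}

void order_moves(const Position& pos, std::vector<Move>& moves) {
    std::vector<std::pair<float, Move>> scored;
    scored.reserve(moves.size());
    for (const Move& m : moves)
        scored.emplace_back(policy_score(pos, m), m);   // one network call per move
    std::stable_sort(scored.begin(), scored.end(),
                     [](const auto& a, const auto& b) { return a.first > b.first; });
    for (size_t i = 0; i < moves.size(); ++i)
        moves[i] = scored[i].second;                    // best-scored moves searched first
}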

What else?

In today's chess engines we use neural networks in the classic way, for pattern recognition, but the shift now is towards pattern creation, the so-called generative AIs. They generate text, source code, images, audio, video and 3D models. I would say the race is now on for the next level: an AI which is able to code a chess engine and outperform humans at this task.

An AI coding a chess engine also has a philosophical implication: such an event is what the transhumanists call the takeoff of the Technological Singularity, when the AI starts to feed its own development in a feedback loop and exceeds human understanding.

Moore's Law still has something in the pipeline, from currently 5nm to 3nm to maybe 2nm and 1+nm, so we can expect even larger and more performant neural networks for generative AIs in the future. Maybe in ~6 years there will be a kind of peak or silicon sweet spot (current transistor density/efficiency vs. the financial investment needed in fab processes and research), but currently there is so much money flowing into this domain that progress for the next couple of years seems assured.

Interesting times ahead.

--
Srdja
smatovic
Posts: 2672
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

...on bigger NNUE neural networks for position evaluation: there is a knowledge vs. search trade-off. Bigger neural networks need more compute cycles, which lowers nps and therefore search depth; you gain knowledge, you lose search depth. With wider vector units in upcoming CPUs (AVX-512, SVE2, etc.) we will IMO see incremental progress in network size.
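A back-of-the-envelope model of that trade-off, with all numbers being rough assumptions rather than measurements: if the eval takes a certain share of the time per node, a doubled first layer on a 2x wider vector unit can land at roughly the same nps:

Code: Select all

#include <cstdio>

int main() {
    // All values are illustrative assumptions, not benchmarks.
    const double eval_fraction = 0.6;  // assumed share of node time spent in eval
    const double net_scale     = 2.0;  // doubled first-layer size
    const double simd_speedup  = 2.0;  // e.g. 256-bit -> 512-bit vector units

    // relative time per node = search part + eval part scaled by net size / SIMD width
    double rel_time = (1.0 - eval_fraction) + eval_fraction * net_scale / simd_speedup;
    std::printf("relative nps with 2x net on 2x wider SIMD: %.0f%%\n", 100.0 / rel_time);
    return 0;
}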

--
Srdja
smatovic
Posts: 2672
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

A question arises: if we say we have at least two further 2x increases in consumer compute power ahead (VPU or GPU), what would a doubling of network size (with corresponding training data) at the same NPS give in Elo strength? And where is the limit? Idk. Maybe the Lc0 or Stockfish guys can give a hint on this.

--
Srdja
Dann Corbit
Posts: 12545
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: The Next Big Thing in Computer Chess?

Post by Dann Corbit »

The next big thing will be when the GPUs and CPUs transparently share memory resources so that we do not have to copy to and from GPU memory.
Suddenly, engines like LC0 will become unbeatable.

It's not just the copy time that we save, it is a whole new programming paradigm.
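To get a feeling for the copy overhead, a back-of-the-envelope calculation; the batch size, bandwidth and inference time below are rough assumed numbers, not measurements of Lc0:

Code: Select all

#include <cstdio>

int main() {
    // Assumed round numbers for illustration only.
    const double batch_bytes   = 256 * 112 * 8 * 8 * 4.0; // 256 positions of float input planes
    const double pcie_bytes_s  = 16e9;                     // rough effective PCIe x16 bandwidth
    const double infer_seconds = 2e-3;                     // assumed GPU inference time per batch

    double copy_seconds = batch_bytes / pcie_bytes_s;
    std::printf("copy: %.3f ms, inference: %.3f ms, copy share: %.1f%%\n",
                copy_seconds * 1e3, infer_seconds * 1e3,
                100.0 * copy_seconds / (copy_seconds + infer_seconds));
    return 0;
}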
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
smatovic
Posts: 2672
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

Yes, a new programming paradigm. In the HPC realm, unified/coherent memory across CPU and GPU is already present via CXL, Infinity Fabric or NVLink, and vendors already couple beefy CPU+GPU(+TPU)+HBM on a single package via chiplets. Idk how this will trickle down to the consumer market, but there is already Apple's M-series silicon, CPU+GPU+TPU with unified memory: new programming paradigm, new possibilities.

--
Srdja
hgm
Posts: 27836
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: The Next Big Thing in Computer Chess?

Post by hgm »

Just let computers compete at playing a game that is more interesting (less drawish) than orthodox Chess.

Tenjiku Shogi would be an interesting candidate, as it requires extremely deep tactics.
yurikvelo
Posts: 710
Joined: Sat Dec 06, 2014 1:53 pm

Re: The Next Big Thing in Computer Chess?

Post by yurikvelo »

smatovic wrote: Wed Apr 12, 2023 10:12 am We are getting closer to the perfect chess oracle, a chess engine with perfect play and 100% draw rate.
Or we approach equal-strength, identical-knowledge self-play noobs. They always draw, as neither can see or exploit the opponent's blunders.
smatovic wrote: Wed Apr 12, 2023 10:12 am 100% draw rate with common hardware ....
branching factor ... now ~1.25, this indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.
Or it indicates that the search prunes good moves early, hence the very small diminishing returns from big hardware: 10x computing power cannot find better moves because the algorithm pruned them early.

Select 1 million non-drawn positions from TB7 and let pure SF-NNUE secure 1 million wins against a pure TB7 opponent.
smatovic wrote: Wed Apr 12, 2023 10:12 am we could imagine replacing HCSH with neural networks too and lowering the EBF further, closer to 1.
EBF = 1 would mean that 100x computing power gives no Elo advantage, as everything else was pruned; there is no need to spend more time computing.
Werewolf
Posts: 1797
Joined: Thu Sep 18, 2008 10:24 pm

Re: The Next Big Thing in Computer Chess?

Post by Werewolf »

Dann Corbit wrote: Wed Apr 12, 2023 4:13 pm The next big thing will be when the GPUs and CPUs transparently share memory resources so that we do not have to copy to and from GPU memory.
Suddenly, engines like LC0 will become unbeatable.

It's not just the copy time that we save, it is a whole new programming paradigm.
You mean SoC?
Dann Corbit
Posts: 12545
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: The Next Big Thing in Computer Chess?

Post by Dann Corbit »

Werewolf wrote: Wed Apr 12, 2023 11:41 pm
Dann Corbit wrote: Wed Apr 12, 2023 4:13 pm The next big thing will be when the GPUs and CPUs transparently share memory resources so that we do not have to copy to and from GPU memory.
Suddenly, engines like LC0 will become unbeatable.

It's not just the copy time that we save, it is a whole new programming paradigm.
You mean SoC?
A separate concept. It is possible for a system on a chip to use an architecture that has transparent memory access. However, most SoC implementations are not able to do that yet.

There are implementations where everything is a separate component and yet they have transparent memory access. Srdja mentioned three ways this is currently implemented in industrial systems.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Dann Corbit
Posts: 12545
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: The Next Big Thing in Computer Chess?

Post by Dann Corbit »

yurikvelo wrote: Wed Apr 12, 2023 11:06 pm
smatovic wrote: Wed Apr 12, 2023 10:12 am We are getting closer to the perfect chess oracle, a chess engine with perfect play and 100% draw rate.
Or we approach equal-strength, identical-knowledge self-play noobs. They always draw, as neither can see or exploit the opponent's blunders.
smatovic wrote: Wed Apr 12, 2023 10:12 am 100% draw rate with common hardware ....
branching factor ... now ~1.25, this indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.
Or it indicates that the search prunes good moves early, hence the very small diminishing returns from big hardware: 10x computing power cannot find better moves because the algorithm pruned them early.

Select 1 million non-drawn positions from TB7 and let pure SF-NNUE secure 1 million wins against a pure TB7 opponent.
smatovic wrote: Wed Apr 12, 2023 10:12 am we could imagine replacing HCSH with neural networks too and lowering the EBF further, closer to 1.
EBF = 1 would mean that 100x computing power gives no Elo advantage, as everything else was pruned; there is no need to spend more time computing.
An EBF of 1 does not mean perfect chess (nor can it ever mean that, because the search cannot be exhaustive with speculative pruning like null move and late move reductions). It is trivial to make a program with an EBF of 1: simply search only the first move at every node. With current chess programs it would actually play pretty well, and I guess it could beat really old programs. The goal is to approach an EBF of 1 (attaining an EBF of exactly 1.0 would mean there are serious bugs in the program, because the program is not a perfect oracle). On the other hand, an EBF of 1.1 or even 1.05 might be achievable.
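A toy illustration of that degenerate "EBF of exactly 1" case, a search that follows only the first move at every node so the node count grows linearly with depth (move generation and evaluation are stubs):

Code: Select all

#include <cstdio>
#include <vector>

struct Position {};                      // placeholder
struct Move { int from = 0, to = 0; };   // placeholder

std::vector<Move> generate_moves(const Position&) { return {Move{}}; } // stub: one dummy move
Position make_move(const Position& pos, const Move&) { return pos; }   // stub
int evaluate(const Position&) { return 0; }                            // stub

// "EBF = 1" search: only the first generated move is followed at every
// node, so the node count grows linearly with depth, not exponentially.
int first_move_only(const Position& pos, int depth, long long& nodes) {
    ++nodes;
    if (depth == 0) return evaluate(pos);
    std::vector<Move> moves = generate_moves(pos);
    if (moves.empty()) return evaluate(pos);   // terminal node
    return -first_move_only(make_move(pos, moves.front()), depth - 1, nodes);
}

int main() {
    Position root;
    long long nodes = 0;
    first_move_only(root, 20, nodes);
    std::printf("depth 20 searched, %lld nodes (linear in depth)\n", nodes); // prints 21
    return 0;
}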
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.