hgm wrote: ↑Fri Apr 14, 2023 10:15 pm BTW, using a 'policy network' in an AB engine should not be very difficult either. I think the main reason AlphaZero preferred to use MCTS rather than AB was that in the latter case it would be less obvious how to train such a network. But I guess that when you use a minimax search in the training, and then analyze the tree to order all moves that would have been able to produce a beta cutoff by the number of nodes it would have taken to search them through alpha-beta, you could train a NN with that info.

I don't see why an AB engine couldn't just use the exact same style of deep neural networks found in PUCT engines like Leela/AlphaZero. A DNN policy can be used in move ordering, and instead of calling a quiescence search at depth == 0, the engine can return the DNN evaluation instead.
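A rough sketch of that idea with the chess specifics abstracted away: a toy negamax alpha-beta over a dictionary game tree, where a stand-in `policy` function orders moves and a stand-in `value` function replaces quiescence search at depth 0. Both stubs are hypothetical placeholders for real network calls; a real engine would feed board features to the nets instead of looking up a table.

```python
import math

# Toy game tree standing in for chess positions (strings as "positions").
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
EVAL = {"root": 0, "a": 1, "b": 2, "a1": 3, "a2": 5, "b1": -1, "b2": 7}

def policy(pos, move):
    # Stand-in for a DNN policy head: a prior score for (pos, move).
    return EVAL[move]

def value(pos):
    # Stand-in for a DNN value head: eval from the side to move's perspective.
    return EVAL[pos]

def alphabeta(pos, depth, alpha, beta):
    moves = TREE.get(pos, [])
    if depth == 0 or not moves:
        return value(pos)                     # DNN eval instead of qsearch
    # The policy net drives move ordering: likely-best moves are searched
    # first, which maximises the number of beta cutoffs.
    for m in sorted(moves, key=lambda mv: policy(pos, mv), reverse=True):
        score = -alphabeta(m, depth - 1, -beta, -alpha)
        if score >= beta:
            return beta                       # fail-hard cutoff
        alpha = max(alpha, score)
    return alpha

print(alphabeta("root", 2, -math.inf, math.inf))  # prints 3
```

The search result is the same as with any move ordering; the policy only changes how quickly bad branches get cut.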
The Next Big Thing in Computer Chess?
Moderator: Ras
- Full name: Alvin Peng
Re: The Next Big Thing in Computer Chess?
- Full name: Brian D. Smith
Re: The Next Big Thing in Computer Chess?
The best evidence against NNs encoding deep patterns is their relative weakness in end games: with all that knowledge applied to a simple position, that's where they should be stronger!
I didn't flesh it out, but my first answer was an 8-man tablebase. Combine that with A/B search once the material count is determined to be down to a count where A/B is statistically superior... and you have a boost in endgame play.
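The hand-off described above can be sketched in a few lines: probe the tablebase when few enough pieces remain, otherwise fall back to A/B search. `probe_tb` and `ab_search` are hypothetical stand-ins passed in as callables; the piece count is read off a FEN string.

```python
TB_MAX_PIECES = 8  # assumption: the 8-man tablebase proposed above

def piece_count(fen):
    # Count letters in the board field of a FEN string; each letter is a piece.
    board = fen.split()[0]
    return sum(c.isalpha() for c in board)

def best_move(fen, probe_tb, ab_search):
    if piece_count(fen) <= TB_MAX_PIECES:
        return probe_tb(fen)    # perfect play straight from the tablebase
    return ab_search(fen)       # normal alpha-beta otherwise

START = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
KRK = "8/8/4k3/8/8/4K3/4R3/8 w - - 0 1"
print(piece_count(START), piece_count(KRK))  # prints 32 3
```

With the KRK position the dispatcher would hit the tablebase; from the start position it would search.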
- Full name: Graham Laight
Re: The Next Big Thing in Computer Chess?
CornfedForever wrote: ↑Sat Apr 15, 2023 2:55 am I didn't flesh it out, but my first answer was an 8-man tablebase. Combine that with A/B search once the material count is determined to be down to a count where A/B is statistically superior... and you have a boost in endgame play.
To clarify: is this an improved way to train an NN (as opposed to to an improved way to play chess)?
Once again, for various reasons, I have come to believe that NNs tend to encode a large number of surface (shallow) patterns rather than a small number of deep patterns.
The 8-man endgame tablebase is expected to be larger than 700 TB in size (link). Training an NN that is going to be tiny in comparison to this dataset is likely to miss a huge number of features, IMO. I'm going to be cheeky and say that there would probably be more value in finding the deep patterns in a smaller endgame tablebase!

Human chess is partly about tactics and strategy, but mostly about memory
Re: The Next Big Thing in Computer Chess?
smatovic wrote: ↑Wed Apr 12, 2023 10:12 am The Centaurs reported already that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tables. But meanwhile they get close to the 100% draw rate with common hardware, and therefore unbalanced opening books were introduced, where one side has a slight advantage, but again draws.

Does a Centaur have an advantage over a standalone chess engine (both with the same endgame tables and opening book)?
If so, the chess engine has room for improvement (at least at long time controls): there is no reason why human plus engine should be stronger than the engine alone.
- Full name: Srdja Matovic
Re: The Next Big Thing in Computer Chess?
Probably true, but the Centaurs pick the set-up to play against others.
--
Srdja
- Full name: Jouni Uski
Re: The Next Big Thing in Computer Chess?
Lc0 has surprisingly improved by +50 Elo in a short time! Look at the TCEC. And there is an even better net already, Lc0_Dag-T1-3087500, but it needs super hardware to run.

Jouni
- Full name: Srdja Matovic
Re: The Next Big Thing in Computer Chess?
Idk how Lc0 and SF are trained for end games nowadays; there are EGTBs, so there seems little need to train NNs specifically for end games. A dedicated end game neural network trained against an EGTB is for sure feasible, but makes little sense. And, generally, end games contain IMO more "shallow chess patterns" than positions in the opening and middle game. Further, MCTS-PUCT as in Lc0 shines more in positional play and AB NNUE as in SF shines more in tactics; this is due to the underlying nature of the two algorithms.
And, again, an AB chess engine consists of a depth-limited search and an evaluation function; if AB NN engines do not shine in end games, they are probably just not trained accordingly.
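The post above calls a dedicated endgame net trained against an EGTB feasible. A minimal sketch of what that training-data loop could look like, where `sample_position` and `probe_wdl` are hypothetical stubs standing in for a legal-position generator and a real tablebase probe:

```python
import random

def sample_position(rng):
    # Stand-in for sampling a random legal endgame position.
    return rng.randint(0, 99)

def probe_wdl(pos):
    # Stand-in for an EGTB win/draw/loss probe: -1 loss, 0 draw, +1 win.
    return (-1, 0, 1)[pos % 3]

def build_training_set(n, seed=0):
    # Label each sampled position with the tablebase's perfect-play WDL
    # result; any regressor/classifier could then be fit on these pairs.
    rng = random.Random(seed)
    return [(pos, probe_wdl(pos)) for pos in
            (sample_position(rng) for _ in range(n))]
```

The point of the sketch is only that the labels come for free from the tablebase, so no self-play is needed for this slice of the game.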
towforce wrote: ↑Fri Apr 14, 2023 10:08 pm There are other bits and pieces of evidence, like NNs needing to generate more positions than human GMs to play to the same standard. Although NN computer chess easily beats human GMs, the evidence is that the human GMs "understand" most positions better - which strongly implies that they have deeper patterns encoded.
[..]

Ever played against Lc0 with a depth 1 search?
--
Srdja
- Full name: Srdja Matovic
Re: The Next Big Thing in Computer Chess?
To encode both search and eval in a perfect-play engine, imagine a 5-dimensional cellular automaton -> project Iota

https://en.wikipedia.org/wiki/Cellular_automaton
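The linked concept in one small runnable example: a 1-D elementary cellular automaton (Rule 110). The 5-dimensional version imagined for project Iota is speculative; this only shows the local-update principle every CA is built on.

```python
def step(cells, rule=110):
    # One synchronous update of a 1-D binary CA with wrap-around edges.
    # Each cell's next value is looked up from the rule number, indexed by
    # the 3-bit neighbourhood (left, self, right).
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (mid << 1) | right
        out.append((rule >> idx) & 1)
    return out

print(step([0, 0, 0, 1, 0, 0, 0]))  # prints [0, 0, 1, 1, 0, 0, 0]
```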
--
Srdja
- Full name: Graham Laight
Re: The Next Big Thing in Computer Chess?
smatovic wrote: ↑Fri Apr 14, 2023 10:17 pm
towforce wrote: ↑Fri Apr 14, 2023 10:08 pm [...] Regarding a knowledge graph: it's basically a good idea. How are you going to find the best vertex on this graph, though? To put it another way, if you're at a given vertex, how are you going to decide which edge to travel along to get to the next vertex? [...]
I am not yet sure, but I have it on my project list, "Theta"; the issue is RAM, it would need a lot of it, and I have the idea to use RDF/SPARQL as a graph database. The project is planned for some distant future...
https://en.wikipedia.org/wiki/Resource_ ... _Framework
--
Srdja
I was pondering states, and I came up with a way to use a knowledge graph to do something characteristically human - come up with a plan!
In bare outline form:
* Each vertex on the graph represents a state (something like "a type of position")
* Nearby vertices are states that can be reached from the current state (obviously this is a directed graph: just because you can go from state A to state B doesn't necessarily mean you can go from state B to state A)
* Your aim is to get from the current state to a connected state (or find a viable path to a nearby connected state) which is better than the current state
How does that sound?
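One way to make the outline above concrete, under the stated assumptions (states as vertices, directed edges, a score per state): a breadth-first search for the shortest path to any reachable state that scores better than the current one. The path returned is, in the sketch's terms, the "plan"; all names here are illustrative.

```python
from collections import deque

def plan(graph, score, start):
    """Shortest path from `start` to any strictly better-scoring state.

    `graph` maps state -> list of directly reachable states (directed, as
    the outline notes: A -> B does not imply B -> A). Returns None when no
    improvement is reachable.
    """
    seen, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        state = path[-1]
        if score[state] > score[start]:
            return path                      # found a better state
        for nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

g = {"A": ["B"], "B": ["C"], "C": []}
s = {"A": 0, "B": 0, "C": 2}
print(plan(g, s, "A"))  # prints ['A', 'B', 'C']
```

Breadth-first search gives the fewest intermediate states; swapping in Dijkstra with edge costs would prefer cheap plans over short ones.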
Human chess is partly about tactics and strategy, but mostly about memory