The Next Big Thing in Computer Chess?

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

alvinypeng
Posts: 36
Joined: Thu Mar 03, 2022 7:29 am
Full name: Alvin Peng

Re: The Next Big Thing in Computer Chess?

Post by alvinypeng »

hgm wrote: Fri Apr 14, 2023 10:15 pm BTW, using a 'policy network' in an AB engine should not be very difficult either. I think the main reason AlphaZero preferred to use MCTS rather than AB was that in the latter case it would be less obvious how to train such a network. But I guess that when you use a minimax search in the training, and then analyze the tree to order all moves that would have been able to produce a beta cutoff by the number of nodes it would have taken to search them through alpha-beta, you could train a NN with that info.
I don't see why an AB engine couldn't just use the exact same style of deep neural networks found in PUCT engines like Leela/AlphaZero. A DNN policy can be used in move ordering. And instead of calling a quiescence search at depth == 0, it could return the DNN evaluation.
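To make the idea concrete, here is a minimal, runnable sketch of a negamax alpha-beta search that orders moves by a "policy" prior and returns a "value network" score at depth 0 in place of quiescence search. The game, the `policy()` and `value()` functions, and all names are toy stand-ins invented for illustration; in a real engine these would be DNN inference calls on a chess position.

```python
MATE = 100000

class Toy:
    """Toy two-player game: players alternately add/subtract 1..3 to a total."""
    def __init__(self):
        self.total = 0   # signed running total
        self.turn = 1    # +1 = first player, -1 = second player
    def legal_moves(self):
        return [1, 2, 3]
    def make(self, m):
        self.total += self.turn * m
        self.turn = -self.turn
    def unmake(self, m):
        self.turn = -self.turn
        self.total -= self.turn * m

def value(pos):
    # Stand-in for a value network: score from the side to move's view.
    return pos.turn * pos.total

def policy(pos):
    # Stand-in for a policy network: a prior probability per legal move.
    return {1: 0.2, 2: 0.3, 3: 0.5}

def alphabeta(pos, depth, alpha, beta):
    if depth == 0:
        return value(pos)                    # NN eval replaces qsearch
    priors = policy(pos)
    # Policy-based move ordering: try high-prior moves first.
    moves = sorted(pos.legal_moves(),
                   key=lambda m: priors.get(m, 0.0), reverse=True)
    for m in moves:
        pos.make(m)
        score = -alphabeta(pos, depth - 1, -beta, -alpha)
        pos.unmake(m)
        if score >= beta:
            return beta                      # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha
```

Better move ordering is exactly where a policy head pays off in alpha-beta: the earlier a cutoff move is tried, the smaller the searched tree.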
CornfedForever
Posts: 648
Joined: Mon Jun 20, 2022 4:08 am
Full name: Brian D. Smith

Re: The Next Big Thing in Computer Chess?

Post by CornfedForever »

towforce wrote: Fri Apr 14, 2023 10:08 pm [...] interesting project: extract chess knowledge patterns into a knowledge graph.

The best evidence against NNs encoding deep patterns is their relative weakness in end games: with all that knowledge applied to a simple position, that's where they should be stronger!


I didn't flesh it out, but my first answer was an 8-man tablebase. Combine that with A/B search once the material count drops to where A/B is statistically superior...and you have a boost in endgame play.
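The dispatch being described can be sketched in a few lines: consult an exact lookup (standing in for an 8-man tablebase) once the piece count is low enough, and fall back to the normal search otherwise. Everything here is a toy placeholder; a real engine would probe Syzygy-style tables and run its alpha-beta search.

```python
TB_MAX_PIECES = 8  # hypothetical 8-man tablebase limit

# Stand-in "tablebase": exact best moves for a few low-piece positions.
TABLEBASE = {"KQvK-pos1": "Qb7", "KRvK-pos1": "Rb1"}

def piece_count(pos_id):
    # Toy convention: pieces are named before the dash, e.g. "KQvK".
    return len(pos_id.split("-")[0].replace("v", ""))

def search(pos_id):
    return "search-move"   # placeholder for the regular A/B search

def best_move(pos_id):
    if piece_count(pos_id) <= TB_MAX_PIECES and pos_id in TABLEBASE:
        return TABLEBASE[pos_id]   # exact result, no search needed
    return search(pos_id)          # normal search above the threshold
```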
User avatar
towforce
Posts: 12344
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: The Next Big Thing in Computer Chess?

Post by towforce »

CornfedForever wrote: Sat Apr 15, 2023 2:55 am
towforce wrote: Fri Apr 14, 2023 10:08 pmThe best evidence against NNs encoding deep patterns is their relative weakness in end games: with all that knowledge applied to a simple position, that's where they should be stronger!
I didn't flesh it out, but my first answer was an 8-man tablebase. Combine that with A/B search once the material count drops to where A/B is statistically superior...and you have a boost in endgame play.

To clarify: is this an improved way to train an NN (as opposed to an improved way to play chess)?

Once again, for various reasons, I have come to believe that NNs tend to encode a large number of surface (shallow) patterns rather than a small number of deep patterns.

The 8-man endgame tablebase is expected to be larger than 700 TB in size (link). Training an NN that is going to be tiny in comparison to this dataset is likely to miss a huge number of features IMO. I'm going to be cheeky and say that, IMO, there would probably be more value in finding the deep patterns in a smaller endgame tablebase! :)
Human chess is partly about tactics and strategy, but mostly about memory
syzygy
Posts: 5693
Joined: Tue Feb 28, 2012 11:56 pm

Re: The Next Big Thing in Computer Chess?

Post by syzygy »

smatovic wrote: Wed Apr 12, 2023 10:12 amThe Centaurs already reported that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tables. But meanwhile they get close to a 100% draw rate with common hardware, and therefore unbalanced opening books were introduced, where one side has a slight advantage, but again draws.
Does a Centaur have an advantage over a standalone chess engine (both with the same endgame tables and opening book)?

If so, the chess engine has room for improvement (at least at long time controls). There is no reason why human plus engine should be stronger than the engine alone.
smatovic
Posts: 3223
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

Probably true, but the Centaurs pick the set-up to play against others.

--
Srdja
Jouni
Posts: 3621
Joined: Wed Mar 08, 2006 8:15 pm
Full name: Jouni Uski

Re: The Next Big Thing in Computer Chess?

Post by Jouni »

Lc0 has surprisingly improved by +50 Elo in a short time! Look at the TCEC. And there is an even better net already, Lc0_Dag-T1-3087500 :o . But it needs super hardware to run.
Jouni
Werewolf
Posts: 1990
Joined: Thu Sep 18, 2008 10:24 pm

Re: The Next Big Thing in Computer Chess?

Post by Werewolf »

Jouni wrote: Sun Apr 16, 2023 5:59 pm Lc0 has surprisingly improved by +50 Elo in a short time! Look at the TCEC. And there is an even better net already, Lc0_Dag-T1-3087500 :o . But it needs super hardware to run.
Is that better than BT2?
smatovic
Posts: 3223
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

towforce wrote: Fri Apr 14, 2023 10:08 pm [..]
The best evidence against NNs encoding deep patterns is their relative weakness in end games: with all that knowledge applied to a simple position, that's where they should be stronger!
Idk how Lc0 and SF are trained for endgames nowadays. There are EGTBs, so there seems little need to train NNs specifically for endgames; a dedicated endgame neural network trained against EGTBs is certainly feasible, but makes little sense. And, generally, endgames contain IMO more "shallow chess patterns" than positions in the opening and middlegame. Further, MCTS-PUCT as in Lc0 shines more in positional play, and AB NNUE as in SF shines more in tactics; this is due to the underlying nature of the two algorithms.
towforce wrote: Fri Apr 14, 2023 10:08 pm Of course, the art of the end game is "knowing what's going to happen well ahead", and hence to know what's important in the position now: somehow, the human GMs do this, but the ANNs don't.
And, again, an AB chess engine consists of a depth-limited search and an evaluation function; if AB NN engines do not shine in the endgame, they are probably just not trained accordingly.
towforce wrote: Fri Apr 14, 2023 10:08 pm There are other bits and pieces of evidence, like NN's needing to generate more positions than human GMs to play to the same standard. Although NN computer chess easily beats human GMs, the evidence is that the human GMs "understand" most positions better - which strongly implies that they have deeper patterns encoded.
[..]
Ever played against Lc0 with depth 1 search?

--
Srdja
smatovic
Posts: 3223
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

towforce wrote: Fri Apr 14, 2023 10:08 pm
To encode both search and eval in a perfect-play engine, imagine a 5-dimensional cellular automaton -> project Iota ;)

https://en.wikipedia.org/wiki/Cellular_automaton

--
Srdja
User avatar
towforce
Posts: 12344
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: The Next Big Thing in Computer Chess?

Post by towforce »

smatovic wrote: Fri Apr 14, 2023 10:17 pm
towforce wrote: Fri Apr 14, 2023 10:08 pm [...]
Regarding a knowledge graph: it's basically a good idea. How are you going to find the best vertex on this graph, though? To put it another way, if you're at a given vertex, how are you going to decide which edge to travel along to get to the next vertex?
[...]
I am not yet sure, but I have it on my project list, "Theta". The issue is RAM, it would need a lot of it, and I have the idea to use RDF/SPARQL as the graph database. The project is planned for some distant future...
https://en.wikipedia.org/wiki/Resource_ ... _Framework

--
Srdja

I was pondering states, and I came up with a way to use a knowledge graph to do something characteristically human - come up with a plan!

In bare outline form:

* Each vertex on the graph represents a state (something like "a type of position")

* Nearby vertices are states that can be reached from the current state (obviously this is a directed graph: just because you can go from state A to state B doesn't necessarily mean you can go from state B to state A)

* Your aim is to get from the current state to a connected state (or find a viable path to a nearby connected state) that is better than the current state

How does that sound?
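The outline above can be sketched directly: a directed graph whose vertices are position-types, each with an evaluation, where a "plan" is a path from the current state to the best-scoring reachable state. The state names and scores below are invented purely for illustration.

```python
from collections import deque

# Directed edges: state -> states reachable from it (not symmetric).
GRAPH = {
    "bad_bishop":    ["open_file", "locked_center"],
    "open_file":     ["rook_on_7th"],
    "locked_center": [],
    "rook_on_7th":   [],
}
# Evaluation of each position-type (made-up numbers).
SCORE = {"bad_bishop": -0.3, "open_file": 0.2,
         "locked_center": -0.1, "rook_on_7th": 0.9}

def plan(start):
    """Breadth-first search for the best-scoring state reachable
    from `start`; returns the path (the 'plan') leading to it."""
    best_path = [start]
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if SCORE[path[-1]] > SCORE[best_path[-1]]:
            best_path = path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return best_path
```

Here `plan("bad_bishop")` walks through "open_file" to reach "rook_on_7th": trade the bad bishop, seize the open file, then invade on the seventh rank, which is recognizably plan-like. The hard part, of course, is mapping a concrete position onto a vertex and verifying that each edge is actually achievable against resistance.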
Human chess is partly about tactics and strategy, but mostly about memory