The Next Big Thing in Computer Chess?

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

smatovic
Posts: 3223
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

towforce wrote: Mon May 08, 2023 1:04 pm I was pondering states, and I came up with a way to use a knowledge graph to do something characteristically human - come up with a plan!
[...]
As mentioned, I have this on my todo list for later, project Theta. I planned to use RDF/SPARQL to encode and query chess knowledge patterns for given positions, but I have not yet worked out how to implement the game tree search... As you mention (as far as I understood it), you could parse the game tree and derive rules for how to reach certain positions in the tree; as mentioned, this would be analogous to a multidimensional cellular automaton.

--
Srdja
DmitriyFrosty
Posts: 58
Joined: Mon Mar 27, 2023 8:29 pm
Full name: Dmitry Frosty

Re: The Next Big Thing in Computer Chess?

Post by DmitriyFrosty »

Large networks can harm Stockfish, especially in blitz. On long time controls it will also not be stronger than the same SF with a regular 47 MB network.
syzygy
Posts: 5693
Joined: Tue Feb 28, 2012 11:56 pm

Re: The Next Big Thing in Computer Chess?

Post by syzygy »

alvinypeng wrote: Fri Apr 14, 2023 11:09 pm
hgm wrote: Fri Apr 14, 2023 10:15 pm BTW, using a 'policy network' in an AB engine should not be very difficult either. I think the main reason AlphaZero preferred to use MCTS rather than AB was that in the latter case it would be less obvious how to train such a network. But I guess that when you use a minimax search in the training, and then analyze the tree to order all moves that would have been able to produce a beta cutoff by the number of nodes it would have taken to search them through alpha-beta, you could train a NN with that info.
I don't see why an AB engine couldn't just use the exact same style of deep neural networks found in PUCT engines like Leela/AlphaZero. A DNN policy can be used in move ordering. And instead of calling a quiescence search at depth == 0, return the DNN evaluation instead.
Because there is no hardware that lets us evaluate Lc0-type neural networks without latency. PUCT can deal with latency; AB cannot.
alvinypeng
Posts: 36
Joined: Thu Mar 03, 2022 7:29 am
Full name: Alvin Peng

Re: The Next Big Thing in Computer Chess?

Post by alvinypeng »

syzygy wrote: Sat May 13, 2023 2:37 pm
alvinypeng wrote: Fri Apr 14, 2023 11:09 pm
hgm wrote: Fri Apr 14, 2023 10:15 pm BTW, using a 'policy network' in an AB engine should not be very difficult either. I think the main reason AlphaZero preferred to use MCTS rather than AB was that in the latter case it would be less obvious how to train such a network. But I guess that when you use a minimax search in the training, and then analyze the tree to order all moves that would have been able to produce a beta cutoff by the number of nodes it would have taken to search them through alpha-beta, you could train a NN with that info.
I don't see why an AB engine couldn't just use the exact same style of deep neural networks found in PUCT engines like Leela/AlphaZero. A DNN policy can be used in move ordering. And instead of calling a quiescence search at depth == 0, return the DNN evaluation instead.
Because there is no hardware that lets us evaluate Lc0-type neural networks without latency. PUCT can deal with latency; AB cannot.
Latency just means the NPS won't be that high. But the idea should still work.
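
Rough numbers make the disagreement concrete. A sequential AB search that blocks on one network evaluation per node is capped by the round-trip latency of that evaluation; the figures below are illustrative assumptions, not measurements of Lc0 or NNUE:

```python
# Back-of-the-envelope: how per-evaluation latency caps a sequential
# alpha-beta search. Both latency figures are assumed for illustration.
gpu_latency_s = 1e-3  # assumed round-trip for one GPU net evaluation
nnue_eval_s = 1e-6    # assumed cost of one incremental CPU NNUE eval

gpu_nps = 1 / gpu_latency_s   # roughly 1,000 nodes/second
nnue_nps = 1 / nnue_eval_s    # roughly 1,000,000 nodes/second
print(f"GPU-blocked AB: ~{gpu_nps:,.0f} NPS; NNUE AB: ~{nnue_nps:,.0f} NPS")
```

Under these assumed numbers, the GPU-blocked searcher loses about three orders of magnitude in NPS; whether the stronger evaluation buys that back is exactly the point in dispute.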
syzygy
Posts: 5693
Joined: Tue Feb 28, 2012 11:56 pm

Re: The Next Big Thing in Computer Chess?

Post by syzygy »

alvinypeng wrote: Sat May 13, 2023 4:04 pm
syzygy wrote: Sat May 13, 2023 2:37 pm
alvinypeng wrote: Fri Apr 14, 2023 11:09 pm
hgm wrote: Fri Apr 14, 2023 10:15 pm BTW, using a 'policy network' in an AB engine should not be very difficult either. I think the main reason AlphaZero preferred to use MCTS rather than AB was that in the latter case it would be less obvious how to train such a network. But I guess that when you use a minimax search in the training, and then analyze the tree to order all moves that would have been able to produce a beta cutoff by the number of nodes it would have taken to search them through alpha-beta, you could train a NN with that info.
I don't see why an AB engine couldn't just use the exact same style of deep neural networks found in PUCT engines like Leela/AlphaZero. A DNN policy can be used in move ordering. And instead of calling a quiescence search at depth == 0, return the DNN evaluation instead.
Because there is no hardware that lets us evaluate Lc0-type neural networks without latency. PUCT can deal with latency; AB cannot.
Latency just means the NPS won't be that high. But the idea should still work.
It won't play well.
alvinypeng
Posts: 36
Joined: Thu Mar 03, 2022 7:29 am
Full name: Alvin Peng

Re: The Next Big Thing in Computer Chess?

Post by alvinypeng »

syzygy wrote: Tue May 16, 2023 10:00 pm
alvinypeng wrote: Sat May 13, 2023 4:04 pm
syzygy wrote: Sat May 13, 2023 2:37 pm
alvinypeng wrote: Fri Apr 14, 2023 11:09 pm
hgm wrote: Fri Apr 14, 2023 10:15 pm BTW, using a 'policy network' in an AB engine should not be very difficult either. I think the main reason AlphaZero preferred to use MCTS rather than AB was that in the latter case it would be less obvious how to train such a network. But I guess that when you use a minimax search in the training, and then analyze the tree to order all moves that would have been able to produce a beta cutoff by the number of nodes it would have taken to search them through alpha-beta, you could train a NN with that info.
I don't see why an AB engine couldn't just use the exact same style of deep neural networks found in PUCT engines like Leela/AlphaZero. A DNN policy can be used in move ordering. And instead of calling a quiescence search at depth == 0, return the DNN evaluation instead.
Because there is no hardware that lets us evaluate Lc0-type neural networks without latency. PUCT can deal with latency; AB cannot.
Latency just means the NPS won't be that high. But the idea should still work.
It won't play well.
Lc0 on GPU, playing with 1 thread, minibatch size 1, and no prefetch, still plays incredibly strongly. In this situation, is there any advantage to using PUCT over AB, for latency reasons or otherwise?

My original statement only claimed that using an Lc0-type net in AB was possible, and said nothing about the strength of such an engine. However, I do believe it would be at least at superhuman level, and maybe even able to beat SF8. Whether that qualifies as playing "well" depends on your definition of "well", I guess.
smatovic
Posts: 3223
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The Next Big Thing in Computer Chess?

Post by smatovic »

alvinypeng wrote: Wed May 17, 2023 2:51 pm [...]
Well, that is the thing: how to make a predictor, an Lc0-like neural network, work competitively in an AB search alongside NNUE...

NNOM++ - Move Ordering Neural Networks?
https://talkchess.com/forum3/viewtopic.php?f=7&t=80364
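
The basic shape of policy-based move ordering in AB can be sketched in a toy form. Everything below is invented for illustration (a tiny hand-built tree with priors standing in for a trained policy head); the point is only that trying high-prior moves first produces earlier cutoffs:

```python
# Toy alpha-beta where a "policy" prior orders moves, mimicking how a
# network's policy head could replace heuristic move ordering.
# Tree, leaf values, and priors are all made up for illustration.

TREE = {
    "root": {"a": "A", "b": "B"},
    "A": {"c": 3, "d": 5},   # integers are leaf evaluations
    "B": {"e": 6, "f": 9},
}

# Hypothetical policy priors per position (higher = try first).
POLICY = {
    "root": {"a": 0.3, "b": 0.7},
    "A": {"c": 0.4, "d": 0.6},
    "B": {"e": 0.2, "f": 0.8},
}

def alphabeta(pos, alpha, beta, maximizing, leaves):
    node = TREE[pos]
    # Order moves by policy prior, best first: more cutoffs, fewer nodes.
    moves = sorted(node, key=lambda m: POLICY[pos][m], reverse=True)
    best = float("-inf") if maximizing else float("inf")
    for m in moves:
        child = node[m]
        if isinstance(child, int):   # leaf: static evaluation
            leaves.append(m)
            score = child
        else:
            score = alphabeta(child, alpha, beta, not maximizing, leaves)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:            # cutoff
            break
    return best

leaves = []
print(alphabeta("root", float("-inf"), float("inf"), True, leaves))  # 6
print(leaves)  # ['f', 'e', 'd'] -- leaf 'c' is never evaluated
```

With this ordering the search proves the root value while skipping one leaf; with the reverse ordering all four leaves would be visited. On a real tree the savings compound with depth, which is where the ordering quality has to pay for the network's evaluation cost.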

I am sure other people have tried, but you have to invest compute cycles and lose NPS, and hence search depth, and hence Elo again.

Just skipping QS won't work (horizon effect); you have a (positional) predictor and then have to verify the prediction via a (tactical) AB search + QS or MCTS playouts.
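
A minimal sketch of that predict-then-verify shape (all positions, scores, and the "hanging queen" reply below are invented; the predictor is a stub standing in for a positional network): the prediction only stands at the horizon once a tactical check has resolved the forcing moves.

```python
# Toy predict-then-verify at the search horizon. A positional
# "predictor" proposes a score, but it is only trusted directly in
# quiet positions; otherwise forcing tactics are resolved first.
# All positions and numbers here are made up for illustration.

def predictor(pos):
    """Stand-in for a positional net: static score per position."""
    return {"quiet": 0.3, "hanging_queen": 2.0}[pos]

def forcing_swings(pos):
    """Stub tactical generator: score swings of forcing replies."""
    return {"quiet": [], "hanging_queen": [-9.0]}[pos]  # queen falls

def horizon_eval(pos):
    """Return the prediction for quiet positions; otherwise apply the
    worst forcing reply first (a crude guard against the horizon effect)."""
    score = predictor(pos)
    tactics = forcing_swings(pos)
    if not tactics:
        return score            # quiet: prediction stands
    return score + min(tactics) # tactical: verify before trusting

print(horizon_eval("quiet"))          # 0.3  -- prediction trusted
print(horizon_eval("hanging_queen"))  # -7.0 -- tactics overturn it
```

In a real engine the verification step would be a full quiescence search or playouts rather than a one-ply lookup, but the division of labor is the same: positional prediction first, tactical confirmation second.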

--
Srdja
BrendanJNorman
Posts: 2583
Joined: Mon Feb 08, 2016 12:43 am
Full name: Brendan J Norman

Re: The Next Big Thing in Computer Chess?

Post by BrendanJNorman »

hgm wrote: Wed Apr 12, 2023 7:42 pm Just let computers compete at playing a game that is more interesting (less drawish) than orthodox Chess.

Tenjiku Shogi would be an interesting candidate, as it requires extremely deep tactics.
No need to corrupt chess so much.

Just make all chess 30+20, like the highest time control on Lichess.

The error rate is much higher, so no "draw death".
syzygy
Posts: 5693
Joined: Tue Feb 28, 2012 11:56 pm

Re: The Next Big Thing in Computer Chess?

Post by syzygy »

alvinypeng wrote: Wed May 17, 2023 2:51 pmMy original statement only claimed using a Lc0-type net in AB was possible, and nothing about the strength of such an engine. However, I do believe it would be at least at superhuman level, and maybe even be able to beat SF8. Whether or not that qualifies as playing "well" depends on your definition of "well" I guess.
I told you the reason why it isn't done. I don't know why you are arguing.
alvinypeng
Posts: 36
Joined: Thu Mar 03, 2022 7:29 am
Full name: Alvin Peng

Re: The Next Big Thing in Computer Chess?

Post by alvinypeng »

syzygy wrote: Fri May 19, 2023 10:13 pm
alvinypeng wrote: Wed May 17, 2023 2:51 pmMy original statement only claimed using a Lc0-type net in AB was possible, and nothing about the strength of such an engine. However, I do believe it would be at least at superhuman level, and maybe even be able to beat SF8. Whether or not that qualifies as playing "well" depends on your definition of "well" I guess.
I told you the reason why it isn't done. I don't know why you are arguing.
I agree with what you've said, so I wasn't arguing with you. You just think I was arguing with you because you completely misunderstood the original point I was trying to make.