Neural Net Endgame technique

Discussion of chess software programming and technical issues.

Alexander Lim
Posts: 24
Joined: Sun Mar 10, 2019 12:16 am
Full name: Alexander Lim

Neural Net Endgame technique

Post by Alexander Lim » Sun Jun 02, 2019 4:58 am

So can a neural net play the endgame or not? There seems to be a lot of discussion going on, with strong opinions and different criteria on what it means for a neural net to play the endgame. For example, do we need it to be able to replicate an endgame tablebase, or just to finish off an endgame in a reasonable manner? Can it be an 'endgame only' NN, or does it have to be proficient at all stages of play? And what about the trolling....

One thing missing from the discussions is actual examples of endgames being 'solved' by a neural net, so people just assume Leela's endgame weaknesses are an inherent neural net feature. I hope to rectify this in this post, and hopefully future discussions will take these examples into consideration.

(I'm aware of the efforts of the Ender project and was impressed with its ability to solve KQ vs kr, though it was unclear whether it can do, for example, KNB or KBB as well. Perhaps the author could comment?)

Anyway, here are some examples of standard 3- and 4-piece endgames. The original ChessFighter couldn't solve KNB vs k or KQ vs kr, but later nets are able to (though it's still a little shaky sometimes). The nets used are all 'proper' nets in the sense that they can play the opening and middlegame as well. The fact that they can also do the endgame is just a bonus.

Stockfish is not using any tablebases, though it gives a fairly optimal defence. ChessFighter is thinking for around 4-6 seconds per move at approximately 6000 nps.

This message will be split up into multiple posts...


KNB vs k

Alexander Lim
Posts: 24
Joined: Sun Mar 10, 2019 12:16 am
Full name: Alexander Lim

Re: Neural Net Endgame technique

Post by Alexander Lim » Sun Jun 02, 2019 5:13 am

KQ vs kr



Alexander Lim
Posts: 24
Joined: Sun Mar 10, 2019 12:16 am
Full name: Alexander Lim

Re: Neural Net Endgame technique

Post by Alexander Lim » Sun Jun 02, 2019 5:22 am

KBB vs k


jp
Posts: 523
Joined: Mon Apr 23, 2018 5:54 am

Re: Neural Net Endgame technique

Post by jp » Mon Jun 17, 2019 9:02 am

Alexander Lim wrote:
Sun Jun 02, 2019 4:58 am
One thing missing from the discussions is actual examples of endgames being 'solved' by a neural net, so people just assume Leela's endgame weaknesses are an inherent neural net feature. I hope to rectify this in this post, and hopefully future discussions will take these examples into consideration.

(I'm aware of the efforts of the Ender project and was impressed with its ability to solve KQ vs kr, though it was unclear whether it can do, for example, KNB or KBB as well. Perhaps the author could comment?)
Alexander, what are the differences between your engine and Leela that might be causing the different endgame performance? E.g., are you training it the same way? (For Ender, it's obvious what the training differences are.)

hgm
Posts: 23213
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Neural Net Endgame technique

Post by hgm » Mon Jun 17, 2019 9:31 am

A neural net of 3 neurons should already be enough (combined with PUCT search) to easily win KBNK. All that it has to know is that you have to drive the bare King into the corner, and which corner. For KBBK even a single neuron should be enough.
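To make that concrete, here is a minimal sketch of what such a tiny value function could look like. The three features and their weights are illustrative assumptions on my part, not a tested net or hgm's actual proposal; the point is just that three scalar features, used as the value head of a PUCT search, already encode 'herd the bare king to the bishop's corner'.

Code: Select all

# A minimal sketch of the "three neurons" idea for KBNK (my illustration;
# the features and weights are made up). Squares are 0..63, a1 = 0, h8 = 63.

def file_of(sq): return sq % 8
def rank_of(sq): return sq // 8

def cheb(a, b):
    # Chebyshev (king-move) distance between two squares.
    return max(abs(file_of(a) - file_of(b)), abs(rank_of(a) - rank_of(b)))

def kbnk_value(wk, wb, bk):
    # Value for White (K+B+N vs lone king); the knight's square is not
    # needed by these three features. Each feature is one "neuron":
    #   1. bare king's distance to the mating corner (a corner of the
    #      bishop's colour: a1/h8 for a dark-squared bishop, h1/a8 light)
    #   2. distance between the kings (the strong king must shepherd)
    #   3. bare king's distance to the nearest edge
    light_bishop = (file_of(wb) + rank_of(wb)) % 2 == 1
    corners = (7, 56) if light_bishop else (0, 63)
    to_corner = min(cheb(bk, c) for c in corners)
    king_gap = cheb(wk, bk)
    to_edge = min(file_of(bk), 7 - file_of(bk), rank_of(bk), 7 - rank_of(bk))
    # Illustrative weights: higher is better for White.
    return -1.0 * to_corner - 0.5 * king_gap - 0.7 * to_edge

Plugged in as the leaf evaluation, higher values pull the search toward positions where the defending king is driven to the correct corner.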

brianr
Posts: 338
Joined: Thu Mar 09, 2006 2:01 pm

Re: Neural Net Endgame technique

Post by brianr » Mon Jun 17, 2019 11:16 am

I did some lone-king training with small nets in September last year. It was with a version of Leela that might still have had the 50-move draw issue. In any case, an 8x1 net was fine for KQ or KR v lone K. However, for KBNvK an 8x1 net could not mate; it needed to be at least 16x2 for about two-thirds wins. But it was only trained with about 10% of the entire sample set of all possible KBNvK positions. I also tried restricting the lone king to just one quadrant to reduce the possible combinations from nearly 11 M to about 2.7 M (a rough count is sketched below). Generating test positions, training, and evaluating took about a week for each endgame, so by KBN I was testing with fewer than 200K games, and probably many more samples are needed.
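As a back-of-envelope check on that quadrant reduction (my own arithmetic, not brianr's exact filtering: it only excludes coincident or adjacent kings, not other illegalities, so it lands somewhat above the 11 M figure):

Code: Select all

# Rough count of KBNvK placements, sketching the quadrant restriction
# described above. Only coincident and adjacent kings are excluded.

def file_of(sq): return sq % 8
def rank_of(sq): return sq // 8

def adjacent(a, b):
    return max(abs(file_of(a) - file_of(b)), abs(rank_of(a) - rank_of(b))) <= 1

def count_kbnk(bk_squares):
    total = 0
    for wk in range(64):
        for bk in bk_squares:
            if wk != bk and not adjacent(wk, bk):
                total += 62 * 61  # bishop, then knight, on the remaining squares
    return total

quadrant = [sq for sq in range(64) if file_of(sq) < 4 and rank_of(sq) < 4]
print(count_kbnk(range(64)))  # 13,660,584  (~13.7 M)
print(count_kbnk(quadrant))   #  3,415,146  (exactly a quarter, by symmetry)

The restriction cuts the raw count to a quarter, in line with the nearly 11 M to 2.7 M reduction described above.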

While I agree a far smaller net should be capable of mating with KBNvK, in practice training one was vastly more difficult than for the KQ and KR cases. Also, there was quite a bit of work on "ender" endgame nets trained from tablebases on positions with something like 16 or fewer pieces, which were quite successful, but those were much larger nets, IIRC. Look for posts by user dkappe.
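The Ender pipeline isn't spelled out here, but the general idea of labelling positions from a tablebase can be sketched with python-chess's Syzygy probing. Everything below is an assumption about the shape of such a pipeline, not dkappe's actual code; "syzygy" is a placeholder path for locally downloaded tablebase files.

Code: Select all

# Sketch of tablebase labelling for supervised endgame training.
# Requires: pip install python-chess, plus local Syzygy tables.
import random
import chess
import chess.syzygy

def random_kbnk():
    # Place K+B+N vs lone K on distinct squares; retry until legal.
    while True:
        wk, wb, wn, bk = random.sample(range(64), 4)
        board = chess.Board(None)  # empty board
        board.set_piece_at(wk, chess.Piece(chess.KING, chess.WHITE))
        board.set_piece_at(wb, chess.Piece(chess.BISHOP, chess.WHITE))
        board.set_piece_at(wn, chess.Piece(chess.KNIGHT, chess.WHITE))
        board.set_piece_at(bk, chess.Piece(chess.KING, chess.BLACK))
        board.turn = chess.WHITE
        if board.is_valid():
            return board

with chess.syzygy.open_tablebase("syzygy") as tb:
    for _ in range(5):
        board = random_kbnk()
        wdl = tb.probe_wdl(board)  # +2 win, 0 draw, -2 loss, side to move
        print(board.fen(), wdl)    # (position, WDL) pair = training target

Each (position, WDL) pair would then serve as a training target for the value head.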
