SlowChess Blitz Classic 2.0

jonkr
Posts: 103
Joined: Wed Nov 13, 2019 12:36 am
Full name: Jonathan Kreuzer

Re: SlowChess Blitz Classic 2.0

Post by jonkr » Sat Sep 05, 2020 6:58 pm

I am planning to continue experimenting with chess, but given the slower rate of progress (making the time-spent-per-elo equation not very exciting) and plans to try stuff that will initially make it weaker, I can't be sure anything better will come from it.

I think the initial Blitz Classic 1.0 public release was June 2019, but I started working on it again almost 2 years ago. Still, I made way more progress than I initially expected; I was just hoping to clearly beat Fruit 2.1 when I started, and for a short time it dropped to just a bit above fairyMax level because I was converting a lot of stuff. (Bitboards for MoveGen and eval, computing the evaluation symmetrically for either color, "fixing" the old horrible code, and there were a lot of bugs. One of the bugs I remember from this time is that I wasn't clearing my move list, so each generation stage would just add more moves onto the end.)

Figuring out how to code and train a simple neural net for pawn structure is the current item on my list. Then I'll see if I can have it output multiple values that represent human-readable concepts for the structure and use them in the evaluation. Then maybe I'll try some specific endgame training to see if it improves in a smaller game space, like rook endgames. (It could very easily use these instead of the normal eval when appropriate.) Also, my random hope is to train for mate-finding and see obvious improvements there, since I find that interesting and many engines settle for trolling-type close-outs, especially neural net ones.

Also, I'm just training and experimenting to find the best training process, like I mentioned before. Trying without lichess.epd, starting from all-zero positional values, I got Dev to within 40 elo of 2.3, but I had to manually set initial values for some of the complicated terms/multipliers to get them moving, and I left the piece-square tables at the 2.3 values, which is something prone to overfitting without enough varied data. So I want to try making it fully automated. It was interesting how some values came out pretty different but may have fit together about the same.

I saw the Ethereal tuning post; like most posts/articles I skimmed it, but unless I sit down to work on something related I usually miss/forget key points. Does the Ethereal method score positions by search or by game result? I think it said it performed a search, but I don't know if that was used for scoring. I've been suspicious of game results for super-fast games; some can be pretty wrong.
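For reference, my understanding of the usual texel-style objective is a sigmoid fit of eval to game result, roughly like this sketch (the names and the constant K are placeholders, not anyone's actual code):

Code: Select all

	#include <cmath>
	#include <vector>

	// One tuning position: eval in centipawns plus the game result
	// (1.0 = white win, 0.5 = draw, 0.0 = white loss).
	struct TuningPos { double evalCp; double result; };

	// Mean squared error between game results and a sigmoid of the eval.
	// K is fit first so the sigmoid matches the engine's score scale.
	double TuningError(const std::vector<TuningPos>& data, double K)
	{
		double total = 0.0;
		for (const TuningPos& p : data)
		{
			double predicted = 1.0 / (1.0 + std::pow(10.0, -K * p.evalCp / 400.0));
			total += (p.result - predicted) * (p.result - predicted);
		}
		return total / data.size();
	}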

karger
Posts: 198
Joined: Tue Feb 02, 2010 1:27 am
Full name: John Karger

Re: SlowChess Blitz Classic 2.0

Post by karger » Mon Sep 07, 2020 5:31 pm

This engine's style of play is amazing. It plays combos and positions that no other engine (AB, NN, or NN/AB hybrid) uses or even considers making. Bravo... most unique engine ever.

AdminX
Posts: 5605
Joined: Mon Mar 13, 2006 1:34 pm
Location: Acworth, GA

Re: SlowChess Blitz Classic 2.0

Post by AdminX » Tue Sep 08, 2020 5:32 pm

I have only played/viewed two games using this engine, but I very much enjoy its style of play. Here is a game that just finished, with SlowChess using only 2 threads of my laptop i7-8550U vs LC0 running on an RTX 2070 Super.

"Good decisions come from experience, and experience comes from bad decisions."
__________________________________________________________________
Ted Summers

jonkr
Posts: 103
Joined: Wed Nov 13, 2019 12:36 am
Full name: Jonathan Kreuzer

Re: SlowChess Blitz Classic 2.0

Post by jonkr » Wed Sep 09, 2020 2:19 am

Thanks. Style can be hit or miss depending on the opponent and what positions show up in a particular game, but in my biased opinion Slow can play some interesting chess. Sometimes I like watching it dismantle older engines in FRC (e.g. Fruit 2.2, Slow Chess Classic 1.5, etc.) to remind myself of how it plays when the opposition isn't thwarting its plans. Engines like Stockfish or LC0 can seem to scare it into making bad moves and ruining its position, although I know I'm just not following the intricacies of games at that level.

I did make my own neural network class to play with; I found that understanding the value calculation of neural networks was more straightforward and way quicker to implement than I expected. The actual training is much, much harder: so far I've managed to have a net learn a piece-square table (slowly, and even more slowly with a 3-layer network). Implementing neural net learning and training that can improve play seems daunting.
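For anyone curious, the value calculation is just a dense-layer forward pass, something like this sketch (made-up names, not my actual class):

Code: Select all

	#include <utility>
	#include <vector>

	// One dense layer: out[j] = act( bias[j] + sum_i in[i] * weight[i][j] )
	struct DenseLayer
	{
		int inputs, outputs;
		std::vector<float> weights; // inputs * outputs, row-major
		std::vector<float> biases;  // outputs
		bool relu;                  // ReLU on hidden layers, linear on output
	};

	std::vector<float> Forward(const std::vector<DenseLayer>& net, std::vector<float> x)
	{
		for (const DenseLayer& layer : net)
		{
			std::vector<float> out(layer.outputs);
			for (int j = 0; j < layer.outputs; j++)
			{
				float sum = layer.biases[j];
				for (int i = 0; i < layer.inputs; i++)
					sum += x[i] * layer.weights[i * layer.outputs + j];
				out[j] = (layer.relu && sum < 0.0f) ? 0.0f : sum;
			}
			x = std::move(out);
		}
		return x; // a single element for a 1-output value net
	}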

pohl4711
Posts: 1455
Joined: Sat Sep 03, 2011 5:25 am
Location: Berlin, Germany

Re: SlowChess Blitz Classic 2.0

Post by pohl4711 » Wed Sep 09, 2020 9:26 am

The 7000-game test run of Slow Chess 2.3 is finished: +11 Elo over Slow Chess 2.2.

https://www.sp-cc.de

(Perhaps you have to clear your browser cache or reload the website.)

jonkr
Posts: 103
Joined: Wed Nov 13, 2019 12:36 am
Full name: Jonathan Kreuzer

Re: SlowChess Blitz Classic 2.0

Post by jonkr » Mon Nov 02, 2020 6:26 pm

I have released SlowChess 2.4 on the SlowChess webpage.
It scored +20 elo with the 8-move opening book versus SlowChess 2.3.
From the update notes:

Version 2.4 (+20 elo)
- Endgame neural nets for one-piece endgames (King/Rook/pawns, KQps, KBps, KNps, KBps v KNps, KRps v KBps, KRps v KNps)
- A general endgame network for up to 2 pieces.
- Some general eval tuning
Notes:
The rook net is the most trained; the least trained is probably the general one, but it was still a bit better than the handcrafted eval.
The material hash stores the index of which neural net, if any, to use when evaluating (a sketch of that kind of lookup is below).
All the data files are in the same directory as the exe now, and the nets/bitbases are packed into single files.
slow64 is the AVX build; there are also noAvx and noPop(/noAvx) builds.
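A rough sketch of the kind of material-hash lookup meant above (made-up names and layout, not the actual code):

Code: Select all

	#include <cstdint>

	const int NO_NET = -1;

	// One material-hash entry: the material key plus the index of the
	// specialized endgame net to use (or NO_NET for the normal eval).
	struct MaterialEntry
	{
		uint64_t key;
		int16_t netIndex;
	};

	// On a miss the caller would classify the material configuration
	// and store the result for next time.
	int GetEndgameNet(const MaterialEntry* table, unsigned tableSize, uint64_t materialKey)
	{
		const MaterialEntry& e = table[materialKey % tableSize];
		return (e.key == materialKey) ? e.netIndex : NO_NET;
	}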

This is the first version with endgame neural nets. They can provide pretty impressive play sometimes, but don't expect crazy-strong endgames yet. I didn't finish training; it was still -37 elo vs Stockfish 11 in the overall endgame test. I think getting to within -10 elo would easily be possible with just more training and a bit more experimenting with how to arrange/generalize the nets (based on the results of smaller subsets that I spent more time on). However, the computer time taken to train/test eventually started to seem less exciting, since I don't have many resources and I stopped trying to keep my second computer running chess all the time, so after a while I figured I should release what I have.

I do see some human-like qualities in the late-endgame play and eval with the neural nets now that I've been watching: how it will confidently keep a sure win, press its advantage, and trade down when needed, or recognize some sure draws to save itself that I expected would be too complicated for just board input. I also noticed that connected passed pawns, especially with a friendly king, are even more important in rook-and-pawn endgames than I expected. One downside is that it sometimes might take a few more moves to win. I think I've mostly avoided the endgame trolling that I really don't care for, but if I had time I would try harder to make sure the eval always shows a clear progression to the win.

Anyway, the main reason for releasing this moderate improvement is that, while I'm not abandoning chess or anything, working on something else sounds more interesting at this point. With the neural nets I wanted to experiment with a repeatable process that, while not strictly "from zero" (since it uses skilled opponents to reduce training time), was close enough that I could apply the same technique (and some of my same code) to other things. As I was figuring it out and just slowly training, trying out machine learning in other areas started to seem a more interesting experiment, since chess already has the NNUE code and nets for evaluating and learning, and it would take a long time for me to even start to get competitive with that. (Also, I think eval is probably the biggest and most chess-specific part of strength and style, and it's possible that chess engines all become NNUE engines, and then maybe search code starts to be considered a plugin library, since that's the other main part after eval; I think I read here that in Shogi they use the SF search. It's not a big deal or unexpected, but it is another reminder that at this point I'm making slow progress for relatively a lot of effort.) I do still want to try out stuff like adding a midgame pawn-structure NN and a king-safety NN sometime.

CMCanavessi
Posts: 916
Joined: Thu Dec 28, 2017 3:06 pm
Location: Argentina

Re: SlowChess Blitz Classic 2.0

Post by CMCanavessi » Mon Nov 02, 2020 10:38 pm

Nice!!! Awesome job!
Follow my tournament and some Leela gauntlets live at http://twitch.tv/ccls

AndrewGrant
Posts: 940
Joined: Tue Apr 19, 2016 4:08 am
Location: U.S.A
Full name: Andrew Grant

Re: SlowChess Blitz Classic 2.0

Post by AndrewGrant » Mon Nov 02, 2020 10:44 pm

jonkr wrote:
Mon Nov 02, 2020 6:26 pm
Version 2.4 (+20 elo)
- Endgame neural nets for one-piece endgames (King/Rook/pawns, KQps, KBps, KNps, KBps v KNps, KRps v KBps, KRps v KNps)
What exactly was the structure of these Networks?

I tried very hard to do King+Rooks+Pawns, and found _maybe_ 2 elo.
I tried very hard to do King+Knight+Pawns, and found 0 elo.
I tried very hard to do King+Bishop+Pawns, and found 0 elo.
I tried very hard to do King+Queen+Pawns, and found 0 elo.

I had planned on doing a small NN for all of these endgames, but gave up in the end.

jonkr
Posts: 103
Joined: Wed Nov 13, 2019 12:36 am
Full name: Jonathan Kreuzer

Re: SlowChess Blitz Classic 2.0

Post by jonkr » Mon Nov 02, 2020 10:56 pm

AndrewGrant wrote:
Mon Nov 02, 2020 10:44 pm
What exactly was the structure of these Networks?
For the one-piece nets:

Code: Select all

	whiteInputCount = 64 + 48 + 32; // 64 squares for the piece, 48 pawn squares, 32 king squares (horizontal symmetry)
	blackInputCount = 64 + 48 + 64; // black king uses all 64 squares

	network.SetInputCount(whiteInputCount + blackInputCount + 1); // +1 for side-to-move
	network.AddLayer(192, AT_RELU, LT_INPUT_TO_OUTPUTS);
	network.AddLayer(32, AT_RELU);
	network.AddLayer(32, AT_RELU);
	network.AddLayer(1, AT_LINEAR); // single evaluation output
	network.Build();
The inputs are all 0 or 1 for PieceType on Square. The weights are all converted to int16 fixed point after training.
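The conversion is basically just scaling, rounding, and clamping, along these lines (the scale factor here is a placeholder, not the actual value):

Code: Select all

	#include <algorithm>
	#include <cmath>
	#include <cstdint>
	#include <vector>

	// Scale float weights to int16 fixed point (e.g. scale 256 keeps
	// 8 fractional bits), rounding and clamping to the int16 range.
	std::vector<int16_t> QuantizeWeights(const std::vector<float>& w, float scale)
	{
		std::vector<int16_t> q(w.size());
		for (size_t i = 0; i < w.size(); i++)
		{
			float v = std::round(w[i] * scale);
			q[i] = (int16_t)std::clamp(v, -32768.0f, 32767.0f);
		}
		return q;
	}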
They are trained on the results of games played from endgame start positions that fit the type of endgame, plus positions from those games.
Rook endgames are by far the most common I found (even more so than I thought), and I found the most elo there.
The general endgame net has a 224-unit first layer, and its inputs have 64 squares for ROOK, 64 for BISHOP, and 64 for KNIGHT, in addition to king and pawns. (But it doesn't include queens.)
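To make the 0/1 input layout concrete, here's a sketch of filling the inputs for a one-piece net (the ordering of the input groups and all the names are my own guesses for illustration, not the actual code):

Code: Select all

	#include <vector>

	// Hypothetical position info, not Slow's real interface.
	struct EndgamePos
	{
		int whiteKingSq, blackKingSq;   // 0 = a1 ... 63 = h8
		int whitePieceSq, blackPieceSq; // the one non-pawn piece per side
		std::vector<int> whitePawnSqs, blackPawnSqs;
		bool whiteToMove;
	};

	// White: 64 piece + 48 pawn + 32 king (mirrored) inputs;
	// black: 64 piece + 48 pawn + 64 king; plus 1 side-to-move.
	std::vector<float> EncodeInputs(const EndgamePos& pos)
	{
		std::vector<float> in(64 + 48 + 32 + 64 + 48 + 64 + 1, 0.0f);
		int base = 0;
		in[base + pos.whitePieceSq] = 1.0f;
		base += 64;
		for (int sq : pos.whitePawnSqs)
			in[base + sq - 8] = 1.0f; // pawns only on ranks 2-7 -> 48 inputs
		base += 48;
		int file = pos.whiteKingSq & 7, rank = pos.whiteKingSq >> 3;
		if (file > 3)
			file = 7 - file; // horizontal symmetry: mirror files e-h onto d-a
		in[base + rank * 4 + file] = 1.0f;
		base += 32;
		in[base + pos.blackPieceSq] = 1.0f;
		base += 64;
		for (int sq : pos.blackPawnSqs)
			in[base + sq - 8] = 1.0f;
		base += 48;
		in[base + pos.blackKingSq] = 1.0f;
		base += 64;
		in[base] = pos.whiteToMove ? 1.0f : 0.0f;
		return in;
	}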
I just posted some more info in the technical discussion forum too.

AndrewGrant
Posts: 940
Joined: Tue Apr 19, 2016 4:08 am
Location: U.S.A
Full name: Andrew Grant

Re: SlowChess Blitz Classic 2.0

Post by AndrewGrant » Tue Nov 03, 2020 12:08 am

jonkr wrote:
Mon Nov 02, 2020 10:56 pm
AndrewGrant wrote:
Mon Nov 02, 2020 10:44 pm
What exactly was the structure of these Networks?
For the one-piece nets:

Code: Select all

	whiteInputCount = 64 + 48 + 32; // 64 squares for the piece, 48 pawn squares, 32 king squares (horizontal symmetry)
	blackInputCount = 64 + 48 + 64; // black king uses all 64 squares

	network.SetInputCount(whiteInputCount + blackInputCount + 1); // +1 for side-to-move
	network.AddLayer(192, AT_RELU, LT_INPUT_TO_OUTPUTS);
	network.AddLayer(32, AT_RELU);
	network.AddLayer(32, AT_RELU);
	network.AddLayer(1, AT_LINEAR); // single evaluation output
	network.Build();
The inputs are all 0 or 1 for PieceType on Square. The weights are all converted to int16 fixed point after training.
They are trained on the results of games played from endgame start positions that fit the type of endgame, plus positions from those games.
Rook endgames are by far the most common I found (even more so than I thought), and I found the most elo there.
The general endgame net has a 224-unit first layer, and its inputs have 64 squares for ROOK, 64 for BISHOP, and 64 for KNIGHT, in addition to king and pawns. (But it doesn't include queens.)
I just posted some more info in the technical discussion forum too.
We both did the same thing then, mirroring in the dataset as well, except I never converted to int16_ts.
Would you be willing to run a test with all NNs enabled vs all NNs disabled, to get the cumulative impact?

I _had_ planned to go down this route exhaustively, but gave up, and then quit chess altogether.
