Chess program with Artificial Neural Networks (ANN)?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Chess program with Artificial Neural Networks (ANN)?

Post by diep »

Volker Annuss wrote: Hi Stephan,

Hermann uses neural networks for material evaluation and for time allocation.

Material evaluation

A neural network with 11 input nodes, one hidden layer with 5 nodes, and one output node. It calculates the average score you can expect for the material on the board. This average score is transformed into a normal centipawn evaluation.
The input nodes are the number of pieces of each type (except kings) and a flag for same/opposite-coloured bishops.
A small hash table is sufficient to get a hit rate very close to 100%, so the hundreds of floating point operations per evaluation are not a problem.
It took a great deal of work before it gained 20 or 30 Elo. So if I were to write another engine from scratch, I would not do it again.


Time allocation

Another neural network in Hermann is used for time allocation. It has 22 input nodes, one hidden layer with 7 nodes, and one output node. It calculates the probability of a change of best move when iterating one ply deeper.
Input nodes are
- Scores from the last 2 iterations
- Number of possible moves
- Changes of the best move in the last 2 iterations
- Search instabilities in the last 2 iterations
- Checks and captures in the first moves of the PV
- Differences and transpositions in the first moves of the last 2 PVs

I would like to add
- Time for searching the best move (in % of the total time)
- Times for the 2 not best moves that took most of the time
but this does not work with my primitive multiprocessor search with a shared hash table, because the times are massively influenced by hash hits from moves that were searched by another thread.
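A change probability like the one this network outputs can then drive the iterate-or-stop decision. A minimal sketch, where the cost model and the cut-off are invented for illustration, not Hermann's actual logic:

```c
#include <assert.h>

/* Decide whether to start another iteration: the expected benefit of
 * searching one ply deeper is proportional to the probability that the
 * best move changes; the cost is the estimated time of the next iteration
 * (the last one scaled by an effective branching factor).
 * The 0.05 threshold is an arbitrary placeholder. */
static int start_next_iteration(double p_change, double last_iter_ms,
                                double time_left_ms, double branching)
{
    double next_iter_ms = last_iter_ms * branching;
    if (next_iter_ms > time_left_ms)   /* could not finish it anyway */
        return 0;
    return p_change > 0.05;            /* only iterate if a change is likely enough */
}
```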

Volker
I posted in the wrong thread, I see. Here is my reply:


I'm a bit amazed by the +20 Elo claim I saw from Hermann for a neural network as a gain for its material values. I would really like to know the original values, to understand the small Elo win better.

In 2005 I fixed up Diep's old material values quite a bit, without changing the code at all. I just moved Diep's material values from

pawn = 1000, knight = 3625, bishop = 3675

to the current basic values. Realize there are all kinds of other influences in Diep's material code, but these are the basic values; I did not change that influence code after the 2005 world championship.

{ 1000, 3875, 3875, 6175, 12350 }, /* 0 */

pawn = 1000, knight = 3875

These are also the CURRENT values. Note they were initially a bit more modest, like 3.825, so I did change them very slightly since, but not by much.

Those slight changes I hardly noticed in the Elo rating either. However, the jump from 3.6 to 3.8 of course first took some readjusting of other parameters' values. Then it instantly went up like 200 Elo points, not 20.

A magic change it was, really. Not +20 Elo.

So I don't really understand the +20 Elo change that Hermann got.

A small sample snippet that your colleague Alex Trofimov dug up. Maybe it is time you took a photo of him at work?

Lots of the code looks like the code below. If you look at it, does it look like it was produced by a neural network or by a human?

Code:


static UINT64
materialize_valuations (int white_pawns_count, int white_knight_count,
                        int white_bishop_count, int white_bishop_count_1,
                        int white_bishop_count_2, int white_rook_count,
                        int white_queen_count, int black_pawns_count,
                        int black_knight_count, int black_bishop_count,
                        int black_bishop_count_1, int black_bishop_count_2,
                        int black_rook_count, int black_queen_count)
{
  UINT64 value = 0;
  value += (white_bishop_count / 2 - black_bishop_count / 2) * ((((UINT64) 55) << 48) + (((UINT64) 50) << 32) + (((UINT64) 40) << 16) + (((UINT64) 35) << 0));
  value += (white_pawns_count - black_pawns_count) * ((((UINT64) 125) << 48) + (((UINT64) 110) << 32) + (((UINT64) 90) << 16) + (((UINT64) 80) << 0));
  value += (white_knight_count - black_knight_count) * ((((UINT64) 355) << 48) + (((UINT64) 320) << 32) + (((UINT64) 280) << 16) + (((UINT64) 265) << 0));
  value += (white_rook_count - black_rook_count) * ((((UINT64) 610) << 48) + (((UINT64) 550) << 32) + (((UINT64) 450) << 16) + (((UINT64) 405) << 0));
  value += (white_queen_count - black_queen_count) * ((((UINT64) 1150) << 48) + (((UINT64) 1025) << 32) + (((UINT64) 875) << 16) + (((UINT64) 800) << 0));
  value += (white_bishop_count - black_bishop_count) * ((((UINT64) 360) << 48) + (((UINT64) 325) << 32) + (((UINT64) 295) << 16) + (((UINT64) 280) << 0));
  if (white_rook_count == 2)
    value -= ((((UINT64) 32) << 48) + (((UINT64) 28) << 32) + (((UINT64) 20) << 16) + (((UINT64) 16) << 0));
  if (black_rook_count == 2)
    value += ((((UINT64) 32) << 48) + (((UINT64) 28) << 32) + (((UINT64) 20) << 16) + (((UINT64) 16) << 0));
  if (white_queen_count + white_rook_count >= 2)
    value -= ((((UINT64) 16) << 48) + (((UINT64) 14) << 32) + (((UINT64) 10) << 16) + (((UINT64) 8) << 0));
  if (black_queen_count + black_rook_count >= 2)
    value += ((((UINT64) 16) << 48) + (((UINT64) 14) << 32) + (((UINT64) 10) << 16) + (((UINT64) 8) << 0));
  value -= (white_pawns_count - 5) * white_rook_count * ((((UINT64) 0) << 48) + (((UINT64) 2) << 32) + (((UINT64) 4) << 16) + (((UINT64) 5) << 0));
  value += (white_pawns_count - 5) * white_knight_count * ((((UINT64) 5) << 48) + (((UINT64) 4) << 32) + (((UINT64) 2) << 16) + (((UINT64) 0) << 0));
  value += (black_pawns_count - 5) * black_rook_count * ((((UINT64) 0) << 48) + (((UINT64) 2) << 32) + (((UINT64) 4) << 16) + (((UINT64) 5) << 0));
  value -= (black_pawns_count - 5) * black_knight_count * ((((UINT64) 5) << 48) + (((UINT64) 4) << 32) + (((UINT64) 2) << 16) + (((UINT64) 0) << 0));
  return value;
}
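Whatever produced it, the repeated shifts are just a packing trick: four phase-dependent scores (presumably different game phases) live in the 16-bit lanes of one 64-bit word, so a single add or subtract updates all four at once, as long as no lane overflows into its neighbour. A minimal sketch of the idea:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t UINT64;

/* Pack four 16-bit scores into one 64-bit word, highest lane first,
 * mirroring the (x << 48) + (x << 32) + (x << 16) + (x << 0) pattern above. */
static UINT64 pack4(UINT64 a, UINT64 b, UINT64 c, UINT64 d)
{
    return (a << 48) | (b << 32) | (c << 16) | d;
}

/* Extract lane i (0 = lowest 16 bits). */
static unsigned lane(UINT64 v, int i)
{
    return (unsigned)((v >> (16 * i)) & 0xFFFF);
}
```

Adding two packed words adds each lane independently, provided the per-lane sums stay within 16 bits.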
Come on, we can talk a lot of crap about this being "a neural network in action". Maybe for Hermann things weren't statistically tested very well, or there was some other beginner's mistake in the process; those are easy to make, of course. Or is it just plain disinformation?

My point is that even in a buggy Diep version (all kinds of bugs in passed pawns, for example, in the 2005 Diep, which was a disaster; I still suffer from that to some extent, though I am fixing it rapidly now) it already mattered +200 Elo.

If I put back the values I had during and before the 2005 world championship into Diep's material evaluation, Diep would of course lose more than 200 Elo points nowadays.

Of course you'll now say in chorus: "But if in 2004 you gave the tip to put a piece at 4.2 for Fruit, why didn't you do it yourself?"

Yeah, you know, how silly can one be. I of course have other "compensation code" for when you have a bunch of pawns against a piece. That just didn't work in the case where you have 1 pawn against a piece.

The compensation you have in that case with the old values is +2.6, versus +2.8 today. Looked at like this it seems like peanuts to talk about, and it is, but it really mattered a lot of Elo for Diep.

The 1930s values of 3 or 3.25 pawns for a piece are just so ugly and bad Elo-wise; it takes values up around 3.8-4.3 pawns for things to flatten off and really work well.

It really matters a lot of Elo. Yet if you look at that code and the other functions around it, does it look like a neural network to you, or does it look like someone trying to win the world championship by bitshifting?

Vincent
Stephan Vermeire (Brutus)
Posts: 34
Joined: Sun Oct 12, 2008 6:32 pm

Re: Chess program with Artificial Neural Networks (ANN)?

Post by Stephan Vermeire (Brutus) »

diep wrote: ...the jump from 3.6 to 3.8 of course first took some readjusting of other parameters' values. Then it instantly went up like 200 Elo points...

A magic change it was, really. Not +20 Elo.
Interesting improvement. I am curious whether this was just an improvement for Diep or whether it will be a general improvement. I think it WILL depend greatly on the code of the specific engine.
On the other hand, your improvement is promising enough to give it a try. At present, in Brutus, I am using the following values:

{100, 300, 300, 500, 900}

Perfect! These are terribly outdated! I will change them to:

{100, 387, 387, 617, 1235}

Then I will use my standard tuning procedure to recalibrate all the other settings. If this adjustment gives a boost anywhere near 200 Elo (let's say 150+), I will buy you a beer at the programmers' tournament in March! :D


Now about the ANN's:
diep wrote: Small sample snippet that your colleague Alex Trofimov dug up...
Lots of the code looks like the code below. If you look at it, does it look like it was produced by a neural network or by a human?
Without any doubt! I think the simplest and most effective implementation of ANNs is to use many small networks that quickly calculate isolated fragments within the program. This is in line with the point that Gian-Carlo made earlier:
Gian-Carlo Pascutto wrote: The problem is that you are requiring the network to find for itself many layers of abstraction. It's also a bit of a waste, as we already know some abstractions to be valuable (e.g. passed pawns).
I think a successful (but necessarily less ambitious) attempt involves adding to the inputs the abstractions you already know about.
This way you can use all the advantages of ANNs while still being in control.

Stephan
MikeGL
Posts: 1010
Joined: Thu Sep 01, 2011 2:49 pm

Forum search challenge

Post by MikeGL »

There seem to be intelligent discussions of this NNUE stuff going way back to 2010. I missed that because I honestly disconnected from the chess world in 2002, when I was employed abroad with very difficult 12-hour work shifts.

Maybe a good search challenge is to find the earliest discussion of GPUs and chess eval functions within this talkchess forum. The guys who opened the earliest threads on this topic are probably geniuses, opening up what we have now: the NNUE net era.
I told my wife that a husband is like a fine wine; he gets better with age. The next day, she locked me in the cellar.
smatovic
Posts: 2926
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: Forum search challenge

Post by smatovic »

MikeGL wrote: Thu Feb 11, 2021 7:30 pm There seem to be intelligent discussions of this NNUE stuff going way back to 2010. I missed that because I honestly disconnected from the chess world in 2002, when I was employed abroad with very difficult 12-hour work shifts.

Maybe a good search challenge is to find the earliest discussion of GPUs and chess eval functions within this talkchess forum. The guys who opened the earliest threads on this topic are probably geniuses, opening up what we have now: the NNUE net era.
Interesting thread :)

GPUs have been considered for chess since CUDA and OpenCL emerged, e.g.

"Monte carlo on a NVIDIA GPU ?"

http://www.talkchess.com/forum3/viewtopic.php?t=22732

but DeepMind's A0 needed to combine an MCTS-PUCT search on the CPU with ANN evaluation on the GPU; that was the novel thing Lc0 did.

--
Srdja
brianr
Posts: 539
Joined: Thu Mar 09, 2006 3:01 pm

Re: Chess program with Artificial Neural Networks (ANN)?

Post by brianr »

Giraffe represents a turning point for me.

https://arxiv.org/abs/1509.01549

While that is not 20 years ago, in the fast-moving ML space, 2015 seems pretty long ago.