Wasp 4.5 Released

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

jstanback
Posts: 130
Joined: Fri Jun 17, 2016 4:14 pm
Location: Colorado, USA
Full name: John Stanback

Re: Wasp 4.5 Released

Post by jstanback »

tomitank wrote: Sun Jan 03, 2021 6:54 am
jstanback wrote: Fri Jan 01, 2021 7:37 pm Tuning is now done in a similar fashion to back-propagation for neural networks rather than the gradient-descent method...
jstanback wrote: Fri Jan 01, 2021 7:37 pm ...each pertinent term is tweaked by a small amount in the direction to reduce error.
This is gradient descent. Neural networks also use gradient descent; backprop propagates the error back through the hidden layer(s).
When I did gradient-descent, for each position I tweaked every parameter a bit and ran the evaluation to compute the gradient gParam = dEval/dParam. Then I adjusted each parameter by lr*gParam where lr is the learn rate. For 1000 parameters this required doing 1000 evaluations for each training position. I know there are more efficient methods, but that's what I was doing.
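The finite-difference scheme described above might be sketched like this in Python. The `evaluate` callback, the squared-error loss, and the sigmoid constant are my assumptions for illustration, not Wasp's actual code:

```python
def sigmoid(cp, k=1.0 / 400.0):
    """Map a centipawn eval to an expected win fraction in (0, 1)."""
    return 1.0 / (1.0 + 10.0 ** (-k * cp))

def tune_finite_difference(params, positions, evaluate, lr=1e-3, h=1.0):
    """Perturb each parameter, re-evaluate, and step to reduce the error.

    positions: list of (position, result) pairs, result in {0.0, 0.5, 1.0}.
    evaluate(position, params): returns a centipawn score.
    This costs one extra evaluation per parameter per position, hence
    1000 evaluations per training position for 1000 parameters.
    """
    for pos, result in positions:
        base_err = (sigmoid(evaluate(pos, params)) - result) ** 2
        grads = []
        for i in range(len(params)):
            params[i] += h                     # tweak one parameter
            err = (sigmoid(evaluate(pos, params)) - result) ** 2
            params[i] -= h                     # restore it
            grads.append((err - base_err) / h) # dError/dParam
        for i, g in enumerate(grads):
            params[i] -= lr * g                # descend along the gradient
    return params
```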

The method I'm using now just does one evaluation for each training position but keeps a count for each parameter that gets used when evaluating the position. Since white and black use the same eval terms the count is incremented for white and decremented for black. For example, if a position has 8 white pawns and 6 black pawns then the count for pawn_material would be +2. If the game result was 1.0 (win for white) and the eval was 0 centipawns (which gets converted to 0.5 expected win-fraction) then the error is 0.5-1.0 = -0.5 and I increase pawn_material by 2*lr*0.5. I'm using a learn rate of about 1e-3, so in this case pawn_material would be increased by only 1e-3 centipawns for this single position. But at a rate of 250K positions per second the eval terms converge quite quickly.
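A minimal sketch of this count-based update, under the assumption of a purely linear evaluation and a standard centipawn-to-win-fraction sigmoid (function and variable names are illustrative, not Wasp's):

```python
def sigmoid(cp, k=1.0 / 400.0):
    """Convert a centipawn score to an expected win fraction (0..1)."""
    return 1.0 / (1.0 + 10.0 ** (-k * cp))

def tune_by_counts(weights, positions, lr=1e-3):
    """One evaluation per position, updating only the weights that were used.

    positions: list of (counts, result) pairs, where counts[i] is the
    white count minus black count of feature i, and result is the game
    outcome from White's side (1.0, 0.5, or 0.0).
    """
    for counts, result in positions:
        # Single evaluation: a linear sum of weight * count.
        cp = sum(w * c for w, c in zip(weights, counts))
        error = sigmoid(cp) - result          # e.g. 0.5 - 1.0 = -0.5
        for i, c in enumerate(counts):
            # Move each used weight against the error, scaled by its count.
            weights[i] -= lr * error * c
    return weights
```

With the example from the post (8 white pawns vs. 6 black pawns gives a count of +2, eval 0 cp gives a 0.5 win fraction, result 1.0 gives error -0.5), the pawn-material weight rises by lr * 0.5 * 2 = 1e-3 centipawns for that single position.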

John
jstanback
Posts: 130
Joined: Fri Jun 17, 2016 4:14 pm
Location: Colorado, USA
Full name: John Stanback

Re: Wasp 4.5 Released

Post by jstanback »

Angle wrote: Sun Jan 03, 2021 7:45 am
jstanback wrote: Fri Jan 01, 2021 7:37 pm Wasp 4.5 is released today. I estimate it to be about 50 Elo stronger than Wasp 4.0.

Many thanks to Frank Quisinsky for hosting Wasp on his website. Here is the link:

https://www.amateurschach.de/download/wasp_450.zip
Thank you, John! Nice news. What is the "proper" version number: 4.5 or 4.50?
Well, I don't know that there is a "proper" version number. But to match the names of the binaries it would probably be best to use Wasp 4.50. I should have done that in my posts.

John
jstanback
Posts: 130
Joined: Fri Jun 17, 2016 4:14 pm
Location: Colorado, USA
Full name: John Stanback

Re: Wasp 4.5 Released

Post by jstanback »

JohnS wrote: Sun Jan 03, 2021 11:09 am
jstanback wrote: Fri Jan 01, 2021 7:37 pm Wasp 4.5 is released today. I estimate it to be about 50 Elo stronger than Wasp 4.0.
Many thanks John. How do you use the config file? I used example_config with UCI_Elo = 1200 and set ConfigFilePath=./example_config in the Shredder 12 GUI. But it seems to search much more deeply than I would expect for 1200 Elo, reaching a depth of around 8 ply.
There is a lower limit of 1500 for UCI_Elo. Since Wasp mainly uses nodes/sec to reduce Elo it didn't seem reasonable to drop nodes/sec below about 50. For ConfigFilePath, Wasp doesn't add a ".txt" extension, so you need to put ConfigFilePath=./example_config.txt in the Shredder GUI.

John
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: Wasp 4.5 Released

Post by tomitank »

jstanback wrote: Sun Jan 03, 2021 5:05 pm
tomitank wrote: Sun Jan 03, 2021 6:54 am
jstanback wrote: Fri Jan 01, 2021 7:37 pm Tuning is now done in a similar fashion to back-propagation for neural networks rather than the gradient-descent method...
jstanback wrote: Fri Jan 01, 2021 7:37 pm ...each pertinent term is tweaked by a small amount in the direction to reduce error.
This is gradient descent. Neural networks also use gradient descent; backprop propagates the error back through the hidden layer(s).
When I did gradient-descent, for each position I tweaked every parameter a bit and ran the evaluation to compute the gradient gParam = dEval/dParam. Then I adjusted each parameter by lr*gParam where lr is the learn rate. For 1000 parameters this required doing 1000 evaluations for each training position. I know there are more efficient methods, but that's what I was doing.

The method I'm using now just does one evaluation for each training position but keeps a count for each parameter that gets used when evaluating the position. Since white and black use the same eval terms the count is incremented for white and decremented for black. For example, if a position has 8 white pawns and 6 black pawns then the count for pawn_material would be +2. If the game result was 1.0 (win for white) and the eval was 0 centipawns (which gets converted to 0.5 expected win-fraction) then the error is 0.5-1.0 = -0.5 and I increase pawn_material by 2*lr*0.5. I'm using a learn rate of about 1e-3, so in this case pawn_material would be increased by only 1e-3 centipawns for this single position. But at a rate of 250K positions per second the eval terms converge quite quickly.

John
I did the same, but this is still gradient descent: it uses the derivative of the error for the direction of the weight update. The count just acts as a coefficient, e.g. when the coefficient is zero, the weight doesn't change. I don't think it would have a different name. Backpropagation is another thing. There may be a specialized term for this, but it isn't backpropagation; I think it's still just called gradient descent.
Apart from that, the name is irrelevant; the point is that you understand what you're doing.

-Tamás
jstanback
Posts: 130
Joined: Fri Jun 17, 2016 4:14 pm
Location: Colorado, USA
Full name: John Stanback

Re: Wasp 4.5 Released

Post by jstanback »

tomitank wrote: Sun Jan 03, 2021 5:45 pm
jstanback wrote: Sun Jan 03, 2021 5:05 pm
tomitank wrote: Sun Jan 03, 2021 6:54 am
jstanback wrote: Fri Jan 01, 2021 7:37 pm Tuning is now done in a similar fashion to back-propagation for neural networks rather than the gradient-descent method...
jstanback wrote: Fri Jan 01, 2021 7:37 pm ...each pertinent term is tweaked by a small amount in the direction to reduce error.
This is gradient descent. Neural networks also use gradient descent; backprop propagates the error back through the hidden layer(s).
When I did gradient-descent, for each position I tweaked every parameter a bit and ran the evaluation to compute the gradient gParam = dEval/dParam. Then I adjusted each parameter by lr*gParam where lr is the learn rate. For 1000 parameters this required doing 1000 evaluations for each training position. I know there are more efficient methods, but that's what I was doing.

The method I'm using now just does one evaluation for each training position but keeps a count for each parameter that gets used when evaluating the position. Since white and black use the same eval terms the count is incremented for white and decremented for black. For example, if a position has 8 white pawns and 6 black pawns then the count for pawn_material would be +2. If the game result was 1.0 (win for white) and the eval was 0 centipawns (which gets converted to 0.5 expected win-fraction) then the error is 0.5-1.0 = -0.5 and I increase pawn_material by 2*lr*0.5. I'm using a learn rate of about 1e-3, so in this case pawn_material would be increased by only 1e-3 centipawns for this single position. But at a rate of 250K positions per second the eval terms converge quite quickly.

John
I did the same, but this is still gradient descent: it uses the derivative of the error for the direction of the weight update. The count just acts as a coefficient, e.g. when the coefficient is zero, the weight doesn't change. I don't think it would have a different name. Backpropagation is another thing. There may be a specialized term for this, but it isn't backpropagation; I think it's still just called gradient descent.
Apart from that, the name is irrelevant; the point is that you understand what you're doing.

-Tamás
Hi Tamas,

Yes, it may amount to exactly the same thing, but it somehow seems different to me. I adopted this technique after experimenting with a tiny NN and learning to back-propagate. I realized that I could update the weights for an HCE exactly as is done for a single node of a NN, except that I could eliminate the derivative of the activation function since the HCE is just a linear sum of weights. Anyway, compared to my previous approach it sped up the tuning by a factor of 1000 and also made the tuning function much simpler. I have a method for using non-integer "counts" for some eval terms. For example, I have a single value for king safety and calculate a floating-point scaling factor during the eval based on the number of enemy threats. The eval gets updated by scale*king_safety and the "count" for king_safety gets incremented by scale. It might actually be better to have separate terms for every possible number of enemy threats and let the training come up with the weight for each term. I did this initially to derive an appropriate scaling function, but I kind of like having a smoothly scaled feature.
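The fractional-count idea might look roughly like this; the quadratic threat-scaling curve here is invented purely for illustration, and only the update rule mirrors what the post describes:

```python
def king_safety_contribution(king_safety_weight, num_threats):
    """One tunable king-safety weight, scaled by a smooth function of the
    number of enemy threats. The scaling curve is illustrative only."""
    scale = min(num_threats, 8) ** 2 / 16.0   # grows smoothly with threats
    eval_term = scale * king_safety_weight    # contribution to the eval
    count = scale                             # fractional "count" for tuning
    return eval_term, count

def update_weight(weight, count, error, lr=1e-3):
    """Same update rule as for integer counts, just with a float count."""
    return weight - lr * error * count
```

The design choice is a trade-off: separate weights per threat count would let the tuner find the shape of the curve itself, while a single smoothly scaled weight keeps the eval continuous and the parameter count small.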

John
Alayan
Posts: 550
Joined: Tue Nov 19, 2019 8:48 pm
Full name: Alayan Feh

Re: Wasp 4.5 Released

Post by Alayan »

How do you think this automated tuning affected Wasp's offensive middlegame style? :)
jstanback
Posts: 130
Joined: Fri Jun 17, 2016 4:14 pm
Location: Colorado, USA
Full name: John Stanback

Re: Wasp 4.5 Released

Post by jstanback »

Alayan wrote: Sun Jan 03, 2021 8:23 pm How do you think this automated tuning affected Wasp's offensive middlegame style? :)
I think it's about the same or maybe a little more aggressive than the previous version. I guess we'll find out from Frank's upcoming tournament. I should probably start keeping some statistics from my gauntlet testing, such as wins/losses/draws before and after move 50, similar to what Frank does.

John
JohnS
Posts: 215
Joined: Sun Feb 24, 2008 2:08 am

Re: Wasp 4.5 Released

Post by JohnS »

jstanback wrote: Sun Jan 03, 2021 5:23 pm
JohnS wrote: Sun Jan 03, 2021 11:09 am
jstanback wrote: Fri Jan 01, 2021 7:37 pm Wasp 4.5 is released today. I estimate it to be about 50 Elo stronger than Wasp 4.0.
Many thanks John. How do you use the config file? I used example_config with UCI_Elo = 1200 and set ConfigFilePath=./example_config in the Shredder 12 GUI. But it seems to search much more deeply than I would expect for 1200 Elo, reaching a depth of around 8 ply.
There is a lower limit of 1500 for UCI_Elo. Since Wasp mainly uses nodes/sec to reduce Elo it didn't seem reasonable to drop nodes/sec below about 50. For ConfigFilePath, Wasp doesn't add a ".txt" extension, so you need to put ConfigFilePath=./example_config.txt in the Shredder GUI.

John
Thanks John, it works great now.
Frank Quisinsky
Posts: 6808
Joined: Wed Nov 18, 2009 7:16 pm
Location: Gutweiler, Germany
Full name: Frank Quisinsky

Re: Wasp 4.5 Released

Post by Frank Quisinsky »

Hi John S,

I added some examples to my "Engine Configuration" page.
The entry for Wasp 4.5 went up after the release.

One of the most important points for me:
please keep in mind that the Elo strength starts at 1500 (the same as on the DGT-Pi with 22 levels, Picochess 3).

Engine Configuration:
https://www.amateurschach.de/main/_configuration.htm

Or, with the frame, in the menu system of my website:
https://www.amateurschach.de

Best
Frank
Frank Quisinsky
Posts: 6808
Joined: Wed Nov 18, 2009 7:16 pm
Location: Gutweiler, Germany
Full name: Frank Quisinsky

Re: Wasp 4.5 Released

Post by Frank Quisinsky »

Hi Alayan,

after my v4.08 test games the average number of moves per game (without resign mode) has gone down.
No engine in the FCP Qualify Tourney-2021 has a lower move average than Wasp 4.08, with 79.2 (without resign).

Wasp 4.08 produced the same results in test games vs. stronger engines.
In my humble opinion the middlegame dynamics are clearly improved and the pawns are again played more aggressively (comparing with Wasp 4.00). But all in all I think most of the extra Elo comes from a better endgame.

So, the style of Wasp 4.5 will again be more aggressive than in its predecessors.
This makes the engine very interesting for self-play with the 22 levels of Picochess 3 on the DGT-Pi.

:-)

Best
Frank

Code: Select all

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Statistics to short won / draw games:
After round 8 out of 8 (current results) =   320 games per engine

  444 of  6.560 =  6,769% : Fast won/lost games below 60 moves (with mate ended)
  184 of  6.560 =  2,805% : Fast draw games below 40 moves
   72 of  6.560 =  1,098% : Fast draw games below 30 moves
   15 of  6.560 =  0,229% : Fast draw games below 20 moves, replayed
    5 of  6.560 =  0,077% : Games over 300 moves, replayed

  487 of    500 = 97,400% : ECO codes played


                                  won59 lost59 draw39        won59w won59b lost59w lost59b
01. Wasp 4.08 dev Modern x64         46      0     13            28     18       0       0    out of competition
02. chess22k 1.14 JAVA x64           27      2      6            17     10       0       2    qualified
03. Seer 1.2.1 NNUE Skylake x64      25      7      9            19      6       3       4    qualified
04. Cheng 0.40 dev AVX2 x64          21      3     10            16      5       0       3
05. ChessBrainVB 3.74 TCEC w32       20      3      5            13      7       0       3    disqualified
06. Protector 1.9.0 x64              19      1     11            10      9       0       1    qualified
07. Orion 0.8 NN POP AVX FMA x64     17      4      7            10      7       2       2    qualified
08. Koivisto 4.0 POPCNT AVX x64      15      3      8            11      4       0       3    qualified
09. Hakkapeliitta TCEC v2 x64        15      6      6             9      6       4       2
10. Topple 0.7.5 Skylake x64         14      6     14             9      5       0       6    qualified
11. Junior 13.3.00 x64               14     11      9             8      6       4       7
12. Nirvanachess 2.4 POPCNT x64      13      5     12             9      4       1       4    qualified
13. Hiarcs 14 WCSC w32               13     11     10             9      4       0      11
14. Lc0 0.26.3 x64                   12      1      6             9      3       0       1    qualified
15. Halogen 8.1 PEXT-AVX2 x64        12     12      5             8      4       3       9    qualified
16. Gödel 7.0 SSE32 x64              11      8     15             9      2       4       4    disqualified
17. Marvin 4.0.1 POPCNT x64          10      9     13             8      2       2       7    qualified
18. Francesca 0.29a x64              10     17      9             7      3       6      11
19. Bagatur 2.2 JAVA x64              9      5      6             6      3       2       3
20. Monolith 2.01 PEXT x64            9     10      4             8      1       5       5
21. Crafty 25.6 x64                   9     28      6             7      2      10      18    disqualified
22. Rodent IV 0.30 BMI2 x64           8      2     11             2      6       0       2
23. Dirty Cucumber x64                8      8     13             6      2       4       4
24. Cheese 2.2 POPCNT x64             8     16      4             7      1       9       7
25. Naum 4.6 x64                      7      4      7             7      0       1       3
26. FabChess 1.16 BMI2 x64            7      5     12             3      4       1       4
27. Spike 1.4 Leiden w32              7     11      8             3      4       4       7
28. pirarucu 3.3.5 JAVA x64           6     10      4             3      3       3       7    qualified
29. SmarThink 1.98 AVX2 x64           6     10     10             2      4       4       6
30. Atlas 0.91 POPCNT x64             6     15     10             2      4       5      10
31. Weiss 1.2 PEXT x64                6     15     12             5      1       7       8
32. Quazar 0.4 x64                    5     12      5             4      1       2      10
33. Gogobello 2.2 BMI2 x64            5     16     11             4      1       6      10
34. Mr Bob 0.9.0 POPCNT x64           4     19      8             4      0      10       9
35. Tucano 9.00 x64                   4     23     10             3      1       7      16
36. Gaviota 1.0 AVX x64               4     29      6             2      2      10      19
37. TheBaron 3.44.1 x64               3     16     18             1      2       8       8    disqualified
38. Invictus r305 PEXT x64            3     23      9             3      0       5      18
39. Stash 24.0 BMI2 x64               2     17      4             1      1       9       8
40. Asymptote 0.8 Broadwell x64       2     18      9             1      1       3      15
41. Amoeba 3.2 x64                    2     23     13             1      1       4      19

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Wasp won 46 of 320 games with mate below 60 moves = 14.375%