I estimate that Wasp 6.50 is about 50 Elo stronger than Wasp 6.00.
The neural network used for evaluation is similar to Wasp 6.00's, except
that the number of neurons in the hidden layer has been increased to
1536.
Positions from games played in the last 6 months were added to the
training data, bringing the total to about 220M positions. The positions
were re-scored using a fixed 35K-node search and a recent net. The
target used to train the net is now based 80% on the search score and
20% on the game result.
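As a rough illustration, a blended target like that could be computed as in
the sketch below; the sigmoid scale of 400cp and the names are placeholder
assumptions, not Wasp's actual values:

```cpp
#include <cmath>

// Sketch only: map a centipawn search score to a win probability, then
// blend it 80/20 with the game result (1.0 win, 0.5 draw, 0.0 loss).
double train_target(double search_score_cp, double game_result) {
    double win_prob = 1.0 / (1.0 + std::exp(-search_score_cp / 400.0));
    return 0.8 * win_prob + 0.2 * game_result; // the 80/20 blend from the post
}
```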
To create the final network, two 1024-neuron nets were created, and each
was trained using about 300 "epochs" of 500M positions. These networks
were then merged to create a 2048-neuron net, which was trained with
about 50 additional "epochs". This net was then pruned to 1536 neurons
and trained with about 50 more "epochs". I have no idea whether this
merging and pruning technique is actually better than just creating and
training several 1536-neuron nets and picking the best one.
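As a rough illustration of the merge-then-prune idea for a single-hidden-layer
net, something like the sketch below could be done. The data layout, names,
and the |output weight| pruning heuristic are assumptions for the sketch, not
Wasp's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Toy single-hidden-layer net: W1/b1 feed the hidden layer, w2/b2 the output.
struct Net {
    int inputs, hidden;
    std::vector<float> W1; // hidden*inputs weights, one row per hidden neuron
    std::vector<float> b1; // hidden biases
    std::vector<float> w2; // one output weight per hidden neuron
    float b2;              // output bias
};

// Concatenate the two hidden layers; halve the output weights and average
// the output bias so the merged net starts out as the parents' average.
Net merge(const Net &a, const Net &b) {
    Net m{a.inputs, a.hidden + b.hidden, a.W1, a.b1, {}, 0.5f * (a.b2 + b.b2)};
    m.W1.insert(m.W1.end(), b.W1.begin(), b.W1.end());
    m.b1.insert(m.b1.end(), b.b1.begin(), b.b1.end());
    for (float w : a.w2) m.w2.push_back(0.5f * w);
    for (float w : b.w2) m.w2.push_back(0.5f * w);
    return m;
}

// Keep the `keep` hidden neurons with the largest |output weight| (one
// simple importance proxy); the pruned net is then retrained.
Net prune(const Net &n, int keep) {
    std::vector<int> idx(n.hidden);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](int i, int j) { return std::fabs(n.w2[i]) > std::fabs(n.w2[j]); });
    Net p{n.inputs, keep, {}, {}, {}, n.b2};
    for (int k = 0; k < keep; ++k) {
        int i = idx[k];
        p.W1.insert(p.W1.end(), n.W1.begin() + (long)i * n.inputs,
                    n.W1.begin() + (long)(i + 1) * n.inputs);
        p.b1.push_back(n.b1[i]);
        p.w2.push_back(n.w2[i]);
    }
    return p;
}
```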
Several relatively minor changes were made to the search (rough sketches
of a few of them follow the list):
- slightly less aggressive LMR at PV nodes
- slightly more aggressive LMR at non-PV nodes
- less aggressive LMP and LMR if eval is improving
- allow static eval and null-move pruning even if beta is
a "mated" score
- slight change to the move history statistics calculation
- increased the aspiration window when the score from the
  previous iteration is a TB win/loss or mate/mated, to
  eliminate the search failing high then low and to speed up mates
- changed the conditions under which slave threads skip search
  iterations: if more than 15 threads are already searching
  at a given depth, skip to depth+1; if more than 45 threads,
  skip to depth+2
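For the LMR items, here is a sketch of how such node-type and "improving"
adjustments could be wired in; the +-1 offsets are illustrative placeholders,
since the post gives no concrete numbers:

```cpp
#include <algorithm>

// Sketch only: adjust a base LMR reduction by node type and eval trend.
int lmr_reduction(int base_reduction, bool pv_node, bool improving) {
    int r = base_reduction;
    if (pv_node)   r -= 1; // slightly less aggressive at PV nodes
    else           r += 1; // slightly more aggressive at non-PV nodes
    if (improving) r -= 1; // less aggressive when the eval is improving
    return std::max(r, 0);
}
```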
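The aspiration-window change could look something like this; the score bound
and window widths are placeholders, not Wasp's values:

```cpp
#include <cstdlib>

constexpr int TB_WIN_BOUND = 20000; // placeholder: scores at/above this are TB/mate

// Mate/TB scores can jump a lot between iterations; a much wider window
// avoids the fail-high-then-fail-low re-search pattern and speeds up mates.
void set_aspiration_window(int prev_score, int &alpha, int &beta) {
    int delta = (std::abs(prev_score) >= TB_WIN_BOUND) ? 2000 : 15;
    alpha = prev_score - delta;
    beta  = prev_score + delta;
}
```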
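And a sketch of the helper-thread depth-skipping rule; only the 15/45
thresholds come from the list above, the bookkeeping around them is assumed:

```cpp
#include <atomic>

constexpr int MAX_DEPTH = 128;
std::atomic<int> threads_at_depth[MAX_DEPTH]; // threads searching each depth

// Helper threads check how crowded an iteration already is and skip ahead.
int pick_iteration_depth(int depth) {
    int busy = threads_at_depth[depth].load(std::memory_order_relaxed);
    if (busy > 45) return depth + 2; // very crowded: skip two iterations
    if (busy > 15) return depth + 1; // crowded: skip one iteration
    return depth;
}
```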
Wasp now uses the "Pyrrhic" library to probe Syzygy TBs. Many thanks
to Basil Fucinelli, Jon Dart, Andrew Grant, and of course Ronald
de Man. This library allows Wasp to use TBs for endings with up to 7 pieces.
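For readers unfamiliar with these probing libraries: in the Fathom lineage
that Pyrrhic descends from, a WDL probe looks roughly like the sketch below.
Pyrrhic's exact signatures may differ, so treat every name here as an
assumption and check its headers before use:

```cpp
#include <cstdint>
#include "tbprobe.h" // Fathom/Pyrrhic probing header (name assumed)

// Returns TB_WIN/TB_CURSED_WIN/TB_DRAW/TB_BLESSED_LOSS/TB_LOSS for the side
// to move, or TB_RESULT_FAILED. Assumes the tables were loaded with tb_init()
// and that there are no castling rights, no ep square, and a zeroed 50-move
// counter (the Fathom-style signature; Pyrrhic's may differ).
unsigned probe_wdl(uint64_t white, uint64_t black, uint64_t kings,
                   uint64_t queens, uint64_t rooks, uint64_t bishops,
                   uint64_t knights, uint64_t pawns, bool white_to_move) {
    return tb_probe_wdl(white, black, kings, queens, rooks, bishops,
                        knights, pawns, /*rule50=*/0, /*castling=*/0,
                        /*ep=*/0, white_to_move);
}
```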
John
John, this is great news! Thank you for the latest Wasp and for your
continuous work on it over the last months.
At the moment I am not much help: a new job and a move at the same time. The new job takes a lot of time, and at the end of the day I am usually knocked out from all the numbers and laws.
Much more important is that Wasp stays on its typical way ... keeping up its attacking chess in the middlegame against the others. With meaningful ways to reduce the playing strength without looking artificially weakened, Wasp has been very special ... for many years!!
With BlackCore, Viridithas, and Smallbrain, several new engines are on their way to becoming strong competitors in _our_ family of engines. I saw many interesting updates in the last months, and I am sure that very soon I will test them "all-in-one" in a new tourney and give you my report.
Thanks again!
Go Wasp go ...
From one of the biggest Wasp freaks in the World!
Friendly
Frank
+60 Elo is really a lot again.
I am not "up-to-date", but with +60 Elo Wasp can hold a place in the TOP-20!
Also take a look at the EAS-Ratinglist, the world's first engine ratinglist that measures not the strength of engines but their style of play: https://www.sp-cc.de/eas-ratinglist.htm
(Perhaps you have to clear your browser cache (press Ctrl+Shift+Del) or reload the website.)