Zangdar - petit sorcier deviendra grand

Discussion of chess software programming and technical issues.

Moderator: Ras

Carbec
Posts: 160
Joined: Thu Jan 20, 2022 9:42 am
Location: France
Full name: Philippe Chevalier

Re: Zangdar - petit sorcier deviendra grand

Post by Carbec »

Hello,

I released version 2.11.02. The main addition is the use of Magic Bitboards.
I tried several ideas to improve the evaluation, but without success.
I think I will have to look into "tuning". I tested against Sungorus
and Blunder 7.1, and I think Zangdar is around 2400 Elo.
Watching some of the games, I can't imagine a human player of the same rating
playing like that.
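
For anyone curious, the heart of a magic-bitboard lookup is just one multiply, one shift and one table access. A minimal sketch, assuming precomputed tables (the names and sizes below are only illustrative, not the actual Zangdar code):

Code: Select all

#include <cstdint>

// Filled once by an init routine (not shown here).
uint64_t rook_mask[64];              // relevant blocker mask per square
uint64_t rook_magic[64];             // precomputed "magic" multipliers
int      rook_shift[64];             // 64 minus the number of relevant bits
uint64_t rook_attacks[64][4096];     // attack sets indexed by hashed occupancy

inline uint64_t rook_attacks_from(int sq, uint64_t occupied) {
    uint64_t blockers = occupied & rook_mask[sq];   // keep only squares that can block
    uint64_t index = (blockers * rook_magic[sq]) >> rook_shift[sq];
    return rook_attacks[sq][index];                 // O(1) sliding-piece attack lookup
}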

Thanks to the community! You are a great help.

Philippe
Carbec
Posts: 160
Joined: Thu Jan 20, 2022 9:42 am
Location: France
Full name: Philippe Chevalier

Re: Zangdar - petit sorcier deviendra grand

Post by Carbec »

Hello,

A new release today, 2.14.03, with several additions:

+ an opening book; not very useful in practice, but interesting to learn
+ at last a better evaluation: first positional elements (passed pawns, isolated pawns, doubled pawns),
and second piece mobility. I tried to implement king safety, but didn't find anything convincing.
This will be a future development.
+ deeper search with Late Move Reduction (replacing PVS) and razoring; a sketch of the reduction idea follows below.
+ and finally parallel search with Lazy SMP. I couldn't fully test whether it's really effective, as I don't have
a computer with many cores, only 4 cores with hyperthreading.
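
For reference, the reduction idea looks roughly like this inside the move loop of a negamax search. The thresholds and all helper names (Position, generate_moves, make_move, and so on) are placeholders for this sketch, not the actual Zangdar code:

Code: Select all

#include <vector>

// Placeholder board types and helpers so the sketch stands alone.
struct Position {};
struct Move {};
std::vector<Move> generate_moves(const Position&) { return {}; }
void make_move(Position&, const Move&) {}
void unmake_move(Position&, const Move&) {}
bool is_quiet(const Move&) { return true; }
bool in_check(const Position&) { return false; }
int  evaluate(const Position&) { return 0; }

int search(Position& pos, int depth, int alpha, int beta) {
    if (depth <= 0) return evaluate(pos);

    int moves_searched = 0;
    for (const Move& m : generate_moves(pos)) {
        make_move(pos, m);

        // Late Move Reduction: reduce late, quiet moves at sufficient depth.
        int reduction = 0;
        if (depth >= 3 && moves_searched >= 4 && is_quiet(m) && !in_check(pos))
            reduction = 1;

        int score = -search(pos, depth - 1 - reduction, -beta, -alpha);

        // A reduced move that still beats alpha gets a full-depth re-search.
        if (reduction > 0 && score > alpha)
            score = -search(pos, depth - 1, -beta, -alpha);

        unmake_move(pos, m);
        ++moves_searched;

        if (score >= beta) return beta;      // fail-hard beta cutoff
        if (score > alpha) alpha = score;
    }
    return alpha;
}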

So, I think Zangdar is about 2550 Elo. Extremely modest when I look at the top engines,
but well, it's interesting.

Philippe
Carbec
Posts: 160
Joined: Thu Jan 20, 2022 9:42 am
Location: France
Full name: Philippe Chevalier

Re: Zangdar - petit sorcier deviendra grand

Post by Carbec »

Hello,

After waiting a long time, I am now in the process of adding a tuner to Zangdar.
I read some posts here, and it appears that there are at least two methods
in use: Adagrad and Adam.

Is there a reason to use one instead of the other?
I also read the paper by Andrew Grant, but I find the mathematics extremely
difficult to understand.
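
For reference, as far as I understand them, the two update rules themselves are short. A minimal sketch for a single parameter w with gradient g (standard hyper-parameter names and defaults, nothing taken from Zangdar):

Code: Select all

#include <cmath>

struct Adagrad {
    double sum_sq = 0.0;                          // running sum of squared gradients
    void step(double& w, double g, double lr = 1.0, double eps = 1e-8) {
        sum_sq += g * g;
        w -= lr * g / (std::sqrt(sum_sq) + eps);  // per-parameter adaptive step size
    }
};

struct Adam {
    double m = 0.0, v = 0.0;                      // first and second moment estimates
    int t = 0;
    void step(double& w, double g, double lr = 0.001,
              double b1 = 0.9, double b2 = 0.999, double eps = 1e-8) {
        ++t;
        m = b1 * m + (1.0 - b1) * g;
        v = b2 * v + (1.0 - b2) * g * g;
        double m_hat = m / (1.0 - std::pow(b1, t));   // bias correction
        double v_hat = v / (1.0 - std::pow(b2, t));
        w -= lr * m_hat / (std::sqrt(v_hat) + eps);
    }
};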

Thanks for any information.

Philippe
Chessqueen
Posts: 5685
Joined: Wed Sep 05, 2018 2:16 am
Location: Moving
Full name: Jorge Picado

Re: Zangdar - petit sorcier deviendra grand

Post by Chessqueen »

Zangdar is a solid 2900-rated engine

Rank Engine Score Za Cl S-B
1 Zangdar-2.24-avx2 15.5/20 · ·· ·· ·· ·· ·· ·· ·· ·· =1111=1==1111=1==011 69.75
2 ClovisIII 4.5/20 =0000=0==0000=0==100 · ·· ·· ·· ·· ·· ·· ·· ·· 69.75

20 games played / Tournament is finished

Tournament start: 2023.12.24, 07:35:27
Latest update: 2024.01.04, 15:07:38
Site/ Country: DESKTOP-4QNC0GS, United States
Level: Blitz 3/2
Hardware: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 15.9 GB Memory
Operating system: Windows 10 Home Edition (Build 9200) 64 bit
PGN-File: 1100=1600.pgn
Table created with: Arena 3.5.1

==============================================================================================================================

Rank Engine Score Za Du S-B
1 Zangdar-2.24-avx2 29.5/40 · ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· · 111=11=110101=1=011==101=111==11=10111=1 309.75
2 Dumb-2.0 10.5/40 000=00=001010=0=100==010=000==00=01000=0 · ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· ·· · 309.75


40 games played / Tournament is finished

Tournament start: 2023.12.24, 07:35:27
Latest update: 2024.01.06, 05:53:08
Site/ Country: DESKTOP-4QNC0GS, United States
Level: Blitz 3/2
Hardware: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 15.9 GB Memory
Operating system: Windows 10 Home Edition (Build 9200) 64 bit
PGN-File: 1100=1600.pgn
Table created with: Arena 3.5.1
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Zangdar - petit sorcier deviendra grand

Post by mvanthoor »

Carbec wrote: Sat Jan 06, 2024 1:30 pm Hello,

After waiting a long time, I am now in the process of adding a tuner to Zangdar.
Zangdar is 2910 on the CCRL list, and it isn't even tuned? Impressive. Where did you get your values from; or are you a very good chess player who can fill them in yourself?

As a first attempt, you can write a Texel tuner. It is slow to tune, but it's fairly easy to write. Basically:

1. Convert your evaluation parameters (PSTs and others) into one list of parameters.
2. Load the Zurichess Quiet Labeled data set. (It contains FEN strings and the game result for each position.)
3. Make data points out of the Zurichess set, containing the FEN, the result, and an evaluation error.
4. Now run through your data points.
5. Execute your evaluation function on each position.
6. Convert the evaluation to a [0..1] range using a sigmoid function (see the Texel Tuning page on the Chess Programming Wiki).
7. Take the difference between the game result and the sigmoid-converted evaluation, and store this in your data point.
8. Calculate the mean squared error: loop over the data points, do eval_error * eval_error for each point, add all the values, and divide the result by the number of data points.
9. Save this as your best_mean_squared_error.
10. Now start a loop over your list of tunable variables.
11. Change the first variable by +1 and run through your ENTIRE data set again. Set the eval_error for each data point and recalculate the MSE. If it improves, you're done: go to the next variable. If not, try this variable with -1. If the MSE improves, you're done: go to the next variable. If not, reset the variable.
12. Keep doing this as long as the MSE keeps improving (or until a certain number of loops). As soon as the MSE doesn't improve anymore (or you hit the number of loops over your variable list), convert the variable list back to your evaluation weights and dump them to disk so you can include them in your engine.

If the above isn't correct, then I've made a mistake in explaining it, or I'm implementing my tuner the wrong way. I hope it's an explanation mistake, then...
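
For illustration, steps 4 to 9 boil down to something like this. DataPoint, evaluate() and the scaling constant K are placeholder names for this sketch, not Rustic's or Zangdar's actual code:

Code: Select all

#include <cmath>
#include <string>
#include <vector>

struct DataPoint {
    std::string fen;      // the position
    double result;        // 1.0 = white win, 0.5 = draw, 0.0 = black win
    double eval_error;    // result minus sigmoid(eval), filled in below
};

// Stub: replace with your engine's static evaluation (in centipawns).
double evaluate(const std::string& fen) { return 0.0; }

// Step 6: map a centipawn score to [0..1]; K is a scaling constant fitted once.
double sigmoid(double eval_cp, double K = 1.13) {
    return 1.0 / (1.0 + std::pow(10.0, -K * eval_cp / 400.0));
}

// Steps 4-8: evaluate every data point, store its error, and compute the MSE.
double mean_squared_error(std::vector<DataPoint>& points) {
    double sum = 0.0;
    for (DataPoint& p : points) {
        p.eval_error = p.result - sigmoid(evaluate(p.fen));
        sum += p.eval_error * p.eval_error;
    }
    return sum / points.size();   // step 9: save this as best_mean_squared_error
}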
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
JacquesRW
Posts: 119
Joined: Sat Jul 30, 2022 12:12 pm
Full name: Jamie Whiting

Re: Zangdar - petit sorcier deviendra grand

Post by JacquesRW »

mvanthoor wrote: Sat Jan 06, 2024 4:44 pm Zangdar is 2910 on the CCRL list, and it isn't even tuned? Impressive. Where did you get your values from; or are you a very good chess player who can fill them in yourself?
Looks like it uses Weiss values: https://github.com/Carbecq/Zangdar/blob ... uate.h#L26

I'd argue being a good chess player has little bearing on whether you can produce good evaluation values. Your first (bug-free) tune from purely hand-picked values should gain >100 Elo, as long as you use decent data (e.g. Zurichess, as you mentioned), even if you only have PSTs.

Additionally, I'd recommend using gradient descent for tuning; it's little effort to set up, and if you don't want to do the simple gradient calculation yourself you can always just refer to https://github.com/AndyGrant/Ethereal/b ... Tuning.pdf. Not only is it much, much faster, there's also a lot more to experiment with that can potentially yield more Elo.
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Zangdar - petit sorcier deviendra grand

Post by mvanthoor »

JacquesRW wrote: Sat Jan 06, 2024 7:48 pm Looks like it uses Weiss values: https://github.com/Carbecq/Zangdar/blob ... uate.h#L26
OK; lots of engines use Weiss' evaluation parameters. People naturally end up there when they discover that Weiss is a VICE descendant.
I'd argue being a good chess player has little bearing on whether you can produce good evaluation values
I disagree. The first version of my engine, using my own evaluation parameters, is about 70 Elo stronger than when using the parameters from the "Simplified Evaluation Function" page of the CPW.
Your first (bug-free) tune from purely hand-picked values should gain >100 Elo, as long as you use decent data (e.g. Zurichess, as you mentioned), even if you only have PSTs.
But still, it can indeed be improved. My current test-version uses MinimalChess' tapered values until I finish my tuner.
Additionally, I'd recommend using gradient descent for tuning; it's little effort to set up, and if you don't want to do the simple gradient calculation yourself you can always just refer to https://github.com/AndyGrant/Ethereal/b ... Tuning.pdf. Not only is it much, much faster, there's also a lot more to experiment with that can potentially yield more Elo.
At some point I'll implement (stochastic) gradient descent. I know how it works for a single function (basically, find the lowest point, where the derivative is 0), but I don't understand it yet for a large set of parameters. I'll first implement Texel tuning, optimize it, write a tutorial in Rustic's book, and then I'll look into faster/better ways of tuning. I'll probably end up with an NNUE somewhere in the (far) future, but I want to go through ALL the steps for the purpose of writing said book.
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
JacquesRW
Posts: 119
Joined: Sat Jul 30, 2022 12:12 pm
Full name: Jamie Whiting

Re: Zangdar - petit sorcier deviendra grand

Post by JacquesRW »

mvanthoor wrote: Sat Jan 06, 2024 9:40 pm I disagree. The first version of my engine, using my own evaluation parameters, is about 70 Elo stronger than when using the parameters from the "Simplified Evaluation Function" page of the CPW.
The Simplified Evaluation Function page is pretty well established to be pure trash.
If you check that page: https://www.chessprogramming.org/Simpli ... n_Function
You'll see that eval function was created by this guy: https://www.chessprogramming.org/Tomasz_Michniewski
Whose FIDE profile is linked: https://ratings.fide.com/profile/1111167
Though he isn't active anymore, he appears to be rated 2173 - which I'd certainly describe as good, so I'd say this adds evidence to my view.
mvanthoor wrote: Sat Jan 06, 2024 9:40 pm I know how it works on a single function (basically find the lowest point where the derivative is 0), but I don't understand it yet for a large set of parameters.
It's exactly the same: rather than taking the derivative of your loss function w.r.t. your single parameter, you take the partial derivative of the loss function w.r.t. each parameter, which boils down to treating everything else as constant and just taking the derivative. Then your single step for a batch/epoch adjusts each parameter the same way you would in the single-variable case, using its corresponding partial derivative.
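Concretely, for a Texel-style MSE loss over a linear evaluation eval = sum_i w[i] * x[i], one full-batch step looks something like this (a sketch with illustrative names, not any particular engine's tuner):

Code: Select all

#include <cmath>
#include <cstddef>
#include <vector>

// Map a centipawn score to [0..1]; K is the usual scaling constant.
double sigmoid(double eval_cp, double K = 1.13) {
    return 1.0 / (1.0 + std::pow(10.0, -K * eval_cp / 400.0));
}

struct Sample {
    std::vector<double> x;   // one feature count per tunable parameter
    double result;           // 1.0 / 0.5 / 0.0
};

// One gradient-descent step on all parameters at once.
void gradient_step(std::vector<double>& w, const std::vector<Sample>& data,
                   double lr, double K = 1.13) {
    std::vector<double> grad(w.size(), 0.0);

    for (const Sample& s : data) {
        double eval = 0.0;
        for (std::size_t i = 0; i < w.size(); ++i) eval += w[i] * s.x[i];

        double sig = sigmoid(eval, K);
        // Partial derivative of (result - sig)^2 w.r.t. w[i]:
        // everything except the factor x[i] is shared by all parameters.
        double common = -2.0 * (s.result - sig) * sig * (1.0 - sig)
                        * K * std::log(10.0) / 400.0;
        for (std::size_t i = 0; i < w.size(); ++i) grad[i] += common * s.x[i];
    }

    for (std::size_t i = 0; i < w.size(); ++i)
        w[i] -= lr * grad[i] / data.size();   // step against the mean gradient
}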
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Zangdar - petit sorcier deviendra grand

Post by mvanthoor »

JacquesRW wrote: Sun Jan 07, 2024 1:31 am The Simplified Evaluation Function page is pretty well established to be pure trash.
If you check that page: https://www.chessprogramming.org/Simpli ... n_Function
You'll see that eval function was created by this guy: https://www.chessprogramming.org/Tomasz_Michniewski
Whose FIDE profile is linked: https://ratings.fide.com/profile/1111167
Though he isn't active anymore, he appears to be rated 2173 - which I'd certainly describe as good, so I'd say this adds evidence to my view.
Well; I'm not rated nearly 2200. Never was. As I said: even though the current (self-written) evaluation is better than most other engines in the same strength range, it can certainly be improved by tuning. My test version with a tapered+tuned evaluation is about 250 Elo points stronger than the current Alpha 3 version (using MinimalChess's parameters). That'd put it at roughly 2160 in the CCRL list, and it still doesn't have anything but alpha/beta, PVS, killer moves, and PSQTs.

If the tuner I'm writing can get it into the 2150 ballpark, so I can replace the values with my own versions while gaining 250 Elo, I'm happy.
It's exactly the same: rather than taking the derivative of your loss function w.r.t. your single parameter, you take the partial derivative of the loss function w.r.t. each parameter, which boils down to treating everything else as constant and just taking the derivative. Then your single step for a batch/epoch adjusts each parameter the same way you would in the single-variable case, using its corresponding partial derivative.
Thanks, I'll probably look into it. I still want to complete the 'standard' Texel tuner first, and then add the (stochastic) gradient descent tuner later. (It can reuse all the other parts of the Texel tuner, obviously, except the calculation function.)
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
Carbec
Posts: 160
Joined: Thu Jan 20, 2022 9:42 am
Location: France
Full name: Philippe Chevalier

Re: Zangdar - petit sorcier deviendra grand

Post by Carbec »

Hello,

Many thanks for the lengthy explanation. I already got the files from Zurichess (quiet-labeled.epd, v6 and v7).
There are many things I am unfamiliar with. I will go step by step and try to understand it all.
I have already read Andrew Grant's paper; well, it's like ancient Greek :?

In effect, I began with the PeSTO tables, then later used the Weiss ones. They are very good, so I didn't
change them for a while. But I think that now I have to implement something if I want to test different parameters
in the evaluation.

I am a decent club player, although I don't play OTB now; age and illness came... I could put in non-stupid
values for the parameters, but good ones?? Also, I'm not sure that "human" strategic ideas fully apply
to computer play. For sure, I prefer looking at master games rather than computer games!

Philippe