New Release of Barbarossa - 0.7.0

Discussion of chess software programming and technical issues.

Moderators: hgm, chrisw, Rebel

nionita
Posts: 179
Joined: Fri Oct 22, 2010 9:47 pm
Location: Austria
Full name: Niculae Ionita

New Release of Barbarossa - 0.7.0

Post by nionita »

Hey there,

I just released version 0.7.0 of Barbarossa, see https://github.com/nionita/Barbarossa/releases (Windows and Linux binaries included). It seems that I hit a wall with the current architecture, because I tried a lot of ideas, but there is not much improvement in playing strength, probably under 30 Elo.

Regards, Nicu
User avatar
Ras
Posts: 2638
Joined: Tue Aug 30, 2016 8:19 pm
Full name: Rasmus Althoff

Re: New Release of Barbarossa - 0.7.0

Post by Ras »

nionita wrote: Mon Nov 04, 2024 11:19 pm
It seems that I hit a wall with the current architecture
I'd guess that with Texel tuned material values and PSQTs, using tapered eval for both, you might milk out another 150 Elo.
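To spell out what I mean by tapered eval: keep separate middlegame and endgame values for material and the PSQTs, tune both sets, and blend them by game phase. A minimal Python sketch of the blending only (the phase weights are the common convention, not anything taken from Barbarossa or CT800):

Code:

# Blend middlegame and endgame scores by remaining material (game phase).
PHASE_WEIGHTS = {"N": 1, "B": 1, "R": 2, "Q": 4}
PHASE_MAX = 24   # full material: 4*N + 4*B + 4*R + 2*Q = 4 + 4 + 8 + 8

def game_phase(piece_counts):
    """piece_counts: counts for both sides, e.g. {"N": 4, "B": 4, "R": 4, "Q": 2}."""
    phase = sum(PHASE_WEIGHTS.get(p, 0) * n for p, n in piece_counts.items())
    return min(phase, PHASE_MAX)

def tapered_eval(mg_score, eg_score, piece_counts):
    """Interpolate: full material -> pure mg score, bare kings -> pure eg score."""
    phase = game_phase(piece_counts)
    return (mg_score * phase + eg_score * (PHASE_MAX - phase)) // PHASE_MAX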
Rasmus Althoff
https://www.ct800.net
User avatar
a_node_uncut
Posts: 11
Joined: Sun Nov 10, 2024 9:58 am
Full name: Max Lewicki

Re: New Release of Barbarossa - 0.7.0

Post by a_node_uncut »

Nice project! One little nitpick in the README:

Code:

The last released version is Barbarossa v0.4.0 from December 2016.
Time to update this? :P
nionita
Posts: 179
Joined: Fri Oct 22, 2010 9:47 pm
Location: Austria
Full name: Niculae Ionita

Re: New Release of Barbarossa - 0.7.0

Post by nionita »

Yes, thanks, I completely forgot the README. I will update it ASAP.

Best regards, Nicu
nionita
Posts: 179
Joined: Fri Oct 22, 2010 9:47 pm
Location: Austria
Full name: Niculae Ionita

Re: New Release of Barbarossa - 0.7.0

Post by nionita »

Ras wrote: Tue Nov 12, 2024 12:07 am
nionita wrote: Mon Nov 04, 2024 11:19 pm
It seems that I hit a wall with the current architecture
I'd guess that with Texel tuned material values and PSQTs, using tapered eval for both, you might milk out another 150 Elo.
Everybody seems to have success with the Texel method, but for me it never worked. Last year I could not believe this and spent about two months trying to see what I was doing wrong. In the end I came to the conclusion that the only thing I was doing differently was the optimization method. With my method (Bayes) I could lower the error significantly, but there was no Elo gain. This was so absurd that I gave up.
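For reference, the error I was lowering is the usual Texel one: the mean squared difference between the game result and a sigmoid of the static eval. A rough Python sketch (the scaling constant K and the names here are only illustrative):

Code:

def texel_error(positions, evaluate, K=1.13):
    """Mean squared error between a sigmoid of the static eval and the game result.

    positions: list of (position, result), result in {1.0, 0.5, 0.0} from White's view
    evaluate:  static eval in centipawns from White's view
    K:         scaling constant, usually fitted once before tuning starts
    """
    total = 0.0
    for pos, result in positions:
        q = evaluate(pos)
        sigmoid = 1.0 / (1.0 + 10.0 ** (-K * q / 400.0))
        total += (result - sigmoid) ** 2
    return total / len(positions)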

Nicu
User avatar
Ras
Posts: 2638
Joined: Tue Aug 30, 2016 8:19 pm
Full name: Rasmus Althoff

Re: New Release of Barbarossa - 0.7.0

Post by Ras »

nionita wrote: Thu Nov 14, 2024 7:50 pm
Everybody seems to have success with the Texel method, but for me it never worked.
A lot depends on the training data. A very good starter is the Zurichess quiet set because it's small and still rocks. I didn't get good results with self-generated training data, not even from games against different engines.
nionita wrote: Thu Nov 14, 2024 7:50 pm
the only thing I was doing differently was the optimization method.
I didn't even use anything complicated, just step width and number of iterations as parameters, going from coarse to fine, plus a small penalty for deviating from the default values, which prevents run-away. The actual optimisation is KISS: if, within the current iteration at a given step width, the total error over all positions goes down when a parameter is moved by +/- the step width, then add/subtract the step width. The only somewhat smart thing is that I check whether the current parameter even has any impact on the current position, so that I can skip the add/subtract steps in most cases.
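In pseudo-code the loop is roughly the following. This is only a sketch of the idea, not CT800's actual code; the impact check is left out and the anchoring to the default values is simplified to a quadratic penalty:

Code:

def tune(params, defaults, positions, error_fn,
         step_widths=(8, 4, 2, 1), iterations=30, anchor=1e-4):
    """Coarse-to-fine local search over eval parameters.

    params:    list of parameter values to tune (modified in place)
    defaults:  original values; deviating from them costs a small penalty,
               which prevents run-away on noisy training data
    error_fn:  error_fn(params, positions) -> total error over all positions
    """
    def cost(p):
        penalty = anchor * sum((a - b) ** 2 for a, b in zip(p, defaults))
        return error_fn(p, positions) + penalty

    best = cost(params)
    for step in step_widths:                    # from coarse to fine
        for _ in range(iterations):
            improved = False
            for i in range(len(params)):
                for delta in (step, -step):
                    params[i] += delta
                    e = cost(params)
                    if e < best:                # keep the improvement
                        best = e
                        improved = True
                        break
                    params[i] -= delta          # undo, try the other direction
            if not improved:                    # nothing moved at this step width
                break
    return params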
Rasmus Althoff
https://www.ct800.net