Hey there,
I just released version 0.7.0 of Barbarossa, see https://github.com/nionita/Barbarossa/releases (Windows and Linux binaries included). It seems that I have hit a wall with the current architecture: I tried a lot of ideas, but there is not much improvement in playing strength, probably under 30 Elo.
Regards, Nicu
New Release of Barbarossa - 0.7.0
-
- Posts: 179
- Joined: Fri Oct 22, 2010 9:47 pm
- Location: Austria
- Full name: Niculae Ionita
-
- Posts: 2638
- Joined: Tue Aug 30, 2016 8:19 pm
- Full name: Rasmus Althoff
Re: New Release of Barbarossa - 0.7.0
I'd guess that with Texel-tuned material values and PSQTs, using a tapered eval for both, you might milk out another 150 Elo.
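As a rough illustration of the tapered-eval part, here is a minimal Python sketch with made-up values and names; it is not code from Barbarossa or CT800, just the usual mg/eg blend over material and PSQT terms:
Code: Select all
# Minimal tapered-eval sketch: every tunable term has a middlegame (mg)
# and an endgame (eg) value; the game phase blends between them.
# Piece values, phase weights and parameter names are illustrative only.

MG, EG = 0, 1
PIECE_VALUES = {            # (mg, eg) pairs in centipawns
    'P': (82, 94), 'N': (337, 281), 'B': (365, 297),
    'R': (477, 512), 'Q': (1025, 936),
}
PHASE_WEIGHT = {'P': 0, 'N': 1, 'B': 1, 'R': 2, 'Q': 4}
MAX_PHASE = 24              # per side: 4 minors + 2 rooks*2 + 1 queen*4 = 12

def tapered_eval(pieces, psqt_mg, psqt_eg):
    """pieces: list of (piece, square, sign), sign +1 for white, -1 for black."""
    mg = eg = phase = 0
    for piece, square, sign in pieces:
        mg += sign * (PIECE_VALUES[piece][MG] + psqt_mg[piece][square])
        eg += sign * (PIECE_VALUES[piece][EG] + psqt_eg[piece][square])
        phase += PHASE_WEIGHT[piece]
    phase = min(phase, MAX_PHASE)
    # Linear blend: full middlegame weight at phase == MAX_PHASE, full endgame at 0.
    return (mg * phase + eg * (MAX_PHASE - phase)) // MAX_PHASE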
Rasmus Althoff
https://www.ct800.net
-
- Posts: 11
- Joined: Sun Nov 10, 2024 9:58 am
- Full name: Max Lewicki
Re: New Release of Barbarossa - 0.7.0
Nice project! One little nitpick in the README:
Code: Select all
The last released version is Barbarossa v0.4.0 from December 2016.
Time to update this? :P
-
- Posts: 179
- Joined: Fri Oct 22, 2010 9:47 pm
- Location: Austria
- Full name: Niculae Ionita
Re: New Release of Barbarossa - 0.7.0
Yes, thanks, I completely forgot about the README; I will update it ASAP.
Best regards, Nicu
-
- Posts: 179
- Joined: Fri Oct 22, 2010 9:47 pm
- Location: Austria
- Full name: Niculae Ionita
Re: New Release of Barbarossa - 0.7.0
Everybody seems to have success with the Texel method, but for me it never worked. Last year I could not believe this and spent about two months trying to find out what I was doing wrong. In the end I came to the conclusion that the only thing I was doing differently was the optimization method. With my method (Bayes) I could lower the error significantly, but there was no Elo gain. This was so absurd that I gave up.
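For readers unfamiliar with it, the error being minimized is the usual Texel objective: the mean squared difference between a sigmoid of the static evaluation and the game result. A minimal sketch, assuming that standard form of the objective (names and the scaling constant are illustrative, not anyone's actual tuner):
Code: Select all
# Texel-tuning objective: mean squared error between a sigmoid of the
# static evaluation and the game result (1 = white win, 0.5 = draw, 0 = loss).
# K is a scaling constant fitted once so the default eval minimizes the error;
# 1.13 here is just a placeholder.

def texel_error(positions, evaluate, K=1.13):
    """positions: list of (position, result); evaluate: static eval in centipawns."""
    total = 0.0
    for pos, result in positions:
        score = evaluate(pos)                       # eval with the current parameters
        predicted = 1.0 / (1.0 + 10.0 ** (-K * score / 400.0))
        total += (result - predicted) ** 2
    return total / len(positions)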
Nicu
-
- Posts: 2638
- Joined: Tue Aug 30, 2016 8:19 pm
- Full name: Rasmus Althoff
Re: New Release of Barbarossa - 0.7.0
A lot depends on the training data. A very good starter is the Zurichess quiet set because it's small and still rocks. I didn't get good results with self-generated training data, not even from games against different engines.
I didn't even use anything complicated, just step width and number of iterations as parameters, going from coarse to fine, plus a small penalty for deviating from the default values, which prevents run-away. The actual optimisation is KISS: if, within the current iteration with a given step width, the total error over all positions goes down for a given parameter when going +/- one step width, then add/subtract the step width. The only somewhat smart thing is that I check whether the current parameter has any impact on the current position at all, so that I can omit the add/subtract steps in most cases.
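In pseudo-Python the scheme described above is roughly this; a sketch with illustrative names only, and the per-position impact check is omitted for brevity:
Code: Select all
# Coordinate-descent tuner, coarse to fine: for each parameter, try +/- the
# current step width and keep whichever change lowers the total error.
# A small quadratic penalty for deviating from the defaults prevents run-away.
# All names and constants are illustrative, not taken from CT800.

def tune(params, defaults, error_fn, steps=(16, 8, 4, 2, 1),
         iters_per_step=3, penalty=1e-4):
    def objective(p):
        reg = penalty * sum((v - d) ** 2 for v, d in zip(p, defaults))
        return error_fn(p) + reg

    best = objective(params)
    for step in steps:                      # coarse to fine
        for _ in range(iters_per_step):
            for i in range(len(params)):
                for delta in (+step, -step):
                    params[i] += delta
                    err = objective(params)
                    if err < best:
                        best = err          # keep the improvement
                        break
                    params[i] -= delta      # revert and try the other direction
    return params, best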
Rasmus Althoff
https://www.ct800.net