jesper_nielsen wrote: Hello TalkChess!
I wonder how many of you have had the same frustrating experience as me.
You can definitely add me to the club!
When I started writing my engine, Pupsi, there were steady improvements for roughly a year and a half! It was wonderful! Thrilling! Exciting!
But after the latest release, v0.18, I hit a roadblock.
Nothing gives any improvement! I have tried many, many things, but all of them resulted in roughly zero elo, so they were all discarded. Countless hours down the drain.
The frustration has set in!
In my experience, this is usually due to bugs in the code. I went through the same situation with Kiwi, where rewriting the whole evaluation function from scratch gave absolutely no elo change: not better, not worse. It took me a lot of time to find a _huge_ bug in the hash table implementation, but fixing that bug gave me 120 elo instantly. The bug had been dominating all other changes by altering the evaluation in an essentially random way.
BTW I recently (after 1.5 years) had another look at the code and could immediately find another obvious bug... in the hash code!
I must be beyond redemption...
With Hamsters, I did a complete rewrite using entirely different data structures and algorithms. This engine is considerably simpler and faster than Kiwi, yet barely stronger, and I got to this point only by... fixing bugs! In the case of Hamsters, I must add: "...and small details".
Like Tord said once, there can be hundreds of tiny bugs or suboptimal details in a chess program, and even if they are worth only 2 or 3 elo each, we are talking about a huge difference in the end. I spent a lot of time looking for these, and eventually this (boring) job did pay off in terms of elo strength.
If I compare, say, Hamsters and Fruit, I wonder: where's the difference? We have more or less the same feature set at a high level, yet there are 250+ elo points between the two... it makes sense to look at bugs and details IMO.