Argh, my program gets worse test suite results after bugfix

Discussion of chess software programming and technical issues.

Moderator: Ras

Tony

Re: Argh, my program gets worse test suite results after bug

Post by Tony »

Ratta wrote:
Tony wrote: Not sure how much time I'll spend on this, but a couple of things. (Tips vary from essential to optimization; you do the categorizing yourself.)
Wow, you are amazing. Thanks a lot!
Tony wrote: - Your hashtable writing code overwrites deep entries with shallow entries (when the key is the same). Pretty disastrous in endgames
Mh, no, this is not being done (up to a programming mistake). The only deeper entries that can be overwritten are the "old" ones, i.e. those resulting from a previous call to "find_best_move" (this could make pondering less effective, but it is required to avoid clogging the hashtable with useless positions).
Tony wrote: - I seriously question doing a nullmove when only 1 piece is present (and you have 2 pawns)
Yeah, my null-move checking function is still very rough. (See the null-move sketch after this quote.)
Tony wrote: - (Especially for the FIRST ply in quiescence) Don't take the evaluation as the best score when you're in check. (Maybe it can't happen in your code.)
This can't happen (up to programming mistake, as usual).
Tony wrote: - Do the (material) counting stuff incrementally.
Yeah, let's say that at the moment I'm just trying to achieve the highest strength/speed ratio :) (a sketch of the incremental approach follows after this quote)
Tony wrote: - Don't know if I understood this correctly, but giving more than half a pawn bonus for attacking a piece with a pawn, gives serious horizon effects.
The bonus is given only if the side with the attacking pawn is to move, or if there are two (or more) pawns attacking two (or more) different pieces.
Tony wrote: - I would suggest splitting normal search and quiescence search in your code. They behave quite differently.
Yeah, there is still a lot of cleanup waiting :)

Regards!
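On the null-move point quoted above: here is a minimal sketch of common preconditions, only to illustrate the idea of avoiding null moves in zugzwang-prone positions such as pure pawn endings. All names are placeholders made up for the example; this is not rattatechess code.

Code: Select all

    /* Illustrative null-move preconditions (placeholder names). */
    typedef struct {
        int in_check;            /* side to move is in check */
        int non_pawn_material;   /* total value of own pieces, pawns excluded */
        int previous_was_null;   /* the move just made was a null move */
    } NodeInfo;

    static int null_move_allowed(const NodeInfo *n)
    {
        if (n->in_check)                 return 0;  /* cannot pass while in check */
        if (n->previous_was_null)        return 0;  /* no two null moves in a row */
        if (n->non_pawn_material < 300)  return 0;  /* king + pawns only: zugzwang danger */
        return 1;
    }

And on the incremental counting point, a minimal sketch of keeping a running material balance updated in make/unmake instead of re-summing it at every node (again placeholder names, not rattatechess code):

Code: Select all

    /* Illustrative incremental material bookkeeping (placeholder names). */
    enum { WHITE, BLACK };

    typedef struct {
        int material[2];   /* summed piece values per side, kept in sync by make/unmake */
    } MaterialCount;

    static void capture_piece(MaterialCount *m, int victim_side, int value)
    {
        m->material[victim_side] -= value;   /* called from make_move on a capture */
    }

    static void restore_piece(MaterialCount *m, int victim_side, int value)
    {
        m->material[victim_side] += value;   /* called from unmake_move */
    }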

Maybe I'm understanding this wrong, but let's go through the code:

Code: Select all

    /* probe up to REHASH consecutive slots, stopping at the first empty
       slot or at the slot whose verification key matches */
    for(int i=0;i<REHASH;i++)
    {
        HashEntry* tmp = &hash_table[(hk.index + i) & HMASK];
        if(!tmp->check || tmp->check == hk.check)
        {
            h = tmp;
            break;
        }
    }
Ok, found the entry

Code: Select all



    if(!h) /* look for an entry searched at lower depth */
h!=0 so skip

Code: Select all


    if(!h) /* look for an old entry and take the one searched at lowest depth */
 
h!=0 so skip

Code: Select all



    if(!h)
        return;
#if 0
    if(h->check == hk.check && h->depth>depth)
        return;

    if(h->check == hk.check && h->depth==depth)
    {
        /* same entry, improve bounds */
        h->up = MIN(h->up, up);
        h->lo = MAX(h->lo, lo);
    }
    else
    {
        /* replace bounds */
        h->up = up;
        h->lo = lo;
    }
#endif

This code would correctly avoid overwriting deeper entries, but it isn't active (it is inside #if 0).

Code: Select all



    h->up = up;
    h->lo = lo;

    h->check = hk.check;
    h->depth = depth;
    h->best_mv = best_cont;
    h->no_good_moves = no_good_moves;
    h->is_old = 0;
}
This code, however, is active, so the entry is overwritten unconditionally.
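For comparison, here is a minimal sketch of a depth-preferred store in the spirit of the disabled block above, with the extra rule that entries marked old are always replaceable. The struct and the function are simplified placeholders, not the real rattatechess HashEntry or store routine:

Code: Select all

    /* Sketch of a depth-preferred store (simplified placeholder types). */
    #define MIN(a,b) ((a) < (b) ? (a) : (b))
    #define MAX(a,b) ((a) > (b) ? (a) : (b))

    typedef struct {
        unsigned check;      /* verification key */
        int depth, lo, up;   /* search depth and score bounds */
        int best_mv;         /* best move (type simplified here) */
        int no_good_moves;
        int is_old;          /* left over from a previous find_best_move call */
    } HashEntry;

    static void hash_store(HashEntry *h, unsigned check, int depth,
                           int lo, int up, int best_mv, int no_good_moves)
    {
        /* keep a same-position result searched to a greater depth,
           unless it is an old entry */
        if (h->check == check && !h->is_old && h->depth > depth)
            return;

        if (h->check == check && h->depth == depth)
        {
            /* same entry, same depth: tighten the bounds */
            h->up = MIN(h->up, up);
            h->lo = MAX(h->lo, lo);
        }
        else
        {
            /* different position or shallower entry: replace the bounds */
            h->up = up;
            h->lo = lo;
        }

        h->check = check;
        h->depth = depth;
        h->best_mv = best_mv;
        h->no_good_moves = no_good_moves;
        h->is_old = 0;
    }

The only point here is that the depth guard runs before the write, so a deep result is not silently lost to a shallow one.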


Tony wrote: - Don't adjust alpha and beta based on the hashtable scores. It seems theoretically correct, but in practice it isn't
Mh, I would like to understand this issue better. Suppose I store the lower bound "correctly", i.e. when a previous search at the same depth fails high (IIUC this should mean that the "true" value of the position, the value that would be calculated with a [-INF,+INF] window, is higher). If so, when doing a PV search I should be able to adjust the window, because I'm only removing value ranges that cannot contain the "true" value. Is there anything wrong with this? Or is there some other kind of "practical" issue?
IIRC, the issue is re-searches. A re-search might be done with different (extension) rules.
But I think the most important one is that when I get a score of beta back, on a re-search I'll adjust alpha to this beta, and this will make the search fail low when the real score is exactly beta. This happens especially in a fail-soft searcher, because those tend to return exact scores more often.

It can be solved by raising alpha only to hashtablescore-1 (i.e. beta-1), but this still doesn't solve problem 1. (Which doesn't matter, of course, if you don't do 1.)
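To make the second point concrete, here is a minimal sketch of the probe-time adjustment with that -1 margin (placeholder names, not code from either engine); note that it only addresses the exact-beta fail low, not the re-search/extension issue above:

Code: Select all

    /* Illustrative alpha adjustment from a stored fail-soft lower bound.
       'hash_lo' means the true score is >= hash_lo. Raising alpha all the
       way to hash_lo turns a true score of exactly hash_lo (the old beta)
       into a fail low on the re-search; stopping one point short avoids it. */
    static int adjust_alpha(int alpha, int hash_lo)
    {
        /* naive version: if (hash_lo > alpha) alpha = hash_lo; */
        if (hash_lo - 1 > alpha)
            alpha = hash_lo - 1;   /* hashtablescore - 1 */
        return alpha;
    }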


Tony wrote: - (Especially for the FIRST ply in quiescence) Don't take the evaluation as the best score when you're in check. (Maybe it can't happen in your code.)
This can't happen (up to programming mistake, as usual).
Could very well be, I couldn't figure it out easily, but as long as you never ENTER quiescence when in check, this won't be a problem.
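A minimal skeleton of the guard being described, assuming placeholder names (Position, in_check, evaluate) that are not from either engine:

Code: Select all

    typedef struct Position Position;     /* placeholder position type */
    int in_check(const Position *pos);    /* placeholder prototypes */
    int evaluate(const Position *pos);
    #define SCORE_INF 1000000

    static int quiescence(Position *pos, int alpha, int beta)
    {
        int best;

        if (in_check(pos))
        {
            /* the static eval is no lower bound while in check:
               no stand pat, evasions must be searched */
            best = -SCORE_INF;
        }
        else
        {
            best = evaluate(pos);          /* stand pat */
            if (best >= beta)
                return best;
            if (best > alpha)
                alpha = best;
        }

        /* ... generate and search captures (and evasions when in check),
           updating 'best' and 'alpha' ... */
        return best;
    }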

Cheers,

Tony
GothicChessInventor

Re: Argh, my program gets worse test suite results after bug

Post by GothicChessInventor »

Worst case scenario: put the bug back in :D
Ratta

Re: Argh, my program gets worse test suite results after bug

Post by Ratta »

@Tony:
Ah, I had misunderstood what you were saying about overwriting hashtable entries, and yes, you are right about what rattatechess currently does.
When I wrote the code I thought it would not be a bad thing, and that it was unlikely for a deep search to be replaced with a shallow one, because I would query the deep entry first and use it. But while writing this post I'm realizing that this may not be true: because of the alpha/beta bounds I may find a deep entry whose bounds I cannot use, and then actually replace it with a shallow one. Is this the problem?

@Ed:
Nah, I already started an almost complete rewrite of my engine; we'll see after that :)