Enpass + Castling for Zobrist hashes

Discussion of chess software programming and technical issues.


Re: Enpass + Castling for Zobrist hashes

Post by Evert » Mon Jan 09, 2017 2:21 pm

noobpwnftw wrote: If you think about it the other way, you will figure out that some of these "bugs" are just a way to achieve better strength; you just don't know why yet.
That happens, but it's certainly the exception. Typically fixing a bug will increase playing strength. If not immediately, then after you correctly tune evaluation parameters that were trying to compensate for the bug.
noobpwnftw wrote: First of all, Zobrist keys themselves can collide, although it happens very rarely. Omitting en-passant & castling just gives them more chances to collide; that is all I think of it.
This is not obvious. Can you prove this?
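For concreteness, here is a minimal Zobrist-style sketch (all names and table layouts are hypothetical, not taken from any particular engine) showing the castling-rights mask and the en-passant file folded into the key. Leaving those two XORs out makes positions that differ only in rights hash identically, i.e. guaranteed collisions on top of the random ones:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical Zobrist tables; a real engine fills these once at
   startup with high-quality pseudo-random numbers. */
static uint64_t piece_key[12][64];   /* [piece type][square] */
static uint64_t castle_key[16];      /* one key per castling-rights mask */
static uint64_t ep_key[8];           /* one key per en-passant file */
static uint64_t side_key;            /* side to move */

static uint64_t rand64(void) {
    /* crude 64-bit generator built from rand(); fine for a sketch */
    uint64_t r = 0;
    for (int i = 0; i < 4; i++)
        r = (r << 16) ^ (uint64_t)(rand() & 0xFFFF);
    return r;
}

void init_zobrist(void) {
    for (int p = 0; p < 12; p++)
        for (int s = 0; s < 64; s++)
            piece_key[p][s] = rand64();
    for (int c = 0; c < 16; c++) castle_key[c] = rand64();
    for (int f = 0; f < 8; f++)  ep_key[f] = rand64();
    side_key = rand64();
}

/* Full key for a position: XOR of all components. board[s] holds a
   piece code 0..11, or -1 for an empty square; ep_file is -1 when no
   en-passant capture is possible. */
uint64_t position_key(const int board[64], int castle_mask,
                      int ep_file, int white_to_move) {
    uint64_t k = 0;
    for (int s = 0; s < 64; s++)
        if (board[s] >= 0) k ^= piece_key[board[s]][s];
    k ^= castle_key[castle_mask & 15];   /* omit this ... */
    if (ep_file >= 0) k ^= ep_key[ep_file]; /* ... or this, and distinct
                                               positions share one key */
    if (white_to_move) k ^= side_key;
    return k;
}
```

Note that engines typically update the key incrementally in make/unmake rather than recomputing it; the same XOR terms apply either way.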
noobpwnftw wrote: If you consider every "incorrectness" a bug, then things like LMR and ProbCut shouldn't exist, similar to TT hash keys colliding, since the way they work is based on statistical results rather than on being "right".
There is a distinction to be made between speculative search techniques and bugs.
At the very least, the first are by design (the design may or may not be any good, but poor design is not the same as a bug).
noobpwnftw wrote: Then, there are other cheap ways to work around such colliding keys, e.g. you can verify during search that a TT move is indeed valid, if that gives better performance after all.
This is a poor analogy. Key collisions cannot help, and since they cause an incorrect evaluation to be returned when they do happen, they have to hurt the program. The question is: how much? As it turns out, very little.
If you can eliminate hash collisions for free (i.e., without increasing the size of a TT entry), it will improve the program (but very little, since the effect is small).
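One common way to implement the verification mentioned above, sketched here with hypothetical names: store the full 64-bit key in each entry and compare it on probe, so that an index collision (different position, same slot) is rejected. A real engine would store only the bits not used for indexing, and would additionally check the stored move for pseudo-legality before playing it.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical transposition-table entry layout. */
typedef struct {
    uint64_t key;    /* full Zobrist key for verification */
    int16_t  score;
    uint16_t move;   /* encoded move, engine-specific */
    uint8_t  depth;
} TTEntry;

enum { TT_BITS = 16, TT_SIZE = 1 << TT_BITS };
static TTEntry table[TT_SIZE];   /* always-replace scheme for brevity */

void tt_store(uint64_t key, int score, unsigned move, int depth) {
    TTEntry *e = &table[key & (TT_SIZE - 1)];
    e->key   = key;
    e->score = (int16_t)score;
    e->move  = (uint16_t)move;
    e->depth = (uint8_t)depth;
}

/* Returns 1 on a verified hit. A different position mapping to the
   same slot (same low TT_BITS bits) fails the full-key comparison. */
int tt_probe(uint64_t key, int *score, unsigned *move) {
    const TTEntry *e = &table[key & (TT_SIZE - 1)];
    if (e->key != key) return 0;
    *score = e->score;
    *move  = e->move;
    return 1;
}
```

This only filters index collisions; two distinct positions with identical 64-bit keys still pass, which is the residual collision rate the discussion is about.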
noobpwnftw wrote: Lastly, search & evaluation cannot be exactly precise; there are many trade-offs between performance and precision. You might write an evaluation function that is 10x more precise, or use no pruning strategies that introduce loss of information, and find that it hurts your Elo rather than improving it, simply because it runs slower.
Yeah. Again, that has nothing to do with "helpful bugs".
noobpwnftw wrote: Someone may say it is "dumb" not to use the robust version, but in fact it just sounds "dumb".
You can be smart and never generate underpromotions. Most of the time these are useless anyway. The question is whether you want your program to handle problems where the solution involves an underpromotion. If you do, then not generating them doesn't just sound dumb; it is.
In the same way: do you want to detect repetitions correctly, or not? If you do, then there is no short-cut: you have to do it correctly.
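A minimal sketch of correct repetition detection under that premise (names hypothetical): keep a stack of full position keys, castling and en-passant rights included, and compare the current key only against earlier same-side-to-move entries since the last irreversible move.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical history stack of full position keys. Each key is assumed
   to fold in piece placement, side to move, castling rights, and the
   en-passant file, so positions that differ only in rights never match. */
enum { MAX_PLY_HIST = 1024 };
static uint64_t history[MAX_PLY_HIST];
static int hist_len = 0;

void push_key(uint64_t key) { history[hist_len++] = key; }
void pop_key(void)          { hist_len--; }

/* Count earlier occurrences of the current position. history[] holds
   the keys of all positions before the current one, most recent last;
   same-side-to-move positions sit at every second index, so we step by
   2. Only entries since the last irreversible move (capture, pawn push,
   or rights change) can match; `last_irreversible` is that index. */
int count_repetitions(uint64_t current_key, int last_irreversible) {
    int n = 0;
    for (int i = hist_len - 2; i >= last_irreversible; i -= 2)
        if (history[i] == current_key) n++;
    return n;
}
```

Because the keys already encode castling/EP rights, no extra rule logic is needed here: a pair of "identical-looking" positions with different rights simply never compares equal.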

I could speed up my in-check detection by storing the in-check result in the transposition table (and skip it the second time a position is entered). However, I know that my program will blow up if it ever reaches a position where it incorrectly thinks that it is (or is not) in check. Do I think that is acceptable? Well, I don't, but someone else might make a different decision.

In short, not fixing bugs because they lose Elo is stupid. Accepting side-effects of a design intended to increase Elo is not necessarily stupid (but might be considered a mis-feature).


Re: Enpass + Castling for Zobrist hashes

Post by noobpwnftw » Mon Jan 09, 2017 2:27 pm

Ok, after reviewing the Western chess rules: a repetition draw requires the available moves in the repeated position to be exactly the same. I was under the impression that if one does not make such a castling/EP move while it is possible but moves something else, then after three occurrences it is still considered a draw.

With a proper understanding of the rules, you are right that they need to be counted in.

My definition of a bug does not include things that would make the program go wrong, as long as there is a way to handle the consequences properly. For your example of in-check detection: if you are aware that stored TT results may collide, causing probed in-check results to be incorrect, consider that you could likewise get a wrong PV from the TT, yet a verification search avoids that.
