So if the programmer+grandmaster can't explain the losing move, it might not be a bug in the code. However, it could still be a coding bug, couldn't it? Further, if the programmer+grandmaster feels that the neural net has so many games (i.e. so much data) in its training history that the losing move is very surprising, wouldn't that suggest the problem is a coding bug rather than insufficient learning?

"A neural net can make mistakes with code that is entirely bug-free ..."
A sample bug might be code that scores having the two bishops as a disadvantage rather than an advantage.
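To make the kind of bug in question concrete, here is a minimal sketch (my own illustration, not code from any real engine; the names and the 0.5-pawn bonus value are assumptions). The bishop-pair bonus should be added for the side that holds both bishops; a single flipped sign would quietly skew every position the engine evaluates.

```python
BISHOP_PAIR_BONUS = 0.5  # in pawns; an illustrative rule-of-thumb value

def material_eval(white_bishops: int, black_bishops: int) -> float:
    """Toy evaluation term, from White's point of view."""
    score = 0.0
    if white_bishops >= 2:
        score += BISHOP_PAIR_BONUS  # the buggy version would use -= here
    if black_bishops >= 2:
        score -= BISHOP_PAIR_BONUS
    return score

print(material_eval(2, 1))  # 0.5: the bishop pair correctly helps White
```

Note that such a bug never crashes or misbehaves visibly; it only shifts evaluations, which is exactly why it is hard to distinguish from "the net just hasn't learned enough yet."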
Perhaps I am misinterpreting your response and what the chess AI is doing. If a subtle bug were introduced, would the chess AI's learning be self-correcting enough that the coding bug would eventually be neutralized? And if so, does that mean the programmer could assume that further AI learning would neutralize the effect of a coding bug?
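The self-correction question can be illustrated with a toy example (again my own sketch, not how any real engine trains; the 0.5 "true" bonus and the learning setup are assumptions). Suppose a bug negates an input feature before it reaches a learnable weight. Gradient descent simply learns a weight of the opposite sign, so the final predictions come out right and the bug is effectively neutralized:

```python
import random

random.seed(0)

TRUE_BONUS = 0.5  # assumed "ground truth": having the bishop pair is worth 0.5
w = 0.0           # learnable weight
lr = 0.1          # learning rate

for _ in range(1000):
    bishop_pair = random.choice([0.0, 1.0])
    target = TRUE_BONUS * bishop_pair  # signal from game outcomes
    buggy_feature = -bishop_pair       # sign-flip bug in the feature code
    pred = w * buggy_feature
    grad = (pred - target) * buggy_feature  # gradient of squared error
    w -= lr * grad

print(round(w, 3))         # -0.5: the weight absorbed the flipped sign
print(round(w * -1.0, 3))  # 0.5: prediction for a position with the pair
```

The catch is that this only works when the bug is *learnable around*: a flipped sign on a weighted input can be absorbed, but a bug that destroys information (e.g. dropping a feature entirely, or corrupting the move generator) cannot be compensated by any amount of extra training. So "more learning will fix it" is not a safe general assumption for the programmer.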