I realize this will depend on many factors, and I expect the differences between mailbox (my engine) and bitboard (most, if not all, top engines) may be pretty large.
I guess I'm looking for a short list of evaluation factors that are generally viewed as "worth it". For example, here is the JavaScript middle-game evaluation from the above guide:
Code:
function middle_game_evaluation(pos, nowinnable) {
  var v = 0;
  v += piece_value_mg(pos) - piece_value_mg(colorflip(pos));
  v += psqt_mg(pos) - psqt_mg(colorflip(pos));
  v += imbalance_total(pos);
  v += pawns_mg(pos) - pawns_mg(colorflip(pos));
  v += pieces_mg(pos) - pieces_mg(colorflip(pos));
  v += mobility_mg(pos) - mobility_mg(colorflip(pos));
  v += threats_mg(pos) - threats_mg(colorflip(pos));
  v += passed_mg(pos) - passed_mg(colorflip(pos));
  v += space(pos) - space(colorflip(pos));
  v += king_mg(pos) - king_mg(colorflip(pos));
  if (!nowinnable) v += winnable_total_mg(pos, v);
  return v;
}

Is the only way to answer this question to make a change, have the engine play N games to see whether it improves, and keep repeating that process? If so, I suppose I need to invest the time to hook my engine up to a tool that can automate playing many games overnight.
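As a rough sanity check on how many games that takes, here is a sketch (my own, not from the guide; the win/draw/loss numbers are invented) of converting a self-play score into an Elo estimate with a 95% confidence interval:

```javascript
// Sketch: judging whether a patch is a real improvement from self-play
// results, rather than eyeballing a handful of games. The formulas are
// the standard score-to-Elo conversion and a normal-approximation
// confidence interval; the example results below are made up.

// Convert a match score p (0..1) into an Elo difference.
function scoreToElo(p) {
  return -400 * Math.log10(1 / p - 1);
}

// Given wins/draws/losses for the new version, return the estimated
// Elo gain and a 95% confidence interval on it.
function eloEstimate(wins, draws, losses) {
  var n = wins + draws + losses;
  var p = (wins + 0.5 * draws) / n;              // mean score per game
  var variance = (wins + 0.25 * draws) / n - p * p;
  var se = Math.sqrt(variance / n);              // standard error of the mean
  return {
    elo: scoreToElo(p),
    lo: scoreToElo(p - 1.96 * se),
    hi: scoreToElo(p + 1.96 * se)
  };
}

// Example: 280 games that *look* like a clear win for the patch
// (+120 =60 -100, about +25 Elo as a point estimate)...
var r = eloEstimate(120, 60, 100);
// ...but the confidence interval still straddles zero, so these
// results alone don't prove the change helped.
console.log(r.elo.toFixed(1), r.lo.toFixed(1), r.hi.toFixed(1));
```

This is why engine testers typically run thousands of games (tools like cutechess-cli can automate this, often with SPRT stopping rules) before accepting a change.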
