klx wrote: ↑Thu Sep 02, 2021 5:15 pm
Never heard of MTD(f), but no, it's completely different. The key realization in Alpha-Emanuel-Beta is that
we don't try all moves for one side. In pseudo-code it's something like below (assuming a mate bound). Hope this makes sense, and let me know if something is not clear. This is a game-changer.
Code: Select all
long Emanuel;              // pruning threshold, updated each iteration
int hypothesisPlayerToWin; // side we statically guess is winning

void iterativeDeepening() {
    // Do static evaluation to figure out hypothesisPlayerToWin, and decide on a
    // suitable value for EmanuelFactor. We can start multiple search threads here.
    long EmanuelFactor = 100; // example
    for (int depth = 1; ; ++depth) {
        Emanuel = max(historyHeuristics) / EmanuelFactor;
        alphaBeta(-INF, INF, depth);
    }
}

long alphaBeta(long alpha, long beta, int depthleft) {
    if (depthleft == 0) return quiesce(alpha, beta);
    movesToTry = allAvailableMoves();
    // Here is the genius approach: for the hypothesized winner, search only the
    // current best move and moves whose (history) score clears Emanuel.
    if (player == hypothesisPlayerToWin) {
        movesToTry = filter(movesToTry, move -> isBestMove(move) || score(move) > Emanuel);
    }
    for (move in movesToTry) {
        score = -alphaBeta(-beta, -alpha, depthleft - 1);
        if (score >= beta) return beta; // fail-hard beta cutoff
        if (score > alpha) alpha = score;
    }
    return alpha;
}
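To make the filtering step concrete, here is a small self-contained sketch (all names here, like ScoredMove and prune, are hypothetical illustrations, not the author's actual implementation): keep the current best move unconditionally, plus anything whose history score clears the threshold.

Code: Select all

```java
import java.util.List;

public class EmanuelFilter {
    // Hypothetical move representation: a name plus a history-heuristic score.
    record ScoredMove(String name, long historyScore) {}

    // Keep the current best (PV) move unconditionally, plus any move whose
    // history score clears the Emanuel threshold; drop the rest.
    static List<ScoredMove> prune(List<ScoredMove> moves, ScoredMove pvMove, long emanuel) {
        return moves.stream()
                .filter(m -> m.equals(pvMove) || m.historyScore() > emanuel)
                .toList();
    }
}
```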
I just told you that AB is proven to be optimal, even though for some time SSS* was thought to search fewer nodes, which it did!
However, it has been shown that SSS* reformulated as a depth-first search is equivalent to MTD(f), i.e. PVS with tiny aspiration windows.
So there is nothing to improve there, if you understand what a mathematically proven fact is.
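For reference, MTD(f) is just a driver loop of zero-width alpha-beta calls homing in on the minimax value. A toy demo on a fixed depth-3 tree (leaf values invented for illustration) shows it reaching the same value as a full-window fail-soft search:

Code: Select all

```java
public class MtdfDemo {
    // Leaf values (side-to-move perspective) of a complete binary tree of
    // depth 3, stored heap-style: node n has children 2n and 2n+1, leaves
    // are nodes 8..15. The values are made up for this demo.
    static final long[] LEAVES = {3, 17, 2, 12, 15, 25, 2, 5};
    static final long INF = 1_000_000;

    // Fail-soft negamax alpha-beta over the toy tree (root is node 1).
    static long alphaBeta(int node, int depth, long alpha, long beta) {
        if (depth == 0) return LEAVES[node - 8];
        long best = -INF;
        for (int child = 2 * node; child <= 2 * node + 1; child++) {
            long score = -alphaBeta(child, depth - 1, -beta, -alpha);
            if (score > best) best = score;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break; // beta cutoff
        }
        return best;
    }

    // MTD(f): reach the minimax value using null-window searches only,
    // narrowing a [lower, upper] bound pair until it closes.
    static long mtdf(long firstGuess, int depth) {
        long g = firstGuess, lower = -INF, upper = INF;
        while (lower < upper) {
            long beta = (g == lower) ? g + 1 : g;
            g = alphaBeta(1, depth, beta - 1, beta);
            if (g < beta) upper = g; else lower = g;
        }
        return g;
    }
}
```

Note that the null-window trick relies on fail-soft alpha-beta returning usable bounds; with fail-hard the loop still converges, just more slowly.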
If you are talking about unsound pruning techniques (e.g. searching just one move per ply), then that is a different proposition altogether.
MCTS comes close to being the most aggressive pruning technique, and it is proven to converge to minimax with more simulations.
So here is your chance to prove it, if you would like to engage in productive discussions.
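That convergence can be illustrated with a stripped-down sketch: UCB1 selection at the root over made-up deterministic payoffs (a real MCTS backs up random rollout results through a tree, but the bandit step below is what drives convergence). As simulations grow, the most-visited move becomes the best one:

Code: Select all

```java
public class UctSketch {
    // Hypothetical average payoffs of three root moves (max player's view).
    static final double[] PAYOFF = {0.3, 0.8, 0.5};

    // Run UCB1 selection for a number of iterations and return the index
    // of the most-visited move (MCTS's usual final move choice).
    static int mostVisited(int iterations) {
        int[] visits = new int[PAYOFF.length];
        double[] total = new double[PAYOFF.length];
        for (int i = 0; i < iterations; i++) {
            int pick = 0;
            double bestUcb = Double.NEGATIVE_INFINITY;
            for (int m = 0; m < PAYOFF.length; m++) {
                double ucb = visits[m] == 0
                        ? Double.POSITIVE_INFINITY // try every move once first
                        : total[m] / visits[m]
                          + Math.sqrt(2.0 * Math.log(i + 1) / visits[m]);
                if (ucb > bestUcb) { bestUcb = ucb; pick = m; }
            }
            visits[pick]++;
            total[pick] += PAYOFF[pick]; // stand-in for a rollout result
        }
        int best = 0;
        for (int m = 1; m < visits.length; m++)
            if (visits[m] > visits[best]) best = m;
        return best;
    }
}
```

The exploration term shrinks as visits accumulate, so weaker moves get only logarithmically many visits while the best move absorbs the rest.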