pedrojdm2021 wrote: ↑Wed Jun 23, 2021 12:50 am
i wish that there would be an simple to understand approarch and also that works just fine
A principal variation is the best move in a node, with all the best moves of the nodes below it appended after it. I store the PV like this:
- Give the alpha-beta function a "pv" argument, which is a mutable reference to a vector of moves.
- In Iterative Deepening, create a variable called "root_pv", which is a vector of moves. Send a reference to this variable into alpha-beta, in the "pv" spot. The "root_pv" variable will hold the PV for the search.
- In the Alpha-Beta function, create a "node_pv" variable to store the pv for the node you are in. For each recursive call of alpha-beta, you send a reference to _that_ variable ("node_pv") into alpha-beta, in the "pv" argument. This will build the PV for the node you are in.
- Then, when Alpha improves, you do exactly as I described: the PV is the best move, with the PV of the node appended behind it:
Code:

    // We found a better move for us.
    if eval_score > alpha {
        // ... (some code)

        // Update the Principal Variation.
        pv.clear();
        pv.push(current_move);
        pv.append(&mut node_pv);
    }
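To make the scheme above concrete, here is a minimal, self-contained sketch. It uses a toy game tree instead of real chess positions, and the names (`Node`, `Move`, `alpha_beta`) are illustrative, not from any actual engine. The PV-handling lines are exactly the pattern described: each call gets a fresh `node_pv`, and when alpha improves, the PV becomes the best move with the child's PV appended behind it.

```rust
type Move = u8; // placeholder move type for the toy tree

enum Node {
    Leaf(i32),                // static evaluation, from the side to move
    Inner(Vec<(Move, Node)>), // moves and the positions they lead to
}

// Negamax alpha-beta. `pv` is the caller's vector; on return it holds
// the principal variation from this node downward.
fn alpha_beta(node: &Node, mut alpha: i32, beta: i32, pv: &mut Vec<Move>) -> i32 {
    match node {
        Node::Leaf(score) => *score,
        Node::Inner(children) => {
            for (mv, child) in children {
                // Fresh PV buffer for this child; the recursive call
                // fills it with the child's own best line.
                let mut node_pv = Vec::new();
                let eval_score = -alpha_beta(child, -beta, -alpha, &mut node_pv);
                if eval_score >= beta {
                    return beta; // fail-hard beta cutoff
                }
                // We found a better move for us.
                if eval_score > alpha {
                    alpha = eval_score;
                    // Update the Principal Variation: best move first,
                    // then the child's PV appended behind it.
                    pv.clear();
                    pv.push(*mv);
                    pv.append(&mut node_pv);
                }
            }
            alpha
        }
    }
}

fn main() {
    // Root move 1 leads to a leaf worth +1 for us; root move 2 leads to
    // a node where the opponent's best reply (move 4) holds us to +2.
    let tree = Node::Inner(vec![
        (1, Node::Leaf(-1)),
        (2, Node::Inner(vec![(3, Node::Leaf(3)), (4, Node::Leaf(2))])),
    ]);
    let mut root_pv = Vec::new();
    let score = alpha_beta(&tree, -1000, 1000, &mut root_pv);
    println!("score = {score}, pv = {root_pv:?}"); // prints: score = 2, pv = [2, 4]
}
```

Note that the root call here plays the role of Iterative Deepening passing "root_pv": after the search returns, `root_pv` contains the whole line, so you can print it or feed its first move back in as the best move.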