Michel wrote: How could reducing the quality of the search possibly detect issues?

As I have written above: "So if we expect a node to fail high at depth n, then we assume it should also fail high at depth n-1; if this doesn't happen, it means the position is tricky enough to deserve a re-search at depth n+1."
I will try to be even more clear.
Question: Ok, we have this position that should fail high at depth n. How can we verify that the position does not hide threats or risks?
Answer: Search at depth n-1 and see if it _still_ fails high. If it fails high at n-1, the position is assumed to be safe. If not, it means we eventually needed every last ply to prove it, so the position is borderline and risky.
Question: Ok, this position is risky because it does not fail high at n-1. What do we do next?
Answer: We re-search at depth n+1.
