Uri Blass wrote: ↑Sat Mar 12, 2022 11:00 am
I am not sure what Turing and Shannon could do in case they had the powerful computers of today, but trying to use ideas similar to the ideas chess players use when they play is trivial, and it is obvious that chess players do not use minimax but some type of alpha-beta, and not alpha-beta with a fixed number of plies (they search important lines more plies forward).
It is also trivial that important lines are lines with a bigger probability to be best, so moves with a bigger probability to be best are searched to bigger depth.
Humans do not think about the exact probabilities, but that is only because the human brain is not built to work in this way, while computers work with numbers.
Going from such grand ideas to an actual algorithm is highly non-trivial.
If Turing and Shannon had dedicated their lives to chess programming, I don't really doubt that both of these geniuses would have come up with alpha-beta, but they would not have had enough time to come up with all the other ideas that we now know work (and to discard all the ones that we now know do not).
Fact is that their papers showed no sign of any awareness of cutoffs.
Shannon accepts a branching factor of 30, points out that one move by White and one by Black (so 2 plies) already give about 10^3 possibilities, and that a full 40-move game therefore requires 10^120 variations to be calculated from the initial position (when searching to terminal nodes; of course 40 moves is far too optimistic here). He then suggests using a heuristic evaluation function in non-terminal positions, but notes that even a 3-move (6-ply) search still requires about 10^9 calculations, which he "very optimistically" estimates would take 16 minutes.
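Just to make the orders of magnitude concrete, here is a quick back-of-envelope check of Shannon's figures in Python. This is my own arithmetic, not his notation; the one-variation-per-microsecond rate is the one he uses for the 16-minute estimate.

[code]
# Rough reproduction of Shannon's estimates; the constants come from his paper,
# the script itself is just my back-of-envelope check.
b = 30                    # typical number of legal moves in a position
print(b ** 2)             # 900 -> roughly 10^3 for one White move plus one Black reply
print(1000 ** 40)         # 10^120: ~10^3 possibilities per move pair, 40 move pairs
leaves = 10 ** 9          # positions for a 3-move (6-ply) lookahead (30^6 is ~7*10^8)
seconds = leaves * 1e-6   # "very optimistically" one variation per microsecond
print(seconds / 60)       # ~16.7 minutes
[/code]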
Turing describes minimax and mentions that it would be desirable to search some lines deeper than others, but nothing in his paper suggests he knew about beta cutoffs. (He does, however, not focus as much on the combinatorial explosion of the search as Shannon does.)
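For reference, the procedure both of them describe is essentially fixed-depth minimax with a heuristic evaluation at the leaves. A minimal sketch in Python; the position interface (legal_moves/make/unmake/evaluate) is hypothetical, not anything from their papers:

[code]
# Minimal fixed-depth minimax; evaluate() is assumed to score the position from
# White's point of view, and the position interface is hypothetical.
def minimax(pos, depth, white_to_move):
    if depth == 0 or not pos.legal_moves():
        return pos.evaluate()                      # heuristic value at a leaf
    if white_to_move:
        best = -float("inf")
        for move in pos.legal_moves():
            pos.make(move)
            best = max(best, minimax(pos, depth - 1, False))
            pos.unmake(move)
    else:
        best = float("inf")
        for move in pos.legal_moves():
            pos.make(move)
            best = min(best, minimax(pos, depth - 1, True))
            pos.unmake(move)
    return best
[/code]

Note that every move at every node gets searched; nothing in this scheme lets you skip a branch.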
The first real chess programs that ran on actual computers did not implement alpha-beta or even some lesser variant of it. Bernstein's program, around 1956, considered 7 moves per position and examined 2500 final positions 2 full moves (4 plies) deep, so it implemented pure minimax (indeed 7^4 = 2401, about 50^2 = 2500). The 1958 Newell/Shaw/Simon paper mentions that searching 3 full moves deep would have taken 50 (7^2) times longer for Bernstein's program, to show the "power" of selective search: only 50 times as long, not 1000 times (see the quick check below). This paper also describes the authors' own program, which might have implemented some form of beta cutoff, but it is not really explained (there is a discussion of a search tree example which focuses mostly on whether a position is "dead" or has to be further expanded, but which mentions that if one child node had been searched before the other, the other would not have needed to be searched because this branch was already bad enough). Kotok's 1962 paper mentions that Prof. McCarthy had proposed the alpha-beta "heuristic", which led Kotok to implement, in the summer of 1961, an alpha-beta version of the program he had written in the spring of 1961.
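The arithmetic behind those numbers, for what it's worth (my own check, not code from either paper):

[code]
# Selective width 7 vs. full width ~30, as I read the Bernstein / NSS numbers.
print(7 ** 4)    # 2401 -> the roughly 2500 final positions at 2 full moves (4 plies)
print(7 ** 2)    # 49   -> roughly 50x more work per extra full move at width 7
print(30 ** 2)   # 900  -> roughly 1000x more work per extra full move at full width
[/code]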
Maybe most of these people were aware that you can sometimes skip a branch, but they probably did not realise the magnitude of the speed-up that could be attained this way, and therefore also did not spend much thought on how to pin this trick down precisely (with two bounds, alpha and beta).
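To make the "two bounds" concrete, here is a minimal alpha-beta sketch (negamax form) against the same hypothetical position interface as the minimax sketch above; here evaluate() is assumed to score from the side to move's point of view. With good move ordering it visits roughly the square root of the nodes minimax visits at the same depth, which is exactly the magnitude of saving the early papers apparently did not anticipate.

[code]
# Minimal alpha-beta in negamax form (a sketch, not any of the historical programs).
# alpha = best score the side to move can already guarantee elsewhere;
# beta  = score at which the opponent would have deviated earlier,
#         so reaching it lets us skip the remaining moves (the beta cutoff).
def alphabeta(pos, depth, alpha, beta):
    if depth == 0 or not pos.legal_moves():
        return pos.evaluate()              # assumed: score from side to move's view
    for move in pos.legal_moves():
        pos.make(move)
        score = -alphabeta(pos, depth - 1, -beta, -alpha)
        pos.unmake(move)
        if score >= beta:
            return score                   # beta cutoff: remaining moves cannot matter
        alpha = max(alpha, score)
    return alpha

# Typical root call:
# best = alphabeta(root_position, 6, -float("inf"), float("inf"))
[/code]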