chess programming - predictions for the next 5 years
Posted: Sun Feb 03, 2019 3:50 pm
I'm posting my bold predictions, in hope of open, constructive discussion without trashing and trolling... hopefully, you have something to say and have predictions of yours, too...
Right off the bat, let's address the A0/Leela chess phenomenon. Obviously, they are here to stay... it is practically THE first alternative approach to successfully challenge classical alpha-beta minimax searchers... regardless of the fact that there were almost certainly some shady things going on there, regardless of whether A0/Leela are stronger than Stockfish or not... it is still a success story... I really don't like the fact that the project was initiated and financed by a huge multi-billion-dollar company with its own agenda, I don't like the PR around it, the glitter, the claims, I don't like any of it, but the facts are... well, facts! A0 and Leela are a success story and that's a fact.
My first (not so bold) prediction is that for the next few years ANN+MCTS engines and classical AB searchers will live and develop in parallel, side by side... neither side will clearly dominate... and if you are looking for a definition of the word "dominate", just remember what it looked like when Rybka first appeared on the scene... drawing a match with Komodo and then winning the tie-breaks is not a sign of domination by Leela, it is a sign that it is just a bit better... which is a great accomplishment by itself, if you manage to disregard and tone down the hype, expectations and public opinion around the internet today...
My next, much bolder prediction concerns the so-called "hybrid" approaches... I think they will all turn out to be failures... it's a pretty strong and bold claim, so let me explain my thinking...
Many people noticed that Leela is much stronger positionally and much weaker tactically, so it was very natural that people without a technical background in chess programming would instantly think that if you combined the approaches, you would get an engine that is ultra strong both tactically and positionally... but things don't work that way...
We are talking about 2 different paradigms here, and oil and water don't mix... there are 2 cardinal differences between MCTS/UCT search and AB search (let's ignore the ANN part for the moment): the first difference is the depth-first versus best-first expansion rule, and the second is the value backup rule (minimaxing versus "averaging")... you cannot have a little bit of both, you can't just put them together and "combine" them... you need a new paradigm, a new approach that will INTEGRATE them, a new over-arching idea... all the ideas where you use AB for "probing", either at the root or at the leaves, will not work... all the approaches combining minimax and MCTS values at a node MUST be worse than either in isolation... tactical and strategic thinking are completely different, you can't "mix" them... MCTS/UCT is good for strategic thinking, because the "averaging" backup rule is almost a direct translation of strategic thinking into computation... in strategy, you choose the move that is good "on average", regardless of what your opponent does... you don't chase the "best" move, because there ain't none... actually this reminds me of a funny one-liner that says "strategy is knowing what to do when there is nothing to do"... this is diametrically opposite to how tactics works and how minimaxing works, choosing only the one "best" move... how could you just "mix" this and hope it will somehow turn out right?
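To make the backup-rule difference concrete, here is a toy sketch (the tree and leaf values are invented for illustration): minimax punishes a move for its single best refutation, while averaging rewards a move that is good against most replies...

```python
def minimax_backup(node, maximizing=True):
    """AB-style backup: each side picks its single best reply."""
    if isinstance(node, (int, float)):          # leaf value
        return node
    values = [minimax_backup(c, not maximizing) for c in node]
    return max(values) if maximizing else min(values)

def average_backup(node):
    """MCTS-style backup (simplified): a node's value is the mean of its children."""
    if isinstance(node, (int, float)):
        return node
    values = [average_backup(c) for c in node]
    return sum(values) / len(values)

# Two candidate moves; the opponent chooses among the replies (so: minimize).
move_A = [0.2, 0.3]         # safe everywhere, slightly better for us
move_B = [0.9, 0.8, -0.8]   # great "on average", but one reply refutes it

print(minimax_backup(move_A, maximizing=False))  # 0.2  -> minimax prefers A
print(minimax_backup(move_B, maximizing=False))  # -0.8 (the refutation)
print(average_backup(move_A))                    # 0.25
print(average_backup(move_B))                    # ~0.3 -> averaging prefers B
```

The two rules disagree about which move is better on the very same tree, which is why "mixing" their values at a node has no clean meaning.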
Let's look for a moment at how human GMs solve this problem... granted, bringing humans into the argument will not resonate well with some of you reading this, but I don't look at humans as humans, but as chess-playing agents with limited resources and weak hardware that have solved some problems that AB searchers and MCTS searchers haven't yet... So if we go back to how humans think for a second, we can see that they clearly separate tactical and strategic thinking... As a chess player, I prioritize tactics, and the first thing I do when I look at a position (after some basic counting and inventory) is activate my "tactics detectors"... are there loose (unprotected) pieces? an exposed king? pieces on weird squares? unusual configurations of pieces? If yes, I go into tactical analysis... if not, I go into positional analysis... I don't do both at the same time, and I do not "combine" the results, I prioritize them...
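A minimal sketch of that "prioritize, don't combine" routing (the feature names and thresholds here are my own illustration, not taken from any engine):

```python
def tactics_triggers(pos):
    """Cheap detectors run first; each returns True/False for one alarm."""
    return [
        pos.get("loose_pieces", 0) > 0,      # undefended pieces on the board
        pos.get("exposed_king", False),      # king lacking pawn cover
        pos.get("odd_piece_config", False),  # pieces on unusual squares
    ]

def analyse(pos):
    """Route to exactly ONE mode: tactical if any detector fires, else positional.
    The two results are never blended together."""
    if any(tactics_triggers(pos)):
        return "tactical analysis"
    return "positional analysis"

print(analyse({"loose_pieces": 2}))   # tactical analysis
print(analyse({}))                    # positional analysis
```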
I don't want to discourage the attempts at a "hybrid" approach that are going on right now (Scorpio, Komodo MCTS, ...), but it looks obvious to me that they will fail... until there is a new breakthrough, a new paradigm shift... and I don't think that will happen any time soon...
Moving on to my next prediction... let's discuss the direction in which AB searchers will evolve... Most of the improvement in the last 10 years was in the area of search... and the most notable single improvement was the introduction of LMR (late move reductions)... I think that improvement in search technology will continue, but it will (actually, it already has) hit the diminishing-returns wall... now, if everybody hits the same wall and everybody is affected in the same way, it doesn't matter... we all grow slowly together and face the same problems together; it doesn't matter if the results are not proportional to the effort, the results are there and only results matter... but if there is an alternative evolutionary line, like MCTS, then infinitesimally small improvements in search will not matter that much...
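For readers who haven't met LMR: the idea is to search moves late in the ordering to a reduced depth, and only re-search at full depth if the reduced probe looks promising. A bare-bones sketch over toy nested-list trees (the threshold and reduction amount are invented; real engines condition on much more, e.g. not reducing captures or checks):

```python
INF = float("inf")
LATE = 1  # moves at this index or later count as "late" (toy threshold)

def lmr_search(node, depth, alpha=-INF, beta=INF):
    """Negamax with a bare-bones late-move-reduction scheme."""
    if isinstance(node, (int, float)):   # leaf: score for the side to move
        return node
    best = -INF
    for i, child in enumerate(node):     # assume moves are already ordered
        if i >= LATE and depth >= 3:
            # late move: probe it at reduced depth first
            score = -lmr_search(child, depth - 2, -beta, -alpha)
            if score > alpha:            # probe beats alpha: verify at full depth
                score = -lmr_search(child, depth - 1, -beta, -alpha)
        else:
            score = -lmr_search(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                # beta cutoff
            break
    return best

# two-ply toy tree; leaf scores are from the side to move at that leaf
print(lmr_search([[3, -2], [4, 1]], depth=3))  # 1
```

The ELO gain comes from spending the saved nodes on extra depth elsewhere; the cost, as discussed below, is that a reduced probe can miss a deep tactical shot.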
so what will matter? how can AB searchers respond to the MCTS challenge?
I think there are basically only 2 ways, and the first is, ironically, improving tactics... the second is obvious: improving evaluation functions, something that should have happened a long time ago, but we were all "blinded by the light", surprised at how well our engines played chess when we only improved search... there was no incentive to put effort into eval functions... until now...
let's address the first point, improving tactics... it seems ironic that I not only suggest but RECOMMEND improving tactics... aren't AB searchers tactical beasts? well, not really... what happened was that, once engines surpassed humans in tactics, their tactical sharpness started to degrade... adding all those search improvements made engines more and more tactically blind... but it paid off ELO-wise, because they traded missing the occasional tactical shot for getting more depth and being better positionally... and it was a good trade-off... that is, until now... think about it: if every engine gradually became more blind, and all of them did it in parallel, and there was no other chess entity that could exploit it, there was only upside, there was no downside... there was no one-eyed king to rule in the land of the blind... but of course, things have now changed, the opponent is TRULY terrible at tactics, and you get the most bang for the buck if you get good at them... getting additional plies will not help AB searchers that much, but exploiting tactical oversights made by A0/Leela will... and since A0/Leela can't get better at tactics, this is your quickest route to their heart...
so my bold prediction is that AB searchers will become much better at tactics, they will prune and reduce much more carefully, and search will become much sharper, because it will pay off... in fact, in the short term it is the only thing that will pay off against A0/Leela...
Looking at things long term, AB searchers will either improve their evaluation functions or they will die out, just like the dinosaurs... I think the beginning of the trend is already here, and it started with the popularization of the so-called "Texel tuning" method... I think it was a small step for Texel, but it could be a large one (or the first one) for AB searchers... a nudge in the right direction...
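For context, Texel tuning fits eval weights to game outcomes: map each position's score through a sigmoid to an expected result, then minimize the squared error against the actual results with a simple ±1 local search. A sketch with a toy linear eval and fake data (the features, weights, and positions below are invented for illustration):

```python
K = 1.13  # scaling constant, chosen once so the sigmoid matches the centipawn scale

def sigmoid(eval_cp):
    """Map a centipawn score to an expected game result in [0, 1]."""
    return 1.0 / (1.0 + 10.0 ** (-K * eval_cp / 400.0))

def evaluate(params, features):
    """Toy linear eval: dot product of weights and position features."""
    return sum(p * f for p, f in zip(params, features))

def texel_error(params, data):
    """Mean squared error between game results and predicted results."""
    return sum((result - sigmoid(evaluate(params, feats))) ** 2
               for feats, result in data) / len(data)

def tune(params, data, step=1, passes=20):
    """Texel-style local search: nudge each weight by +/-step while error drops."""
    best = texel_error(params, data)
    for _ in range(passes):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                params[i] += delta
                err = texel_error(params, data)
                if err < best:
                    best, improved = err, True
                else:
                    params[i] -= delta                # revert a bad nudge
        if not improved:                              # local minimum reached
            break
    return params, best

# fake dataset: (features = (pawn diff, mobility diff), result from White's view)
data = [((1, 3), 1.0), ((-1, -2), 0.0), ((0, 1), 0.5)]
weights, err = tune([100, 0], data)   # start from pawn=100cp, mobility=0
```

The appeal is that the data (millions of engine-game positions) is cheap, and no hand-crafted "correct" scores are needed, only game results.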
I have a few more things to say, but this post has already become too long, so I will stop now... maybe we can continue the discussion after you present your predictions?