noobpwnftw wrote: ↑
Sun Dec 09, 2018 5:31 pm
Brute-forcing those 28 million parameters of an evaluation function in a black-box style
is neither efficient nor intelligent.
Isn't it obvious how easy it is to steer people toward certain interpretations of my one-liner when I don't add any emphasis? I learned that only recently, thanks to the papers.
All your discussion around "brute force", "efficient" and "intelligent" is pointless without context, namely: what do you get out of such an effort?
Talking about building a tablebase: its efficiency is not important, because I think none of you is actually going to regenerate them, so that does not matter. Whether it is intelligent depends on the metric used; some may argue that DTM50 is the most intelligent choice and some may argue otherwise. At the end of the day, what you get out of it is a complete and mathematically sound solution to the problem, once and for all.
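To make the "mathematically sound solution" point concrete, here is a minimal retrograde-analysis sketch on a toy game graph. The positions and moves are invented, not real chess, but the backward-induction idea is the same one behind distance-to-mate (DTM) tablebase generation: start from terminal losses and propagate win/loss labels backwards.

```python
# Minimal retrograde analysis on a toy game graph (hypothetical positions).
# Each position maps to the positions the side to move can reach.
# A position with no moves is a loss for the side to move ("mated").
from collections import deque

moves = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],          # side to move is mated here
    "E": ["B"],
}

def solve(moves):
    # Invert the move graph so we can walk backwards from known results.
    parents = {p: [] for p in moves}
    for p, succs in moves.items():
        for s in succs:
            parents[s].append(p)

    dtm = {}       # distance to mate, in plies
    result = {}    # "win" or "loss" for the side to move
    remaining = {p: len(s) for p, s in moves.items()}
    queue = deque()
    for p, succs in moves.items():
        if not succs:
            result[p], dtm[p] = "loss", 0
            queue.append(p)

    while queue:
        p = queue.popleft()
        for q in parents[p]:
            if q in result:
                continue
            if result[p] == "loss":
                # q can move into a lost position, so q is won.
                result[q], dtm[q] = "win", dtm[p] + 1
                queue.append(q)
            else:
                remaining[q] -= 1
                if remaining[q] == 0:
                    # Every move from q reaches a won position: q is lost.
                    # The defender prolongs, so take the longest line.
                    result[q] = "loss"
                    dtm[q] = 1 + max(dtm[s] for s in moves[q])
                    queue.append(q)
    # Positions never labelled (cycles) would be draws; none here.
    return result, dtm
```

Real generators work over billions of positions with symmetry reductions and on-disk storage, and DTM50-style metrics additionally track the 50-move counter, but the propagation logic is this simple at its core.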
Hand-crafting an eval function: what you really get from a project like Stockfish is the understanding of how to write code that makes sense of all the aspects, put together in a way that is statistically sound in practice.
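As a rough sketch of what "putting the aspects together" means here: a hand-crafted evaluation is a sum of human-designed terms with tuned weights. The feature names and weights below are invented for illustration; they are not Stockfish's actual terms.

```python
# Toy hand-crafted evaluation: a weighted sum of human-designed terms.
# All feature names and weights are invented for illustration; a real
# engine combines many more terms, with weights tuned by testing.

PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate(features):
    """Return a score in centipawns from White's point of view."""
    score = 0
    # Material: value times count difference for each piece type.
    for piece, value in PIECE_VALUES.items():
        score += value * (features[f"white_{piece}"] - features[f"black_{piece}"])
    # A few positional terms with hand-tuned weights.
    score += 15 * (features["white_mobility"] - features["black_mobility"])
    score += 30 if features["white_bishop_pair"] else 0
    score -= 30 if features["black_bishop_pair"] else 0
    score -= 12 * features["white_doubled_pawns"]
    score += 12 * features["black_doubled_pawns"]
    return score
```

The point is that every term is legible: a human can read the code, see why a position scored the way it did, and adjust a weight, which is exactly what a black box of 28 million numbers does not offer.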
Approaches like A0 hand you a black box of 28 million floating-point numbers, which is neither perfect nor reveals much about how it comes up with its evaluations. In practice that is more of a "yet another closed-source chess engine" scenario, where you have to "understand" things by looking at its output, which seems to me neither efficient nor intelligent. If you do have such a thing in hand, it may be better to just sleep with those numbers and brag about it all day.