For at least a decade, NVIDIA GPUs (and probably other GPUs) have done integer calculations as fast as they do floating point. I have no idea where the GPU integer myth came from, or why it has been so persistent.
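If you want to sanity-check that claim on your own card, here is a minimal CUDA sketch (everything in it is made up for illustration, it's not from any engine) that times the same dependent multiply-add chain once in int32 and once in float32. On recent NVIDIA hardware the two timings come out close; exact numbers vary by architecture, and a serious benchmark would also account for launch overhead and warm-up.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// The same dependent multiply-add chain, once with 32-bit integers and
// once with 32-bit floats, so the two kernels do comparable work.
__global__ void int_madd(int *out, int iters) {
    int x = threadIdx.x + 1;
    for (int i = 0; i < iters; ++i)
        x = x * 3 + 1;                      // integer multiply-add
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

__global__ void float_madd(float *out, int iters) {
    float x = threadIdx.x + 1.0f;
    for (int i = 0; i < iters; ++i)
        x = x * 3.0f + 1.0f;                // float fused multiply-add
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

int main() {
    const int blocks = 1024, threads = 256, iters = 1 << 18;
    int   *di = nullptr;
    float *df = nullptr;
    cudaMalloc((void**)&di, blocks * threads * sizeof(int));
    cudaMalloc((void**)&df, blocks * threads * sizeof(float));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaEventRecord(t0);
    int_madd<<<blocks, threads>>>(di, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms_int = 0;
    cudaEventElapsedTime(&ms_int, t0, t1);

    cudaEventRecord(t0);
    float_madd<<<blocks, threads>>>(df, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms_float = 0;
    cudaEventElapsedTime(&ms_float, t0, t1);

    printf("int32: %.2f ms   float32: %.2f ms\n", ms_int, ms_float);
    cudaFree(di);
    cudaFree(df);
    return 0;
}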
I don't think there is a dramatic change in anything except the nature of the hype that surrounds GPUs.
I don't understand that.
M ANSARI wrote: ↑Tue Feb 19, 2019 8:07 am
This, along with the fact that the new GPUs will be able to do hundreds of trillions of operations per second, makes me think that this would be perfect for tactics solving. It would be nice to see an effort to make an engine using this new hardware to solve tactics ... not really play strong chess ... just solve tactics. Tactics follow some pretty basic rules ... pinned piece, back-rank mate, overloaded piece, x-ray, passed queening pawn, etc. This could be the knowledge base, and then you just try every single check first, then every single capture possible, and so on. With hundreds of trillions of operations per second, quite a few variations can be tried out, and if there is a tactical shot it will be found very quickly. Most tactics become obviously good or bad after maybe 3 or 4 moves at most ... and they follow forced lines.

I don't know why, but it just seems that these new types of chips will be very good at solving these puzzles. If you listen to a GM analyze a game after it has been played, on many occasions they can feel that a certain move is tactically correct as they see or feel the "pattern". Even a weak chess player can feel there is a tactical shot in the position, but he just cannot calculate through it. I think these patterns can be "learned", identified, and then quickly probed.

The first step would be to create a tactical engine that is a puzzle solver and just have it try out various tactical suites until it does very well on them ... then use that module to independently probe positions as the main engine is playing, and flag a fail-high move that wants to be played, or flag a pass-high move that is not being considered. The second step would be to actually do this in the search of the main engine. If step 1 is done, at least this would prevent the many one-move blunders that Lc0 is missing. I feel this must improve Lc0 by at least 50 or maybe even 100 Elo.

OK. You seem to be describing SF rewritten for a GPU, roughly speaking. The SF evaluation function does all the 'pinned piece, back-rank mate, overloaded piece, x-ray, passed queening pawn' sort of thing. The 'you just try every single check first, then every single capture possible and so on' is like a bigger version of SF's quiescence search. Machine learning might help, but not a lot.
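For what it's worth, that 'every check first, then every capture' probe is exactly the shape of a quiescence search, just widened to include checks. A rough host-side sketch of the idea (the Position/Move types and the generator/eval helpers are hypothetical stubs, not SF's or anyone else's API):

Code:
#include <algorithm>
#include <vector>

struct Move {};
struct Position {};

// Hypothetical stubs: a real engine supplies all of these.
std::vector<Move> generate_checks(const Position&)   { return {}; }  // all checking moves
std::vector<Move> generate_captures(const Position&) { return {}; }  // all captures
void make_move(Position&, const Move&)   {}
void unmake_move(Position&, const Move&) {}
int  evaluate(const Position&)           { return 0; }  // static eval, side to move

// Quiescence-style tactical probe: stand pat, then try every check,
// then every capture, following only forcing lines to a small depth.
int tactic_probe(Position& pos, int alpha, int beta, int depth) {
    int stand_pat = evaluate(pos);
    if (depth == 0 || stand_pat >= beta)
        return stand_pat;                        // out of depth, or fail high
    alpha = std::max(alpha, stand_pat);

    std::vector<Move> moves = generate_checks(pos);           // checks first...
    std::vector<Move> caps  = generate_captures(pos);
    moves.insert(moves.end(), caps.begin(), caps.end());      // ...then captures

    for (const Move& m : moves) {
        make_move(pos, m);
        int score = -tactic_probe(pos, -beta, -alpha, depth - 1);
        unmake_move(pos, m);
        if (score >= beta)
            return score;                        // tactical shot found
        alpha = std::max(alpha, score);
    }
    return alpha;
}

The GPU-friendly part would be running evaluate() (or a pattern recogniser) over huge batches of leaf positions; the tree walk itself is still the awkward bit, which is the point of the next post.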
The hard thing to do with a GPU is the search tree. A0 and LC0 avoid the problem by using a heavyweight evaluation and move-ordering, leaving a small(ish) search tree for the CPU to look after. I explained more about the problem in the SF forum https://groups.google.com/d/msg/fishcoo ... UNOxzxFQAJ. I described my own approach here http://indriid.com/2019/2019-01-06-tinsmith.pdf.
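To make the division of labour concrete, here is a rough sketch of the A0/LC0-style arrangement: the CPU owns the tree and only ever hands the GPU whole batches of leaf positions to evaluate. Every name below is made up for illustration; real Lc0 batching is considerably more involved.

Code:
#include <vector>

struct Position {};
struct Node { Position pos; float value = 0.0f; int visits = 0; };

// Hypothetical stubs: in a real engine the GPU call is one network
// forward pass over the whole batch (that is where the heavyweight
// evaluation and move-ordering live), and selection/backup follow PUCT.
std::vector<float> gpu_evaluate_batch(const std::vector<Position>& batch) {
    return std::vector<float>(batch.size(), 0.0f);
}
Node* select_leaf(Node& root)     { return &root; }
void  backpropagate(Node*, float) {}

// One search iteration: cheap pointer-chasing on the CPU, one big
// launch on the GPU, then cheap updates on the CPU again.
void search_iteration(Node& root, int batch_size) {
    std::vector<Node*>    leaves;
    std::vector<Position> batch;
    for (int i = 0; i < batch_size; ++i) {
        Node* leaf = select_leaf(root);          // CPU: walk the small(ish) tree
        leaves.push_back(leaf);
        batch.push_back(leaf->pos);
    }
    std::vector<float> values = gpu_evaluate_batch(batch);    // GPU: heavyweight eval
    for (int i = 0; i < batch_size; ++i)
        backpropagate(leaves[i], values[i]);     // CPU: update statistics up the tree
}

The per-position work the GPU does is enormous, but it only pays off because it is batched; getting the tree walk itself onto the GPU is the part that stays hard, which is what the linked posts are about.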