M ANSARI wrote: ↑Mon Feb 18, 2019 3:09 pm
I think Monte Carlo search should actually be much stronger in tactical suites than AB engines as it can cover many more positions quicker...
How do you imagine that might work? What algorithm do you have in mind?
Oh, I would have no clue about how to do that. But I think there is very big potential in the new hardware that is coming for AI. For a long time GPUs were considered very poor for chess because they do not do integer-based calculation.
For at least a decade, NVIDIA GPUs (and probably other GPUs) have done integer calculations as fast as they do floating point. I have no idea where the GPU integer myth came from, or why it has been so persistent.
M ANSARI wrote: ↑Tue Feb 19, 2019 8:07 am
However there is a huge change in how GPUs are made and a dramatic push to produce new very high performance chips for AI.
I don't think there is a dramatic change in anything except the nature of the hype that surrounds GPUs.
M ANSARI wrote: ↑Tue Feb 19, 2019 8:07 am
The thing that makes things different is that the new chips can do "inference", i.e. use a knowledge base and make local deductions from it on the fly.
I don't understand that.
M ANSARI wrote: ↑Tue Feb 19, 2019 8:07 am
This, along with the fact that the new GPUs will be able to do hundreds of trillions of operations per second, makes me think this would be perfect for tactics solving. It would be nice to see an effort to make an engine on this new hardware that only solves tactics, not play strong chess overall.

Tactics follow some pretty basic rules: pinned piece, back-rank mate, overloaded piece, x-ray, passed queening pawn, etc. This could be the knowledge base, and then you just try every single check first, then every single capture, and so on. With hundreds of trillions of operations per second, quite a few variations can be tried out, and if there is a tactical shot it will be found very quickly. Most tactics become obviously good or bad after three or four moves at most, and they follow forced lines. I don't know why, but it just seems that these new types of chips will be very good at solving these puzzles.

If you listen to a GM analyze a game after it has been played, on many occasions they can feel that a certain move is tactically correct because they see or feel the "pattern". Even a weak chess player can feel there is a tactical shot in the position; he just cannot calculate through it. I think these patterns can be learned, identified, and then quickly probed.

The first step would be to create a tactical engine that is purely a puzzle solver and have it work through tactical suites until it does very well on them; then use that module to independently probe positions while the main engine is playing, flagging a candidate move that is tactically refuted or a tactical shot that is not being considered. The second step would be to do this inside the search of the main engine. Even step 1 alone would prevent many of the one-move blunders that Lc0 is missing. I feel this must improve Lc0 by at least 50 or maybe even 100 Elo.
OK. You seem to be describing SF rewritten for a GPU, roughly speaking. The SF evaluation function does all the 'pinned piece, back rank mate, overloaded piece, x ray, passed queening pawn' sort of thing. The 'you just try every single check first, then every single capture possible and so on' is like a bigger version of SF's quiescent search. Machine learning might help, but not a lot.
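For readers who haven't looked at one, the control flow of a quiescence search is quite small. Here is a minimal toy sketch (this is not Stockfish's actual code; a real engine generates checks and captures from a board position, while this toy represents a position as a pre-built tree so the recursion stays visible):

```python
# Minimal quiescence-search sketch (toy model, not Stockfish's actual code).
# A "position" is just (static_eval, forcing_replies): the evaluation from
# the side to move's viewpoint, plus the positions reached by forcing moves
# (checks/captures). Real engines generate these moves from a board.

def quiesce(pos, alpha, beta):
    static_eval, forcing_replies = pos
    # "Stand pat": the side to move may decline all forcing moves.
    if static_eval >= beta:
        return beta
    alpha = max(alpha, static_eval)
    for child in forcing_replies:
        # Negamax: the child's score is from the opponent's viewpoint.
        score = -quiesce(child, -beta, -alpha)
        if score >= beta:
            return beta          # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha

INF = float("inf")
# A capture that wins a pawn (opponent ends up at -1) scores +1:
print(quiesce((0, [(-1, [])]), -INF, INF))   # 1
# A losing capture (opponent ends up at +5) is declined; we stand pat at 0:
print(quiesce((0, [(5, [])]), -INF, INF))    # 0
```

The poster's "try every check first, then every capture" proposal is essentially this loop with a wider definition of forcing moves and more hardware behind it.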
M ANSARI wrote: ↑Tue Feb 19, 2019 8:07 am
Oh, I would have no clue about how to do that. But I think there is very big potential in the new hardware that is coming for AI. For a long time GPUs were considered very poor for chess because they do not do integer-based calculation.
For at least a decade, NVIDIA GPUs (and probably other GPUs) have done integer calculations as fast as they do floating point. I have no idea where the GPU integer myth came from, or why it has been so persistent.
Maybe because Nvidia and AMD used peak FLOP throughput as their gaming and GPGPU marketing numbers?
Ankan showed with his GPU perft how fast a 64-bit integer move generator can run on a GPU:
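For context, perft just counts the leaf nodes of the move-generation tree to a fixed depth, which makes it a pure move-generator benchmark. A tiny generic sketch follows; the `moves`/`apply_move` hooks are placeholders I've introduced for illustration, and the hard, GPU-relevant part (64-bit bitboard move generation) is exactly what this toy omits:

```python
# Perft ("performance test") sketch: count leaf nodes of the game tree to a
# fixed depth. It exercises only move generation and make-move, so
# nodes/second measures raw move-generator speed. `moves` and `apply_move`
# are placeholder hooks; a real chess perft plugs in bitboard move
# generation here.

def perft(pos, depth, moves, apply_move):
    if depth == 0:
        return 1
    return sum(perft(apply_move(pos, m), depth - 1, moves, apply_move)
               for m in moves(pos))

# Toy game: every position has exactly 3 legal moves.
toy_moves = lambda pos: range(3)
toy_apply = lambda pos, m: pos
print(perft(0, 4, toy_moves, toy_apply))   # 81 (= 3**4)
```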
One more "Bad" thing with Lc0 play was how it totally mis-evaluates an extra queen. I had seen this in several games in my tourneys, but yesterday this also happened in TCEC. Lc0 happily went into an endgame where SF had an extra Queen for a rook and thought things were just fine. Here is the position
[d]2r1n3/3q2bk/p5p1/1p2P1Pp/1Q3P1P/1P2Q1B1/P1r5/5BK1 b - - 0 1
M ANSARI wrote: ↑Wed Feb 20, 2019 5:25 pm
One more "Bad" thing with Lc0 play was how it totally mis-evaluates an extra queen. I had seen this in several games in my tourneys, but yesterday this also happened in TCEC. Lc0 happily went into an endgame where SF had an extra Queen for a rook and thought things were just fine. Here is the position
[d]2r1n3/3q2bk/p5p1/1p2P1Pp/1Q3P1P/1P2Q1B1/P1r5/5BK1 b - - 0 1
I don't understand why you repeatedly say "Q for a rook", when Black has two (!) rooks for the Q?
Yes, you are correct, my apologies! I had several positions open and mixed things up. But the clip analysis is accurate for that position, and Lc0 seems to badly undervalue the extra queen (by more than +4). I saw this quite a few times, and I am really not sure if this is just part of the tactical problem, or whether two queens simply don't come up often in training, so Lc0 has trouble with two (or more) queens. Another thing that comes to mind is that Lc0 sometimes promotes to a rook rather than a queen when there is no stalemate danger. Not sure if that is related, and maybe there is another reason for it, but I find that also unusual.