Cornfed wrote: ↑Sat Oct 03, 2020 11:41 pm HOWEVER...the 3rd best move, 12. Qe2 lets you keep a small edge with best play from your opponent while giving your opponent far more chances to go wrong. This idea of 'wrong' would need to be within a reasonable 'human evaluation' of those replies. In other words it would not count a queen drop or a move which allowed a 2 move mate among the 6 - things no human would intentionally do as 'bad' replies for your opponent. It might give 6 humanly reasonable 'bad' replies that lead to a range of +.50 to +1.25 or something along those lines. Understand? THAT is how a human might like to evaluate a position as chess is a 'game of mistakes'.
To me, this is variety (and if you think about it...equates to 'personality' for an engine).
Right now, when preparing an opening repertoire (for OTB or online play), I try to do this manually - looking at engine evals, then looking at multi-PV and seeing how many lines let me keep an edge and how many weak replies might be out there for my opponent. I make a note of what is objectively best...but may incorporate what is worth a punt (as in 12.Qe2 above) because it gives my opponent enough chances to go wrong more often. It is one reason I was looking at Komodo MCTS...but so much is 'hidden' that I'm not sure it gives me what I am really after.
Automating that process....oh, that would be a truly wonderful thing for the end user!
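The manual workflow quoted above - run a MultiPV search, then count how many of the opponent's plausible replies concede ground - can be sketched in a few lines. This is a hypothetical illustration, not any engine's actual method: the move names and centipawn evals below are invented, and in a real tool they would come from a deep MultiPV search. The +0.50 to +1.25 band mirrors the range Cornfed mentions.

```python
# Hypothetical MultiPV output for the opponent's replies after one of
# our candidate moves: {reply in SAN: eval in centipawns, from our side}.
reply_evals = {
    "Nf6": 30, "Be7": 45, "h6": 80, "b5": 110,
    "Qb6": 125, "g5": 400,   # +4.00: no human plays this on purpose
}

def humanly_bad_replies(reply_evals, lower=50, upper=125):
    """Replies that concede real ground (>= lower cp) but stay inside a
    humanly plausible band (<= upper cp). Outright blunders above the
    band are excluded - nobody drops a queen intentionally."""
    return {m: e for m, e in reply_evals.items() if lower <= e <= upper}

bad = humanly_bad_replies(reply_evals)
trappiness = len(bad) / len(reply_evals)  # fraction of plausible errors
```

With the made-up numbers above, three of the six plausible replies (h6, b5, Qb6) land in the "went wrong" band, so half of the opponent's reasonable choices lose ground - exactly the kind of count the post describes doing by hand.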
Can I recommend the Bad Gyal/Good Gyal/Evil Gyal neural nets? They are a blend of the lichess great unwashed and shallow sf10 search. Evil Gyal especially plays a very sleazy style and can point the way toward practical chances against human opponents.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
mclane wrote: ↑Sat Oct 03, 2020 10:56 pm
At least we have one area where we can agree Brendan.
That makes me hope that one day we could be friends,
and that we can see many interesting new engines,
Such as corona chess engine by a known programmer who began on 8 bit hardware...
Of course, we can be friends, Thorsten.
I believe we are both also fans of Graham Hancock's historical work?
I have tons of friends I disagree with politically...makes the beers together more fun!
mclane wrote: ↑Sat Oct 03, 2020 10:17 pm
I would call b strategy the educated guess.
Why do programmers continue to make chess programs perfect when they begin a new engine ?
Isn’t it more interesting to create interesting engines instead of engines that lose to Stockfish?
Whatever you try out, it loses against Stockfish or Lc0.
And the moment you reach the same strength, it’s clear you have almost cloned them.
I agree with this somewhat.
And its why I mostly use old school engines with unique style (and weaker strength) like Gandalf, Zarkov, Thinker, Ktulu, Amyan, Baron (old versions), The King, Crafty (old versions), Quark, Rebel and others...
...as well as my Rodent personalities and another engine project with Pawel.
Stockfish vs Stockfish gets boring pretty quick if you're actually WATCHING the games and not collecting stats.
Yes.
Truly the 'future' of chess engine programming should be about giving the end-user variety. 'Programmers' pushing for a few elo here and there is kind of a dead end in the real world of human chess (in particular).
For me: something which looks at, say, move 12 in a line for YOUR SIDE and ranks the top few moves like I show below. In addition to generating a simple objective evaluation for each, it would also show how many (or what percentage of) reasonable replies to those moves give your opponent chances to go wrong. How narrow a rope they must walk, you might say.
For Example (and I hope there is no formatting issue, so I'll keep it short):
In other words, with a pretty deep search and good evaluation the engine tells you 12.Nf4 is objectively 'best'...
HOWEVER...the 3rd best move, 12. Qe2 lets you keep a small edge with best play from your opponent while giving your opponent far more chances to go wrong. This idea of 'wrong' would need to be within a reasonable 'human evaluation' of those replies. In other words it would not count a queen drop or a move which allowed a 2 move mate among the 6 - things no human would intentionally do as 'bad' replies for your opponent. It might give 6 humanly reasonable 'bad' replies that lead to a range of +.50 to +1.25 or something along those lines. Understand? THAT is how a human might like to evaluate a position as chess is a 'game of mistakes'.
To me, this is variety (and if you think about it...equates to 'personality' for an engine).
Right now, when preparing an opening repertoire (for OTB or online play), I try to do this manually - looking at engine evals, then looking at multi-PV and seeing how many lines let me keep an edge and how many weak replies within those lines might be out there for my opponent. I make a note of what is objectively best...but may incorporate what is worth a punt (as in 12.Qe2 above) because it gives my opponent enough chances to go wrong more often. It is one reason I was looking at Komodo MCTS...but so much is 'hidden' that I'm not sure it gives me what I am really after.
Automating that process....oh, that would be a truly wonderful thing for the end user!
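A skeleton of the automation asked for here might look like the sketch below. Everything in it is hypothetical: the move names echo the 12.Nf4 / 12.Qe2 example from the post, the centipawn numbers are invented, and a real tool would pull both the best-play eval and the reply evals from deep MultiPV searches rather than from a hard-coded dict.

```python
# For each of our candidate moves: the eval with best play from the
# opponent, plus evals (from our side) after each humanly plausible reply.
candidates = {
    "Nf4": {"best_play": 40, "replies": [40, 42, 45, 45, 48, 50]},
    "Qe2": {"best_play": 25, "replies": [25, 50, 75, 90, 110, 125]},
}

def practical_score(entry, band=(50, 125)):
    """Fraction of plausible replies landing in the 'opponent went wrong'
    band (here +0.50 to +1.25, as in the post above)."""
    lo, hi = band
    wrong = [e for e in entry["replies"] if lo <= e <= hi]
    return len(wrong) / len(entry["replies"])

# Rank by practical chances: 12.Nf4 is objectively best, but 12.Qe2
# gives the opponent a far narrower rope to walk.
ranking = sorted(candidates,
                 key=lambda m: practical_score(candidates[m]),
                 reverse=True)
```

With these numbers, Qe2 scores 5/6 (five of six plausible replies go wrong) against Nf4's 1/6, so the practical ranking puts Qe2 first even though its best-play eval is lower - the trade-off described in the post.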
I love what you've described here...but isn't the "% for future success/number of bad replies" score you've described just MCTS in a nutshell?
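The kinship with MCTS comes from how each method aggregates the opponent's replies: alpha-beta backs up the single best reply, while MCTS-style rollouts average over many (imperfect) continuations, so opponent mistakes are baked into the score. A toy two-ply comparison with invented numbers makes the difference concrete; real MCTS additionally uses visit counts and a policy, so this is only the averaging intuition.

```python
# Evals (from our side, in centipawns) after each opponent reply to our
# two candidate moves -- invented numbers for illustration.
tree = {
    "Nf4": [40, 42, 45, 48],     # solid: best reply 40, little spread
    "Qe2": [25, 80, 100, 120],   # sharper: best reply only 25
}

# Minimax view: the opponent always finds the best reply.
minimax = {m: min(evals) for m, evals in tree.items()}

# MCTS-like view: the opponent samples replies, so we average over them.
expected = {m: sum(evals) / len(evals) for m, evals in tree.items()}
```

Minimax prefers Nf4 (40 vs 25), but averaging over replies prefers Qe2 (81.25 vs 43.75) - the same "practical chances" reordering described above, which is why an MCTS visit distribution can hint at it.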
BrendanJNorman wrote: ↑Sun Oct 04, 2020 12:02 am
Yes! Your nets are great.
Which would you say is best for a 2200 player like me to train against?
Which is influenced more by the human liChess games?
BTW...was very close to supporting you on Patreon the other day, but got distracted by my kids. Love your work...still doing it?
No need to support me. I just use it as a convenient download platform.
I’m about a 1800 FIDE player (2100 lichess) and train against Tiny Gyal, the 16x2 net. You can turn the npm up and down quite nicely. Even 32x4 nets get pretty brutal.
For opening analysis I use Bad Gyal 9, though the most recent Evil Gyal network is pretty interesting (0.25 q-ratio, so more human sneakiness in the net).
I’m still training Bad Gyal and Night Nurse NNUE from that.
Yes indeed...this does look interesting. I have downloaded it and will try to find some time tomorrow to check it out for the purposes I have mentioned.
My various lichess ratings are all over 2100... I find it perversely interesting that, in a weird way, if I were to play against it I would, to a degree, be playing myself!
mclane wrote: ↑Sat Oct 03, 2020 10:17 pm
I would call b strategy the educated guess.
Why do programmers continue to make chess programs perfect when they begin a new engine ?
Isn’t it more interesting to create interesting engines instead of engines that lose to Stockfish?
Whatever you try out, it loses against Stockfish or Lc0.
And the moment you reach the same strength, it’s clear you have almost cloned them.
Instead I would create a different path because at the top there is no space anymore.
I would create an engine that is different.
The strength of a chess program is determined by the power of its hardware. In every case, the strongest programs are the ones that make the best use of that hardware's power; chess strength and hardware power are not separable from each other.
If somebody wants to create a new type of strong chess program, he needs to find a new, powerful hardware platform for it, as happened in the case of the NN engines.
You missed his point. He doesn't care how *strong* it is, only that it "feels" like a human when you play against it (planning, not a complete tactical monster, makes small mistakes and so on).