carldaman wrote: ↑Sun Aug 09, 2020 7:41 pmThanks, it's quite useful to know about lambda. It should mean that one could train primarily on style, with less regard to the actual outcome (whether the style actually wins most games or not).
Well, that's the thing. The net learns from two things: eval and result. If you used Tal (the player) as an example, you would want to set Lambda to 0, since while many of his moves might be dodgy per pure computer analysis, they won him a ton of games. So training only on the result would mean it would ignore the pure evaluations of the moves he played, and instead only focus on the result those moves procured.
Plus, there are other ways to increase the engine's speculative or attacking tendencies through training.
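A minimal sketch of the lambda idea described above, assuming a convention where the training target linearly blends the search eval with the game result. The function name and value scales are illustrative, not any engine's actual API:

```python
# Hypothetical sketch of how a value-training target can blend an
# engine eval with the final game result via a lambda parameter.
# Scores here are on a [-1, 1] scale: +1 White win, 0 draw, -1 Black win.

def training_target(eval_score: float, game_result: float, lam: float) -> float:
    """Blend the position's eval with the final game result.
    lam = 1.0 -> pure eval (the net mimics the analysis engine),
    lam = 0.0 -> pure result (Tal's dodgy-but-winning moves get full credit)."""
    return lam * eval_score + (1.0 - lam) * game_result

# A dodgy move (eval 0.2) from a game Tal won (result 1.0):
print(training_target(0.2, 1.0, 0.0))  # result-only: the move is labelled a win
print(training_target(0.2, 1.0, 1.0))  # eval-only: the move keeps its dodgy score
```

With lambda at 0, the speculative move inherits the full win label regardless of what the analysis engine thinks of it, which is exactly the effect described above.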
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
Cornfed wrote: ↑Sun Sep 06, 2020 5:18 pm
Kind of like ShashChess....but regardless of the initial eval of any given position...playing according to 'generalized' characteristics of a given famous player, but in all positions instead?
Yeah, at least ShashChess 6.1.3. Future versions seem to have broken the Petrosian/Tal/Capablanca modules; compare their preferences for positions and they seem nothing alike.
Stockfish is at the 3600 Elo level, while the humans we want to imitate played at less than 2800 Elo. That's 800 Elo that can easily be sacrificed for this, which is why I emphasize making it play "moves that are losing but that imitate these players": that's the key point missing in all current implementations.
Karpov would blunder; Rodent Karpov wouldn't. When Rodent loses, it loses through Rodent's own blunders. We want it to make intentional blunders in the style of Karpov, and no settings tweaking will give you that. Making it believe those blunders win, however, could.
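The "make it believe those blunders win" idea could be sketched as a relabeling step when building training rows from the hero player's games. Everything here (the row layout, the result encoding) is a hypothetical illustration, not an existing tool:

```python
# Sketch: force the result label to a win for the hero's side, so the
# net is trained to see the hero's choices, blunders included, as winning.
# Encoding assumed: +1.0 = White win, -1.0 = Black win.

def relabel_for_imitation(rows, hero_is_white):
    """rows: list of (position_fen, played_move, actual_result) tuples.
    Returns the rows with the result overwritten in the hero's favour."""
    forced = 1.0 if hero_is_white else -1.0
    return [(fen, move, forced) for (fen, move, _) in rows]

# Hero (White) played a dodgy sacrifice and actually lost the game:
rows = [("some-fen", "Qxf7+", -1.0)]
print(relabel_for_imitation(rows, hero_is_white=True))
# the sacrifice is now labelled as winning
```

Whether a net trained on such deliberately mislabeled data stays coherent is an open question; this only shows the mechanical shape of the trick.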
Albert Silver wrote: ↑Sun Sep 06, 2020 7:26 pm
Well, that's the thing. The net learns from two things: eval and result. If you used Tal (the player) as an example, you would want to set Lambda to 0, since while many of his moves might be dodgy per pure computer analysis, they won him a ton of games. So training only on the result would mean it would ignore the pure evaluations of the moves he played, and instead only focus on the result those moves procured.
The thing is that Tal didn't play any moves that could help the network play better chess, so the obvious answer is to omit Tal from the training data.
No: if you are using Tal's data, it's because you want a network that plays like Tal, including the blunders. So if you manage to do it, you don't mind if the result can't beat Frenzee 3.5.19, because strength doesn't matter when style is what we care about.
Cornfed wrote: ↑Sun Sep 06, 2020 5:18 pm
Of course, no one is talking 'blunders'...Tal, Carlsen, etc. all make 'blunders'. 'Style' is more personal in that, again, Tal or even Kasparov might choose a certain 'approach' while a Karpov or Capablanca would choose another...both would lead to perfectly playable positions which gave the first player chances to use their own approach to the game to maneuver towards the desired goal.
Albert Silver wrote: ↑Sun Sep 06, 2020 7:26 pm
I don't understand your point at all. Who said anything about helping the network play better chess? The obvious answer to developing a style that plays like Tal is to focus on Tal in the training data, not exclude him. Add Tal, Spielmann, Kasparov, Kupreichik, Nezhmetdinov, etc. Not that this would ever be enough to train a net anyhow.
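Assembling such a corpus might start with a naive, stdlib-only filter that keeps only the games in which one of the listed attacking players held either colour. The header parsing below is deliberately simplistic and assumes standard PGN tag pairs:

```python
# Hedged sketch: filter a PGN collection down to a "style corpus".
# Player-name spellings must match the PGN headers exactly; the names
# below are illustrative.

ATTACKERS = {"Tal, Mikhail", "Spielmann, Rudolf", "Kasparov, Garry",
             "Kupreichik, Viktor", "Nezhmetdinov, Rashid"}

def filter_games(pgn_text):
    """Split concatenated PGN text into games (a new game starts at an
    [Event ...] tag) and keep those where a chosen player had a colour."""
    games, current = [], []
    for line in pgn_text.splitlines():
        if line.startswith("[Event ") and current:
            games.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        games.append("\n".join(current))

    def hero_played(game):
        for line in game.splitlines():
            if line.startswith(("[White ", "[Black ")) and '"' in line:
                if line.split('"')[1] in ATTACKERS:
                    return True
        return False

    return [g for g in games if hero_played(g)]
```

A real pipeline would use a proper PGN parser, but the selection logic (match on the White/Black tags against a whitelist of players) would be the same.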
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
Cornfed wrote: ↑Sun Sep 06, 2020 5:18 pm
Gross blunders are usually few and far between in top human chess, not counting faster time controls, where they occur more often. It's mostly with the help of computer assistance that we can even tell that a GM is making a mistake.
It is one of the distinguishing characteristics of grandmaster chess that their mistakes will actually look like strong moves to an average human player; certainly, there's a thought behind each move, even the less-than-best ones. So one can argue that even their mistakes are strong, relatively speaking. Weaker players armed only with computer assistance may never reach this realization.
Consequently, a replicated, realistic human-like style can and probably should include mistakes, as long as they're not gross blunders such as hanging a Queen, or that sort of thing.
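The distinction drawn here between human-plausible mistakes and gross blunders could be mechanized with a simple centipawn-loss band, assuming an engine has already scored each move. The thresholds below are illustrative guesses, not established values:

```python
# Sketch: classify a move by the evaluation drop (centipawn loss) it
# causes. A style corpus might keep "human mistakes" but drop the
# gross blunders. Thresholds are illustrative.

def classify_move(cp_loss: int) -> str:
    """cp_loss: centipawns lost relative to the engine's best move."""
    if cp_loss < 50:
        return "fine"
    if cp_loss < 300:
        return "human mistake"   # hard to refute over the board
    return "gross blunder"       # e.g. hanging a Queen: exclude from style

print(classify_move(20))    # fine
print(classify_move(120))   # human mistake
print(classify_move(900))   # gross blunder
```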
Albert Silver wrote: ↑Sun Sep 06, 2020 7:26 pm
I think the idea would be to train a net using an engine like OpenTal or CyberNezh, which already plays with a style reminiscent of those players. That could provide enough data to train a net, but the trick is to get the net to like the same moves as the engine it is based on.
As for the great human attacking players, maybe one could auto-analyze their games with OpenTal, or another similar engine, get evaluations of their moves, and then use that PGN file as an additional part of the training. This latter idea would likely be a long process with questionable returns, but not impossible.
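That auto-analysis pipeline might take the following shape, with a stub standing in for OpenTal/CyberNezh. Actually driving a real UCI engine is out of scope here, so the evaluator is a placeholder and the data layout is hypothetical:

```python
# Sketch: tag positions from a human player's games with a style
# engine's own evaluations, producing rows for additional training.

def annotate_games(games, evaluate):
    """games: list of (fen, played_move) pairs extracted from PGNs.
    evaluate: callable(fen, move) -> score; in a real run this would
    wrap a UCI engine ("position fen ...", "go depth ...", read score).
    Returns (fen, move, style_engine_score) training rows."""
    return [(fen, move, evaluate(fen, move)) for fen, move in games]

# Stub evaluator in place of the engine binary:
rows = annotate_games([("start-fen", "e2e4")], lambda fen, mv: 0.3)
print(rows)
```

The expensive part in practice is the engine time per move, which is why the post calls it a long process; the glue code itself is trivial.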
Cornfed wrote: ↑Sun Sep 06, 2020 5:18 pm
Interesting observation re: ShashChess 6.1.3 - have you brought it to Andrea's attention?
Rather than 'sacrificing' strength [a wording that may turn off a few programmers overly sensitive about Elo], maybe it is better to think of it as parlaying, or converting, strength into style.
Also, I would not choose the overused word "blunder", even if it may be technically correct in certain cases. As I wrote in a previous post, gross blunders (the real blunders, such as hanging a Queen) are not that frequent. Most GM mistakes are hard to refute, except by chess engines, and sometimes by other GMs. It is true that an engine may show a mistake to be a losing blunder, but it is effectively one only if the opponent can refute it over the board.
We probably should not mind a stylistically strong engine exhibiting some of these hard-to-refute mistakes as well, if they enrich play or create more winning chances.
MikeB wrote: ↑Sun Sep 06, 2020 7:00 pm
It would not work well - I agree . Seems like a useless exercise anyway - since the point of using NN is to ge to the a more accurate of the truth and what they want, is an engine that plays a certain type of fiction with style. Good luck with that... - they can let me know how that out works for them ...
So, you would seem to think that there is truly no 'style' in chess?
...
Did I say that? No, I did not. You are reading the thread out of context and misinterpreted what I meant, and perhaps I did not say it well enough. Anyway, I am moving on.
MikeB wrote: ↑Sun Sep 06, 2020 7:00 pm
Move on.
You were not clear, so I simply asked.
The phrase "...what they want, is an engine that plays a certain type of fiction with style" is very opaque....the phrase 'type of fiction with style' is probably one that has never been uttered in the history of mankind...that I am aware of.
Anyway, saying "the point of using nn is to ge(t?) to a more accurate (more accurate what?) of the truth..." was poorly written and just sounded like maybe you were trying to indicate that 'style' was not possible, only 'the truth'...whatever that is.
Sorry....hope you can see now why I asked that.