Predicting a human's move

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

JBNielsen
Posts: 267
Joined: Thu Jul 07, 2011 10:31 pm
Location: Denmark

Predicting a human's move

Post by JBNielsen »

Has anyone programmed a function that can predict a human's move?

In a given position, this function should give, for each move, the probability that it is played:
move 1: 60%
move 2: 20%
move 3: 10%
move 4: 2%
etc.

The function should at least have these two parameters:
The human's rating.
The number of seconds for the move.
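As a rough illustration of what such a function's interface might look like, here is a minimal sketch (my own construction, not any existing engine's code): engine evaluations for each candidate move are turned into probabilities with a softmax whose sharpness grows with the player's rating and thinking time. Every constant is an arbitrary placeholder, not fitted to real games.

```python
import math

def move_probabilities(evals, rating, seconds):
    """Hypothetical sketch: map engine evaluations for each candidate
    move (in pawns, from the mover's point of view) to a probability
    that a human plays it. Stronger players and longer thinks sharpen
    the distribution; here that is modelled by a softmax temperature
    that shrinks as rating and thinking time grow. All constants are
    arbitrary placeholders."""
    temperature = 4.0 / ((rating / 1000.0) * math.log(2.0 + seconds))
    weights = [math.exp(e / temperature) for e in evals]
    total = sum(weights)
    return [w / total for w in weights]

# Example: three candidate moves evaluated at +0.5, +0.2 and -0.3 pawns
probs = move_probabilities([0.5, 0.2, -0.3], rating=2300, seconds=30)
```

As the thread goes on to discuss, style matters at least as much as rating, so any such two-parameter model can only be a crude first cut.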

Example.
Look at the position here: http://74.220.23.57/forum/viewtopic.php ... 59&t=47102

Will white play 2.Qh5!, 2.Ne2?? or something else?

This is important to know if we want to optimize the outcome of computer-versus-human games.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Predicting a human's move

Post by Don »

JBNielsen wrote:Has anyone programmed a function that can predict a human's move?

In a given position, this function should give, for each move, the probability that it is played:
move 1: 60%
move 2: 20%
move 3: 10%
move 4: 2%
etc.

The function should at least have these two parameters:
The human's rating.
The number of seconds for the move.
The move that is played depends much more on personal playing style than on the Elo of the player. The Elo is correlated, but only very weakly. So you can be sure that a GM is much more likely to play certain moves than a beginner, but there is not much difference between a 2300 and a 2800 player.

With only two parameters affecting the quality of the move, you could build such a function, but its power of discrimination would be very weak indeed.

A relatively good predictor is simply the move Komodo would play; that would be right perhaps 60% of the time for most players above 1800.


Example.
Look at the position here: http://74.220.23.57/forum/viewtopic.php ... 59&t=47102

Will white play 2.Qh5!, 2.Ne2?? or something else?

This is important to know if we want to optimize the outcome of computer-versus-human games.
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
JBNielsen
Posts: 267
Joined: Thu Jul 07, 2011 10:31 pm
Location: Denmark

Re: Predicting a human's move

Post by JBNielsen »

I think you are right that there is not much difference between a 2300 player and a 2800 player.
But there must be some differences.

I forgot to mention another important parameter: how strong I am myself (can be human or computer) - see below.

Let me use my example to illustrate how a computer could simulate the thinking of a human.

In my calculations I use the mark (*) for the numbers we today don't really know how to calculate.
But humans do it somehow when they decide on a move.

Now to my example.

[d]r1bqr1k1/ppp1p1bp/2np1n2/8/2PPP2p/1PNBBQ2/PR3PPN/4K1R1 b - - 1 1

more info: http://74.220.23.57/forum/viewtopic.php ... 59&t=47102

Black (at the move) has a rating of 1900.
White (a human) has a rating of 2300.

If black plays -,e5 he has an advantage of 0.20 (Rybka4)

If black plays -,Nd7 we calculate the probability for white's answer:
Ne2?? 30% (*)
Qh5! 15% (*)
Kd2 (or a similar move) 55% (*)

We can calculate the scores after these moves (Rybka4):
Ne2?? -3.20
Qh5! +0.50
Kd2 (or a similar move) -0.30


In the following calculations we must consider white's and black's ratings (2300 and 1900) and how complex the position is.
If the position is complex and with many pieces on the board, the stronger player is likely to improve his position during the game.

The expected average gamescore after -,e5 is 0.35(*)
(although black has a 0.20 better position, he is 400 rating weaker and must expect to score less (<0.50) than white)

The expected average gamescore after -,Nd7 is:
(30% * 0.95(*)) + (15% * 0.20(*)) + (55% * 0.40(*)) = 0.285 + 0.03 + 0.22 = 0.535

The conclusion is that black should play -,Nd7 with an expected gamescore of 0.535.
The move -,e5 (recommended by Rybka4) only gives an expected gamescore of 0.35.
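The arithmetic above is easy to check mechanically; this little script just restates the guessed numbers from the post (the (*) values are exactly those guesses, not computed from real data):

```python
# Reproduces the expected-gamescore arithmetic above: each of White's
# plausible replies to -,Nd7 gets a guessed probability and a guessed
# expected score for Black, and the candidate's value is the
# probability-weighted average.
replies_after_Nd7 = [
    # (move, probability White plays it, Black's expected game score)
    ("Ne2??", 0.30, 0.95),
    ("Qh5!",  0.15, 0.20),
    ("Kd2",   0.55, 0.40),
]

expected_Nd7 = sum(p * s for _, p, s in replies_after_Nd7)
expected_e5 = 0.35  # the post's guess for the engine-approved move

print(round(expected_Nd7, 3))  # 0.535, versus 0.35 after -,e5
```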

(It is a little strange calculating with probabilities in chess;
that belongs to backgammon with its dice rolls.)

- - -

We need to simulate the play of humans if we:
* want to create the strongest opponent for humans ever made
(if the cost in score is low, it will set traps and seek open and complex positions)
* want to create a perfect(?) humanlike training partner for humans
(it can be set to make 'natural' mistakes if we want, and it will set more traps)
* want to generate relevant game annotations for humans
* want to have better tools for preparing openings against a human player
(finding traps in the opponent's preferred opening lines)
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Predicting a human's move

Post by Don »

JBNielsen wrote:I think you are right that there is not much difference between a 2300 player and a 2800 player.
But there must be some differences.
Yes, of course there are. Here is an idealized model that is not perfect but maybe makes it clearer:

Suppose that in any position there is only one correct move. The stronger you are, the more likely you are to find that move. A player who has just learned the rules may find it almost at random and would be doing well to find it 5% of the time. An average amateur tournament player who understands basic tactics might find it 70% of the time. There is a world of difference in the missing 30%, however. Our 2300 player may increase this to 90%, picking up 2 out of 3 of the moves the weaker player missed. After 90% there is not much left to pick up; the 2800 player might pick up half of what remains. I'm making up these numbers in an arbitrary way, but the point is basically that the difference between playing well and playing great is only 1 move out of 20 or 30. But there is a difference, of course.
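A toy script makes the arithmetic of this model explicit; the find-rates are the arbitrary ones above, and 40 "critical" decisions per game is an invented number:

```python
# A toy version of the idealized one-correct-move model: each level
# gets a made-up probability of finding the single correct move, and
# we count how many of a game's critical decisions each level is
# expected to miss. The find-rates are arbitrary, not measured data.
find_rate = {
    "beginner": 0.05,
    "amateur":  0.70,
    "2300":     0.90,
    "2800":     0.95,  # picks up half of what the 2300 still misses
}

def expected_misses(level, critical_moves=40):
    """Expected number of missed best moves over `critical_moves`."""
    return critical_moves * (1.0 - find_rate[level])

gap = expected_misses("2300") - expected_misses("2800")
print(gap)  # about 2 moves in 40, i.e. 1 move in 20
```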

When using a computer to judge this, it's all noise and playing style. I have started an analysis, but matching Houdini's move, for example, is not a very good way to estimate the strength of a top player, because a much weaker player may play more like Houdini than a strong player does, simply due to style. Evidently there is a lot of room for personal style at the top. A lot of people are under the misconception that the stronger the players are, the more alike they will play, and that eventually they would play identically, for example two perfect players. But that is not the case. Two perfect players would only be guaranteed to play identical chess if every position had just one winning (or one drawing) move; such positions are not rare, but only a fraction of the positions in a game are such that only one move is good.

Even if you pretend Houdini plays perfectly and try to see how often a player also plays "perfectly" according to Houdini, you would have to set Houdini for very long searches. If you make any program the "standard" for what should be played, it needs to be FAR stronger than the thing you are measuring, so running Houdini or any good program for a few plies or a second or two is not enough even to match the top players.


I forgot to mention another important parameter: how strong I am myself (can be human or computer) - see below.

Let me use my example to illustrate how a computer could simulate the thinking of a human.

In my calculations I use the mark (*) for the numbers we today don't really know how to calculate.
But humans do it somehow when they decide on a move.

Now to my example.

[d]r1bqr1k1/ppp1p1bp/2np1n2/8/2PPP2p/1PNBBQ2/PR3PPN/4K1R1 b - - 1 1

more info: http://74.220.23.57/forum/viewtopic.php ... 59&t=47102

Black (at the move) has a rating of 1900.
White (a human) has a rating of 2300.

If black plays -,e5 he has an advantage of 0.20 (Rybka4)

If black plays -,Nd7 we calculate the probability for white's answer:
Ne2?? 30% (*)
Qh5! 15% (*)
Kd2 (or a similar move) 55% (*)

We can calculate the scores after these moves (Rybka4):
Ne2?? -3.20
Qh5! +0.50
Kd2 (or a similar move) -0.30


In the following calculations we must consider white's and black's ratings (2300 and 1900) and how complex the position is.
If the position is complex and with many pieces on the board, the stronger player is likely to improve his position during the game.

The expected average gamescore after -,e5 is 0.35(*)
(although black has a 0.20 better position, he is 400 rating weaker and must expect to score less (<0.50) than white)

The expected average gamescore after -,Nd7 is:
(30% * 0.95(*)) + (15% * 0.20(*)) + (55% * 0.40(*)) = 0.285 + 0.03 + 0.22 = 0.535

The conclusion is that black should play -,Nd7 with an expected gamescore of 0.535.
The move -,e5 (recommended by Rybka4) only gives an expected gamescore of 0.35.

(It is a little strange calculating with probabilities in chess;
that belongs to backgammon with its dice rolls.)

- - -

We need to simulate the play of humans if we:
* want to create the strongest opponent for humans ever made
(if the cost in score is low, it will set traps and seek open and complex positions)
* want to create a perfect(?) humanlike training partner for humans
(it can be set to make 'natural' mistakes if we want, and it will set more traps)
* want to generate relevant game annotations for humans
* want to have better tools for preparing openings against a human player
(finding traps in the opponent's preferred opening lines)
JBNielsen
Posts: 267
Joined: Thu Jul 07, 2011 10:31 pm
Location: Denmark

Re: Predicting a human's move

Post by JBNielsen »

Don wrote:
JBNielsen wrote:I think you are right that there is not much difference between a 2300 player and a 2800 player.
But there must be some differences.
Yes, of course there are. Here is an idealized model that is not perfect but maybe makes it clearer:

Suppose that in any position there is only one correct move. The stronger you are, the more likely you are to find that move. A player who has just learned the rules may find it almost at random and would be doing well to find it 5% of the time. An average amateur tournament player who understands basic tactics might find it 70% of the time. There is a world of difference in the missing 30%, however. Our 2300 player may increase this to 90%, picking up 2 out of 3 of the moves the weaker player missed. After 90% there is not much left to pick up; the 2800 player might pick up half of what remains. I'm making up these numbers in an arbitrary way, but the point is basically that the difference between playing well and playing great is only 1 move out of 20 or 30. But there is a difference, of course.

When using a computer to judge this, it's all noise and playing style. I have started an analysis, but matching Houdini's move, for example, is not a very good way to estimate the strength of a top player, because a much weaker player may play more like Houdini than a strong player does, simply due to style. Evidently there is a lot of room for personal style at the top. A lot of people are under the misconception that the stronger the players are, the more alike they will play, and that eventually they would play identically, for example two perfect players. But that is not the case. Two perfect players would only be guaranteed to play identical chess if every position had just one winning (or one drawing) move; such positions are not rare, but only a fraction of the positions in a game are such that only one move is good.

Even if you pretend Houdini plays perfectly and try to see how often a player also plays "perfectly" according to Houdini, you would have to set Houdini for very long searches. If you make any program the "standard" for what should be played, it needs to be FAR stronger than the thing you are measuring, so running Houdini or any good program for a few plies or a second or two is not enough even to match the top players.

I understand what you write.

But my main goal is NOT to predict most of the human players moves.

Please study my example.
I want to play into positions where the human player is LIKELY to make an error (Ne2??).
I do that at the cost of playing -,Nd7!? instead of the computer-optimal move -,e5.
White can take advantage of -,Nd7!? by playing Qh5!
But it is not LIKELY that he finds exactly that move.


Computers do not always find the best move against humans.
Not even if they searched to depth 100.
That is because they assume the human plays like a computer.

[d]2kr4/1ppb1pp1/1b4r1/pP2p3/P3P3/5qN1/2Q2P1P/2R1B1KR w - - 0 1

(From the book "The joys of chess")

Black, a human player, has a nasty threat of -,Bh3 and -,Qg2+ mate.
A computer will probably play Qc3 or Qd1 here, but is still lost.

The position occurred in the game Troitzky-Vogt (St. Petersburg 1896)

The game continued:
Rd1(!!), Bh3
Black did not care about a few spite checks from white
Rxd8+, Kxd8
Qd1+!!
Black must now play -,Qxd1 if he does not want to lose his own queen and the game.

Stalemate!!
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Predicting a human's move

Post by Don »

JBNielsen wrote:
Don wrote:
JBNielsen wrote:I think you are right that there is not much difference between a 2300 player and a 2800 player.
But there must be some differences.
Yes, of course there are. Here is an idealized model that is not perfect but maybe makes it clearer:

Suppose that in any position there is only one correct move. The stronger you are, the more likely you are to find that move. A player who has just learned the rules may find it almost at random and would be doing well to find it 5% of the time. An average amateur tournament player who understands basic tactics might find it 70% of the time. There is a world of difference in the missing 30%, however. Our 2300 player may increase this to 90%, picking up 2 out of 3 of the moves the weaker player missed. After 90% there is not much left to pick up; the 2800 player might pick up half of what remains. I'm making up these numbers in an arbitrary way, but the point is basically that the difference between playing well and playing great is only 1 move out of 20 or 30. But there is a difference, of course.

When using a computer to judge this, it's all noise and playing style. I have started an analysis, but matching Houdini's move, for example, is not a very good way to estimate the strength of a top player, because a much weaker player may play more like Houdini than a strong player does, simply due to style. Evidently there is a lot of room for personal style at the top. A lot of people are under the misconception that the stronger the players are, the more alike they will play, and that eventually they would play identically, for example two perfect players. But that is not the case. Two perfect players would only be guaranteed to play identical chess if every position had just one winning (or one drawing) move; such positions are not rare, but only a fraction of the positions in a game are such that only one move is good.

Even if you pretend Houdini plays perfectly and try to see how often a player also plays "perfectly" according to Houdini, you would have to set Houdini for very long searches. If you make any program the "standard" for what should be played, it needs to be FAR stronger than the thing you are measuring, so running Houdini or any good program for a few plies or a second or two is not enough even to match the top players.

I understand what you write.

But my main goal is NOT to predict most of the human players moves.

Please study my example.
I want to play into positions where the human player is LIKELY to make an error (Ne2??).
I do that at the cost of playing -,Nd7!? instead of the computer-optimal move -,e5.
White can take advantage of -,Nd7!? by playing Qh5!
But it is not LIKELY that he finds exactly that move.


Computers do not always find the best move against humans.
Not even if they searched to depth 100.
That is because they assume the human plays like a computer.
It's computationally expensive to play for cheap shots and swindles, though. First, you have to identify them, and second, the swindle will generally be a worse move, even if only slightly worse.

And the need to do that does not seem to exist. Computers are already far superior, and humans hardly have any chance against them these days. I don't think computers are used much to play games against humans any longer. I would spend the energy making the program stronger if you want to beat humans even more than you already do.

I understand what you are saying though. We have all seen positions where the computer "clarifies" a complicated position where the opponent could easily go wrong. The general solution to this is simply to score positions which are stylistically favorable to a computer a bit higher. Try to get the kind of game YOU play better when all else is equal.

It's well known that computers play closed positions very poorly, so even a closed position that is objectively better may not be as good as one with less advantage but that allows the computer to play to its own strengths.

I think what you are suggesting is hard to do in any sort of controlled way. It's almost like saying "just play the best move" when all the resources of a program are already dedicated to doing that. Saying "play a tricky move" is a similar problem: we don't have a reliable way to measure it, and measuring it will be computationally very expensive.


[d]2kr4/1ppb1pp1/1b4r1/pP2p3/P3P3/5qN1/2Q2P1P/2R1B1KR w - - 0 1

(From the book "The joys of chess")

Black, a human player, has a nasty threat of -,Bh3 and -,Qg2+ mate.
A computer will probably play Qc3 or Qd1 here, but is still lost.

The position occurred in the game Troitzky-Vogt (St. Petersburg 1896)

The game continued:
Rd1(!!), Bh3
Black did not care about a few spite checks from white
Rxd8+, Kxd8
Qd1+!!
Black must now play -,Qxd1 if he does not want to lose his own queen and the game.

Stalemate!!
JBNielsen
Posts: 267
Joined: Thu Jul 07, 2011 10:31 pm
Location: Denmark

Re: Predicting a human's move

Post by JBNielsen »

You are right that "the need to do that does not seem to exist" if we want a program to defeat the human world champion as Deep Blue did in 1997.
The strongest engines can easily do that today in their own style.

"I don't think computers are used much to play games against humans any longer".
No, exactly!
They are too strong and play in a non-human style.
They should have parameters like:
- make few or many human-like errors
- set few or many traps
- take small or big chances
- seek more or less open and complicated positions
That would be an attractive opponent for many humans.

"Try to get the kind of game YOU play better when all else is equal".
Agree.
So in this case you are also willing not to play the best move - the move that would normally get the highest score :wink:
I tried that with my Dabbaba a few months ago, and it worked fine.
You might have seen my posts here.

"I think what you are suggesting is hard to do in any sort of controlled way" and "we don't have a reliable way to measure it"
Exactly.
We have not even described how it could be done yet!
I know it will not be easy.
But it is challenging to solve because it is hard!

"It's computationally expensive to play for cheap shots and swindles though" and "measuring it will be computationally very expensive"
How can you claim that when we don't know how to do it yet?
And even if it took 80% of the computing time, it would only cost about one ply of search depth.
The benefits might be much bigger than the loss of that one ply.

Try to think 30 years back to 1983.
Experts had by then tried for a decade or two to make a good program.
But compared with today the results were rather primitive.
People kept working, though, and things happened.
After minimax came alpha-beta, quiescence search, simple and advanced killers, hash tables, better move generators, better evaluations, null-move, LMR etc.
Huge improvements, and here 30 years later we still see improvements every year.

I wonder what could happen if we at least STARTED giving chess engines a human understanding.
Can it be incorporated in the alpha-beta search, or should they run parallel?
How should the alpha-beta search be modified?
Can we make cut-offs when the loss in score is bigger than the maximum risk we will accept?
Etc.

I return to "the need to do that does not seem to exist".
And I repeat this from my earlier post:

We need to simulate the play of humans if we:
* want to create the strongest opponent for humans ever made
(if the cost in score is low, it will set traps and seek open and complex positions)
* want to create a perfect(?) humanlike training partner for humans
(it can be set to make 'natural' mistakes if we want, and it will set more traps)
* want to generate relevant game annotations for humans
* want to have better tools for preparing openings against a human player
(finding traps in the opponent's preferred opening lines)

It would be very nice if the automatic annotator mentioned the Troitzky line even in games where it was not played!
It might find good traps in old master games that no one has discovered yet.

Some day one or several bright people will build this artificial human chess understanding.
And hopefully earn a lot of money.

While everyone wonders why it took so long...
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Predicting a human's move

Post by Don »

I hear you, but I have been doing this a very long time and have a good sense of what is reasonable.

Finding a great move has proved to take everything we have: fast computers and powerful hardware, with all resources devoted to playing a really good move. Now which do you think is easier to do, find a great move, or find a great move that has all sorts of minefields built into it? So you are trying to solve a problem that is even harder than just playing strong chess.

If you are looking for ways to play weaker chess in a style indistinguishable from that of weak humans, then there are some possibilities and even some papers written on it. So there is a lot of room for experimentation.

There was a computer/computer game played a few decades ago where the computer seemed to blunder its queen outright in what appeared to be a winning position. The position was very interesting because it turned out the computer was actually getting mated, and the queen giveaway was just a horizon (delaying) move. What was also interesting is that no human would have given the game away like that, because the mate was far from obvious. If you are going to lose the game anyway, why make it obvious to the other player?

So there are all sorts of hacks possible to deal with these things. What was suggested here was a simple hack: if you see you are getting checkmated or losing huge material, and you suddenly change the move you are playing at the same time, just revert to the move you were going to play before you saw the big loss and hope your opponent doesn't see the problem.
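A minimal sketch of that revert hack, with an assumed iterative-deepening interface (the threshold and the (move, score) history are my own placeholders, not any real engine's code):

```python
# If the deepest search iteration both changes the chosen move and
# shows a catastrophic score drop, play the previous iteration's move
# instead, so the disaster is not telegraphed to the opponent.
PANIC_DROP = 5.0  # a score collapse this big (in pawns) triggers the revert

def choose_move(iterations):
    """Return the move to play given a list of (move, score) results
    from successive search depths, deepest last."""
    best_move, best_score = iterations[-1]
    if len(iterations) >= 2:
        prev_move, prev_score = iterations[-2]
        if best_move != prev_move and prev_score - best_score >= PANIC_DROP:
            return prev_move  # hide the big loss behind the old move
    return best_move
```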

Another hack, if your goal is to make things tricky for the opponent, is to favor positions with lots of legal moves. You can do that with an asymmetrical evaluation, such as giving just the computer's side a bonus based on total mobility.
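A sketch of that asymmetric mobility bonus; the weight and the interface are placeholders for whatever a real engine exposes internally:

```python
# Credit a bonus to the static evaluation only for the computer's
# side, proportional to its number of legal moves, so the search
# drifts toward busy positions the computer can exploit.
MOBILITY_WEIGHT = 0.02  # pawns per legal move, an arbitrary choice

def asymmetric_eval(base_eval, legal_moves, for_computer):
    """Evaluation in pawns, with a mobility bonus applied only when
    scoring the computer's side."""
    if for_computer:
        return base_eval + MOBILITY_WEIGHT * legal_moves
    return base_eval
```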

If you want to make a tricky player you can do a special analysis that uses searches to find positions where the opponent has many reasonable moves that are wrong: reasonable at low depths, wrong at high depths. Such a player might be a real hoot to play, but the overhead of doing such stuff results in far weaker play. That would be OK, though, if the goal was to produce an extremely interesting player, or to simulate one of those players every club has: the hack who loves sacrificial play and doesn't care whether it's quite sound or not.
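One way such an analysis could be scored, assuming we already have a per-move evaluation at a shallow and at a deep depth (the margins are invented thresholds, not tuned values):

```python
# A move is a "trap" for the opponent when it looks near-best at the
# shallow depth but loses badly at the deep one. Evaluations are from
# the opponent's point of view; the engine calls that would produce
# them are abstracted away.
def trap_count(shallow, deep, shallow_margin=0.3, deep_loss=1.0):
    """Count opponent moves within `shallow_margin` of the shallow
    best that score at least `deep_loss` worse than the deep best."""
    best_shallow = max(shallow.values())
    best_deep = max(deep.values())
    traps = 0
    for move, s in shallow.items():
        if best_shallow - s <= shallow_margin and best_deep - deep[move] >= deep_loss:
            traps += 1
    return traps

# Illustrative numbers in the spirit of the thread's Ne2 example:
# Ne2 looks playable at low depth but collapses at high depth.
shallow_evals = {"Ne2": 0.4, "Qh5": 0.5, "Kd2": 0.3}
deep_evals = {"Ne2": -3.2, "Qh5": 0.5, "Kd2": -0.3}
```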

There are other things you can do too. With the right adjustments to the evaluation we can make Komodo play on the "other side" of soundness. For example, if we lower the value of a pawn we can make the program quite willing to sacrifice pawns at the drop of a hat. You can also lower the value of the rook to make the program willing to make exchange sacrifices. If you want the program to play really wild and crazy chess, you can lower the values of ALL the pieces and pawns so that positional factors dominate.
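As a toy illustration of that piece-value knob (classical textbook values; the scale factor is illustrative, not Komodo's actual parameter):

```python
# Scaling material values down relative to positional terms makes
# sacrifices look cheaper to the engine.
STANDARD = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}

def scaled_values(factor):
    """Scale every piece value; below 1.0, positional evaluation
    terms weigh relatively more and material sacrifices get cheaper."""
    return {piece: value * factor for piece, value in STANDARD.items()}

wild = scaled_values(0.7)  # an exchange now "costs" 1.4 pawns, not 2.0
```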

JBNielsen wrote:You are right that "the need to do that does not seem to exist" if we want a program to defeat the human world champion as Deep Blue did in 1997.
The strongest engines can easily do that today in their own style.

"I don't think computers are used much to play games against humans any longer".
No, exactly!
They are too strong and play in a non-human style.
They should have parameters like:
- make few or many human-like errors
- set few or many traps
- take small or big chances
- seek more or less open and complicated positions
That would be an attractive opponent for many humans.

"Try to get the kind of game YOU play better when all else is equal".
Agree.
So in this case you are also willing not to play the best move - the move that would normally get the highest score :wink:
I tried that with my Dabbaba a few months ago, and it worked fine.
You might have seen my posts here.

"I think what you are suggesting is hard to do in any sort of controlled way" and "we don't have a reliable way to measure it"
Exactly.
We have not even described how it could be done yet!
I know it will not be easy.
But it is challenging to solve because it is hard!

"It's computationally expensive to play for cheap shots and swindles though" and "measuring it will be computationally very expensive"
How can you claim that when we don't know how to do it yet?
And even if it took 80% of the computing time, it would only cost about one ply of search depth.
The benefits might be much bigger than the loss of that one ply.

Try to think 30 years back to 1983.
Experts had by then tried for a decade or two to make a good program.
But compared with today the results were rather primitive.
People kept working, though, and things happened.
After minimax came alpha-beta, quiescence search, simple and advanced killers, hash tables, better move generators, better evaluations, null-move, LMR etc.
Huge improvements, and here 30 years later we still see improvements every year.

I wonder what could happen if we at least STARTED giving chess engines a human understanding.
Can it be incorporated in the alpha-beta search, or should they run parallel?
How should the alpha-beta search be modified?
Can we make cut-offs when the loss in score is bigger than the maximum risk we will accept?
Etc.

I return to "the need to do that does not seem to exist".
And I repeat this from my earlier post:

We need to simulate the play of humans if we:
* want to create the strongest opponent for humans ever made
(if the cost in score is low, it will set traps and seek open and complex positions)
* want to create a perfect(?) humanlike training partner for humans
(it can be set to make 'natural' mistakes if we want, and it will set more traps)
* want to generate relevant game annotations for humans
* want to have better tools for preparing openings against a human player
(finding traps in the opponent's preferred opening lines)

It would be very nice if the automatic annotator mentioned the Troitzky line even in games where it was not played!
It might find good traps in old master games that no one has discovered yet.

Some day one or several bright people will build this artificial human chess understanding.
And hopefully earn a lot of money.

While everyone wonders why it took so long...
carldaman
Posts: 2287
Joined: Sat Jun 02, 2012 2:13 am

Re: Predicting a human's move

Post by carldaman »

Excellent post, Jens! :) You've perfectly articulated a synthesis of the things discussed here earlier in the year, and then some. I think the developer community needs to not lose sight of these factors. It does not even have to interfere with the (legitimate) aim of creating the strongest possible engine, as that can be pursued separately; the two are not mutually exclusive.

Weaker engine personalities can be great tools for the club player if they can come up with (sometimes risky) ideas that are hard to refute over the board. Often the 'best' move is the one that is most challenging/difficult for the opponent, even if not technically best (since it would require 3000+ strength to refute it). This is in line with how Lasker and the young Tal, plus many other greats, played.

I like to analyze with the Zappa Aggressor personality, and also like to use an engine like Hannibal, to come up with interesting and playable ideas, besides using Komodo, Houdini, Hiarcs etc.

"Dubious, therefore playable" is a humorous but profound saying by GM Tartakower.

Regards,
CL
carldaman
Posts: 2287
Joined: Sat Jun 02, 2012 2:13 am

Re: Predicting a human's move

Post by carldaman »

Don wrote:I hear you, but I have been doing this a very long time and have a good sense of what is reasonable.

Finding a great move has proved to take everything we have: fast computers and powerful hardware, with all resources devoted to playing a really good move. Now which do you think is easier to do, find a great move, or find a great move that has all sorts of minefields built into it? So you are trying to solve a problem that is even harder than just playing strong chess.

---

There are other things you can do too. With the right adjustments to the evaluation we can make Komodo play on the "other side" of soundness. For example, if we lower the value of a pawn we can make the program quite willing to sacrifice pawns at the drop of a hat. You can also lower the value of the rook to make the program willing to make exchange sacrifices. If you want the program to play really wild and crazy chess, you can lower the values of ALL the pieces and pawns so that positional factors dominate.

Robert Flesher published modified settings for Zappa Mexico II (dubbed the 'Dissident Aggressor') that make Zappa play in a crazy but very effective style, at the expense of being about 400-500 rating points weaker. However, since Zappa is quite strong (close to 3000 on CCRL 40/40, running on 4 cores), subtracting all those points still leaves us with a 2500-rated attacking monster that destroys not only strong human players but also 'weaker' engines below 2500 CCRL Elo. It is also a great analysis/preparation tool.

This is very significant, since it did not involve any actual code changes, only tweaking of parameters. It's interesting that Robert actually lowered the value of the pieces, and not the pawns, to get Zappa to be very willing to sacrifice material.
Once in a while Zappa Aggressor will lose to a lower-rated player, having run out of pieces and pawns to throw away, but most of the time its risky play is very effective and the (strong) opponent will be overwhelmed by the complications. This works as intended, and it shows that an effective aggressive style is far from an impossible task.

I've been able to get similar results with other engines, such as Little Goliath Evolution, by tweaking its parameters to get it to play very risky yet strong chess (at around 2200-2300 strength, which is quite acceptable for a sparring partner for a club player).

There are other examples: Pawel Koziol has also worked hard to implement a swindle mode and various weaker personalities in his engines. I think he will admit it's not easy to do this (especially a swindle mode that works properly). Thinker is another engine that tries to use risky tactics in its Active personality. And JB Nielsen's current work and enthusiasm with Dabbaba must be commended as well.

Regards,
CL