It is not an advanced chess engine (I know it was never supposed to be), but its rating gives me high hopes for my own engine.
Seeing that an engine as basic as Wukong already scores 1474 on the CCRL, I wonder what one needs to do (or omit) to build an engine that scores in the 1200s...
Well, when I was getting started on my chess variant AI, it was VERY weak. At that time I had yet to watch the VICE videos and had never heard of this forum.
So my engine didn't have quiescence search, made/unmade moves directly on the UI board (one can only imagine how slow it was), and the only move ordering technique it used was my pretty strange implementation of killer moves.
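For reference, a bare-bones quiescence search is only a dozen lines or so. Here is a rough sketch, assuming a standard fail-hard negamax framework; the Move type and the evaluate(), generate_captures(), make_move(), unmake_move() helpers are hypothetical placeholders, not taken from any particular engine:

/* hypothetical engine interface, just so the sketch compiles */
typedef int Move;
int  evaluate(void);                 /* static evaluation, side-to-move POV */
int  generate_captures(Move *list);  /* fills list, returns capture count   */
void make_move(Move m);
void unmake_move(Move m);

/* search only captures until the position is "quiet" */
int quiescence(int alpha, int beta)
{
    int stand_pat = evaluate();      /* score if we simply stop capturing */
    if (stand_pat >= beta)
        return beta;                 /* fail-hard beta cutoff */
    if (stand_pat > alpha)
        alpha = stand_pat;

    Move moves[256];
    int count = generate_captures(moves);
    for (int i = 0; i < count; i++) {
        make_move(moves[i]);
        int score = -quiescence(-beta, -alpha);
        unmake_move(moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;                    /* best score found among quiet lines */
}

Without something like this, the main search stops in the middle of capture sequences and the evaluation at those leaves is mostly noise.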
I never measured its strength, but I bet it was pretty low.
I must also note that any bug in the code can cost a significant amount of strength (I often fixed bugs that were worth something like 60 Elo each).
Another significant Elo-eater might be inefficient make/unmake or move generation. For example, my current AI for a Unity game has very convoluted Make/Unmake functions, because there are pieces with very different move rules that all have to be handled (e.g. some pieces do not move when they capture something, some do not disappear when promoting but may promote only once per game, some do not disappear when they are captured, and so on). This puts many additional ifs in the code, and I bet it slows the engine down hard.
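To give an idea of what I mean, here is a rough sketch in C (the real project is a C# Unity game, and the flag names and the Piece struct below are made up purely for illustration) of how per-piece special rules pile extra branches into Make/Unmake:

#include <stdint.h>

/* hypothetical special-rule flags, one bit per exception to normal chess */
enum {
    STAYS_ON_CAPTURE   = 1u << 0,  /* captures without leaving its square */
    SURVIVES_PROMOTION = 1u << 1   /* keeps its identity when it promotes */
};

typedef struct {
    int      type;      /* piece type id, 0 = empty square               */
    uint32_t rules;     /* bitmask of special-rule flags                 */
    int      promoted;  /* this piece may promote only once per game     */
} Piece;

void make_move(Piece board[], int from, int to, int promote_to)
{
    Piece mover   = board[from];
    int  captures = (board[to].type != 0);

    if (captures)
        board[to].type = 0;                 /* remove the captured piece */

    if (captures && (mover.rules & STAYS_ON_CAPTURE))
        return;                             /* mover never leaves 'from' */

    if (promote_to) {
        if (mover.rules & SURVIVES_PROMOTION) {
            if (!mover.promoted)
                mover.promoted = 1;         /* one-shot promotion, piece stays */
        } else {
            mover.type = promote_to;        /* ordinary promotion */
        }
    }

    board[to]   = mover;
    board[from] = (Piece){0};
    /* unmake_move has to undo every one of these branches in reverse,
       which is where both the slowdown and the bugs tend to hide */
}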
No4b wrote: ↑Sun Sep 27, 2020 6:48 pm
Well, when I was getting started on my chess variant AI, it was VERY weak. At that time I had yet to watch the VICE videos and had never heard of this forum.
So my engine didn't have quiescence search, made/unmade moves directly on the UI board (one can only imagine how slow it was), and the only move ordering technique it used was my pretty strange implementation of killer moves.
I never measured its strength, but I bet it was pretty low.
I must also note that any bug in the code can cost a significant amount of strength (I often fixed bugs that were worth something like 60 Elo each).
Another significant Elo-eater might be inefficient make/unmake or move generation. For example, my current AI for a Unity game has very convoluted Make/Unmake functions, because there are pieces with very different move rules that all have to be handled (e.g. some pieces do not move when they capture something, some do not disappear when promoting but may promote only once per game, some do not disappear when they are captured, and so on). This puts many additional ifs in the code, and I bet it slows the engine down hard.
That seems to be quite a complicated chess variant. Can I play it somewhere/somehow?
(I did try some chess variants, but some just have too many different pieces with too many different capabilities; it becomes hard to remember which piece can actually perform which moves, and in what situations.)
Gabor Szots wrote: ↑Mon Sep 28, 2020 10:53 am
I am just testing BBC. First results do not show +600 compared to TSCP, though.
Thank you so much, Gabor.
It shouldn't be +600. :)
BBC 1.0 should be about +100/150 stronger than TSCP.
The current development version is already as strong as VICE (after tuning the evaluation), but I need to do lots of tests before releasing the next version.
So it should be only somewhat stronger than TSCP.
Also, I played too few games.
How many games has BBC played already?
Did it crash?
maksimKorzh wrote: ↑Mon Sep 28, 2020 10:58 am
How many games has BBC played already?
Did it crash?
66 games, no crash.
In another thread you wrote that it beat TSCP 15.5-0.5. Based on that, and remaining on the cautious side, I selected opponents around 2200. I'm going to change that a bit. I estimate that in the end its rating will be somewhere near 2000.
No4b wrote: ↑Sun Sep 27, 2020 6:48 pm
Well, when I was getting started on my chess variant AI, it was VERY weak. At that time I had yet to watch the VICE videos and had never heard of this forum.
So my engine didn't have quiescence search, made/unmade moves directly on the UI board (one can only imagine how slow it was), and the only move ordering technique it used was my pretty strange implementation of killer moves.
I never measured its strength, but I bet it was pretty low.
I must also note that any bug in the code can cost a significant amount of strength (I often fixed bugs that were worth something like 60 Elo each).
Another significant Elo-eater might be inefficient make/unmake or move generation. For example, my current AI for a Unity game has very convoluted Make/Unmake functions, because there are pieces with very different move rules that all have to be handled (e.g. some pieces do not move when they capture something, some do not disappear when promoting but may promote only once per game, some do not disappear when they are captured, and so on). This puts many additional ifs in the code, and I bet it slows the engine down hard.
That seems to be quite a complicated chess variant. Can I play it somewhere/somehow?
(I did try some chess variants, but some just have too many different pieces with too many different capabilities; it becomes hard to remember which piece can actually perform which moves, and in what situations.)
Well, it's a Unity game and it's currently a work in progress.
I can PM you a link to a previous test version I made for my friends back in June (there has been some progress since then, but I haven't done everything I wanted yet). The only problem I can see is that all the text describing the pieces' movesets is currently only in Russian, although I suppose I could briefly describe each one, I don't know.
Gabor Szots wrote: ↑Mon Sep 28, 2020 10:53 am
I am just testing BBC. First results do not show +600 compared to TSCP, though.
Thank you so much, Gabor.
It shouldn't be +600. :)
BBC 1.0 should be about +100/150 stronger than TSCP.
The current development version is already as strong as VICE (after tuning the evaluation), but I need to do lots of tests before releasing the next version.
So it should be only somewhat stronger than TSCP.
Also, I played too few games.
How many games has BBC played already?
Did it crash?
I decided to run a quick match of BBC 1.0 against Drofa 1.0 (Linux compile vs Linux compile):
Score of Drofa_v.1.0 vs bbc_1.0_64bit_linux: 13 - 3 - 4 [0.750]
Elo difference: 190.85 +/- 172.61
20 of 20 games finished.
It somewhat confirms your ~100-150 estimate, although many more games are needed for an accurate result.
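By the way, the 190.85 figure is just the usual logistic Elo formula applied to the 0.750 score, so anyone can reproduce it; a quick sketch of the math in C:

#include <math.h>
#include <stdio.h>

/* Elo difference from a match score: expected score s = 1/(1 + 10^(-d/400)),
   so d = -400 * log10(1/s - 1). For 13 wins, 3 losses, 4 draws: s = 0.75. */
int main(void)
{
    double wins = 13.0, losses = 3.0, draws = 4.0;
    double s = (wins + 0.5 * draws) / (wins + losses + draws);  /* 0.750 */
    double d = -400.0 * log10(1.0 / s - 1.0);
    printf("score %.3f -> elo diff %.2f\n", s, d);  /* prints about 190.85 */
    return 0;
}

The +/- 172.61 is just the error margin of a 20-game sample.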
As I watched some of the games unfold, it came to my attention that BBC 1.0 has some sort of bug where it prints a 0.00 score even in completely lost positions (see the game below). I suppose it is either a repetition or a TT issue, but it could be excessive pruning as well. If this is not fixed yet, I have a feeling that such a bug may have a really big negative impact on overall strength. If you want, I can PM you an archive with all the games played.
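Just to illustrate the kind of thing I mean (this is a generic sketch, not BBC's actual code, and all names are made up): a typical repetition check scans the hash keys of earlier positions and scores a repeat as a draw.

#include <stdint.h>
#include <stdio.h>

/* generic sketch: positions already played/searched are kept as a list of
   Zobrist keys; if the current key appears there, the line is scored 0 */
static int is_repetition(const uint64_t *history, int count, uint64_t key)
{
    for (int i = 0; i < count; i++)
        if (history[i] == key)
            return 1;   /* same position occurred earlier on this path */
    return 0;
}

int main(void)
{
    uint64_t history[] = { 0x1111u, 0x2222u, 0x3333u };
    printf("%d\n", is_repetition(history, 3, 0x2222u));  /* 1 -> draw score   */
    printf("%d\n", is_repetition(history, 3, 0x4444u));  /* 0 -> keep looking */
    return 0;
}

Returning 0 there is correct on its own; the 0.00-in-lost-positions symptom typically shows up when such a path-dependent draw score is also stored in the transposition table and later trusted on a line where the repetition is not actually forced.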
maksimKorzh wrote: ↑Mon Sep 28, 2020 10:58 am
How many games has BBC played already?
Did it crash?
66 games, no crash.
In another thread you wrote that it beat TSCP 15.5-0.5. Based on that, and remaining on the cautious side, I selected opponents around 2200. I'm going to change that a bit. I estimate that in the end its rating will be somewhere near 2000.
Thank you, Gabor. Even 2000 seems a bit too high. I think it should be around 1950, because version 1.0 is weaker than VICE, which is around 2000. I'm now fixing bugs and have also improved the evaluation, so the next version should be much stronger.
Re: the result vs TSCP
- that was 30 sec + 0; at 2 min + 1 sec the result should be worse for BBC. It happens due to the difference in search depth, and it is more critical at ultra-short time controls.
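To put rough numbers on that: with a common rule-of-thumb time allocation (the formula and constants below are generic, not BBC's actual time manager), 30 sec + 0 gives well under a second per move while 2 min + 1 sec gives several seconds, which is roughly an extra ply or two of depth for both engines:

#include <stdio.h>

/* generic time budget: share of remaining time plus most of the increment */
static double budget_ms(double remaining_ms, double increment_ms, int moves_to_go)
{
    return remaining_ms / moves_to_go + 0.8 * increment_ms;
}

int main(void)
{
    /* 30s + 0 vs 2min + 1s, assuming about 40 moves still to play */
    printf("30s+0 : %.0f ms per move\n", budget_ms(30000, 0, 40));      /* ~750 ms  */
    printf("2m+1s : %.0f ms per move\n", budget_ms(120000, 1000, 40));  /* ~3800 ms */
    return 0;
}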
maksimKorzh wrote: ↑Mon Sep 28, 2020 6:03 pm
Re: the result vs TSCP
- that was 30 sec + 0; at 2 min + 1 sec the result should be worse for BBC. It happens due to the difference in search depth, and it is more critical at ultra-short time controls.
Christophe Théron, the author of Chess Tiger, once said: if an engine is sensitive to the time control, then it is badly written.