zullil wrote: ↑Sat Sep 07, 2019 4:30 pm
Forgive me, but it seems all you've done is add a "permanent" hash table. This has been done many times before, even for Stockfish. How is this different? I mean, who would want to publish this?
We are using entry-level techniques from RL: Q-learning.
The search and the evaluation function provide the penalties and bonuses you want.
But we are not here to teach you reinforcement learning.
Never asked for you to teach me anything. If nothing else, you might want to review your article/abstract, since there's absolutely nothing in it that seems to qualify as reinforcement learning. What you present in the "article" seems to be nothing more than a hash table saved on disk, an idea that's been around for a long time. Perhaps if I review the source code I'll see something more... .
You might as well review your understanding of Q-learning, because it is the simplest algorithm in reinforcement learning. At first you wanted penalties and bonuses, and we gave them to you. Tell us what you want next and we shall provide it. It looks like your definition of reinforcement learning is questionable.
amchess wrote: ↑Wed Sep 11, 2019 12:05 am
You might as well review your understanding of Q-learning, because it is the simplest algorithm in reinforcement learning. At first you wanted penalties and bonuses, and we gave them to you. Tell us what you want next and we shall provide it. It looks like your definition of reinforcement learning is questionable.
I never mentioned bonuses or penalties, so you must have me confused with someone else. Good luck with your project.
Builds for gcc 8.
Bug corrected.
Now the file is written only when the GUI sends the quit command.
Important
An infinite analysis must be stopped before any other operation.
If not, the learning hash table is not filled and the experience.bin file
is written incorrectly. https://github.com/amchess/BrainLearn/r ... /tag/4.2.1
The main novelties are:
-support for live chessdb
-a new offline learning mode based on pattern recognition. It can be obtained via a private application; without it, the behaviour is the same as in the previous version.
Details are in the ReadMe file: https://github.com/amchess/BrainLearn/b ... /Readme.md
How do I obtain the private offline learning application?