Komodo 12 and MCTS

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson

Ozymandias
Posts: 1126
Joined: Sun Oct 25, 2009 12:30 am

Re: Komodo 12 and MCTS

Post by Ozymandias » Mon May 14, 2018 4:04 pm

lkaufman wrote:
Mon May 14, 2018 3:54 pm
Ozymandias wrote:
Mon May 14, 2018 9:10 am
lkaufman wrote:
Mon May 14, 2018 6:11 am
we have spent the last month creating a Komodo MCTS (Monte-Carlo Tree Search) option for it. It is really a second engine, and I understand that ChessBase will treat it as an independent engine when they release their version of Komodo 12 (soon)
I guess they plan to offer free upgrades, at least for this new engine? Otherwise they'll be selling an early prototype that will soon become outdated.
We plan to give ChessBase at least one free upgrade for Komodo MCTS. I expect they would offer it to their customers free, but it is their decision, not ours.
That clarifies things.

Kurt Meyer
Posts: 18
Joined: Mon Jun 19, 2017 2:37 pm

Re: Komodo 12 and MCTS

Post by Kurt Meyer » Mon May 14, 2018 4:30 pm

There is no point buying Komodo from ChessBase. I won't be fooled again.

Not once or twice... shortly after the support period for Komodo expires, a brand new version pops up... and guess what? Pay for it again, 60 bucks.

Business, right?

Gian-Carlo Pascutto
Posts: 1184
Joined: Sat Dec 13, 2008 6:00 pm
Contact:

Re: Komodo 12 and MCTS

Post by Gian-Carlo Pascutto » Mon May 14, 2018 4:41 pm

There are some papers about Randomized Best-First Minimax in chess. If you understand how MCTS/UCT works, you'll see the similarities.
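For context, the UCT selection rule at the heart of the MCTS schemes being compared here can be sketched as follows (a minimal illustration, not any engine's actual code):

```python
import math

def uct_select(parent_visits, children):
    """Return the index of the child maximizing the UCT score.

    children is a list of (mean_value, visit_count) pairs, with values
    from the parent's point of view. C trades off exploitation
    (mean_value) against exploration (the sqrt bonus).
    """
    C = 1.4  # a common default, roughly sqrt(2)
    def score(i):
        mean, visits = children[i]
        if visits == 0:
            return float("inf")  # always try unvisited moves first
        return mean + C * math.sqrt(math.log(parent_visits) / visits)
    return max(range(len(children)), key=score)
```

Best-first minimax variants differ mainly in how that per-child score is computed and how values are backed up, which is why the resulting tree shapes look so similar.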

Gian-Carlo Pascutto
Posts: 1184
Joined: Sat Dec 13, 2008 6:00 pm
Contact:

Re: Komodo 12 and MCTS

Post by Gian-Carlo Pascutto » Mon May 14, 2018 4:50 pm

lkaufman wrote:
Mon May 14, 2018 6:11 am
we are reasonably certain that it is the strongest MCTS engine available at this time for the pc. We estimate that its rating on the CCRL 40/40 scale will be about 3000 on one thread, 3050 on two, and 3070 on three or more.
So the search loses about 330 Elo? I am not sure how terrible that is. It's hard to compare, given the relative development effort put into each.

But winning back 330 Elo is not going to be easy after the initial quick gains.

Gian-Carlo Pascutto
Posts: 1184
Joined: Sat Dec 13, 2008 6:00 pm
Contact:

Re: Komodo 12 and MCTS

Post by Gian-Carlo Pascutto » Mon May 14, 2018 5:02 pm

mjlef wrote:
Mon May 14, 2018 1:46 pm
We do it the normal MCTS way (non-min-max) except for one small case in the tree.
You don't average mates or draws-by-rule, I presume :D
they use a neural network, which Komodo doesn't. MCTS has nothing to do with it.
Well, yes and no. There are reasons why people tried MCTS with playouts, and later with neural networks, over alpha-beta. You can't entirely decouple those concepts. If it were so easy, they would not lose 330 Elo.

But that doesn't mean mixing them up won't work. For sure a neural-network evaluation in an alpha-beta searcher works fine. As for the other way around, that's up to the Komodo guys to prove, right?

Daniel Shawul
Posts: 3757
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Komodo 12 and MCTS

Post by Daniel Shawul » Mon May 14, 2018 5:07 pm

lkaufman wrote:
Mon May 14, 2018 3:51 pm
Werewolf wrote:
Mon May 14, 2018 6:52 am
So Larry, is this new version using the old Komodo hand-tuned evaluation function, but with MCTS "search"?

There's no NN in the pipeline..?
The eval for Komodo MCTS is different from the eval for normal Komodo, but they are related. No NN is planned as of now, but that doesn't mean we won't try it. As I wrote in my article for New In Chess (about AlphaZero), I was not convinced that their success was due to the NN; more likely it was due to MCTS and hardware. I'm doubtful that a NN can do as well as five centuries of accumulated human knowledge about chess.
GPU is not useful for MCTS, only for NN. If we find a way to make good use of the GPU, we will do so.
You are absolutely wrong about a NN being unable to capture five centuries of human chess knowledge. Though I sympathize with the fact that your job on Komodo might be taken over by computers now, I have no doubt a NN eval can be better than any hand-written evaluation, and in fact LCZero has already proved it, IMO -- Kai estimates LCZero is 3300 Elo positional but only 2000 Elo tactical.

The only problem is that NN evaluation is so slow that it needs massive hardware acceleration. I have a NN Scorpio running with TensorFlow now. Even using a single-neuron NN (all inputs are weighted and summed, just like in a standard evaluation), the nps goes down by a factor of 50x. This is because of the massive overhead of 20 microseconds per session-evaluation call in TF, which brought the nps down from 1.2 Mnps to just 30 knps. Then when I tried a 1 block x 64 filters resnet it went down to 5 knps, etc. In terms of nps, LCZero is doing pretty well with its big 15x192 network.

The NN evaluation is going to be so slow that the only feasible search becomes highly selective MCTS, not full-width alpha-beta. The A0 guys have no choice in that regard. So when you say that MCTS is the key to their success, it is actually not: there are better algorithms for chess, but most are not feasible with such a slow evaluation, even after acceleration with a GPU.
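The "single neuron" case Daniel mentions really is just a classic linear evaluation; a minimal sketch (feature encoding and weights are illustrative, not Scorpio's actual code):

```python
def single_neuron_eval(features, weights, bias=0.0):
    """One-neuron 'network': a weighted sum of the input features plus a
    bias, which is exactly the shape of a hand-written linear evaluation
    (material weights, mobility weights, and so on)."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# Material-only example: feature = (our count - their count) per piece type,
# weights = conventional centipawn values for P, N, B, R, Q.
features = (1, 0, 0, -1, 0)          # up a pawn, down a rook
weights = (100, 320, 330, 500, 900)
score = single_neuron_eval(features, weights)  # -400 centipawns
```

Since the arithmetic itself is trivial, the 50x slowdown reported above is essentially pure per-call framework overhead, not computation.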

Daniel Shawul
Posts: 3757
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Komodo 12 and MCTS

Post by Daniel Shawul » Mon May 14, 2018 5:20 pm

mjlef wrote:
Mon May 14, 2018 3:13 pm

Larry and I have read about MCTS for years, following the progress of the Go programs (Don Dailey had a Go program using Monte Carlo). But I am not claiming anything like "years of research". Although the basic scheme we are using has been discussed between us for several years, we have only actively worked on it for the last month or two. We tried several variants and tuned the initial method, which was only giving us Elo ratings in the mid 2000s. But we found ways to improve it, with some changes giving us 100+ Elo gains. The Exploit/Explore ratio is particularly important.
This is all very basic MCTS stuff. I have a dynamic exploration coefficient that decreases as time runs out. Using a lower exploration coefficient makes your search very selective and often helps tactical strength.
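A time-decayed exploration coefficient of the kind described here can be sketched like this (a hypothetical linear schedule; the parameter names and values are illustrative, not any engine's actual code):

```python
def exploration_coefficient(c_start, c_end, time_used, time_budget):
    """Interpolate the UCT exploration constant from c_start down to
    c_end over the allotted time. A high coefficient early spreads
    visits widely; a low one late makes the search selective,
    concentrating visits on the best line (which often helps tactics)."""
    frac = min(max(time_used / time_budget, 0.0), 1.0)
    return c_start + (c_end - c_start) * frac
```

The schedule shape (linear, exponential, step) is a tuning choice; the point is only that selectivity increases as the clock runs down.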
As for search, you can still use aspiration on a search even if you do not have especially useful bounds.
No, you can't. Plain MCTS converges to a minimax tree, not an alpha-beta tree, because it has no concept of bounds at all. Your MCTS search is very basic, with the only modification you mentioned being that the formula is slightly different. That begs the question of how you mixed it with alpha-beta. You need some sort of best-first alpha-beta searcher (I use alpha-beta rollouts in my MCTS) to give you bounds, period.
We also found tuning search parameters to be a big help. As for elo, although we have followed your recent postings, what we are doing is similar, but with a lot of differences from what you have posted, so it is not surprising we get different results. We found sticking with our initial idea to be pretty good. But we have more to learn.

All MCTS schemes seem to expand the tree a lot more slowly than the nps a typical chess engine gets. The neural-network engines use an evaluation capable of predicting things like piece swap-offs. But there are other ways of getting that.
Sure, use a qsearch() like I do. That is better than hoping for a NN to solve tactics. But to solve shallow tactics like 4-ply or 8-ply, not having proper bounds for those shallow searches becomes a problem. As I mentioned, the Stockfish MCTS used an 8-ply alpha-beta search at the leaves but didn't do that well, because one has no idea of the bounds at the leaves other than (-Mate, Mate).
As for "commercial stunt", that is simply untrue. Before passing judgement, how about taking a look at how the program behaves? Releasing an MCTS mode that is hundreds of elo weaker than what a program gets with standard search is not exactly a headline grabber. But we find its moves/search interesting and useful.
Time will tell; I won't be surprised if it actually lost 500-600 Elo with the kind of vanilla MCTS search you described.
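The qsearch-at-the-leaves idea being debated above can be sketched in negamax form (the position interface and toy class are illustrative, not any engine's code):

```python
def qsearch(pos, alpha, beta):
    """Quiescence search: resolve captures before trusting the static
    eval, so the leaf value handed back to the MCTS tree is tactically
    quiet. pos needs only static_eval() and capture_moves() (yielding
    child positions)."""
    stand_pat = pos.static_eval()
    if stand_pat >= beta:
        return beta            # fail-hard cutoff
    alpha = max(alpha, stand_pat)
    for child in pos.capture_moves():
        score = -qsearch(child, -beta, -alpha)  # negamax: flip sign and window
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

class ToyPos:
    """Minimal stand-in for a real position, for illustration only."""
    def __init__(self, static, captures=()):
        self._static, self._captures = static, captures
    def static_eval(self):
        return self._static
    def capture_moves(self):
        return list(self._captures)

# With a free pawn hanging, the quiet eval says 0 but qsearch finds +100
# (the child's eval is from the opponent's side, hence -100):
root = ToyPos(0, captures=[ToyPos(-100)])
value = qsearch(root, -10000, 10000)  # 100
```

This resolves hanging captures, but as noted above it says nothing about deeper shallow tactics, where the lack of meaningful bounds at the leaves becomes the real problem.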

shrapnel
Posts: 1198
Joined: Fri Nov 02, 2012 8:43 am
Location: New Delhi, India

Re: Komodo 12 and MCTS

Post by shrapnel » Mon May 14, 2018 5:51 pm

CMCanavessi wrote:
Mon May 14, 2018 1:13 pm
No, you're completely wrong. The only reason why Leela and A0 benefit from GPU (or TPU) is because they use a neural network, which Komodo doesn't. MCTS has nothing to do with it.
MCTS or neural network, I couldn't care less; if Komodo CAN'T use the GPU (or TPUs), it's simply not up to snuff, and that's all there is to it.
i7 5960X @ 4.1 Ghz, 64 GB G.Skill RipJaws RAM, Twin Asus ROG Strix OC 11 GB Geforce 2080 Tis

mjlef
Posts: 1427
Joined: Thu Mar 30, 2006 12:08 pm
Contact:

Re: Komodo 12 and MCTS

Post by mjlef » Mon May 14, 2018 5:56 pm

Gian-Carlo Pascutto wrote:
Mon May 14, 2018 5:02 pm
mjlef wrote:
Mon May 14, 2018 1:46 pm
We do it the normal MCTS way (non-min-max) except for one small case in the tree.
You don't average mates or draws-by-rule, I presume :D
Yes, MCTS does average draws and mates. Draws have a 0.5 chance of winning, giving mate a 1.0 chance, and being mated a 0.0 chance.
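The averaging backup mjlef describes (draw = 0.5, mate = 1.0 or 0.0, all treated alike) looks roughly like this; a sketch with an illustrative node layout, not Komodo's actual code:

```python
def backup(path, leaf_value):
    """Propagate one leaf evaluation up the tree by averaging.

    path: nodes from root to leaf; each node holds the statistics for
    the side to move there, so the value flips perspective every ply.
    leaf_value: 1.0 win, 0.5 draw, 0.0 loss -- mates and draws-by-rule
    enter the averages exactly like any other result.
    A node's MCTS value is then value_sum / visits.
    """
    value = leaf_value
    for node in reversed(path):
        node["visits"] += 1
        node["value_sum"] += value
        value = 1.0 - value  # flip to the parent's point of view
```

This is exactly why GCP raises an eyebrow at averaging mates: a proven mate is a minimax fact, but here it is just one more sample diluted into the mean.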
they use a neural network, which Komodo doesn't. MCTS has nothing to do with it.
Well, yes and no. There are reasons why people tried MCTS with playouts, and later with neural networks, over alpha-beta. You can't entirely decouple those concepts. If it were so easy, they would not lose 330 Elo.

But that doesn't mean mixing them up won't work. For sure a neural-network evaluation in an alpha-beta searcher works fine. As for the other way around, that's up to the Komodo guys to prove, right?
A neural network is used in AlphaZero and Leela to predict winning chances; this is used instead of playouts. To be effective, a neural network has to simulate how the rest of the game would turn out. Evaluations and searches in regular chess programs are trying to do the same thing. We think that could be quite powerful if the right mix of tuned search and eval is used. Time will tell, but the results are encouraging.

lkaufman
Posts: 3724
Joined: Sun Jan 10, 2010 5:15 am
Location: Maryland USA
Contact:

Re: Komodo 12 and MCTS

Post by lkaufman » Mon May 14, 2018 6:03 pm

shrapnel wrote:
Mon May 14, 2018 5:51 pm
CMCanavessi wrote:
Mon May 14, 2018 1:13 pm
No, you're completely wrong. The only reason why Leela and A0 benefit from GPU (or TPU) is because they use a neural network, which Komodo doesn't. MCTS has nothing to do with it.
MCTS or neural network, I couldn't care less; if Komodo CAN'T use the GPU (or TPUs), it's simply not up to snuff, and that's all there is to it.
If it plays better on your hardware without using GPU than any other MCTS engine does with GPU, what is the problem? Some algorithms like CPUs better, some like GPUs better.
Komodo rules!
