Evidence That NNs Work Best With Multiple Modules

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

dkappe
Posts: 1632
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: Evidence That NNs Work Best With Multiple Modules

Post by dkappe »

Sopel wrote: Mon Dec 13, 2021 1:12 am Went through the slides (https://neurips.cc/media/neurips-2021/Slides/26740.pdf). I agree with Milos, this is useless for chess.
These concepts are so high level and general that I don’t think you can make such a blanket statement.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
towforce
Posts: 12704
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: Evidence That NNs Work Best With Multiple Modules

Post by towforce »

We know what chess computers are doing: generating a list (or tree) of legal moves, and evaluating each one.

A strong human player will look at a position, and as long as the position is reasonably "normal", they will be able to tell you in seconds what the two or three most important things to watch for are.

How does a human do that?

It has something to do with the comparatively small number of positions they've studied in great detail (small compared with LC0 nets, which train on billions of positions). They're learning good and bad ways to make progress, not how to do static evaluations.

So maybe what we should be doing is working out how to generate this data in a form that lets NNs learn this skill from a smaller number of positions, rather than just handing them a huge database of position/evaluation pairs and saying, "Here NN - do something with that!"
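The generate-a-tree-and-evaluate loop described at the top of this post can be sketched as plain negamax. Everything below is illustrative, not any real engine's API: the "game" is a toy where positions are integers, a move adds +1 or -1, and static evaluation is just the position value.

```python
# Minimal negamax sketch of "generate a tree of legal moves and
# evaluate each one". Toy game: states are integers, a move adds
# +1 or -1. All names here are hypothetical.

def legal_moves(state):
    return [+1, -1]

def apply_move(state, move):
    return state + move

def evaluate(state):
    # Static evaluation, from the side to move's point of view.
    return state

def negamax(state, depth):
    """Best achievable score for the side to move, searching `depth` plies."""
    if depth == 0:
        return evaluate(state)
    best = float("-inf")
    for move in legal_moves(state):
        # Negate: a good score for the opponent is a bad score for us.
        score = -negamax(apply_move(state, move), depth - 1)
        best = max(best, score)
    return best
```

A real engine replaces the three toy functions with actual move generation and a (possibly neural) evaluation, and adds alpha-beta pruning on top of the same loop.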
Human chess is partly about tactics and strategy, but mostly about memory
Sopel
Posts: 391
Joined: Tue Oct 08, 2019 11:39 pm
Full name: Tomasz Sobczyk

Re: Evidence That NNs Work Best With Multiple Modules

Post by Sopel »

dkappe wrote: Mon Dec 13, 2021 1:19 am
Sopel wrote: Mon Dec 13, 2021 1:12 am Went through the slides (https://neurips.cc/media/neurips-2021/Slides/26740.pdf). I agree with Milos, this is useless for chess.
These concepts are so high level and general that I don’t think you can make such a blanket statement.
At some point, "high level" and "general" become synonymous with "useless".
dangi12012 wrote:No one wants to touch anything you have posted. That proves you now have negative reputations since everyone knows already you are a forum troll.

Maybe you copied your stockfish commits from someone else too?
I will look into that.
Sopel
Posts: 391
Joined: Tue Oct 08, 2019 11:39 pm
Full name: Tomasz Sobczyk

Re: Evidence That NNs Work Best With Multiple Modules

Post by Sopel »

towforce wrote: Mon Dec 13, 2021 1:30 am We know what chess computers are doing: generating a list (or tree) of legal moves, and evaluating each one.

A strong human player will look at a position, and as long as the position is reasonably "normal", they will be able to tell you in seconds what the two or three most important things to watch for are.

How does a human do that?

It has something to do with the comparatively small number of positions they've studied in great detail (small compared with LC0 nets, which train on billions of positions). They're learning good and bad ways to make progress, not how to do static evaluations.

So maybe what we should be doing is working out how to generate this data in a form that lets NNs learn this skill from a smaller number of positions, rather than just handing them a huge database of position/evaluation pairs and saying, "Here NN - do something with that!"
You're free to invent neural networks that can do that.

btw. you might find this interesting https://www.researchgate.net/publicatio ... l_Networks
dkappe
Posts: 1632
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: Evidence That NNs Work Best With Multiple Modules

Post by dkappe »

Sopel wrote: Mon Dec 13, 2021 1:58 am
dkappe wrote: Mon Dec 13, 2021 1:19 am
Sopel wrote: Mon Dec 13, 2021 1:12 am Went through the slides (https://neurips.cc/media/neurips-2021/Slides/26740.pdf). I agree with Milos, this is useless for chess.
These concepts are so high level and general that I don’t think you can make such a blanket statement.
At some point, "high level" and "general" become synonymous with "useless".
Category theory is high level and general. When applied properly, you get things like algebraic topology. You're complaining that a hammer is not a house; I'm merely pointing out that it can be used to build one.
towforce
Posts: 12704
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: Evidence That NNs Work Best With Multiple Modules

Post by towforce »

Sopel wrote: Mon Dec 13, 2021 2:00 am
towforce wrote: Mon Dec 13, 2021 1:30 am We know what chess computers are doing: generating a list (or tree) of legal moves, and evaluating each one.

A strong human player will look at a position, and as long as the position is reasonably "normal", they will be able to tell you in seconds what the two or three most important things to watch for are.

How does a human do that?

It has something to do with the comparatively small number of positions they've studied in great detail (small compared with LC0 nets, which train on billions of positions). They're learning good and bad ways to make progress, not how to do static evaluations.

So maybe what we should be doing is working out how to generate this data in a form that lets NNs learn this skill from a smaller number of positions, rather than just handing them a huge database of position/evaluation pairs and saying, "Here NN - do something with that!"
You're free to invent neural networks that can do that.

I was thinking more in terms of how the data is presented to the NN: my concept is to explicitly teach the NN what's important in a position, rather than showing it billions of position/evaluation pairs and hoping it works this out for itself.
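One way to realize "explicitly teach the NN what's important" is to attach human-style annotations (tactical themes, say) to each training record alongside the usual scalar evaluation, so a network could be trained with an auxiliary loss on those labels. A minimal sketch; the theme list and record layout here are entirely made up for illustration:

```python
# Hypothetical training record that pairs the usual position/evaluation
# data with explicit "what's important here" theme labels, encoded as a
# multi-hot vector for an auxiliary classification target.

THEMES = ["pin", "fork", "weak_back_rank", "passed_pawn"]

def make_record(fen, evaluation, themes):
    """Bundle a position with its evaluation and explicit theme labels."""
    unknown = set(themes) - set(THEMES)
    if unknown:
        raise ValueError(f"unrecognized themes: {unknown}")
    return {
        "fen": fen,                                    # position
        "eval": evaluation,                            # usual regression target
        "themes": [int(t in themes) for t in THEMES],  # auxiliary multi-hot target
    }

record = make_record(
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
    0.3,
    ["fork"],
)
```

A network trained on such records would optimize both the evaluation loss and a theme-classification loss, which is one concrete reading of "teaching it what to watch for" rather than hoping it emerges from raw position/evaluation pairs.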

btw. you might find this interesting https://www.researchgate.net/publicatio ... l_Networks

Looks interesting: I will read it when I get time.