smatovic wrote: ↑Fri Jun 28, 2024 12:17 pmknowledge:search tradeoff corresponds to space:time tradeoff.
It's a good trade-off at the extreme: knowing everything is equivalent to being able to search everything. Short of that, programmed knowledge (the evaluation heuristic) tends to provide different knowledge from generated knowledge (game-tree generation).
We know why generated knowledge misses important things: the horizon effect (link). The best remedies for the horizon effect would be a bigger game tree or a selective search. If selective search is used, we will mainly be using heuristics again!
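To make that last point concrete: quiescence search, the usual selective remedy for the horizon effect at the leaves, leans directly on the static evaluation through its "stand pat" score. A minimal sketch (Position, Move and the stubbed helpers are placeholders for engine-specific code, not any particular engine's):

Code: Select all

// Minimal quiescence-search sketch (negamax, fail-hard alpha-beta).
// The "stand pat" score is the heuristic knowledge the selective
// search keeps falling back on.
#include <algorithm>
#include <vector>

struct Position { /* engine-specific state */ };
struct Move     { /* engine-specific move encoding */ };

// Trivial stubs standing in for a real engine's functions.
int evaluate(const Position&) { return 0; }                        // static heuristic
std::vector<Move> generateCaptures(const Position&) { return {}; } // tactical moves only
void makeMove(Position&, const Move&) {}
void unmakeMove(Position&, const Move&) {}

// Search only captures until the position is quiet, so the static
// evaluation is never applied in the middle of an exchange.
int quiescence(Position& pos, int alpha, int beta) {
    int standPat = evaluate(pos);       // heuristic verdict on the current position
    if (standPat >= beta) return beta;  // cutoff
    alpha = std::max(alpha, standPat);

    for (const Move& m : generateCaptures(pos)) {
        makeMove(pos, m);
        int score = -quiescence(pos, -beta, -alpha);
        unmakeMove(pos, m);
        if (score >= beta) return beta;
        alpha = std::max(alpha, score);
    }
    return alpha;
}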
You're saying that improving an evaluation heuristic requires more knowledge. That's not necessarily so: less, but better, knowledge will beat more knowledge. Right now, it's almost certainly possible to make an evaluation function (EF) that runs faster, takes less memory, and produces more accurate results than today's NN-based EFs.
There are no scientific papers that say otherwise.
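For a sense of how little code and memory a bare-bones hand-coded EF needs, here is a toy sketch - material plus one piece-square table, with all values illustrative and untuned; it is not a claim that this particular function competes with NN evaluations:

Code: Select all

// Toy hand-coded evaluation (HCE): material + a knight piece-square bonus.
// All values are illustrative and untuned; a real HCE adds pawn structure,
// king safety, mobility, and much more.
#include <array>
#include <cstdint>

enum Piece : int8_t { EMPTY = 0, WP, WN, WB, WR, WQ, WK, BP, BN, BB, BR, BQ, BK };

constexpr std::array<int, 7> pieceValue = { 0, 100, 320, 330, 500, 900, 0 }; // -,P,N,B,R,Q,K

// Knights prefer the centre (from white's point of view, a1 = index 0).
constexpr std::array<int, 64> knightPst = {
    -50,-40,-30,-30,-30,-30,-40,-50,
    -40,-20,  0,  5,  5,  0,-20,-40,
    -30,  5, 10, 15, 15, 10,  5,-30,
    -30,  0, 15, 20, 20, 15,  0,-30,
    -30,  5, 15, 20, 20, 15,  5,-30,
    -30,  0, 10, 15, 15, 10,  0,-30,
    -40,-20,  0,  0,  0,  0,-20,-40,
    -50,-40,-30,-30,-30,-30,-40,-50,
};

// board[0..63], a1 = 0 ... h8 = 63; returns centipawns from white's view.
int evaluate(const std::array<Piece, 64>& board) {
    int score = 0;
    for (int sq = 0; sq < 64; ++sq) {
        Piece p = board[sq];
        if (p == EMPTY) continue;
        bool white = (p <= WK);
        int type   = white ? p : p - 6;                           // 1 = pawn .. 6 = king
        int value  = pieceValue[type];
        if (type == WN) value += knightPst[white ? sq : 63 - sq]; // mirror for black
        score += white ? value : -value;
    }
    return score;
}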
Want to attract exceptional people? Be exceptional.
towforce wrote: ↑Fri Jun 28, 2024 1:03 pm
[...]
You're saying that improving an evaluation heuristic requires more knowledge. That's not necessarily so: less, but better, knowledge will beat more knowledge. Right now, it's almost certainly possible to make an evaluation function (EF) that runs faster, takes less memory, and produces more accurate results than today's NN-based EFs.
[...]
Yes, HGM mentions this now and then too, but now with NNs we have a pretty comfortable way to gain that knowledge.
***edit***
knowledge:search, space:time - you invest time up front (pre-computation) to gain knowledge: Texel tuning, neural networks, EGTBs.
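A minimal sketch of the Texel-tuning idea just mentioned: fit evaluation parameters offline so that a sigmoid of the eval predicts game results. The linear toy eval, the feature encoding and the scaling constant K are assumptions for illustration, not any engine's actual code:

Code: Select all

// Texel-tuning sketch: choose eval parameters that minimise the squared
// error between sigmoid(eval) and the game result over a labelled set.
#include <cmath>
#include <cstddef>
#include <vector>

struct LabeledPosition {
    std::vector<int> features;  // e.g. material counts from white's view
    double result;              // 1.0 = white win, 0.5 = draw, 0.0 = white loss
};

// Linear toy evaluation: dot product of features and tunable parameters.
int evalCp(const LabeledPosition& p, const std::vector<int>& params) {
    int s = 0;
    for (std::size_t i = 0; i < params.size(); ++i) s += p.features[i] * params[i];
    return s;
}

// Map centipawns to an expected score in [0, 1]; K is fitted empirically.
double expectedScore(int cp, double K = 1.13) {
    return 1.0 / (1.0 + std::pow(10.0, -K * cp / 400.0));
}

double meanSquaredError(const std::vector<LabeledPosition>& data,
                        const std::vector<int>& params) {
    double e = 0.0;
    for (const auto& p : data) {
        double d = p.result - expectedScore(evalCp(p, params));
        e += d * d;
    }
    return e / static_cast<double>(data.size());
}

// Classic local search: nudge one parameter at a time while the error drops.
std::vector<int> texelTune(const std::vector<LabeledPosition>& data,
                           std::vector<int> params) {
    double best = meanSquaredError(data, params);
    for (bool improved = true; improved; ) {
        improved = false;
        for (std::size_t i = 0; i < params.size(); ++i) {
            for (int step : { +1, -1 }) {
                params[i] += step;
                double e = meanSquaredError(data, params);
                if (e < best) { best = e; improved = true; }
                else          { params[i] -= step; }
            }
        }
    }
    return params;
}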
smatovic wrote: ↑
[...]
Yes, HGM mentions this now and then too, but now with NNs we have a pretty comfortable way to gain that knowledge.
The value of NNs isn't that they're comfortable - it's that they pick up things that the authors of HCEs (hand-coded evaluations) miss, and hence produce more accurate evaluations.
However, even though they are trained on billions of positions, and may well see millions that contain an important feature of the game, they can still fail to encode that feature! The most famous example of this was the top Go engine that could be beaten by a middling Go player once he knew it didn't understand one of the most basic concepts of the game - a group of stones.
This tells me two things:
1. People who write HCEs are unaware of many surface (simple) features of the game of chess which are actually valuable indicators
2. The person who works out how to capture deep (complex) features is going to own a really valuable piece of code. My plan is for that to be me!
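For what "picking up features nobody hand-coded" looks like mechanically: each hidden unit of an NN evaluation is a learned weighted combination of simple inputs (piece-on-square indicators here), so the useful features come out of training rather than from an author's list. A toy forward pass, with sizes and encoding chosen purely for illustration - real NNUE nets are far larger and are updated incrementally:

Code: Select all

// Toy NN-style evaluation forward pass. Each hidden unit is a learned
// combination of simple piece-on-square inputs; the "features" live in
// the trained weights, not in hand-written rules. Sizes are illustrative.
#include <algorithm>
#include <array>
#include <cstdint>

constexpr int INPUTS = 768;  // 12 piece types x 64 squares, one-hot
constexpr int HIDDEN = 32;   // toy hidden layer

struct Net {
    std::array<std::array<int16_t, INPUTS>, HIDDEN> w1{};  // input -> hidden weights
    std::array<int32_t, HIDDEN> b1{};                      // hidden biases
    std::array<int16_t, HIDDEN> w2{};                      // hidden -> output weights
    int32_t b2 = 0;                                        // output bias
};

// inputs[i] = 1 if piece-square combination i is present on the board.
int evaluate(const Net& net, const std::array<uint8_t, INPUTS>& inputs) {
    std::array<int32_t, HIDDEN> hidden{};
    for (int h = 0; h < HIDDEN; ++h) {
        int32_t acc = net.b1[h];
        for (int i = 0; i < INPUTS; ++i)
            if (inputs[i]) acc += net.w1[h][i];   // sparse input: add only active weights
        hidden[h] = std::max<int32_t>(acc, 0);    // ReLU: the learned feature fires or not
    }
    int32_t out = net.b2;
    for (int h = 0; h < HIDDEN; ++h) out += hidden[h] * net.w2[h];
    return out / 64;  // arbitrary rescaling to centipawn-ish units
}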
Want to attract exceptional people? Be exceptional.