Re: stockfish 10 vs. Mephisto III S Glasgow
Posted: Thu Dec 05, 2019 10:17 am
As I read in a previous message, 50 Knodes is around +1000 Elo over 1 Knode. And 50 Knodes takes about 0.01 seconds of CPU time.
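That 0.01-second figure is just the node budget divided by the engine's speed. A quick check, assuming a present-day engine speed of roughly 5 million nodes per second (my assumption for illustration, not a figure from this thread):

```python
def search_time_seconds(nodes: int, nps: float) -> float:
    """Wall-clock seconds needed to search a fixed node budget."""
    return nodes / nps

NPS = 5_000_000  # assumed: ~5 Mnodes/s on one modern core

print(search_time_seconds(50_000, NPS))  # 50 Knodes -> 0.01 s
print(search_time_seconds(1_000, NPS))   # 1 Knode -> 0.0002 s
```

At that speed, the whole 1000-node "handicap" search PK mentions below costs a fifth of a millisecond.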
PK wrote: ↑Thu Dec 05, 2019 11:04 am
    Also, misdefining intelligence as adherence to some restriction pulled out of a hat is, well, not very intelligent. I could probably write an engine playing something resembling 1800 Elo chess on 1000-node searches, but its code would be ugly beyond measure. Thousands of lines of code detecting tactical patterns, static evaluation of upcoming forks etc. - this is not intelligence. This is plain ugly, and searching more nodes in exchange for getting rid of such rubbish is the best thing that happened to chess programming.

    On the other hand, working with Brendan Norman on a chess engine impersonating Mikhail Tal (http://www.pkoziol.cal24.pl/opental/) I didn't care how many nodes it burned, as long as it played brilliant, if not quite correct, sacrifices.

If you have an engine that understands these forks, pins and mate threats without search and has everything in static evaluation, you don't need to burn resources.
mclane wrote: ↑Thu Dec 05, 2019 12:21 pm
    If you have an engine that understands these forks, pins and mate threats without search and has everything in static evaluation, you don't need to burn resources.

NPS isn't the same as burning resources.
And it's not ugly.
Ugly is to burn resources for being lazy in programming.
This way computer chess will not make any progress.
How would you know in the first place? Because Mephisto III displays a low NPS? Would you love Stockfish if it displayed an NPS of 1 node per second?
mclane wrote:
    Ugly is to burn resources for being lazy in programming.

No. Ugly is to write an expert system with layer upon layer of code to handle corner cases, only to be asked "but why is placing a rook on a closed file, when there is an enemy queen on the same file plus a chance for a pawn break, considered less important than moving the king to h1 to prevent a non-threatening bishop check?". This would help in short searches, but longer searches would become more rigid, more reliant on predefined stuff and less reactive to board dynamics. An engine with this kind of knowledge guiding search would be far less likely to find a creative move. I always enjoyed coding the evaluation function, but even there I gradually moved away from fixed patterns towards approaches like "take a weighted average between competing piece/square evaluations, skewing the result towards the more optimistic one". This actually begins to look like planning, except that the plan is not shaped by general considerations, but by statistics. And for this approach to work, you need a big tree, a large pool of positions to be compared; only then does a pattern have a chance to emerge.
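The "weighted average skewed towards the optimistic one" idea PK describes can be sketched in a few lines. The weight below is a made-up illustration value, not PK's actual tuning:

```python
def blend_scores(scores, optimism=0.75):
    """Blend competing evaluations of the same feature, skewed
    towards the most optimistic (highest) one.

    optimism=1.0 -> pure max; optimism=0.0 -> plain average.
    """
    average = sum(scores) / len(scores)
    return optimism * max(scores) + (1.0 - optimism) * average

# Three competing piece/square readings of the same knight placement:
print(blend_scores([20, -5, 35]))
```

With optimism near 1 the evaluation "believes" the best-case reading of a position, which is exactly why, as PK says, a big tree is needed to keep such optimism honest.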
mclane wrote: ↑Thu Dec 05, 2019 6:33 pm
Today's chess programs have no idea what chess is or how they should win.
They maximise the score. They find key moves as if chess were a test position to solve.
And if they see a mate score they go for it.
But they do not plan to mate.
Why not?
Mate is the target of the game.
Stockfish solves chess by going very fast very deep into the tree.
It searches millions of nodes per second on today's machines.
Chess programming has made much progress.
The algorithms are better today.
But the chess programs are in no way different from what they were in the eighties.
Even worse. As some people here frankly said, they found out that all the static knowledge is not needed anymore.
Search can replace this static knowledge.
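The mechanism behind "search can replace knowledge" is minimax backup: a trivial static evaluation at the leaves, backed up through a deep enough tree, recovers tactical distinctions that would otherwise need hand-coded patterns. A toy negamax over an explicit game tree (my own illustration; real engines search generated chess positions, not nested lists):

```python
def negamax(node, color=1):
    """Negamax over an explicit game tree.

    Internal nodes are lists of child positions; leaves are static
    scores from the root player's point of view.  color is +1 when
    the root player is to move, -1 otherwise.
    """
    if not isinstance(node, list):
        # Leaf: the only "knowledge" is this static number.
        return color * node
    # Back up the best reply of the side to move, negated for the parent.
    return max(-negamax(child, -color) for child in node)

# Root to move: branch one lets the opponent answer with 3,
# branch two lets the opponent answer with 2 -> root picks branch one.
assert negamax([[3, 5], [2, 9]]) == 3
```

Nothing in the evaluator knows about forks or pins; the backup alone sorts the branches correctly, which is the whole argument for spending the hardware on the tree.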
This is a point of view we have had in computer chess many, many times before.
We had it when Mephisto B+F beat many selective engines with its brute-force (Type A) strategy.
We had it when Fritz beat many knowledge-based programs with null move + preprocessing.
Of course the intelligent (knowledge-based) programs always struck back, e.g. when Mephisto III won the world computer chess championship title against opponents doing 500 times more NPS, when Ed Schröder's Rebel beat the MM2 (a Type A strategy), when MChess dominated the PC chess scene, or when Hiarcs dominated.
There was always this fight between the dumb/stupid programs that did it via the tree and the knowledge-based programs that did it via static or also dynamic knowledge, but mainly via chess vocabulary.
I have had this discussion over and over again, not only in the newsgroups, but also at tournaments, directly with the opponents.
I fought with Mark Uniacke at the 1993 championship in Munich against Ossi Weiner and his Genius wonder machines.
With Chess System Tal I fought at the championship in Paderborn against Ossi Weiner and the Genius PC program.
And with Chess System Tal in Paris in 1997.
And it is always knowledge-based versus tree-search-based.
At the moment, LC0 is the program that fights against the group of tree-search-based programs.
This is the context I discuss here.
This discussion is old. And the tree-search-based programs have made progress due to the enormous increase in hardware power since 1982.
But you see the side effects: many programmers now openly say, why should I put knowledge into my program when throwing knowledge out gives more Elo?
This is and ever was the problem. Only that in the old days the hardware was a 6502 or worse, running at only 3.7 MHz.
So the hardware went from one 8-bit core with 32 KB ROM and 8 KB RAM to 4, 8 or even more cores computing in parallel, with 1130 KB (the size of Stockfish 10), 64 bit (which makes bitboards possible), hash tables, opening books of almost unlimited size, and gigabytes of RAM. And instead of 3.7 MHz, the computers run at 3000 MHz or more, and this on many cores in parallel.
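On the "64 bit makes bitboards possible" point: a bitboard packs the whole 8x8 board into one 64-bit word, so move-pattern generation becomes a handful of shifts and masks. A sketch of the well-known knight-attack trick (squares numbered 0..63 with a1 = bit 0; Python's unbounded ints are masked to emulate a 64-bit register):

```python
# File masks: a left/right shift wraps pieces around the board edge,
# so the result is ANDed with the files it could legally land on.
NOT_FILE_A  = 0xfefefefefefefefe
NOT_FILE_AB = 0xfcfcfcfcfcfcfcfc
NOT_FILE_H  = 0x7f7f7f7f7f7f7f7f
NOT_FILE_GH = 0x3f3f3f3f3f3f3f3f
MASK64 = (1 << 64) - 1  # emulate a 64-bit register in Python

def knight_attacks(bb: int) -> int:
    """Attack set of every knight on the set bits of bb, via shifts only."""
    h1 = ((bb >> 1) & NOT_FILE_H) | ((bb << 1) & NOT_FILE_A)    # +/- 1 file
    h2 = ((bb >> 2) & NOT_FILE_GH) | ((bb << 2) & NOT_FILE_AB)  # +/- 2 files
    # +/- 2 ranks from h1 and +/- 1 rank from h2 give the eight knight moves.
    return ((h1 << 16) | (h1 >> 16) | (h2 << 8) | (h2 >> 8)) & MASK64

# A knight on a1 (bit 0) attacks exactly b3 (bit 17) and c2 (bit 10).
assert knight_attacks(1) == (1 << 17) | (1 << 10)
```

On a 64-bit CPU each of those operations is a single register instruction, which is a big part of why NPS exploded once engines moved off 8-bit hardware.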
Do you see the factor of progress the hardware has made?
And all this hardware progress is not used for new AI ideas; instead it is invested in tree search.
Of course today's AB programs are smart.
Nobody denies the hours and hours of work that went into those engines.
But IMO it is wasted energy, because it leads to an Elo increase, not to a quality increase.
While LC0 is in fact bringing a quality increase.