AlphaZero news


yanquis1972

Re: AlphaZero news

Post by yanquis1972 »

jp wrote: Sat Dec 29, 2018 9:57 am
yanquis1972 wrote: Wed Dec 19, 2018 6:22 pm after 93 games at 5+5, test30 is stable at -70 elo vs SF10 & -21 elo vs SF10+book (perfect2017/2018). will attach pgn's later, going to try to get a decent sample of SF10 vs SF10+book results.
Any more news, yanquis?
I've been running LTC (classical OTB time control) games vs SF10, but only partly to get an idea of strength. I do believe test30 scales well and is close to SF10 head to head (~20 draws in as many games so far, which has been a big surprise), but I'm curious, given test10's issues, whether test30 can win games against top-tier but significantly weaker competition, so I might switch to SF8 for a bit.
emadsen
Full name: Erik Madsen

Re: AlphaZero news

Post by emadsen »

I can't read articles written by the general press about highly technical topics such as artificial intelligence (and its fruit fly analog, computer chess).
As a result, it [Stockfish] is not a particularly elegant program, and it can be hard for coders to understand. Many of the changes programmers make to Stockfish are best formulated in terms of chess, not computer science, and concern how to evaluate a given situation on the board: Should a knight be worth 2.1 points or 2.2? What if it’s on the third rank, and the opponent has an opposite-colored bishop?
Totally wrong. I don't participate in Stockfish development, but I do know the engine developers are not focused primarily on evaluation. Most of the engine's strength is derived from its highly optimized C++ code and its advanced alpha-beta PVS search algorithm. You know, computer science.
My C# chess engine: https://www.madchess.net