buying a new computer

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson

Zenmastur
Posts: 486
Joined: Sat May 31, 2014 6:28 am

Re: buying a new computer

Post by Zenmastur » Thu Jul 18, 2019 4:31 am

dragontamer5788 wrote:
Wed Jul 17, 2019 10:51 pm
Bah, I have too many project ideas, not enough time to work on them all. Someone else can take this idea if they want, lol.


Thanks for giving us your permission to use “YOUR” ideas. :roll: :roll: :roll:

That's very noble of you! :D :D :D
... Write a simple Alpha-Beta engine …., and run XXXXX games in parallel …. The XXXXX games will try to create …. an opening book.
What you're describing is a testing framework, much like the SF testing framework! Gee, I wonder where I've heard this before?
Dann Corbit wrote:
Wed Jul 17, 2019 11:27 pm
dragontamer5788 wrote:
Wed Jul 17, 2019 10:51 pm
I was thinking about this. Why aren't modern opening books just permanently stored MCTS trees?

MCTS stores the following information:
1. Number of visits
2. WDL information (#Wins / #Losses / #Draws)
Or, perhaps, why aren't modern opening books database files that contain:
1. Number of visits
2. WDL information (#Wins / #Losses / #Draws)
3. Depth of search
4. ce score
5-999: whatever else you want to collect.

An opening book should be a collection of pv nodes with as much data attached to the nodes as we can possibly find useful.
Well, that's close, but a MAB-based learning book needs more than just the PV nodes to function properly. It will use non-PV nodes to store info about what has and hasn't actually been seen in OTB play, and to store search information (in lieu of actual game play for non-played moves of interest). E.g., if a leaf node in the book is reached a second time, the MAB algorithm would have you randomly select one of the as-yet unplayed moves to determine its reward structure. It will continue to do this until all possible moves from this position have been “sampled”. This is a highly sub-optimal strategy for a chess opening book, since many of these moves will drop pieces or worse.

Instead, a single move should be selected and searched. If it returns a “reasonable” score (i.e. not too much less than the score of the best “sampled” or searched moves), it can be played OTB. If the search returns a winning score (i.e. a mate score or a tablebase score that represents a win) AND this score is better than any previously sampled move, then it should be played. If, however, the search returns a considerably worse score than the “sampled” moves, a few options are available. If it's a mate score and we are losing, the move should be marked as such and NEVER used by the book. If not too much time has been used searching this move and enough time is left on the clock, a second as-yet unsampled move can be searched. If not enough time remains, a previously “sampled” move can be played. This saves time in “growing” the book and keeps the program from making terrible moves in the opening just to “claim” that it has sampled each move at least once.

The search scores for such moves should be placed in non-leaf records (i.e. records that are NOT part of the “playable” book) along with pertinent information such as search depth, nodes searched, best move, etc. That doesn't make them part of the “playable” book, AND most such records will never become a playable part of the book, but they do hold valuable information that the book algorithm can use to effectively balance the “need” to play “good” moves against the “want” to explore new opening moves in the expectation of finding better lines of play.

So, you're on the right track in that more information needs to be stored, but not “JUST” about PV nodes. Non-PV nodes need to be saved, and information about them stored, for the book to be able to “learn” which moves are best.
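The selection policy described above (search one untried move, promote it to the playable book only if its score is close to the best known move, and never play known losers) can be sketched roughly as follows. This is a hypothetical illustration, not code from any real book implementation; the `search` callback, the margin, and the record fields are all invented:

```python
# Hypothetical sketch of the book-expansion policy described above:
# instead of blindly sampling every untried move (pure multi-armed
# bandit), search one untried move and only promote it to the playable
# book if its score is close enough to the best known move.
# All names and thresholds here are invented for illustration.

MARGIN = 50        # centipawns worse than the best move we will tolerate
MATE_SCORE = 30000

def choose_book_move(node, search, time_left, min_time):
    """node.played:   {move: score} -- moves in the playable book
       node.untried:  [move, ...]   -- moves never sampled
       node.searched: {move: score} -- non-playable info records
       search(move) -> centipawn score from an AB search."""
    best_known = max(node.played.values()) if node.played else -MATE_SCORE
    while node.untried:
        move = node.untried.pop()
        score = search(move)              # AB search in lieu of a playout
        node.searched[move] = score       # keep the info either way
        if score <= -MATE_SCORE + 1000:
            continue                      # losing line: never play it
        if score >= MATE_SCORE - 1000 and score > best_known:
            return move                   # forced win: play immediately
        if score >= best_known - MARGIN:
            node.played[move] = score     # "reasonable": add to book
            return move
        if time_left() < min_time:
            break                         # out of clock: stop exploring
    # fall back to the best previously played move, if any
    return max(node.played, key=node.played.get) if node.played else None
```

The key difference from plain MAB sampling is the fallback: a move that searches badly is recorded in `node.searched` (a non-playable record) instead of being played just to collect a sample.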
dragontamer5788 wrote:
Wed Jul 17, 2019 11:50 pm

I guess I was pushing MCTS specifically. The mathematical MCTS "understanding" of a node is that it is a "one-armed bandit" (American slang for "slot machine"). If you have a room filled with hundreds of thousands of slot machines, the MCTS methodology was created to "solve" the slot machine problem (aka the Multi-armed Bandit problem).

How do you "search" for the best slot machine in a room filled with many, many slot machines, where the "best" slot machine is the one that gives you the highest win-rate (or the lowest loss-rate)? MCTS is an exploration-vs-exploitation methodology that will asymptotically find the best slot machine (if given enough time). It is also proven to have the best asymptotic complexity for the multi-armed bandit problem. (Much like how Knuth proved alpha-beta pruning to be the best asymptotic search for speeding up minimax, MCTS has been mathematically proven to solve the multi-armed bandit problem.)

So other algorithms can be faster only by a constant factor compared to MCTS (where "faster" is measured in the number of times a node is visited). Something faster may exist, but it will be in the same complexity class as MCTS.

All the math behind the multi-armed bandit problem, MCTS, and all that seems to elegantly map to the opening book. Far better than say, PV-search, Alpha-beta, or other concepts.
I disagree!

Using purely random move choices makes this a no-go proposition, because of the losses they will cause and because the rate at which useful knowledge is gained will be glacially slow. To make this a viable approach for chess opening books, AB engines or similar methods should be used to greatly speed the process and avoid most if not all random choices; i.e. few if any random moves will be needed, and no random play-outs should be used.
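For comparison, the textbook bandit rule (UCB1) that plain MCTS would apply at a book node looks like this; a minimal sketch of the standard formula, not any engine's actual code:

```python
import math

def ucb1(wins, visits, parent_visits, c=1.414):
    """UCB1: win rate plus an exploration bonus. An unvisited move
    scores infinity, i.e. it MUST be sampled before anything else,
    which is the behavior criticized for opening books above."""
    if visits == 0:
        return math.inf
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(moves):
    """moves: {move: (wins, visits)}; pick the highest-UCB1 move."""
    total = sum(v for _, v in moves.values()) or 1
    return max(moves, key=lambda m: ucb1(*moves[m], total))
```

The `math.inf` score for an unvisited move is exactly the forced "sample every move at least once" behavior objected to above.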

----------
In effect: a given position in an opening book is a "slot machine". It has some win/loss ratio, but we don't know it, and the only way to find out is to repeatedly play that slot machine and collect statistics. But we have other slot machines (other positions) we also need to collect statistics from. MCTS seems to be an unusually good "match" to the opening book problem.
I proposed this approach a few years ago, and again more recently, saying essentially the same thing, but it hasn't provided much motivation for the community to adopt a “standard” learning book, or any new books for that matter, much less a learning book, as far as I can tell. Maybe the threat posed by NN engines will provide the required motivation. I guess we'll see.

Regards,

Zenmastur
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.

Zenmastur
Posts: 486
Joined: Sat May 31, 2014 6:28 am

Re: buying a new computer

Post by Zenmastur » Thu Jul 18, 2019 5:21 am

Uri Blass wrote:
Thu Jul 18, 2019 1:34 am
I suggest the following experiment to check the variety of chess engines.
Of course you need to develop a special tool to do it.

Run the engine against itself for the first 10 moves (20 plies) a million times at some time control that you choose (let's say 1 second per move), and tell me how many different games you get and how many different final positions you get.

I expect Stockfish to produce more than 10,000 different positions, and of course theory is more than 20 plies deep.

Assuming a time control of 1 second per move, 20 plies take 20 seconds, so you need 20 million seconds for my experiment.

Maybe more realistic is to run it for only 10,000 games; I expect more than 2,000 final positions in this case.
I have a better plan. Use an engine match with self-play, i.e. the same version of SF plays both sides. Then put all the games played into a database program and map which ECO codes are never seen, which are rarely seen, etc. Then we will know how much diversity SF actually displays. I suspect the games will be so unevenly distributed it will make you want to throw up. There are 500 ECO codes (A00 - E99), so maybe 50,000 games should be played. I suspect that roughly 90% of the games will fall into 10% of the ECO codes. I haven't tried this yet, but it would be easy enough to do.
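The distribution check proposed here is straightforward to script once the games are in PGN. A hypothetical sketch using only the `ECO` header tag (the file name in the commented-out parsing line is an assumption):

```python
from collections import Counter

def eco_concentration(eco_tags, top_fraction=0.10):
    """Given one ECO code per game, return the fraction of games that
    fall inside the most popular top_fraction of the 500 ECO codes."""
    counts = Counter(eco_tags)
    top_n = max(1, int(500 * top_fraction))   # 10% of A00-E99 = 50 codes
    top_games = sum(n for _, n in counts.most_common(top_n))
    return top_games / len(eco_tags)

# Reading the tags from a PGN file might look like this
# (file name is hypothetical):
# eco_tags = [line.split('"')[1] for line in open("selfplay.pgn")
#             if line.startswith('[ECO ')]
```

If the 90%-in-10% suspicion holds, `eco_concentration` would return roughly 0.9 on the self-play games.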

Regards,

Zenmastur
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.

JManion
Posts: 187
Joined: Wed Dec 23, 2009 7:53 am

Re: buying a new computer

Post by JManion » Thu Jul 18, 2019 5:27 am

I am looking for a new computer since it has been a while. I saw a 2990WX with 16 GB RAM, a 2080 Ti, and a 500 GB SSD for $3k. Overall, do you think this is a reasonable deal? Will this be a good setup for a few years? I know new chips are coming out, but will this be good "bang for your buck"? I know a few people have said it's better to build your own, but I have no experience and am worried about screwing things up. If you see a better/cheaper system in the US, please point it out to me; I would be happy for the help!

Thanks again for the advice!

Daniel Shawul
Posts: 3757
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: buying a new computer

Post by Daniel Shawul » Thu Jul 18, 2019 5:46 am

Daniel Shawul wrote:
Tue Jul 16, 2019 8:08 pm
Zenmastur wrote:
Tue Jul 16, 2019 1:44 pm
Daniel Shawul wrote:
Mon Jul 08, 2019 7:37 pm
I am going to try and build the PC myself -- never built one before, but watching lots of YouTube videos has given me confidence. Fingers crossed I don't toast one of the components, since I seem to be highly charged.

The 3900X was out of stock yesterday, but I managed to buy one today on Newegg. Based on PCPartPicker I need to spend at least $1,500 before tax if I pair it with an RTX 2070 Super. I am going for a cheap X570 motherboard, but maybe I should go for a solid X470 without PCIe 4.0, since I probably will not have any use for the extra SSD speed. 16 GB of 3200 MHz RAM.

Daniel
Daniel, would you please post some performance data on your new machine when it's up and running, i.e. Stockfish NPS and Lc0 speeds on the 2070 Super. I'm sure everyone here would like to see how both the CPU and GPU perform! :D :D :D

And if you need any help/info to complete the build let us know.

Best regards and thanks in advance,

Zenmastur
I am working on the CPU (the GPU will arrive this week). I have everything plugged in where it needs to be, but I'm having some issues getting it to POST. The motherboard doesn't beep, and it doesn't show the BIOS or anything on the monitor. But the on-board CPU fan and RGB fans do turn on, I see LED lights on, and the M.2 SSDs and hard disk seem to spin as well.
See picture:

https://ibb.co/NT6q9Yw

I tried to troubleshoot by taking out the RAM, and it still doesn't beep... I don't know what the issue is.
I will disassemble it, get the motherboard out of the case, and try again. Hopefully everything works out, but I am loving the experience anyway!

Daniel
Success at last!! The monitor lit up with the ASUS BIOS once the NVIDIA card (RTX 2070 Super) got installed. I wasted some time again plugging the HDMI cable into the motherboard's video output before I realized that it had to be plugged into the GPU's HDMI port. But after that the BIOS came up and I installed Ubuntu 18.04 (no Windows shit) from USB.

I did a quick benchmark on the CPU (Ryzen 9 3900X) and GPU (RTX 2070 Super).
Stockfish seems to scale nearly linearly across the 12 cores with its Lazy SMP implementation. I got about 1.8 Mnps on 1 core using the latest source compiled with gcc 7.4, and 21 Mnps using all 12 cores. It goes up to 27 Mnps if I use hyperthreading (24 threads). Similar scaling for Scorpio as well.
No overclocking for this test, so just the base clock of 3.8 GHz.
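As a sanity check on those figures, the scaling arithmetic works out like this (NPS scaling only, which for Lazy SMP is not the same thing as effective search speedup):

```python
# Scaling arithmetic for the NPS figures quoted above: 1.8 Mnps on one
# core, 21 Mnps on 12 cores, 27 Mnps on 24 hardware threads (SMT).
single, cores12, smt24 = 1.8, 21.0, 27.0

speedup_12 = cores12 / single        # about 11.7x on 12 cores
efficiency = speedup_12 / 12         # about 97% of ideal linear scaling
smt_gain = smt24 / cores12 - 1       # about 29% extra from hyperthreading

print(f"{speedup_12:.1f}x, {efficiency:.0%} efficiency, +{smt_gain:.0%} SMT")
```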

Then I installed the latest NVIDIA driver, 430.xx, which I think also supports the new RTX 2070 Super. The results I am not sure about ...
Using ScorpioNN on the initial position with FP32, FP16, and INT8 precision, here are the NPS numbers:

Code: Select all

FP32 - 4 knps
FP16 - 10 knps
INT8 - 33 knps
The FP16 number is pretty disappointing, because the lc0 benchmark reports about 20 knps using FP16 on the older RTX 2070 GPU!?
Also, a poster here showed me yesterday that Scorpio gets 20 knps using FP16, so I don't know what is going on here, since I am only getting half that.
However, the INT8 NPS is pretty satisfying, coming in at over 3x the FP16 speed.
One issue could be that I am using the GPU for the display as well as for NN calculations, since this CPU doesn't have integrated graphics!

I will try out lc0 once i install cuda and everything else it needs to compile it on linux.

The only problem I have now is that one of the RGB fans (out of 4) that came with the case is loose and does not rotate (no friction). It was delivered to me like that, and I could not get it to hold. I couldn't get the other 2 fans at the front out when I tried, so I don't know what is missing from the top fan. Anyway, I think 3 case fans, 1 CPU fan, and 1 motherboard fan should be enough, so I have no intention of returning the case.

Daniel

Uri Blass
Posts: 8586
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: buying a new computer

Post by Uri Blass » Thu Jul 18, 2019 7:00 am

Zenmastur wrote:
Thu Jul 18, 2019 5:21 am
Uri Blass wrote:
Thu Jul 18, 2019 1:34 am
I suggest the following experiment to check variety of chess engines.
Of course you need to develop a special tool to do it.

Run the engine against itself in the first 10 moves(20 plies) for million times at some time control that you choose(let say 1 second per move) and tell me how many different games you get and how many different final positions you get.

I expect stockfish to produce more than 10000 different positions and of course theory is more than 20 plies.

Assuming you use time control of 1 second per move it means that 20 plies take 20 seconds and you need 20 million seconds for my experiment.

Maybe more realistic is to run it only for 10000 games and I expect more than 2000 final positions in this case.
I have a better plan. Use an engine match with selfplay i.e. same version of SF plays both sides. And then put all games played in an database program and map which ECO codes aren't ever seen, which are rarely seen, etc. Then we will know how much diversity SF actually displays. I suggest that the games will be very unevenly distributed to the point of making you want to throw up. There are 500 ECO codes (A00 - E99) so maybe 50,000 games should be played. I suspect that roughly 90% of games fall in 10% of the ECO codes. I haven't tried this yet but it would be easy enough to do.

Regards,

Zenmastur
You may be right, but it does not mean that there is no significant variety.
Maybe some of the ECO codes are bad to play.

For example, playing the King's Gambit may be bad for White.

crem
Posts: 123
Joined: Wed May 23, 2018 7:29 pm

Re: buying a new computer

Post by crem » Thu Jul 18, 2019 7:14 am

Uri Blass wrote:
Mon Jul 08, 2019 8:40 am
Note that I do not know how to build a new computer from components so paying for building the computer is part of the price.
I'm probably too far behind in this thread, but assembling a PC is pretty easy. E.g. watch this video (second part) https://www.youtube.com/playlist?list=P ... X2f28R-31x -- and you have all the knowledge.

jp
Posts: 815
Joined: Mon Apr 23, 2018 5:54 am

Re: buying a new computer

Post by jp » Thu Jul 18, 2019 8:17 am

Ovyron wrote:
Thu Jul 18, 2019 4:18 am
But human GMs will play incorrectly. To see what I mean, play a match of Shredder 8 against the latest Stockfish and watch how it's completely destroyed; then realize human GMs play even worse than Shredder 8. Shredder 8 has no clue how to play what would clearly be the best opening theory, and humans would do worse than it, so they play opening theory that is suboptimal, because their opponents aren't going to find the correct moves that would refute it.
Humans do worse because they are out-calculated after the opening. The human would lose to the computer from both sides of any (reasonable) position. You cannot take the human's loss as evidence that the human was worse in the opening.

Zenmastur
Posts: 486
Joined: Sat May 31, 2014 6:28 am

Re: buying a new computer

Post by Zenmastur » Thu Jul 18, 2019 8:19 am

Daniel Shawul wrote:
Thu Jul 18, 2019 5:46 am

Success at last!! The monitor lit up with the asus bios once the nvidia card (rtx 2070 super) got installed. I wasted some time again plugging the hdmi card to the cpu's outlet before i realized that it had to be plugged into the the GPU's hdmi port. But after that bios came up and i installed ubuntu 18.04 (no windows shit) from usb.

I did a quick benchmark on the CPU (ryzen 9 3900x) and GPU (RTX 2070 super). Stockfish seems to scale linearly across the 12 cores with its lazy smp implementation. So i got about 1.8 mnps on 1 core using latest source compiled with gcc 7.4, and get 21 mnps using all 12 cores. It goes up to 27 mnps if i use hyperthreading (24 threads). Similar scaling for scorpio as well. No overclocking for this test, so just the base clock of 3.8ghz.
Unlike Intel's offerings, AMD CPUs should have hyper-threading (SMT) enabled for best performance. There are several ways the CPU performance can be enhanced. Make sure Precision Boost (PB) is enabled. This allows the CPU to boost up to its maximum rated frequency. This is not overclocking; it's stock behavior for the CPU, and it won't allow the thermal limit, boost limit, core power, or die/chiplet current limits to be exceeded. PBO is Precision Boost Overdrive, an out-of-the-box "plug and play" overclocking method. Personally, on this CPU, I wouldn't even bother with it: you don't gain much, and it technically voids the warranty (not that I ever cared about that), which many people want to avoid.

You should also make sure your memory timings are set properly. The easiest way to do that is to use a program like CPU-Z. It will give you most everything needed to verify your system setup is actually the way you intended. Alternately, you can verify these timings in the UEFI BIOS. Most motherboards will use the first JEDEC timings on the RAM's SPD chip. This will NOT be optimal for a Ryzen 9 3900X system, AND it will not be the RAM timings you paid for when you bought the RAM, unless you bought plain-jane single-stick RAM modules. Instead, the memory timings should be set to the manufacturer's timings, which should be on the packaging and in one of the XMP profiles on the memory's SPD chip. There may be several XMP profiles stored there, each set to a different speed (e.g. 3200 MHz, 3333 MHz, 3466 MHz, etc.). Unless you are adventurous or have memory-testing software, select the profile that matches the RAM timings on the packaging.

I suspect from the scores you posted that your RAM is running at standard JEDEC DDR4-2133 timings, which are very slow. If so, changing them to the proper timings will increase SF's NPS considerably.

Then I installed latest nvidia driver 430.xx which i think supports also the new RTX 2070-super. The results, I am not sure about ...
Using scorpioNN on the initial postion and FP32, FP16 and INT8 precision here are the nps numbers

Code: Select all

FP32 - 4 knps
FP16 - 10 knps
INT8 - 33 knps
The FP16 number is pretty disappointing because lc0 benchmark says about 20knps using FP16 and older RTX 2070 gpu !? Also a poster here showed me yesterday that scorpio gets 20knps using FP16 so I don't know what is going on here since I am only getting half. However the INT8 nps is pretty satisfying going over 3x of the FP16 speed. One issue could be that i am using the GPU for the display, besides doing NN calculations, since this CPU don't have integrated graphics card!
Just like modern CPUs, modern GPUs are built with overclocking features and, more importantly, management software. The manufacturers expect you to overclock their hardware. At the very least, use this software to check what clock frequencies the GPU and the GPU's memory are running at, as well as their temperatures under heavy load, just to make sure everything is working as advertised. You wouldn't be the first person to get a card that either has a problem or left the factory with some internal settings at non-optimal values. So always check ALL base parameters after building a new system, and stress-test the system before using it for its intended purpose. This will likely catch any flaws in the build/setup, and these should be addressed NOW rather than discovered much later.
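On Linux, the clock/temperature check suggested above can be done with `nvidia-smi`. A small sketch that parses its CSV output; the sample line stands in for a real query, since actual readings depend on the card:

```python
# Real values can be queried with:
#   nvidia-smi --query-gpu=clocks.sm,clocks.mem,temperature.gpu \
#              --format=csv,noheader,nounits

def parse_gpu_status(csv_line):
    """Turn one line of nvidia-smi CSV output into a dict of ints."""
    sm, mem, temp = (int(x) for x in csv_line.split(","))
    return {"sm_clock_mhz": sm, "mem_clock_mhz": mem, "temp_c": temp}

# Sample line with made-up readings, standing in for a live query:
status = parse_gpu_status("1905, 6801, 67")
if status["temp_c"] > 83:
    print("GPU running hot; check cooling before long NN runs")
```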
I will try out lc0 once i install cuda and everything else it needs to compile it on linux.

The only problem i have now is one of the RGB fans (out of 4) that came with the case is loose and does not rotate (no friction). It was delivered to me
like that and i could not get it hold. The other 2 fans at the front i can't get them out when i tried, so i don't know what is missing in the top fan.
Anyway i think 3 case fans, 1 cpu and 1 motherboard fan should be enough so i have no intention of returning the case.

Daniel
I almost always replace my case fans when I build a new system, even in a brand-new case. Few case manufacturers (even the good ones) use fans that I consider up to par.

It sounds like you are in the home stretch with just a few loose ends to tie up and a few other things to check!

Congrats!

Regards,

Zenmastur
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.

jp
Posts: 815
Joined: Mon Apr 23, 2018 5:54 am

Re: buying a new computer

Post by jp » Thu Jul 18, 2019 8:22 am

Uri Blass wrote:
Thu Jul 18, 2019 1:06 am
jp wrote:
Wed Jul 17, 2019 11:48 pm
Uri Blass wrote:
Wed Jul 17, 2019 10:42 pm
You can generate that book simply by letting stockfish play against stockfish or against lc0 and add the lines that the engines play at very slow time control to the book (stockfish is not deterministic so the book is going to have some variety).
I think SF and all traditional engines are highly deterministic and that's a problem when you want variety.
You think SF is quite non-deterministic and (elsewhere, for other purposes) that's a problem when you don't want variety.

The main source of variety, if you just let SF loose with the same parameters, hardware, and time/nodes, is the multi-threading. If you use a single thread, there's none. I'm not sure multi-threading gives enough variety for what I'd want.

What makes you believe SF gives a lot of variety?
My experience with stockfish shows that it has a lot of variety.
I also know that when Stockfish played lc0 from the opening position in TCEC, it was usually Stockfish that was responsible for not repeating the same game twice, not lc0.
How many games was that from the opening position (at TCEC)?

The thing with TCEC is that it uses a huge number of nodes per move and a huge number of threads. That's where any variety comes from. With a single thread, it would just repeat moves.

For your own results, how many threads is SF getting?

Zenmastur
Posts: 486
Joined: Sat May 31, 2014 6:28 am

Re: buying a new computer

Post by Zenmastur » Thu Jul 18, 2019 8:44 am

Uri Blass wrote:
Thu Jul 18, 2019 7:00 am
Zenmastur wrote:
Thu Jul 18, 2019 5:21 am
Uri Blass wrote:
Thu Jul 18, 2019 1:34 am
I suggest the following experiment to check variety of chess engines.
Of course you need to develop a special tool to do it.

Run the engine against itself in the first 10 moves(20 plies) for million times at some time control that you choose(let say 1 second per move) and tell me how many different games you get and how many different final positions you get.

I expect stockfish to produce more than 10000 different positions and of course theory is more than 20 plies.

Assuming you use time control of 1 second per move it means that 20 plies take 20 seconds and you need 20 million seconds for my experiment.

Maybe more realistic is to run it only for 10000 games and I expect more than 2000 final positions in this case.
I have a better plan. Use an engine match with selfplay i.e. same version of SF plays both sides. And then put all games played in an database program and map which ECO codes aren't ever seen, which are rarely seen, etc. Then we will know how much diversity SF actually displays. I suggest that the games will be very unevenly distributed to the point of making you want to throw up. There are 500 ECO codes (A00 - E99) so maybe 50,000 games should be played. I suspect that roughly 90% of games fall in 10% of the ECO codes. I haven't tried this yet but it would be easy enough to do.

Regards,

Zenmastur
You may be right but it does not mean that there is no significant variety.
Maybe part of the ECO codes are bad to play.

For example playing the king gambit may be bad for white.
I think you are talking about different things. You seem to be saying SF is not deterministic, which is technically true. But that's not the same as it playing a wide variety of openings of its own volition! (And yes, I know, computers don't have volition.) The point is that if a line is "reasonably" playable then it should see at least some play. No AB engine that I know of will do that unless forced to. This is why tournaments are run the way they are. There are certainly ECO codes that are TOTALLY playable but will never be played by Stockfish of its own accord. This may be the result of the PSQT and other non-changing internal structures used in the engine. This would change drastically if a generalized learning book were implemented. Even the King's Gambit would occasionally (and unexpectedly) be played by Stockfish (or any other engine using this type of book), at least until the book found a line that totally refutes it. And the best part is that no user intervention is required to make either of these things happen, e.g. Stockfish playing the King's Gambit of its own accord, or the learning book finding a refutation of the King's Gambit.

Regards,

Zenmastur
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
