How do you know if your cpu is equal to your gpu?
Moderators: hgm, Rebel, chrisw
-
- Posts: 1534
- Joined: Sun Oct 25, 2009 2:30 am
Re: How do you know if your cpu is equal to your gpu?
A lot of controversial HW configurations can be devised that would raise questions on either side of the fence. The problem is a legitimate one and can't be dismissed with the sort of easy fixes I've seen proposed so far.
-
- Posts: 1470
- Joined: Mon Apr 23, 2018 7:54 am
Re: How do you know if your cpu is equal to your gpu?
Ozymandias wrote: ↑Thu Jan 23, 2020 4:33 pm The problem is a legitimate one and can't be dismissed with the sort of easy fixes I've seen proposed so far.
A good idea proposed in the Leela forum long ago was to use the points at which Leela and SF start maxing out. With more nodes per move there are diminishing returns, and when the node count is large the performance starts flatlining. Identify the points at which flatlining starts and use that ratio. That way, it's independent of hardware capabilities.
I don't know if there was any follow-up on that idea. I think it deserves it. Apart from Leela CPU vs SF CPU, the other ideas feel too contrived.
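The flatlining idea could be sketched in code. Everything below is hypothetical: the node counts and Elo values are invented placeholders, and `flatline_point` is just one possible way to define "where flatlining starts" (the first doubling that gains less than some Elo threshold).

```python
# Hypothetical sketch of the "flatlining" idea: for each engine, measure
# the Elo gain per doubling of nodes per move, and call the curve flat
# once the gain drops below a threshold. All data below is made up.

def flatline_point(nodes, elo, threshold=10.0):
    """Return the first node count where the Elo gain from the
    previous doubling falls below `threshold` Elo points."""
    for (n0, e0), (n1, e1) in zip(zip(nodes, elo), zip(nodes[1:], elo[1:])):
        if e1 - e0 < threshold:
            return n1
    return None  # never flatlines in the measured range

# Invented example data: node counts double at each step, and gains shrink.
sf_nodes  = [1e5, 2e5, 4e5, 8e5, 1.6e6, 3.2e6]
sf_elo    = [0, 70, 130, 175, 195, 202]
lc0_nodes = [1e3, 2e3, 4e3, 8e3, 1.6e4, 3.2e4]
lc0_elo   = [0, 90, 160, 200, 215, 221]

sf_flat  = flatline_point(sf_nodes, sf_elo)
lc0_flat = flatline_point(lc0_nodes, lc0_elo)
print(sf_flat, lc0_flat, sf_flat / lc0_flat)  # -> 3200000.0 32000.0 100.0
```

The ratio of the two saturation points would then fix the node budget for each engine, which is what makes the scheme independent of the particular hardware used to measure it.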
-
- Posts: 1470
- Joined: Mon Apr 23, 2018 7:54 am
Re: How do you know if your cpu is equal to your gpu?
I just did a search for it, and here's what I found:
https://groups.google.com/forum/#!msg/l ... oJrjDgBQAJ
https://groups.google.com/forum/#!topic ... -0vilWxg8o
A good thing about Cscuile's idea is that it encourages everyone to pay more attention to scaling and asymptotic behavior, instead of magical thinking about GPUs.
-
- Posts: 1470
- Joined: Mon Apr 23, 2018 7:54 am
Re: How do you know if your cpu is equal to your gpu?
Many on this forum and elsewhere express the belief that all the computing GPUs do "doesn't count", although they don't state it like that, and nothing will make them change their minds.
They may also be the people who don't know or care about scaling, etc.
-
- Posts: 256
- Joined: Wed Oct 02, 2013 12:36 am
Re: How do you know if your cpu is equal to your gpu?
This is not a trivial issue when it comes to chess performance. This is just the beginning of a search: an introduction to a "problem" that has not been clearly resolved, namely how to compare GPU to CPU performance.
The closest you can get, so far, to a "fair" comparison is using FLOPS as a performance measure. Yet it is, at best, a "partial" story.
In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations. For such cases it is a more accurate measure than instructions per second. Manufacturers frequently include FLOPS as a specification so they can talk about how fast their computers are in a universal way.
The term FLOP is often used for a single floating-point operation, for example as a unit when counting the floating-point operations carried out by an algorithm or by computer hardware.
The derivative term TFLOPS, shorthand for "teraflops", is also used. A teraflop refers to the capability of a processor to calculate one trillion floating-point operations per second. Saying something has "6 TFLOPS", for example, means that its processor setup is capable of handling 6 trillion floating-point calculations every second, on average.
Some modern workstation GPUs, such as the Nvidia Quadro cards using the Volta and Turing architectures, feature dedicated processing cores for tensor-based deep learning applications. In Nvidia's current series of GPUs these cores are called Tensor Cores. These GPUs usually offer significant FLOPS increases, using 4x4 matrix fused multiply-add operations, resulting in hardware performance of up to 128 TFLOPS in some applications. These tensor cores also appear in consumer cards running the Turing architecture, and possibly in the Navi series of consumer cards from AMD.
CPUs in GFLOPS (1 GFLOPS = 10^9 FLOPS):
(....)
How many FLOPS is an i7?
Linpack benchmark using the Intel MKL optimizations
- Processor | Brief spec | Linpack (GFLOPS)
- Dual Xeon E5 2687W | 16 cores @ 3.2 GHz, AVX | 345
- Core i7 5930K (Haswell-E) | 6 cores @ 3.5 GHz, AVX2 | 289
- Dual Xeon E5 2650 | 16 cores @ 2.0 GHz, AVX | 262
- Core i7 4770K (Haswell) | 4 cores @ 3.5 GHz, AVX2 | 182
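For a sense of scale, the table's Linpack numbers can be set against a GPU's advertised peak. The 13.4 TFLOPS FP32 figure for an RTX 2080 Ti is an approximate spec assumed here, and Linpack GFLOPS versus theoretical peak TFLOPS are not strictly comparable, so this only illustrates the order of magnitude:

```python
# Rough arithmetic comparing a CPU Linpack score from the table with a
# GPU's advertised peak. The 13.4 TFLOPS FP32 figure for an RTX 2080 Ti
# is an approximate spec, and measured Linpack GFLOPS vs. theoretical
# peak TFLOPS are not strictly comparable -- order of magnitude only.

gpu_tflops = 13.4              # RTX 2080 Ti, approx. FP32 peak (assumption)
cpu_gflops = 289               # Core i7 5930K, Linpack (from the table)

gpu_flops = gpu_tflops * 1e12  # 1 TFLOPS = 10^12 FLOPS
cpu_flops = cpu_gflops * 1e9   # 1 GFLOPS = 10^9 FLOPS

print(f"GPU/CPU ratio: {gpu_flops / cpu_flops:.0f}x")  # prints "GPU/CPU ratio: 46x"
```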
Note how very few people talk about the BLAS version of Leela Chess. It is the version optimized for CPUs.
Stockfish is an engine optimized for CPU FLOPS; a warning not to compare it to highly parallelized software running on a GPU.
A realistic performance comparison between Leela on GPU and Stockfish on CPU could be gleaned from nodes searched per second. An equivalence could be inferred through careful testing, but it would be an approximation. As useful as that would be, nobody talks about doing one. Why?
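A minimal sketch of the nodes-per-second bookkeeping such a comparison would start from, with invented NPS figures standing in for measured ones:

```python
# Sketch of the nodes-per-second comparison suggested above. The NPS
# figures are invented placeholders; real numbers would come from
# benchmarking each engine on the actual hardware.

lc0_nps = 40_000        # hypothetical Leela NPS on a GPU
sf_nps  = 60_000_000    # hypothetical Stockfish NPS on a CPU

move_time = 10.0        # seconds per move
lc0_nodes = lc0_nps * move_time
sf_nodes  = sf_nps * move_time

# One Leela node is far more expensive (a full network evaluation), so
# raw node counts are not directly comparable; the ratio only quantifies
# the gap any equivalence scheme would have to bridge.
print(f"SF searches {sf_nodes / lc0_nodes:.0f}x more nodes per move")
```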
-
- Posts: 343
- Joined: Sun Aug 25, 2019 8:33 am
- Full name: .
Re: How do you know if your cpu is equal to your gpu?
I think price is the best measure. The total should also include RAM cost, as programs can easily have different memory requirements. Possibly even the motherboard.
-
- Posts: 4556
- Joined: Tue Jul 03, 2007 4:30 am
Re: How do you know if your cpu is equal to your gpu?
Ah, but then GPUs might use more energy over time, so the energy cost would eventually catch up and make the GPU non-cost-effective no matter what.
-
- Posts: 343
- Joined: Sun Aug 25, 2019 8:33 am
- Full name: .
Re: How do you know if your cpu is equal to your gpu?
Good point, but the energy cost will only make a difference over months of 24h/day use. A 2080 Ti uses around 280W (https://www.tomshardware.com/reviews/nv ... 05-10.html). U.S. consumer electricity costs about $0.13/kilowatt-hour (https://www.eia.gov/electricity/monthly ... epmt_5_6_a). So we get 365.25 * 24 * 0.13 * 0.28 = $319/year. Still, worth considering.
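The arithmetic generalizes to any card and electricity price; this small sketch reproduces the figures above:

```python
# Annual energy cost for a card running 24h/day, generalized from the
# numbers cited above (280W for a 2080 Ti, $0.13/kWh U.S. average).

def annual_energy_cost(watts, usd_per_kwh, hours_per_day=24):
    """Energy cost in USD for one year of continuous use."""
    kwh_per_year = watts / 1000 * hours_per_day * 365.25
    return kwh_per_year * usd_per_kwh

cost = annual_energy_cost(watts=280, usd_per_kwh=0.13)
print(f"${cost:.0f}/year")  # prints "$319/year", matching the figure above
```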