Thanks for that link: I rarely look at the Programming And Technical Discussions forum, even though I know this means I sometimes miss interesting discussions.
The simple reveals itself after the complex has been exhausted.
Besides selling the altered graphics card, the Goofish seller also offers a $120 upgrade service, covering labor and materials, for GeForce RTX 2080 Ti owners who want to send in their card for the 22GB modification.
It's about the server class; consumer Blackwell might come in 2025.
Nvidia still uses TSMC's advanced 4N node, so the increase in transistor count (about 2.5x) comes from a multi-chip design with two tiles, and AI throughput is raised further by the Transformer Engine's new FP4 and FP6 data types.
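To make the FP4/FP6 point a bit more concrete, here is a minimal sketch of the general idea behind low-precision weight storage with a per-tensor scale. It uses plain symmetric 4-bit integer quantization as a stand-in for illustration; this is an assumption, not Nvidia's actual FP4 format or Transformer Engine API. It compiles as ordinary C++ (or with nvcc).

#include <cstdio>
#include <cmath>
#include <algorithm>

int main() {
    // Toy weight tensor in FP32.
    float w[8] = {0.81f, -0.40f, 0.05f, -0.93f, 0.27f, 0.66f, -0.12f, 0.49f};

    // Per-tensor scale maps the largest magnitude onto the signed 4-bit range [-7, 7].
    float amax = 0.0f;
    for (float x : w) amax = std::max(amax, std::fabs(x));
    float scale = amax / 7.0f;

    for (float x : w) {
        int   q  = (int)std::lround(x / scale);  // 4-bit code (kept in an int here)
        float dq = q * scale;                    // value the matmul would actually see
        std::printf("w=% .3f  code=%+d  back=% .3f\n", x, q, dq);
    }
    // Storing 4-bit codes instead of FP16 values moves a quarter of the bytes
    // per parameter, which is a large part of why lower precision raises
    // effective AI throughput.
    return 0;
}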
smatovic wrote: ↑Tue Mar 19, 2024 5:56 pm
NVIDIA Blackwell Architecture and B200/B100 Accelerators Announced: Going Bigger With Smaller Data...
Using this technology, they also announced the DGX B200 supercomputer, which will reach exascale performance at the reduced precisions used for machine learning. It will be able to train trillion-parameter NN models, which should result in a reasonably good knowledge of chess (or of other subject matters).
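As a rough back-of-envelope illustration of why trillion-parameter training calls for machines of this class, the small program below prints the memory needed just to hold 10^12 weights at a few precisions. The bytes per format are standard; optimizer state, gradients and activations would come on top, so treat the output as a lower bound, not a vendor figure.

#include <cstdio>

int main() {
    // Bytes needed just to hold the weights of a 1-trillion-parameter model.
    const double params      = 1e12;
    const double bytes_per[] = {4.0, 2.0, 1.0, 0.5};   // FP32, FP16/BF16, FP8, FP4
    const char  *name[]      = {"FP32", "FP16", "FP8", "FP4"};

    for (int i = 0; i < 4; ++i)
        std::printf("%-4s weights: %.1f TB\n", name[i], params * bytes_per[i] / 1e12);
    return 0;
}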
"Nvidia tells us that one of the standout features of the GB200 Superchip is its ability to deliver up to 30 times the performance of Nvidia’s current leading H100 Tensor Core GPU for large language model inference tasks. This remarkable improvement pushes the boundaries of AI supercomputing and will enable the more efficient development and deployment of more sophisticated AI models." - link.
The simple reveals itself after the complex has been exhausted.
towforce wrote: ↑Tue Mar 19, 2024 6:27 pm
This remarkable improvement pushes the boundaries of AI supercomputing and will enable the more efficient development and deployment of more sophisticated AI models." - link.
smatovic wrote: ↑Fri Apr 16, 2021 9:02 am
- CPU-GPU coherent memory
With Nvidia moving into the CPU realm we have a tight coupling of CPU and
GPU architectures for HPC incoming. IBM dropped NVLink support in their
POWER10 series, so all HPC-GPU vendors will come up with a solution for
coherent memory between CPU and GPU, maybe an open standard like CXL over
PCIe, maybe something proprietary like NVLink or Infinity Fabric; it is
unknown if and how this descends to the gamer GPU market.
--
Srdja
...now, with AI as a silicon arms race, we see Amazon, Microsoft and Google rising as cloud providers with their own ARM-based CPUs and their own AI inference/training chips, a coupling of cloud provider + CPU + AI chips, interesting times.
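On the CPU-GPU coherent memory point in the quoted post: at the programming-model level, CUDA's managed ("unified") memory already exposes a single pointer that both the CPU and the GPU can dereference, with the driver migrating pages on demand. Here is a minimal sketch of that idea; it illustrates the software view only, not the CXL or NVLink hardware coherence discussed above.

#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *v, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= f;
}

int main() {
    const int n = 1 << 20;
    float *v = nullptr;

    // One allocation, one pointer, visible to both CPU and GPU;
    // the driver migrates pages between host and device memory on demand.
    cudaMallocManaged(&v, n * sizeof(float));

    for (int i = 0; i < n; ++i) v[i] = 1.0f;        // CPU writes
    scale<<<(n + 255) / 256, 256>>>(v, n, 2.0f);    // GPU updates the same pointer
    cudaDeviceSynchronize();                        // wait before the CPU touches it again
    std::printf("v[0] = %.1f\n", v[0]);             // CPU reads, no explicit memcpy

    cudaFree(v);
    return 0;
}

With hardware coherence such as NVLink-C2C on the Grace-based superchips, the same single-pointer model is supposed to work without the page-migration overhead, which is presumably the attraction for the HPC vendors mentioned above.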