smatovic wrote: ↑Thu Jan 09, 2025 4:02 pm
Intel is sampling 18A-based Panther Lake with customers — Intel Foundry's 18A node and CPUs are on track for 2H 2025 launch
https://www.tomshardware.com/pc-compone ... 025-launch
One of the important aspects of Intel's Panther Lake is that it will be produced on the company's 18A process technology (1.8nm-class), a make-or-break production node.
--
Srdja
Panther Lake is an interesting one. Is it replacing Arrow Lake, Lunar Lake, or both?
GPU rumors 2021
Moderator: Ras
-
- Posts: 1961
- Joined: Thu Sep 18, 2008 10:24 pm
Re: GPU rumors 2021
-
- Posts: 3045
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: GPU rumors 2021
https://en.wikipedia.org/wiki/Panther_L ... processor)
Panther Lake will be Intel's Core Ultra Series 300 CPUs; Arrow Lake and Lunar Lake are the Ultra 200 series. Idk if there will be Panther Lake derivatives for different market segments.
--
Srdja
-
- Posts: 1961
- Joined: Thu Sep 18, 2008 10:24 pm
Re: GPU rumors 2021
Looks like Panther Lake is a laptop CPU; I was hoping it might get a desktop version.
https://www.digitimes.com/news/a2025010 ... -2025.html
-
- Posts: 3045
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: GPU rumors 2021
AMD Zen 5 Ryzen and Zen 5 Epyc "Turin" are fabricated on the TSMC N4X node, Zen 5c "Turin Dense" on TSMC N3E.
https://en.wikipedia.org/wiki/Zen_5
https://en.wikipedia.org/wiki/Epyc#Fift ... rin_Dense)
So with Zen 6 and the likely switch to 3nm or 2nm, we can expect a significant core-count increase for the former.
https://en.wikipedia.org/wiki/5_nm_process#Nodes
https://en.wikipedia.org/wiki/3_nm_proc ... cess_nodes
Zen 6 initially on track for 2025, now maybe 2026-2027?
TSMC 2nm process to enter volume production in H2 2025.
https://en.wikipedia.org/wiki/2_nm_proc ... cess_nodes
Intel skipped the 2nm node and is betting on 18A; Samsung is in with SF2.
--
Srdja
-
- Posts: 3045
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: GPU rumors 2021
smatovic wrote: ↑Wed Dec 07, 2022 10:42 am
Maybe worth to mention:
https://fudzilla.com/news/55843-nvidia- ... t-that-hot
Considering that Nvidia's data center sales now exceed sales of client-oriented products by 2.4 times, the green company can officially be called a data center company, or rather an AI company, as the company has preferred in recent years.
GPGPU was at first driven by gamer GPU sales; now it might revert. Shrinking of transistor size was for some time driven by mobile SoCs; now the AI sector might take over, or alike.
--
Srdja
TSMC profits surge 57% as demand for AI chips remains high
https://www.businessinsider.com/tsmc-ea ... 025-1?op=1
TSMC Posts 57% Jump in Q4 Profit, Eyes $42B Investment in AI Technology
In 2024, 35% of TSMC's revenue came from smartphones, while 51% came from high-performance computing.
https://finance.yahoo.com/news/tsmc-pos ... 09652.html
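For scale, a quick sketch checking the capex figures reported in the Yahoo Finance article linked above ($38-42 billion planned for 2025 vs. $29.8 billion spent in 2024):

```python
# Capital spending figures from the article (USD billions).
capex_2024 = 29.8
capex_2025_low, capex_2025_high = 38.0, 42.0

# Year-over-year increase implied by the planned range.
low = capex_2025_low / capex_2024 - 1
high = capex_2025_high / capex_2024 - 1
print(f"planned increase: {low:.0%} to {high:.0%}")  # ~28% to ~41%
```

So the planned 2025 budget is roughly a quarter to two-fifths above 2024 spending.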
Regarding investment in the future, TSMC expects 2025 to see a big ramp-up in capital spending: it plans to invest $38 billion to $42 billion, much more than the $29.8 billion spent last year.
--
Srdja
-
- Posts: 3045
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: GPU rumors 2021
smatovic wrote: ↑Tue Dec 17, 2024 10:49 am [...]
FLOPS
The first (programmable) FLOPS was achieved by the Zuse Z3 from 1941, based on electro-mechanical relays @5Hz.
https://en.wikipedia.org/wiki/Z3_(computer)
kiloFLOPS
The first (programmable) kiloFLOPS was achieved by UNIVAC I from 1951, based on vacuum tubes @ 2.25MHz.
https://en.wikipedia.org/wiki/UNIVAC_I
megaFLOPS
The first megaFLOPS were achieved by transistor based mainframes like the IBM System/360 series from the 60s and 70s.
https://en.wikipedia.org/wiki/IBM_System/360
Maybe worth to add...
smatovic wrote: ↑Tue Dec 17, 2024 6:33 am
[...]
gigaFLOPS
The first gigaFLOPS was achieved by the NEC SX-2 and Cray-2 supercomputers from 1985, both via vector processors, the former with 1 gigaFLOPS and 512 MB RAM, the latter with 2 gigaFLOPS and 1 GB RAM.
https://en.wikipedia.org/wiki/NEC_SX
https://en.wikipedia.org/wiki/Cray-2
teraFLOPS
The first teraFLOPS was achieved by the ASCI Red supercomputer from 1997, by Intel @ Sandia National Laboratories, with ~10,000 Pentium Pro CPUs @ 200MHz; here we see the switch from single supercomputers to clusters of nodes.
https://en.wikipedia.org/wiki/ASCI_Red
petaFLOPS
The first petaFLOPS was achieved by Roadrunner from 2008, by IBM @ Los Alamos National Laboratory, with 12,960 IBM PowerXCell 8i accelerator boards; here we see the switch to heterogeneous CPU+accelerator architecture.
https://en.wikipedia.org/wiki/Roadrunne ... rcomputer)
exaFLOPS
The first (official) exaFLOPS was achieved by Frontier from 2022, by Cray+AMD @ Oak Ridge National Laboratory, with 9,472 AMD Epyc CPUs plus 37,888 AMD Instinct MI250X GPUs.
https://en.wikipedia.org/wiki/Frontier_(supercomputer)
Outlook first zettaFLOPS?
Idk but it looks atm like this will be a machine for training AIs, either a private cluster by one of the big tech players, or maybe a public grid? Something like Folding@Home + Lc0 for gen AI.
[...]
FLOPS
Here we see the switch from mechanical computers to electro-mechanical, relay-based, machines.
kiloFLOPS
Here we see the switch from relays to vacuum tubes.
megaFLOPS
Here we see the switch from vacuum tubes to transistors and integrated circuits.
gigaFLOPS
Here we see the switch from ICs to microprocessors.
teraFLOPS
Here we see the switch from single supercomputers to clusters of nodes.
petaFLOPS
Here we see the switch to heterogeneous CPU+accelerator architecture.
exaFLOPS
Here we see the switch to heterogeneous CPU+GPU architecture with unified/coherent memory.
Outlook zettaFLOPS
What kind of switch will we see here?
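As a rough sanity check on the cadence of these milestones, a short sketch computing the average yearly growth factor between them (years and figures taken loosely from the list above; order-of-magnitude only):

```python
# Rough FLOPS milestones from the post: (year, approximate peak FLOPS).
# The mid-1960s date for the System/360 era is an assumption on my part.
milestones = [
    (1941, 1e0),   # Zuse Z3, a few FLOPS
    (1951, 1e3),   # UNIVAC I, kiloFLOPS
    (1964, 1e6),   # IBM System/360 era, megaFLOPS
    (1985, 2e9),   # Cray-2, ~2 gigaFLOPS
    (1997, 1e12),  # ASCI Red, teraFLOPS
    (2008, 1e15),  # Roadrunner, petaFLOPS
    (2022, 1e18),  # Frontier, exaFLOPS
]

for (y0, f0), (y1, f1) in zip(milestones, milestones[1:]):
    # Geometric mean of the yearly growth over the interval.
    rate = (f1 / f0) ** (1 / (y1 - y0))
    print(f"{y0}-{y1}: {f1 / f0:.0e}x total, ~{rate:.2f}x per year")
```

Interestingly, the per-year factor has stayed in the same rough band (~1.6-2x) across all the technology switches listed.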
--
Srdja
-
- Posts: 3045
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: GPU rumors 2021
smatovic wrote: ↑Wed Jan 08, 2025 11:15 am
Werewolf wrote: ↑Wed Jan 08, 2025 10:12 am
If they're right, that's almost a 3x increase in TFLOPS. I doubt we'll see 3x Lc0 nps, but it's interesting
https://www.tomshardware.com/pc-compone ... redecessor
Yes, interesting, Lc0 benchmarks will tell; probably better with a BT-series network than a T-series one, cos Transformers can utilize the new Tensor Cores better than CNNs. ...with the step from the RTX 20xx series to the 30xx we had good-looking numbers on paper, but it did not translate to a big jump in NPS for Lc0, for multiple reasons.
--
Srdja
AFAIK Lc0 relies on FP16 compute:
Code:
Model Cores FP16 TFLOPS FP16 matrix TFLOPS
GeForce RTX 4090 16384 82.6 330
GeForce RTX 5090 21760 104.8 419
https://en.wikipedia.org/wiki/List_of_N ... _50_series
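For context on the "3x" claim, a quick ratio check on the table's numbers (a sketch; figures exactly as listed above):

```python
# FP16 throughput figures from the table above (TFLOPS) and core counts.
rtx_4090 = {"cores": 16384, "fp16": 82.6, "fp16_matrix": 330.0}
rtx_5090 = {"cores": 21760, "fp16": 104.8, "fp16_matrix": 419.0}

for key in ("cores", "fp16", "fp16_matrix"):
    ratio = rtx_5090[key] / rtx_4090[key]
    print(f"{key}: {ratio:.2f}x")  # ~1.33x cores, ~1.27x both FP16 figures
```

So on these paper specs the 5090 is only ~1.27x the 4090 in FP16, nowhere near 3x; the bigger headline multipliers presumably come from other precisions or modes.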
--
Srdja