One on supercomputers and FLOPS*:
gigaFLOPS
The first gigaFLOPS was achieved in 1985 by the NEC SX-2 and Cray-2 supercomputers, both vector processors: the former with ~1 gigaFLOPS and 512 MB RAM, the latter with ~2 gigaFLOPS and 1 GB RAM.
https://en.wikipedia.org/wiki/NEC_SX
https://en.wikipedia.org/wiki/Cray-2
teraFLOPS
The first teraFLOPS was achieved in 1997 by the ASCI Red supercomputer, built by Intel at Sandia National Laboratories, with ~10,000 Pentium Pro CPUs at 200 MHz; here we see the switch from single supercomputers to clusters of nodes.
https://en.wikipedia.org/wiki/ASCI_Red
petaFLOPS
The first petaFLOPS was achieved in 2008 by Roadrunner, built by IBM at Los Alamos National Laboratory, with 12,960 IBM PowerXCell 8i accelerator boards; here we see the switch to heterogeneous CPU+accelerator architectures.
https://en.wikipedia.org/wiki/Roadrunner_(supercomputer)
exaFLOPS
The first (official) exaFLOPS was achieved in 2022 by Frontier, built by HPE Cray and AMD at Oak Ridge National Laboratory, with 9,472 AMD Epyc CPUs plus 37,888 AMD Instinct MI250X GPUs.
https://en.wikipedia.org/wiki/Frontier_(supercomputer)
Outlook: first zettaFLOPS?
Idk,
but at the moment it looks like this will be a machine for training AIs, either a private cluster owned by one of the big tech players, or maybe a public grid? Something like Folding@Home + Lc0 for generative AI. (A quick extrapolation from the milestone years above is sketched below.)
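
Just for fun, a naive back-of-the-envelope extrapolation, my own sketch and not a prediction, fitting the four milestone years above and assuming peak FP64 LINPACK performance keeps growing at the same exponential rate:

# Naive extrapolation from the four TOP500 milestones above (first machine
# to reach 10^9, 10^12, 10^15, 10^18 FP64 FLOPS) to guess when a zettaFLOPS
# (10^21) machine might appear. Pure curve fitting, not a prediction.
import numpy as np

years = np.array([1985, 1997, 2008, 2022], dtype=float)   # giga, tera, peta, exa
log10_flops = np.array([9, 12, 15, 18], dtype=float)

# Linear fit of log10(FLOPS) against the year.
slope, intercept = np.polyfit(years, log10_flops, 1)

doubling_time = np.log10(2) / slope          # years per doubling of peak FLOPS
zetta_year = (21 - intercept) / slope        # year where the fit reaches 10^21

print(f"~{doubling_time:.1f} years per doubling of peak FP64 FLOPS")
print(f"naive zettaFLOPS estimate: around {zetta_year:.0f}")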
*FLOPS on the TOP500 supercomputer list are measured via the LINPACK benchmark in FP64 (double precision) performance; a toy sketch of the idea follows after the links.
https://en.wikipedia.org/wiki/LINPACK_benchmarks
https://en.wikipedia.org/wiki/TOP500
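
For illustration only, a toy single-node version of the LINPACK idea in Python/NumPy: solve a dense FP64 system, time it, and report GFLOPS using the standard HPL operation count of 2/3*n^3 + 2*n^2. The real HPL benchmark behind the TOP500 runs a distributed, blocked LU factorization across the whole machine, so this is just a sketch of how the measurement works, not the benchmark itself.

# Toy LINPACK-style FP64 measurement on a single node: solve a dense
# n x n system and report GFLOPS using HPL's nominal operation count.
# Not the real HPL benchmark, just an illustration of the principle.
import time
import numpy as np

def linpack_toy(n: int = 4096) -> float:
    rng = np.random.default_rng(42)
    a = rng.standard_normal((n, n))          # FP64 by default
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)                # LU factorization + solve
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # HPL's nominal flop count
    gflops = flops / elapsed / 1e9

    residual = np.linalg.norm(a @ x - b)     # sanity check on the solution
    print(f"n={n}: {gflops:.1f} GFLOPS (FP64), residual {residual:.2e}")
    return gflops

if __name__ == "__main__":
    linpack_toy()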
--
Srdja