Posted by HSSL Technologies on Dec 12th 2025
In recent years, NVIDIA’s GPU roadmap has leaned heavily toward artificial intelligence, often at the expense of traditional high-performance computing (HPC) workloads. Architectures such as Hopper and Blackwell have prioritized low-precision performance for AI training and inference, leading to a noticeable drop in FP64 (double-precision) throughput—an essential requirement for many scientific and simulation-based applications.
However, NVIDIA has now made it clear that FP64 is not being abandoned.
Speaking to HPCWire, Dion Harris, Senior Director of HPC and AI Hyperscale Infrastructure Solutions at NVIDIA, reaffirmed the company’s long-term commitment to high-precision computing. According to Harris, NVIDIA is “definitely looking to bring some additional [FP64] capabilities in our future generation architectures” and remains “very serious about making sure that we can deliver the required performance to power those simulation workloads.”
Why FP64 Still Matters in HPC
Double-precision floating-point performance remains critical across many HPC domains, particularly in life sciences, physics simulations, climate modeling, and engineering workloads, where numerical accuracy and stability cannot be compromised. These applications rely on sustained FP64 throughput—something recent NVIDIA accelerators have struggled to deliver.
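The accuracy argument is easy to see in miniature. The sketch below (a generic illustration, not tied to any NVIDIA hardware) accumulates a tiny increment many times: in double precision the sum grows as expected, while in single precision it stalls, because adding 1e-8 to 1.0 rounds back to 1.0 at float32's ~1.19e-7 machine epsilon. Long-running simulations hit exactly this kind of compounding error, which is why they demand FP64.

```python
import struct

def to_f32(x):
    # Round a Python float (IEEE 754 double) to single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

inc = 1e-8
n = 1_000_000

acc64 = 1.0
acc32 = to_f32(1.0)
for _ in range(n):
    acc64 += inc
    acc32 = to_f32(acc32 + to_f32(inc))

# Double precision accumulates to ~1.01; single precision never moves,
# since 1.0 + 1e-8 is indistinguishable from 1.0 at float32 resolution.
print(acc64, acc32)
```

Real solvers mitigate this with compensated summation or mixed-precision schemes, but for many physics and climate codes the safe baseline remains native FP64 throughput.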
The contrast between NVIDIA’s latest and previous generations highlights this shift:
- Blackwell Ultra (B300): ~1.2 TFLOPS FP64
- Hopper (H200): ~34 TFLOPS FP64
Meanwhile, the situation reverses dramatically when looking at AI-focused low-precision compute:
- B300 FP8: ~9 PFLOPS
- H200 FP8: ~3.96 PFLOPS
This imbalance clearly shows NVIDIA’s recent focus on optimizing GPUs for AI workloads rather than traditional HPC precision.
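A quick back-of-the-envelope check using only the figures quoted above makes the imbalance concrete (the underlying vendor numbers are approximate and vary with clocks and sparsity settings):

```python
# Figures quoted in this article (approximate vendor numbers).
fp64_h200, fp64_b300 = 34.0, 1.2   # TFLOPS, double precision
fp8_h200, fp8_b300 = 3.96, 9.0     # PFLOPS, FP8

fp64_drop = fp64_h200 / fp64_b300  # generation-over-generation FP64 loss
fp8_gain = fp8_b300 / fp8_h200     # generation-over-generation FP8 gain

# Roughly a 28x drop in FP64 against a ~2.3x gain in FP8.
print(f"FP64 drop: ~{fp64_drop:.0f}x, FP8 gain: ~{fp8_gain:.1f}x")
```

In other words, Blackwell Ultra trades roughly an order of magnitude of double-precision throughput for a further step up in AI-oriented low-precision compute.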
AI First—But Not at the Expense of HPC
While the explosion of AI has driven demand for lower-precision formats such as FP8 and FP16, the HPC community has felt increasingly underserved. Many researchers and institutions have been forced to explore alternative vendors to meet FP64 performance needs.
NVIDIA’s renewed messaging suggests this may soon change. Future architectures are expected to reintroduce stronger FP64 acceleration, restoring balance between AI and HPC requirements.
A Strategy Shift Inspired by Competition?
Industry observers believe NVIDIA may be moving toward a strategy similar to AMD’s Instinct accelerator lineup, which separates products by workload focus. AMD currently offers:
- AI-optimized accelerators emphasizing low-precision throughput
- HPC-focused accelerators designed for high FP64 performance
NVIDIA could adopt a comparable approach—developing distinct GPU lines, one optimized for AI inference and training, and another purpose-built for precision-heavy HPC workloads.
Looking Ahead
Although NVIDIA has not yet disclosed concrete specifications or timelines, its confirmation that FP64 remains a priority is a positive signal for the HPC ecosystem. As scientific computing continues to demand both accuracy and scale, future NVIDIA accelerators may once again deliver the high-precision performance researchers depend on—without sacrificing the AI advancements that have defined recent generations.
For now, the message is clear: FP64 is not going away, and NVIDIA intends to bring meaningful HPC improvements in its next wave of GPU architectures.