Nvidia has just unveiled its H100 GPU at the latest GPU Technology Conference, and it's looking pretty impressive. Although plenty of rumors surrounding the GPU's development had already surfaced in various forms, we now have the concrete spec sheet.
In this review, we will break down the specs and details surrounding the upcoming Hopper H100 GPU to give you a brief idea of what to expect from it.
Designed for supercomputers, the H100 focuses heavily on AI capabilities to streamline performance and work efficiency even further. Built on a custom 4-nanometer TSMC node, the H100 packs a whopping 80 billion transistors, a huge leap from the 54 billion on the A100 that currently exists on the market.
Although no core count or clock speed has been confirmed yet, Nvidia did mention that the H100 supports the fourth-generation NVLink interface, which delivers speeds of up to 900 gigabytes per second. PCIe 5.0 is also supported for systems that don't use NVLink. Keep in mind that this GPU ships with 80 gigabytes of HBM3 memory offering 3 terabytes per second of bandwidth right out of the box.
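To put that 900 GB/s NVLink figure in perspective, here is a quick back-of-envelope comparison against PCIe 5.0. The PCIe numbers are derived from the PCIe 5.0 specification (32 GT/s per lane, 128b/130b encoding, x16 link) and are my assumption, not figures from Nvidia's announcement:

```python
# Rough interconnect bandwidth comparison (sketch, assumed PCIe 5.0 spec rates).
NVLINK_4_TOTAL_GBPS = 900  # GB/s, Nvidia's quoted figure for the H100

# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, 8 bits per byte
pcie5_per_lane = 32e9 * (128 / 130) / 8 / 1e9   # GB/s per lane, one direction
pcie5_x16_one_dir = pcie5_per_lane * 16         # full x16 link, one direction
pcie5_x16_bidir = pcie5_x16_one_dir * 2         # both directions combined

print(f"PCIe 5.0 x16: ~{pcie5_x16_bidir:.0f} GB/s bidirectional")
print(f"NVLink advantage: ~{NVLINK_4_TOTAL_GBPS / pcie5_x16_bidir:.1f}x")
```

Even a full PCIe 5.0 x16 link lands around 126 GB/s bidirectional, which is why NVLink matters so much for multi-GPU supercomputing workloads.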
That is significantly higher, roughly 1.5 times to be precise, than the A100's HBM2e memory. Consequently, these major upgrades enable the H100 to deliver up to 1,000 teraflops of FP16 compute, 500 teraflops of TF32, and 60 teraflops of general-purpose FP64 compute.
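The roughly 1.5x memory-bandwidth claim is easy to sanity-check. The A100 figure below (2,039 GB/s for the 80 GB HBM2e variant) is assumed from Nvidia's A100 spec sheet and is not stated in this article:

```python
# Sanity check on the ~1.5x memory-bandwidth claim (sketch, assumed A100 spec).
h100_hbm3_gbps = 3000    # ~3 TB/s quoted for the H100
a100_hbm2e_gbps = 2039   # 80 GB A100 HBM2e variant (assumed from spec sheet)

ratio = h100_hbm3_gbps / a100_hbm2e_gbps
print(f"H100 / A100 memory bandwidth: ~{ratio:.2f}x")
```

The result works out to about 1.47x, so "roughly 1.5 times" holds up against the published numbers.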
While these all seem like huge improvements, there are some downsides too. Despite being built on a smaller node, the H100 has a TDP rating of up to 700 watts. Nevertheless, we are still looking at roughly a 75% performance gain on the H100 GPU over its predecessor.
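That higher TDP puts the performance gain in context. As a sketch, using the article's ~75% gain figure and an assumed 400 W TDP for the A100 SXM board (not stated in the article), performance per watt works out roughly flat:

```python
# Rough perf-per-watt estimate (sketch; A100 TDP of 400 W is an assumption).
perf_gain = 1.75        # ~75% gain over the A100, per the article
h100_tdp_w = 700        # H100 TDP quoted above
a100_tdp_w = 400        # assumed A100 SXM TDP

perf_per_watt_change = perf_gain / (h100_tdp_w / a100_tdp_w)
print(f"Perf/W vs A100: ~{perf_per_watt_change:.2f}x")
```

In other words, most of the raw gain comes from spending more power, not from better efficiency, at least under these assumptions.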
These upgrades will greatly enhance supercomputing and AI workflows, enabling more precise engineering and scientific work. There is no official release date yet; however, Nvidia does plan to ship systems equipped with the H100 GPU by Q3 2022.
So that was all about the Nvidia Hopper H100 GPU. Thanks for reading! If you found this information helpful, please like and share it with your friends, and comment below to let us know your thoughts.